Building an AI Agent for Value Investing

A step-by-step guide for beginners

Step 5: Implement the Agent Workflow

Now that you've defined your value investing criteria, collected and preprocessed data, and engineered relevant features, it's time to implement the core workflow of your AI agent. This step involves creating a structured pipeline that ties together all the previous components into a cohesive system.

What is an Agent Workflow?

An agent workflow is a sequence of operations that your AI system performs to accomplish its task. For a value investing agent, this typically includes fetching data, performing calculations, making evaluations, and providing investment recommendations.

Key Components of the Agent Workflow

  • Data Retrieval: Fetching the latest financial data for a given ticker
  • Feature Calculation: Computing value investing metrics and scores
  • Analysis: Interpreting results based on predefined thresholds
  • Recommendation Generation: Providing investment suggestions with explanations

Detailed Explanations

Data Retrieval: This component handles connecting to financial data sources (APIs, databases) and fetching the required information for analysis. It should include error handling for API failures and data validation to ensure quality.
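
As a rough illustration of that error handling, the sketch below wraps a yfinance lookup in a retry loop and checks that a few required fields came back. The function name, retry counts, and REQUIRED_FIELDS list are illustrative choices for this sketch, not part of the agent class built later.

import time
import yfinance as yf

REQUIRED_FIELDS = ["longName", "trailingPE", "priceToBook"]

def fetch_with_retry(ticker, retries=3, delay=2.0):
    """Fetch a ticker's info dict, retrying on transient failures."""
    for attempt in range(1, retries + 1):
        try:
            info = yf.Ticker(ticker).info
            # Basic validation: make sure the fields we need actually came back
            missing = [field for field in REQUIRED_FIELDS if field not in info]
            if missing:
                raise ValueError(f"Missing fields for {ticker}: {missing}")
            return info
        except Exception as exc:
            print(f"Attempt {attempt} failed for {ticker}: {exc}")
            if attempt == retries:
                return None
            time.sleep(delay)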

Feature Calculation: This component applies the feature engineering techniques from Step 4 to calculate value investing metrics like intrinsic value, financial ratios, and composite scores.
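
For instance, two of the features used later (the margin of safety relative to the 52-week high, and a Graham-style multiplier) reduce to simple arithmetic. The numbers below are made up purely for illustration.

# Quick arithmetic check with illustrative numbers
price, high_52w = 150.0, 200.0
pe_ratio, pb_ratio = 12.0, 2.0

margin_of_safety = (high_52w - price) / high_52w          # 0.25 -> trading 25% below the 52-week high
graham_multiplier = (22.5 * pe_ratio * pb_ratio) ** 0.5   # ~23.2 -> just above Graham's 22.5 screen
print(margin_of_safety, graham_multiplier)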

Analysis: This component evaluates the calculated features against your predefined value investing criteria to determine if a stock is undervalued, fairly valued, or overvalued.

Recommendation Generation: This component translates the analysis results into actionable investment recommendations, including explanations of the reasoning behind each suggestion.
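
Chained together, the four components form a single pipeline. The sketch below mirrors the flow that the analyze() method of the agent class implemented later in this step follows; treat it as a conceptual outline rather than code to run on its own.

# Conceptual pipeline: each stage consumes the output of the previous one.
def run_value_pipeline(agent, ticker):
    raw_data = agent.fetch_data(ticker)                                      # 1. Data retrieval
    if raw_data is None:
        return {"error": f"No data for {ticker}"}
    features = agent.calculate_features(raw_data)                            # 2. Feature calculation
    scores, value_score = agent._calculate_value_score(features)             # 3. Analysis
    recommendation = agent._generate_recommendation(features, value_score)   # 4. Recommendation generation
    return {"value_score": value_score,
            "criterion_scores": scores,
            "recommendation": recommendation}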

Building the Agent with Python

Let's implement a basic value investing agent using Python. We'll create a modular structure that separates the different components of the workflow:

Python: Value Investing Agent Class Structure
# value_investing_agent.py

import pandas as pd
import numpy as np
import yfinance as yf
from textblob import TextBlob    # for news-sentiment analysis (only a simulated score is used below)
import matplotlib.pyplot as plt  # optional: for plotting extensions (not used in this minimal version)
import requests                  # optional: for direct API calls (not used in this minimal version)
from datetime import datetime, timedelta
import os

class ValueInvestingAgent:
    """
    An AI agent for value investing analysis.
    """
    
    def __init__(self, value_criteria=None, data_cache_dir='./data_cache'):
        """
        Initialize the value investing agent.
        
        Parameters:
        -----------
        value_criteria : dict, optional
            Dictionary defining value investing criteria and thresholds
        data_cache_dir : str, optional
            Directory to cache financial data
        """
        # Set default value criteria if none provided
        self.value_criteria = value_criteria or {
            'pe_ratio': {'max': 15, 'weight': 0.15, 'better': 'lower'},
            'pb_ratio': {'max': 3, 'weight': 0.15, 'better': 'lower'},
            'roe': {'min': 0.15, 'weight': 0.15, 'better': 'higher'},
            'debt_to_equity': {'max': 1.0, 'weight': 0.1, 'better': 'lower'},
            'fcf_yield': {'min': 0.02, 'weight': 0.15, 'better': 'higher'},
            'dividend_yield': {'min': 0.01, 'weight': 0.1, 'better': 'higher'},
            'sentiment_score': {'min': 0.5, 'weight': 0.2, 'better': 'higher'}
        }
        
        # Create cache directory if it doesn't exist
        self.data_cache_dir = data_cache_dir
        os.makedirs(data_cache_dir, exist_ok=True)
        
        # Initialize data storage
        self.company_data = {}
        self.analysis_results = {}
    
    def fetch_data(self, ticker):
        """
        Fetch financial data for a given ticker.
        
        Parameters:
        -----------
        ticker : str
            Stock ticker symbol
            
        Returns:
        --------
        dict
            Dictionary containing financial data
        """
        print(f"Fetching data for {ticker}...")
        
        try:
            # Check if we have cached data less than 24 hours old
            cache_file = os.path.join(self.data_cache_dir, f"{ticker}_data.csv")
            if os.path.exists(cache_file):
                file_age = datetime.now() - datetime.fromtimestamp(os.path.getmtime(cache_file))
                if file_age < timedelta(hours=24):
                    print(f"Using cached data for {ticker}")
                    return pd.read_csv(cache_file).to_dict('records')[0]
            
            # Fetch data from Yahoo Finance
            stock = yf.Ticker(ticker)
            info = stock.info
            
            # Extract relevant financial metrics
            financial_data = {
                'ticker': ticker,
                'name': info.get('longName', 'Unknown'),
                'sector': info.get('sector', 'Unknown'),
                'industry': info.get('industry', 'Unknown'),
                'pe_ratio': info.get('trailingPE', np.nan),
                'forward_pe': info.get('forwardPE', np.nan),
                'pb_ratio': info.get('priceToBook', np.nan),
                'roe': info.get('returnOnEquity', np.nan),
                'debt_to_equity': info.get('debtToEquity', np.nan) / 100 if info.get('debtToEquity') else np.nan,
                'dividend_yield': info.get('dividendYield', 0),
                'market_cap': info.get('marketCap', np.nan),
                'price': info.get('currentPrice', np.nan),
                'fifty_two_week_high': info.get('fiftyTwoWeekHigh', np.nan),
                'fifty_two_week_low': info.get('fiftyTwoWeekLow', np.nan)
            }
            
            # Calculate FCF Yield (if available)
            if 'freeCashflow' in info and info['freeCashflow'] and 'marketCap' in info and info['marketCap']:
                financial_data['fcf_yield'] = info['freeCashflow'] / info['marketCap']
            else:
                financial_data['fcf_yield'] = np.nan
            
            # Fetch news sentiment (simplified example)
            financial_data['sentiment_score'] = self._fetch_sentiment(ticker)
            
            # Cache the data
            pd.DataFrame([financial_data]).to_csv(cache_file, index=False)
            
            return financial_data
            
        except Exception as e:
            print(f"Error fetching data for {ticker}: {e}")
            return None
    
    def _fetch_sentiment(self, ticker):
        """
        Fetch and analyze sentiment for a given ticker.
        
        Parameters:
        -----------
        ticker : str
            Stock ticker symbol
            
        Returns:
        --------
        float
            Sentiment score between 0 and 1
        """
        # In a real implementation, this would fetch news articles and analyze sentiment
        # For this example, we'll simulate sentiment with a random value
        # weighted slightly by the ticker to make it deterministic
        
        # In a production system, you would:
        # 1. Fetch recent news articles about the company
        # 2. Use NLP to analyze sentiment in those articles
        # 3. Aggregate the sentiment scores
        
        # Simplified simulation for demonstration
        ticker_sum = sum(ord(c) for c in ticker)
        base_offset = ((ticker_sum % 20) / 20 - 0.5) * 0.5  # Deterministic offset roughly in [-0.25, 0.25)
        random_factor = np.random.normal(0, 0.1)            # Small random variation
        sentiment = min(1, max(0, 0.5 + base_offset + random_factor))  # Centered around 0.5, clipped to [0, 1]
        
        return sentiment
    
    def calculate_features(self, financial_data):
        """
        Calculate value investing features from financial data.
        
        Parameters:
        -----------
        financial_data : dict
            Dictionary containing financial data
            
        Returns:
        --------
        dict
            Dictionary containing calculated features
        """
        if not financial_data:
            return None
        
        features = financial_data.copy()
        
        # Calculate additional features if needed
        
        # Example: a simplified Graham-style screen. (The classic Graham Number uses EPS and
        # book value per share; with only P/E and P/B available we compute
        # sqrt(22.5 * P/E * P/B), which stays at or below 22.5 exactly when P/E * P/B <= 22.5.)
        if not np.isnan(features.get('pe_ratio', np.nan)) and not np.isnan(features.get('pb_ratio', np.nan)):
            features['graham_number'] = np.sqrt(22.5 * features['pe_ratio'] * features['pb_ratio'])
        else:
            features['graham_number'] = np.nan
        
        # Example: Calculate margin of safety based on 52-week high
        if not np.isnan(features.get('fifty_two_week_high', np.nan)) and not np.isnan(features.get('price', np.nan)):
            features['margin_of_safety'] = (features['fifty_two_week_high'] - features['price']) / features['fifty_two_week_high']
        else:
            features['margin_of_safety'] = np.nan
        
        return features
    
    def analyze(self, ticker):
        """
        Perform value investing analysis for a given ticker.
        
        Parameters:
        -----------
        ticker : str
            Stock ticker symbol
            
        Returns:
        --------
        dict
            Analysis results including value score and recommendation
        """
        # Fetch data
        financial_data = self.fetch_data(ticker)
        if not financial_data:
            return {"error": f"Could not fetch data for {ticker}"}
        
        # Calculate features
        features = self.calculate_features(financial_data)
        
        # Calculate value score
        criterion_scores, value_score = self._calculate_value_score(features)
        
        # Generate recommendation
        recommendation = self._generate_recommendation(features, value_score)
        
        # Store results
        self.company_data[ticker] = features
        self.analysis_results[ticker] = {
            'value_score': value_score,
            'criterion_scores': criterion_scores,
            'recommendation': recommendation
        }
        
        return self.analysis_results[ticker]
    
    def _calculate_value_score(self, features):
        """
        Calculate value score based on features and criteria.
        
        Parameters:
        -----------
        features : dict
            Dictionary containing calculated features
            
        Returns:
        --------
        tuple
            (criterion_scores, value_score)
        """
        criterion_scores = {}
        total_score = 0
        max_possible_score = 0
        
        for criterion, details in self.value_criteria.items():
            if criterion in features and not np.isnan(features[criterion]):
                max_possible_score += details['weight']
                
                # For metrics where lower is better (like P/E ratio)
                if details['better'] == 'lower' and 'max' in details:
                    if features[criterion] <= details['max']:
                        # Scale the score based on how much below the max it is
                        reasonable_min = details['max'] * 0.2  # Assume 20% of max is reasonable minimum
                        normalized = 1 - max(0, min(1, (features[criterion] - reasonable_min) / (details['max'] - reasonable_min)))
                        criterion_scores[criterion] = normalized * details['weight']
                        total_score += criterion_scores[criterion]
                    else:
                        criterion_scores[criterion] = 0
                
                # For metrics where higher is better (like ROE)
                elif details['better'] == 'higher' and 'min' in details:
                    if features[criterion] >= details['min']:
                        # Scale the score based on how much above the min it is
                        reasonable_max = details['min'] * 3  # Assume 3x min is reasonable maximum
                        normalized = min(1, (features[criterion] - details['min']) / (reasonable_max - details['min']))
                        criterion_scores[criterion] = normalized * details['weight']
                        total_score += criterion_scores[criterion]
                    else:
                        criterion_scores[criterion] = 0
            else:
                criterion_scores[criterion] = 0
        
        # Normalize to 0-100 scale
        if max_possible_score > 0:
            value_score = (total_score / max_possible_score) * 100
        else:
            value_score = 0
        
        return criterion_scores, value_score
    
    def _generate_recommendation(self, features, value_score):
        """
        Generate investment recommendation based on analysis.
        
        Parameters:
        -----------
        features : dict
            Dictionary containing calculated features
        value_score : float
            Overall value score
            
        Returns:
        --------
        dict
            Recommendation details
        """
        # Define recommendation thresholds
        if value_score >= 70:
            rating = "Strong Buy"
            explanation = f"{features['name']} appears significantly undervalued based on value investing criteria."
        elif value_score >= 60:
            rating = "Buy"
            explanation = f"{features['name']} appears moderately undervalued based on value investing criteria."
        elif value_score >= 40:
            rating = "Hold"
            explanation = f"{features['name']} appears fairly valued based on value investing criteria."
        elif value_score >= 30:
            rating = "Sell"
            explanation = f"{features['name']} appears moderately overvalued based on value investing criteria."
        else:
            rating = "Strong Sell"
            explanation = f"{features['name']} appears significantly overvalued based on value investing criteria."
        
        # Add specific insights based on individual criteria
        insights = []
        
        # Check P/E ratio
        if 'pe_ratio' in features and not np.isnan(features['pe_ratio']):
            if features['pe_ratio'] <= self.value_criteria['pe_ratio']['max']:
                insights.append(f"P/E ratio of {features['pe_ratio']:.2f} is below the threshold of {self.value_criteria['pe_ratio']['max']}, suggesting potential undervaluation.")
            else:
                insights.append(f"P/E ratio of {features['pe_ratio']:.2f} is above the threshold of {self.value_criteria['pe_ratio']['max']}, suggesting potential overvaluation.")
        
        # Check ROE
        if 'roe' in features and not np.isnan(features['roe']):
            if features['roe'] >= self.value_criteria['roe']['min']:
                insights.append(f"Return on Equity (ROE) of {features['roe']*100:.2f}% is above the threshold of {self.value_criteria['roe']['min']*100:.2f}%, indicating strong profitability.")
            else:
                insights.append(f"Return on Equity (ROE) of {features['roe']*100:.2f}% is below the threshold of {self.value_criteria['roe']['min']*100:.2f}%, indicating potential profitability concerns.")
        
        # Check sentiment
        if 'sentiment_score' in features:
            if features['sentiment_score'] >= 0.6:
                insights.append(f"Market sentiment is positive ({features['sentiment_score']:.2f}), which may support price appreciation.")
            elif features['sentiment_score'] <= 0.4:
                insights.append(f"Market sentiment is negative ({features['sentiment_score']:.2f}), which may present contrarian opportunities if fundamentals are strong.")
            else:
                insights.append(f"Market sentiment is neutral ({features['sentiment_score']:.2f}).")
        
        return {
            'rating': rating,
            'explanation': explanation,
            'insights': insights
        }
    
    def generate_report(self, ticker, output_dir='./reports'):
        """
        Generate a detailed analysis report for a ticker.
        
        Parameters:
        -----------
        ticker : str
            Stock ticker symbol
        output_dir : str, optional
            Directory to save the report
            
        Returns:
        --------
        str
            Path to the generated report
        """
        # Ensure the ticker has been analyzed
        if ticker not in self.analysis_results:
            self.analyze(ticker)
        
        if ticker not in self.company_data or ticker not in self.analysis_results:
            return f"Error: Could not generate report for {ticker}"
        
        # Create output directory if it doesn't exist
        os.makedirs(output_dir, exist_ok=True)
        
        # Get data and results
        features = self.company_data[ticker]
        results = self.analysis_results[ticker]
        
        # Create report filename
        report_file = os.path.join(output_dir, f"{ticker}_value_analysis.html")
        
        # Helper to format metrics that may be missing (NaN-safe)
        def fmt(value, pct=False):
            if value is None or (isinstance(value, float) and np.isnan(value)):
                return 'N/A'
            return f"{value * 100:.2f}%" if pct else f"{value:.2f}"

        # Generate HTML report
        html_content = f"""
        <html>
        <head><title>Value Investing Analysis: {ticker}</title></head>
        <body>
        <h1>Value Investing Analysis Report</h1>
        <h2>{features['name']} ({ticker})</h2>
        <p>Sector: {features['sector']} | Industry: {features['industry']}</p>
        <p>Current Price: ${fmt(features.get('price'))}</p>
        <p>Value Score: {results['value_score']:.2f}/100</p>
        <p>Recommendation: {results['recommendation']['rating']}</p>
        <p>{results['recommendation']['explanation']}</p>

        <h2>Key Metrics</h2>
        <ul>
            <li>P/E Ratio: {fmt(features.get('pe_ratio'))} (Threshold: &le; {self.value_criteria['pe_ratio']['max']})</li>
            <li>P/B Ratio: {fmt(features.get('pb_ratio'))} (Threshold: &le; {self.value_criteria['pb_ratio']['max']})</li>
            <li>Return on Equity: {fmt(features.get('roe'), pct=True)} (Threshold: &ge; {self.value_criteria['roe']['min'] * 100:.2f}%)</li>
            <li>Debt-to-Equity: {fmt(features.get('debt_to_equity'))} (Threshold: &le; {self.value_criteria['debt_to_equity']['max']})</li>
            <li>FCF Yield: {fmt(features.get('fcf_yield'), pct=True)} (Threshold: &ge; {self.value_criteria['fcf_yield']['min'] * 100:.2f}%)</li>
            <li>Dividend Yield: {fmt(features.get('dividend_yield'), pct=True)} (Threshold: &ge; {self.value_criteria['dividend_yield']['min'] * 100:.2f}%)</li>
            <li>Sentiment Score: {fmt(features.get('sentiment_score'))} (Threshold: &ge; {self.value_criteria['sentiment_score']['min']})</li>
        </ul>

        <h2>Insights</h2>
        <ul>
            {''.join(f'<li>{insight}</li>' for insight in results['recommendation']['insights'])}
        </ul>

        <h2>Disclaimer</h2>
        <p>This analysis is for educational purposes only and should not be considered investment advice.
           Always conduct your own research before making investment decisions.</p>
        <p>Generated on {datetime.now().strftime('%Y-%m-%d %H:%M:%S')} by Value Investing AI Agent</p>
        </body>
        </html>
        """

        # Write HTML to file
        with open(report_file, 'w') as f:
            f.write(html_content)

        return report_file


# Example usage
if __name__ == "__main__":
    # Create the agent
    agent = ValueInvestingAgent()

    # Analyze a stock
    ticker = "AAPL"
    results = agent.analyze(ticker)

    # Print results
    print(f"\nValue Investing Analysis for {ticker}:")
    print(f"Value Score: {results['value_score']:.2f}/100")
    print(f"Recommendation: {results['recommendation']['rating']}")
    print(f"Explanation: {results['recommendation']['explanation']}")
    print("\nInsights:")
    for insight in results['recommendation']['insights']:
        print(f"- {insight}")

    # Generate a report
    report_path = agent.generate_report(ticker)
    print(f"\nDetailed report saved to: {report_path}")

Using LangChain for Enhanced Agent Capabilities

For more advanced capabilities, you can use frameworks like LangChain to build your value investing agent. LangChain provides tools for creating agents that can combine structured data analysis with natural language processing:

Python: LangChain-based Value Investing Agent
# langchain_value_agent.py

# Install required libraries (run this once)
# pip install langchain openai yfinance pandas numpy

from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent
from langchain.prompts import StringPromptTemplate
from langchain import OpenAI, LLMChain
from langchain.tools import BaseTool
from typing import Dict, List, Any, Optional
import yfinance as yf
import pandas as pd
import numpy as np
from pydantic import BaseModel, Field
import json

# Note: This example uses OpenAI's API; you would need an API key to run it.
# It is also written against the older LangChain agent interfaces (LLMSingleActionAgent,
# StringPromptTemplate); newer LangChain releases expose a different API, so treat this
# as a structural illustration rather than copy-paste-ready code.

# Define tools for the agent

class StockDataTool(BaseTool):
    name = "fetch_stock_data"
    description = "Fetch financial data for a given stock ticker"
    
    def _run(self, ticker: str) -> str:
        try:
            stock = yf.Ticker(ticker)
            info = stock.info
            
            # Extract key financial metrics
            data = {
                "name": info.get("longName", "Unknown"),
                "sector": info.get("sector", "Unknown"),
                "industry": info.get("industry", "Unknown"),
                "pe_ratio": info.get("trailingPE", "N/A"),
                "pb_ratio": info.get("priceToBook", "N/A"),
                "roe": info.get("returnOnEquity", "N/A"),
                "debt_to_equity": info.get("debtToEquity", "N/A"),
                "dividend_yield": info.get("dividendYield", "N/A"),
                "price": info.get("currentPrice", "N/A"),
                "market_cap": info.get("marketCap", "N/A"),
                "52_week_high": info.get("fiftyTwoWeekHigh", "N/A"),
                "52_week_low": info.get("fiftyTwoWeekLow", "N/A")
            }
            
            return json.dumps(data)
        except Exception as e:
            return f"Error fetching data for {ticker}: {str(e)}"
    
    def _arun(self, ticker: str) -> str:
        # For async implementation
        raise NotImplementedError("Async not implemented")

class ValueScoreTool(BaseTool):
    name = "calculate_value_score"
    description = "Calculate value investing score based on financial metrics"
    
    def _run(self, financial_data_json: str) -> str:
        try:
            # Parse the financial data
            data = json.loads(financial_data_json)
            
            # Define value criteria
            value_criteria = {
                "pe_ratio": {"max": 15, "weight": 0.25, "better": "lower"},
                "pb_ratio": {"max": 3, "weight": 0.25, "better": "lower"},
                "roe": {"min": 0.15, "weight": 0.2, "better": "higher"},
                "debt_to_equity": {"max": 1.0, "weight": 0.15, "better": "lower"},
                "dividend_yield": {"min": 0.01, "weight": 0.15, "better": "higher"}
            }
            
            # Calculate score
            total_score = 0
            max_possible_score = 0
            criterion_scores = {}
            
            for criterion, details in value_criteria.items():
                if criterion in data and data[criterion] != "N/A":
                    value = float(data[criterion])
                    max_possible_score += details["weight"]
                    
                    if details["better"] == "lower" and "max" in details:
                        if value <= details["max"]:
                            normalized = 1 - min(1, value / details["max"])
                            criterion_scores[criterion] = normalized * details["weight"]
                            total_score += criterion_scores[criterion]
                        else:
                            criterion_scores[criterion] = 0
                    
                    elif details["better"] == "higher" and "min" in details:
                        if value >= details["min"]:
                            normalized = min(1, value / (details["min"] * 3))
                            criterion_scores[criterion] = normalized * details["weight"]
                            total_score += criterion_scores[criterion]
                        else:
                            criterion_scores[criterion] = 0
            
            # Normalize to 0-100 scale
            if max_possible_score > 0:
                value_score = (total_score / max_possible_score) * 100
            else:
                value_score = 0
            
            # Generate recommendation
            if value_score >= 70:
                rating = "Strong Buy"
            elif value_score >= 60:
                rating = "Buy"
            elif value_score >= 40:
                rating = "Hold"
            elif value_score >= 30:
                rating = "Sell"
            else:
                rating = "Strong Sell"
            
            result = {
                "value_score": value_score,
                "criterion_scores": criterion_scores,
                "recommendation": rating
            }
            
            return json.dumps(result)
        
        except Exception as e:
            return f"Error calculating value score: {str(e)}"
    
    def _arun(self, financial_data_json: str) -> str:
        # For async implementation
        raise NotImplementedError("Async not implemented")

class NewsAnalysisTool(BaseTool):
    name = "analyze_news_sentiment"
    description = "Analyze news sentiment for a given stock ticker"
    
    def _run(self, ticker: str) -> str:
        # In a real implementation, this would fetch and analyze news
        # For this example, we'll return simulated sentiment
        import random
        
        sentiment = random.uniform(0.3, 0.8)
        sentiment_category = "positive" if sentiment > 0.6 else "neutral" if sentiment > 0.4 else "negative"
        
        result = {
            "sentiment_score": sentiment,
            "sentiment_category": sentiment_category,
            "news_count": random.randint(5, 20)
        }
        
        return json.dumps(result)
    
    def _arun(self, ticker: str) -> str:
        # For async implementation
        raise NotImplementedError("Async not implemented")

# Define the prompt template
class ValueInvestingPromptTemplate(StringPromptTemplate):
    template: str
    tools: List[BaseTool]
    
    def format(self, **kwargs) -> str:
        # Get the intermediate steps (AgentAction, Observation tuples)
        intermediate_steps = kwargs.pop("intermediate_steps")
        
        # Format the observations as a string
        history = ""
        for action, observation in intermediate_steps:
            history += f"Action: {action.tool}\nAction Input: {action.tool_input}\nObservation: {observation}\n"
        
        # Set the agent_scratchpad variable to the history
        kwargs["agent_scratchpad"] = history
        
        # Create a list of tool names and descriptions
        tools_str = "\n".join([f"{tool.name}: {tool.description}" for tool in self.tools])
        kwargs["tools"] = tools_str
        
        # Create a list of tool names
        tool_names = ", ".join([tool.name for tool in self.tools])
        kwargs["tool_names"] = tool_names
        
        return self.template.format(**kwargs)

# Define the LangChain agent
def create_value_investing_agent():
    # Define the tools
    tools = [
        StockDataTool(),
        ValueScoreTool(),
        NewsAnalysisTool()
    ]
    
    # Define the prompt template
    template = """
    You are a value investing AI agent. Your goal is to analyze stocks based on value investing principles and provide investment recommendations.

    You have access to the following tools:
    {tools}

    Use the following format:
    Question: the input question you must answer
    Thought: you should always think about what to do
    Action: the action to take, should be one of [{tool_names}]
    Action Input: the input to the action
    Observation: the result of the action
    ... (this Thought/Action/Action Input/Observation can repeat N times)
    Thought: I now know the final answer
    Final Answer: the final answer to the original input question

    Begin!

    Question: {input}
    {agent_scratchpad}
    """
    
    prompt = ValueInvestingPromptTemplate(
        template=template,
        tools=tools,
        input_variables=["input", "intermediate_steps"]
    )
    
    # Define the LLM
    llm = OpenAI(temperature=0)
    
    # Define the LLM chain
    llm_chain = LLMChain(llm=llm, prompt=prompt)
    
    # Define the agent
    agent = LLMSingleActionAgent(
        llm_chain=llm_chain,
        output_parser=None,  # a custom output parser is required; see the sketch after this example
        stop=["\nObservation:"],
        allowed_tools=[tool.name for tool in tools]
    )
    
    # Define the agent executor
    agent_executor = AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        verbose=True
    )
    
    return agent_executor

# Example usage
if __name__ == "__main__":
    # Create the agent
    agent = create_value_investing_agent()
    
    # Run the agent
    result = agent.run("Analyze Apple (AAPL) from a value investing perspective and provide a recommendation.")
    
    print(result)
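
The LLMSingleActionAgent above is constructed with output_parser=None as a placeholder. One possible sketch of that parser, written against the same older LangChain interfaces as the rest of the example and following the Thought/Action/Final Answer format used in the prompt, might look like this (treat it as illustrative rather than a drop-in component):

Python: Custom Output Parser (sketch)
import re
from langchain.agents import AgentOutputParser
from langchain.schema import AgentAction, AgentFinish

class ValueInvestingOutputParser(AgentOutputParser):
    def parse(self, llm_output: str):
        # If the model produced a final answer, end the loop
        if "Final Answer:" in llm_output:
            return AgentFinish(
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Otherwise, extract the next tool call and its input
        match = re.search(r"Action\s*:(.*?)\nAction\s*Input\s*:(.*)", llm_output, re.DOTALL)
        if not match:
            raise ValueError(f"Could not parse LLM output: {llm_output}")
        return AgentAction(
            tool=match.group(1).strip(),
            tool_input=match.group(2).strip().strip('"'),
            log=llm_output,
        )

With a parser like this in place, you would pass output_parser=ValueInvestingOutputParser() when constructing the LLMSingleActionAgent instead of None.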

Testing Your Agent

Before moving on to the next step, it's important to test your agent to ensure it's working correctly:

Testing Strategies

Here are some approaches to test your value investing agent:

1. Single Stock Analysis

Test your agent on well-known stocks with clear value characteristics:


# Test with a classic value stock
agent.analyze("BRK-B")  # Berkshire Hathaway

# Test with a growth stock (likely lower value score)
agent.analyze("TSLA")   # Tesla
                    

2. Comparative Analysis

Compare multiple stocks in the same industry to verify relative rankings:


# Test with multiple banks
results = {}
for ticker in ["JPM", "BAC", "WFC", "C"]:
    results[ticker] = agent.analyze(ticker)

# Compare value scores
for ticker, result in results.items():
    print(f"{ticker}: {result['value_score']:.2f} - {result['recommendation']['rating']}")
                    

3. Edge Cases

Test your agent with edge cases to ensure robust error handling:


# Test with invalid ticker
agent.analyze("INVALID")

# Test with a stock that has missing data
agent.analyze("SMALLCAP")  # A small cap stock likely missing some metrics
                    

4. Historical Validation

Test your agent against historical data to see if it would have identified value opportunities:


# Modify your agent to accept historical data points
# Then test with data from different time periods
agent.analyze_historical("AAPL", date="2008-01-01")  # Before the iPhone boom
agent.analyze_historical("MSFT", date="2013-01-01")  # Before cloud growth
                    

Knowledge Check

What is the primary purpose of implementing an agent workflow for value investing?

  • To predict short-term stock price movements
  • To create a structured pipeline that fetches data, calculates metrics, and provides investment recommendations
  • To automatically execute trades based on technical indicators
  • To generate random stock picks for diversification

Which of the following is NOT typically a component of a value investing agent workflow?

  • Data retrieval from financial sources
  • Feature calculation based on value investing principles
  • High-frequency trading algorithms
  • Recommendation generation with explanations