Agentic AI Design Patterns

Master advanced architectural patterns to build sophisticated AI agents for complex tasks

Understanding Agentic Design Patterns

Agentic design patterns are reusable architectural solutions to common problems in AI agent development. These patterns provide proven approaches for structuring agents to handle complex tasks, improve reliability, and enhance performance.

Key Insight

Just as software design patterns revolutionised traditional programming, agentic design patterns provide a shared vocabulary and set of best practices for building AI agents that can reason, plan, and act effectively in diverse environments.

Why Design Patterns Matter for AI Agents

Design patterns offer several critical benefits for AI agent development:

  1. Proven solutions: They capture approaches that have already worked for common agent-building problems
  2. Shared vocabulary: Teams can describe and compare architectures using common terms
  3. Reliability: Structured approaches such as verification and retry loops reduce errors and failures
  4. Performance: Decomposition and delegation let agents handle tasks that overwhelm a single prompt
  5. Maintainability: Modular structures are easier to test, extend, and debug

Design Pattern Categories

Category               | Focus                       | Examples
-----------------------|-----------------------------|----------------------------------------------------------------
Cognitive Patterns     | How agents think and reason | Reflection, Chain-of-Thought, Tree-of-Thought
Architectural Patterns | How agents are structured   | Controller-Worker, Hierarchical, Multi-Agent
Interaction Patterns   | How agents communicate      | Request-Response, Publish-Subscribe, Negotiation
Execution Patterns     | How agents perform tasks    | Plan-Execute-Reflect, Try-Catch-Retry, Progressive Refinement

Cognitive Design Patterns

Cognitive design patterns focus on how agents process information, reason, and make decisions.

1. Reflection Pattern

The Reflection pattern enables agents to evaluate their own outputs, identify errors or weaknesses, and improve their responses.

Reflection Pattern Structure:

  1. Initial Generation: Produce a first-draft response
  2. Self-Critique: Analyse the response for errors, omissions, or improvements
  3. Refinement: Generate an improved response based on the critique
  4. (Optional) Iteration: Repeat the critique and refinement steps until satisfied
def reflection_pattern(llm, prompt, max_iterations=2):
    """
    Implement the Reflection pattern for improved responses.
    
    Args:
        llm: Language model for generation
        prompt: Initial user prompt
        max_iterations: Maximum number of reflection cycles
        
    Returns:
        Final refined response
    """
    # Step 1: Initial Generation
    initial_response = llm.predict(f"""
    Respond to the following prompt:
    {prompt}
    """)
    
    current_response = initial_response
    
    # Steps 2-3: Self-Critique and Refinement (with optional iteration)
    for i in range(max_iterations):
        # Self-Critique
        critique = llm.predict(f"""
        Analyse the following response to the prompt: "{prompt}"
        
        Response:
        {current_response}
        
        Provide a detailed critique identifying:
        1. Factual errors or inaccuracies
        2. Logical flaws or inconsistencies
        3. Missing information or incomplete explanations
        4. Unclear or confusing language
        5. Areas for improvement
        
        Be specific and constructive in your critique.
        """)
        
        # Refinement
        refined_response = llm.predict(f"""
        Original prompt: "{prompt}"
        
        Your previous response:
        {current_response}
        
        Critique of your response:
        {critique}
        
        Based on this critique, provide an improved response that addresses the identified issues.
        """)
        
        # Update current response
        current_response = refined_response
    
    return current_response
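
The function assumes only that llm exposes a predict(prompt) -> str method, as all the examples on this page do. Here is a minimal, hypothetical usage sketch with a stub model wrapper standing in for a real API client:

class StubLLM:
    """Placeholder model wrapper for demonstration; a real implementation
    would call an LLM provider's API here."""
    def predict(self, prompt: str) -> str:
        return f"[model output for a prompt of {len(prompt)} characters]"

llm = StubLLM()
final = reflection_pattern(llm, "Explain why the sky is blue.", max_iterations=1)
print(final)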

When to Use Reflection

Reflection is most valuable when output quality matters more than latency or cost: high-stakes writing, complex reasoning, and tasks where first drafts commonly contain errors. Each iteration adds two model calls, so cap max_iterations in cost-sensitive applications.

2. Tree-of-Thought Pattern

The Tree-of-Thought pattern enables agents to explore several alternative reasoning paths for the same problem, compare them, and commit to the most promising one.

Tree-of-Thought Pattern Structure:

  1. Branch Generation: Create multiple different approaches to the problem
  2. Branch Exploration: Develop each approach for several steps
  3. Evaluation: Assess the promise of each branch
  4. Selection or Expansion: Either select the best branch or further explore promising branches
def tree_of_thought_pattern(llm, problem, num_branches=3, depth=2):
    """
    Implement the Tree-of-Thought pattern for complex problem solving.
    
    Args:
        llm: Language model for generation
        problem: Problem description
        num_branches: Number of initial approaches to generate
        depth: How many steps to explore each branch
        
    Returns:
        Best solution found
    """
    # Step 1: Branch Generation
    branches_prompt = f"""
    Consider the following problem:
    {problem}
    
    Generate {num_branches} different approaches to solve this problem.
    For each approach:
    1. Give it a descriptive name
    2. Explain the key idea
    3. Outline the first step
    
    Format each approach as:
    Approach X: [Name]
    Key Idea: [Explanation]
    First Step: [Initial step]
    """
    
    branches_response = llm.predict(branches_prompt)
    
    # Parse branches (simplified parsing)
    branch_texts = []
    current_branch = ""
    for line in branches_response.split('\n'):
        if line.startswith("Approach ") and current_branch:
            # A new approach begins; store the previous one
            branch_texts.append(current_branch)
            current_branch = line + "\n"
        else:
            current_branch += line + "\n"
    
    if current_branch:
        branch_texts.append(current_branch)
    
    # Ensure we have the requested number of branches
    branch_texts = branch_texts[:num_branches]
    
    # Step 2: Branch Exploration
    explored_branches = []
    
    for i, branch in enumerate(branch_texts):
        current_state = branch
        
        # Explore this branch to the specified depth
        for d in range(depth):
            exploration_prompt = f"""
            Problem: {problem}
            
            Current approach and progress:
            {current_state}
            
            Continue developing this approach. What is the next step? 
            Provide detailed reasoning and be specific about what to do next.
            """
            
            next_step = llm.predict(exploration_prompt)
            current_state += f"\nStep {d+2}: {next_step}"
        
        explored_branches.append(current_state)
    
    # Step 3: Evaluation
    separator = '-' * 40
    branches_block = f"\n{separator}\n".join(
        f"Approach {i + 1}:\n{branch}" for i, branch in enumerate(explored_branches)
    )
    evaluation_prompt = f"""
    Problem: {problem}
    
    Evaluate the following solution approaches:
    
    {branches_block}
    
    For each approach, rate it from 1-10 and explain your reasoning.
    Then identify which approach is most promising and why.
    """
    
    evaluation = llm.predict(evaluation_prompt)
    
    # Step 4: Selection
    selection_prompt = f"""
    Based on your evaluation:
    {evaluation}
    
    Select the best approach and develop it into a complete solution for the original problem:
    {problem}
    
    Provide a detailed, step-by-step solution using the selected approach.
    """
    
    final_solution = llm.predict(selection_prompt)
    
    return final_solution
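
A hypothetical usage sketch, reusing the StubLLM placeholder defined in the Reflection example; note that num_branches and depth directly control how many model calls the pattern makes:

solution = tree_of_thought_pattern(
    StubLLM(),  # stub model wrapper from the Reflection usage sketch
    "Schedule three teams across two meeting rooms without conflicts.",
    num_branches=2,
    depth=1
)
print(solution)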

When to Use Tree-of-Thought

Use Tree-of-Thought for problems with several plausible solution strategies, such as puzzles, planning, and design decisions. It requires many model calls (roughly branches times depth, plus evaluation and selection), so reserve it for problems where a single chain of reasoning is likely to fail.

3. Verification Pattern

The Verification pattern ensures accuracy by independently checking generated outputs against reliable sources or through logical validation.

Verification Pattern Structure:

  1. Generation: Produce an initial response or solution
  2. Fact Extraction: Identify specific claims or statements to verify
  3. Verification: Check each claim against reliable sources or through logical reasoning
  4. Correction: Modify the response based on verification results
def verification_pattern(llm, prompt, search_tool):
    """
    Implement the Verification pattern for factual accuracy.
    
    Args:
        llm: Language model for generation
        prompt: User prompt
        search_tool: Function to search for information
        
    Returns:
        Verified response
    """
    # Step 1: Generation
    initial_response = llm.predict(f"""
    Respond to the following prompt:
    {prompt}
    
    Include specific facts and information in your response.
    """)
    
    # Step 2: Fact Extraction
    fact_extraction_prompt = f"""
    Extract the key factual claims from the following response:
    
    {initial_response}
    
    List each distinct factual claim on a new line, prefixed with "FACT: ".
    Focus only on objective, verifiable claims (dates, statistics, definitions, etc.).
    """
    
    facts_text = llm.predict(fact_extraction_prompt)
    
    # Parse facts
    facts = []
    for line in facts_text.split('\n'):
        if line.startswith("FACT: "):
            facts.append(line[6:].strip())
    
    # Step 3: Verification
    verification_results = []
    
    for fact in facts:
        # Search for information to verify this fact
        search_results = search_tool(fact)
        
        verification_prompt = f"""
        Verify this claim: "{fact}"
        
        Search results:
        {search_results}
        
        Based solely on these search results, classify the claim as:
        - VERIFIED: The search results clearly confirm the claim
        - REFUTED: The search results contradict the claim
        - UNCERTAIN: The search results don't provide enough information
        
        Explain your reasoning and provide the correct information if the claim is refuted.
        """
        
        verification = llm.predict(verification_prompt)
        verification_results.append({"fact": fact, "verification": verification})
    
    # Step 4: Correction
    correction_prompt = f"""
    Original response:
    {initial_response}
    
    Fact verification results:
    {verification_results}
    
    Create an improved version of the original response that:
    1. Corrects any refuted facts with accurate information
    2. Adds qualifiers to uncertain facts (e.g., "reportedly", "according to some sources")
    3. Maintains the verified facts
    4. Preserves the overall structure and tone of the original response
    
    The improved response should be factually accurate while remaining helpful and informative.
    """
    
    corrected_response = llm.predict(correction_prompt)
    
    return corrected_response
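
Here, search_tool can be any callable that takes a query string and returns text. The dummy_search below is a hypothetical stand-in for a real search API, again paired with the StubLLM placeholder from earlier:

def dummy_search(query: str) -> str:
    """Stand-in for a real search API; returns no real evidence."""
    return f"(no live search in this sketch; query was: {query})"

response = verification_pattern(StubLLM(), "When was the first transistor built?", dummy_search)
print(response)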

When to Use Verification

Verification is essential when responses contain objective claims that can be checked against external sources, and whenever hallucinated facts would be costly, as in medical, legal, or financial contexts. Its effectiveness depends on the reliability of the search or knowledge tool behind it.

Architectural Design Patterns

Architectural design patterns define the overall structure and organisation of AI agents.

1. Controller-Worker Pattern

The Controller-Worker pattern separates high-level decision making from specialised task execution, creating a more modular and efficient agent architecture.

Controller-Worker Pattern Structure:

class ControllerWorkerAgent:
    def __init__(self, llm):
        self.llm = llm
        self.workers = {
            "researcher": self._research_worker,
            "writer": self._writing_worker,
            "calculator": self._calculation_worker,
            "planner": self._planning_worker
        }
    
    def process_request(self, user_request):
        """Process a user request using the Controller-Worker pattern."""
        # Controller: Analyse request and create plan
        analysis = self._analyse_request(user_request)
        plan = self._create_execution_plan(analysis, user_request)
        
        # Execute plan by delegating to appropriate workers
        results = {}
        for step in plan["steps"]:
            worker_name = step["worker"]
            task = step["task"]
            
            # Check if we have the required worker
            if worker_name not in self.workers:
                results[step["id"]] = {"error": f"Worker '{worker_name}' not available"}
                continue
            
            # Execute worker with task and any previous results needed
            worker = self.workers[worker_name]
            try:
                step_result = worker(task, results)
                results[step["id"]] = {"result": step_result}
            except Exception as e:
                results[step["id"]] = {"error": str(e)}
        
        # Controller: Integrate results into final response
        final_response = self._integrate_results(user_request, plan, results)
        return final_response
    
    def _analyse_request(self, user_request):
        """Analyse the user request to determine required workers and approach."""
        analysis_prompt = f"""
        Analyse this user request: "{user_request}"
        
        Determine:
        1. The primary objective
        2. Required types of work (research, writing, calculation, planning)
        3. Any constraints or special requirements
        
        Format your response as JSON with these fields:
        {{
            "objective": "string",
            "required_workers": ["string"],
            "constraints": ["string"]
        }}
        """
        
        analysis_response = self.llm.predict(analysis_prompt)
        
        # In a real implementation, parse the JSON response
        # For simplicity, we'll return a mock analysis
        return {
            "objective": "Determine objective from request",
            "required_workers": ["researcher", "writer"],
            "constraints": []
        }
    
    def _create_execution_plan(self, analysis, user_request):
        """Create a step-by-step execution plan based on the analysis."""
        planning_prompt = f"""
        User request: "{user_request}"
        
        Analysis:
        {analysis}
        
        Create a step-by-step execution plan with these available workers:
        - researcher: Finds information on topics
        - writer: Creates well-structured content
        - calculator: Performs numerical calculations
        - planner: Breaks down complex tasks
        
        For each step specify:
        1. Step ID and description
        2. Which worker to use
        3. The specific task for that worker
        4. Dependencies on previous steps (if any)
        
        Format your response as JSON with a "steps" array.
        """
        
        planning_response = self.llm.predict(planning_prompt)
        
        # In a real implementation, parse the JSON response
        # For simplicity, we'll return a mock plan
        return {
            "steps": [
                {
                    "id": "step1",
                    "description": "Research the topic",
                    "worker": "researcher",
                    "task": "Find information about the topic",
                    "dependencies": []
                },
                {
                    "id": "step2",
                    "description": "Write content based on research",
                    "worker": "writer",
                    "task": "Create content using research results",
                    "dependencies": ["step1"]
                }
            ]
        }
    
    def _integrate_results(self, user_request, plan, results):
        """Integrate worker results into a cohesive response."""
        integration_prompt = f"""
        User request: "{user_request}"
        
        Execution results:
        {results}
        
        Create a comprehensive response that integrates all the results
        into a cohesive answer to the user's request.
        """
        
        return self.llm.predict(integration_prompt)
    
    # Worker implementations
    def _research_worker(self, task, previous_results):
        """Worker that performs research tasks."""
        research_prompt = f"""
        Research task: {task}
        
        Perform comprehensive research on this topic and return
        the key findings and information.
        """
        return self.llm.predict(research_prompt)
    
    def _writing_worker(self, task, previous_results):
        """Worker that performs writing tasks."""
        # Get research results if available
        research = ""
        for step_id, step_result in previous_results.items():
            if "result" in step_result and "research" in task.lower():
                research = step_result["result"]
        
        writing_prompt = f"""
        Writing task: {task}
        
        Research information:
        {research}
        
        Create well-structured, engaging content based on the task
        and research information.
        """
        return self.llm.predict(writing_prompt)
    
    def _calculation_worker(self, task, previous_results):
        """Worker that performs calculation tasks."""
        calculation_prompt = f"""
        Calculation task: {task}
        
        Perform the necessary calculations, showing your work step-by-step.
        Ensure numerical accuracy and provide the final result.
        """
        return self.llm.predict(calculation_prompt)
    
    def _planning_worker(self, task, previous_results):
        """Worker that performs planning tasks."""
        planning_prompt = f"""
        Planning task: {task}
        
        Break down this task into a detailed plan with specific steps,
        timelines, and resource requirements.
        """
        return self.llm.predict(planning_prompt)
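
A hypothetical usage sketch using the StubLLM placeholder from earlier; because the analysis and plan in this simplified version are mocked, the request flows through the researcher and writer workers:

agent = ControllerWorkerAgent(StubLLM())
print(agent.process_request("Write a short overview of tidal energy."))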

When to Use Controller-Worker

Choose Controller-Worker when requests decompose into distinct, specialised sub-tasks and a single point of coordination is acceptable. It keeps the architecture modular: workers can be added, replaced, or tested independently of the controller.

2. Hierarchical Agent Pattern

The Hierarchical Agent pattern organises agents into a management hierarchy, with higher-level agents delegating to and coordinating specialised sub-agents.

Hierarchical Agent Pattern Structure:

class HierarchicalAgent:
    def __init__(self, llm):
        self.llm = llm
        
        # Initialise sub-agents
        self.sub_agents = {
            "research": ResearchAgent(llm),
            "writing": WritingAgent(llm),
            "coding": CodingAgent(llm),
            "data_analysis": DataAnalysisAgent(llm)
        }
    
    def process_request(self, user_request):
        """Process a user request using the Hierarchical Agent pattern."""
        # Manager: Analyse request and create delegation plan
        delegation_plan = self._create_delegation_plan(user_request)
        
        # Manager: Delegate tasks to sub-agents
        sub_agent_results = {}
        for task in delegation_plan["tasks"]:
            sub_agent_name = task["agent"]
            task_description = task["description"]
            task_id = task["id"]
            
            # Check if we have the required sub-agent
            if sub_agent_name not in self.sub_agents:
                sub_agent_results[task_id] = {
                    "status": "failed",
                    "error": f"Sub-agent '{sub_agent_name}' not available"
                }
                continue
            
            # Delegate to sub-agent
            sub_agent = self.sub_agents[sub_agent_name]
            try:
                result = sub_agent.execute_task(task_description, task.get("context", {}))
                sub_agent_results[task_id] = {
                    "status": "completed",
                    "result": result
                }
            except Exception as e:
                sub_agent_results[task_id] = {
                    "status": "failed",
                    "error": str(e)
                }
        
        # Manager: Handle any escalations
        escalations = self._identify_escalations(sub_agent_results)
        if escalations:
            self._handle_escalations(escalations, sub_agent_results)
        
        # Manager: Integrate results
        final_response = self._integrate_results(user_request, delegation_plan, sub_agent_results)
        return final_response
    
    def _create_delegation_plan(self, user_request):
        """Create a plan for delegating tasks to sub-agents."""
        delegation_prompt = f"""
        As a manager agent, analyse this user request and create a delegation plan:
        "{user_request}"
        
        Available sub-agents:
        - research: Finds and summarises information
        - writing: Creates well-structured content
        - coding: Writes and explains code
        - data_analysis: Analyses data and creates visualisations
        
        For each task in your plan, specify:
        1. Task ID
        2. Which sub-agent should handle it
        3. A clear description of the task
        4. Any context or constraints
        5. Dependencies on other tasks (if any)
        
        Format your response as JSON with a "tasks" array.
        """
        
        delegation_response = self.llm.predict(delegation_prompt)
        
        # In a real implementation, parse the JSON response
        # For simplicity, we'll return a mock plan
        return {
            "tasks": [
                {
                    "id": "task1",
                    "agent": "research",
                    "description": "Research the topic",
                    "context": {},
                    "dependencies": []
                },
                {
                    "id": "task2",
                    "agent": "writing",
                    "description": "Write content based on research",
                    "context": {},
                    "dependencies": ["task1"]
                }
            ]
        }
    
    def _identify_escalations(self, sub_agent_results):
        """Identify tasks that need manager intervention."""
        escalations = []
        
        for task_id, result in sub_agent_results.items():
            # Failed tasks need escalation
            if result["status"] == "failed":
                escalations.append({
                    "task_id": task_id,
                    "reason": "failure",
                    "details": result.get("error", "Unknown error")
                })
            
            # Tasks that explicitly request escalation
            elif result["status"] == "completed" and isinstance(result["result"], dict):
                if result["result"].get("needs_escalation", False):
                    escalations.append({
                        "task_id": task_id,
                        "reason": "requested",
                        "details": result["result"].get("escalation_reason", "")
                    })
        
        return escalations
    
    def _handle_escalations(self, escalations, sub_agent_results):
        """Handle tasks that were escalated to the manager."""
        for escalation in escalations:
            task_id = escalation["task_id"]
            reason = escalation["reason"]
            details = escalation["details"]
            
            escalation_prompt = f"""
            A task was escalated to you as the manager:
            
            Task ID: {task_id}
            Escalation Reason: {reason}
            Details: {details}
            
            All sub-agent results so far:
            {sub_agent_results}
            
            As the manager, resolve this escalation by:
            1. Analysing the issue
            2. Providing a solution or alternative approach
            3. Giving specific instructions for next steps
            """
            
            resolution = self.llm.predict(escalation_prompt)
            
            # Update the results with the manager's resolution
            sub_agent_results[task_id] = {
                "status": "resolved_by_manager",
                "original_issue": details,
                "manager_resolution": resolution
            }
    
    def _integrate_results(self, user_request, delegation_plan, sub_agent_results):
        """Integrate sub-agent results into a cohesive response."""
        integration_prompt = f"""
        As the manager agent, create a comprehensive response to this user request:
        "{user_request}"
        
        Sub-agent results:
        {sub_agent_results}
        
        Your task is to:
        1. Synthesise the information from all sub-agents
        2. Ensure the response is coherent and well-structured
        3. Address all aspects of the user's request
        4. Provide a unified voice that hides the multi-agent architecture
        
        Create a complete, polished response that appears as if from a single expert.
        """
        
        return self.llm.predict(integration_prompt)

# Example sub-agent implementation
class ResearchAgent:
    def __init__(self, llm):
        self.llm = llm
    
    def execute_task(self, task_description, context=None):
        """Execute a research task."""
        research_prompt = f"""
        Research task: {task_description}
        
        Additional context: {context if context else 'None provided'}
        
        Conduct thorough research on this topic and provide:
        1. Key findings and information
        2. Sources and references
        3. Any areas where information is limited or uncertain
        
        If you encounter any issues that require manager intervention,
        include a section with "needs_escalation: true" and explain why.
        """
        
        return self.llm.predict(research_prompt)

# Other sub-agent classes would be implemented similarly
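
To make the sketch above instantiable, the remaining sub-agents can be stubbed minimally. These placeholders simply reuse the research behaviour; real implementations would use role-specific prompts:

class WritingAgent(ResearchAgent):
    pass  # placeholder; a real version would use a writing-focused prompt

class CodingAgent(ResearchAgent):
    pass  # placeholder; a real version would generate and explain code

class DataAnalysisAgent(ResearchAgent):
    pass  # placeholder; a real version would analyse data

manager = HierarchicalAgent(StubLLM())
print(manager.process_request("Summarise recent trends in renewable energy."))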

When to Use Hierarchical Agents

Hierarchical agents suit large workflows that need delegation, escalation, and oversight: the manager handles planning and failure recovery while sub-agents focus on their specialities. The extra management layer adds latency and cost, so avoid it for simple, single-skill tasks.

3. Multi-Agent System Pattern

The Multi-Agent System pattern creates a collaborative environment where multiple autonomous agents interact to solve problems collectively.

Multi-Agent System Pattern Structure:

class MultiAgentSystem:
    def __init__(self, llm):
        self.llm = llm
        
        # Initialise agents with different roles and perspectives
        self.agents = {
            "domain_expert": Agent(llm, "domain_expert", "You are an expert in the subject matter with deep technical knowledge."),
            "critic": Agent(llm, "critic", "You critically analyse information, identifying flaws and limitations."),
            "creative": Agent(llm, "creative", "You think outside the box and generate innovative ideas and approaches."),
            "pragmatist": Agent(llm, "pragmatist", "You focus on practical implementation and real-world constraints."),
            "coordinator": Agent(llm, "coordinator", "You facilitate discussion and synthesise perspectives.")
        }
        
        # Shared environment for collaborative work
        self.shared_workspace = {
            "problem_statement": "",
            "discussion": [],
            "current_solution": "",
            "final_solution": ""
        }
    
    def solve_problem(self, problem_statement, max_iterations=3):
        """Solve a problem using the Multi-Agent System pattern."""
        # Initialise the workspace
        self.shared_workspace["problem_statement"] = problem_statement
        self.shared_workspace["discussion"] = []
        self.shared_workspace["current_solution"] = ""
        self.shared_workspace["final_solution"] = ""
        
        # Initial problem analysis by domain expert
        expert_analysis = self.agents["domain_expert"].generate_message(
            "Analyse the problem and provide initial thoughts",
            {"problem": problem_statement}
        )
        
        self.shared_workspace["discussion"].append({
            "agent": "domain_expert",
            "message": expert_analysis
        })
        
        # Collaborative iteration
        for i in range(max_iterations):
            # Each agent contributes based on current state
            for agent_name, agent in self.agents.items():
                # Skip coordinator until the end of each iteration
                if agent_name == "coordinator":
                    continue
                
                # Generate contribution based on current workspace
                contribution = agent.generate_message(
                    f"Contribute to solving the problem (iteration {i+1})",
                    self.shared_workspace
                )
                
                # Add to discussion
                self.shared_workspace["discussion"].append({
                    "agent": agent_name,
                    "message": contribution
                })
            
            # Coordinator synthesises progress and updates current solution
            synthesis = self.agents["coordinator"].generate_message(
                f"Synthesise the discussion and update the solution (iteration {i+1})",
                self.shared_workspace
            )
            
            self.shared_workspace["discussion"].append({
                "agent": "coordinator",
                "message": synthesis
            })
            
            # Update current solution
            solution_prompt = f"""
            Based on the coordinator's synthesis:
            {synthesis}
            
            Extract the current solution to the problem.
            """
            
            current_solution = self.llm.predict(solution_prompt)
            self.shared_workspace["current_solution"] = current_solution
        
        # Final solution refinement
        final_solution_prompt = f"""
        Problem: {problem_statement}
        
        Current solution after {max_iterations} iterations:
        {self.shared_workspace["current_solution"]}
        
        Full discussion:
        {self.shared_workspace["discussion"]}
        
        Create a final, polished solution that:
        1. Addresses the original problem comprehensively
        2. Incorporates the best insights from all agents
        3. Resolves any contradictions or open issues
        4. Is clear, coherent, and ready for implementation
        """
        
        final_solution = self.llm.predict(final_solution_prompt)
        self.shared_workspace["final_solution"] = final_solution
        
        return {
            "final_solution": final_solution,
            "discussion": self.shared_workspace["discussion"]
        }

class Agent:
    def __init__(self, llm, role, system_prompt):
        self.llm = llm
        self.role = role
        self.system_prompt = system_prompt
    
    def generate_message(self, task, context):
        """Generate a message based on the agent's role and current context."""
        prompt = f"""
        {self.system_prompt}
        
        Your role: {self.role}
        
        Task: {task}
        
        Current context:
        {context}
        
        Generate a contribution that:
        1. Reflects your unique role and perspective
        2. Builds on the current discussion
        3. Helps advance toward a solution
        4. Provides specific insights, critiques, or suggestions
        
        Your contribution:
        """
        
        return self.llm.predict(prompt)
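
A hypothetical usage sketch with the StubLLM placeholder; each iteration produces one contribution per non-coordinator agent plus a coordinator synthesis, so costs grow quickly with max_iterations:

system = MultiAgentSystem(StubLLM())
outcome = system.solve_problem("How can a small city reduce plastic waste?", max_iterations=1)
print(outcome["final_solution"])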

When to Use Multi-Agent Systems

Multi-agent systems shine on open-ended problems that benefit from debate and diverse perspectives, such as strategy, design reviews, and brainstorming. They are the most expensive pattern covered here, since every iteration involves a contribution from each agent.

Execution Design Patterns

Execution design patterns focus on how agents carry out tasks and handle the execution flow.

1. Plan-Execute-Reflect Pattern

The Plan-Execute-Reflect pattern breaks task execution into three distinct phases: planning the approach, executing the steps, and reflecting on the results.

Plan-Execute-Reflect Pattern Structure:

  1. Planning Phase: Analyse the task and create a detailed execution plan
  2. Execution Phase: Carry out the plan step by step, using appropriate tools
  3. Reflection Phase: Evaluate the results, identify improvements, and refine if needed
def plan_execute_reflect_pattern(llm, task, tools, max_reflection_iterations=1):
    """
    Implement the Plan-Execute-Reflect pattern for task execution.
    
    Args:
        llm: Language model for generation
        task: Task description
        tools: Dictionary of available tools
        max_reflection_iterations: Maximum number of reflection cycles
        
    Returns:
        Final result and execution record
    """
    execution_record = []
    
    # Phase 1: Planning
    planning_prompt = f"""
    Task: {task}
    
    Available tools:
    {', '.join(tools.keys())}
    
    Create a detailed step-by-step plan to accomplish this task.
    For each step, specify:
    1. The action to take
    2. Which tool to use (if any)
    3. Expected outcome
    
    Your plan should be comprehensive and consider potential challenges.
    """
    
    plan = llm.predict(planning_prompt)
    execution_record.append({"phase": "planning", "output": plan})
    
    # Phase 2: Execution
    execution_prompt = f"""
    Task: {task}
    
    Your plan:
    {plan}
    
    Execute this plan step by step. For each step:
    1. Describe the action you're taking
    2. If using a tool, specify the tool name and parameters
    3. After each action, describe the result before moving to the next step
    
    Format tool usage as:
    TOOL: [tool_name]
    PARAMS: [parameters in JSON format]
    
    Then wait for the tool result before continuing.
    """
    
    # Simulate execution with tool calls.
    # In a real implementation, this would be an interactive loop in which the
    # agent emits TOOL/PARAMS blocks, receives tool results, and continues to
    # the next step. For simplicity, we simulate the whole execution in a
    # single model call.
    execution_result = llm.predict(execution_prompt)
    execution_record.append({"phase": "execution", "output": execution_result})
    
    current_result = execution_result
    
    # Phase 3: Reflection and Refinement
    for i in range(max_reflection_iterations):
        reflection_prompt = f"""
        Task: {task}
        
        Original plan:
        {plan}
        
        Execution result:
        {current_result}
        
        Reflect on the execution by addressing:
        1. What worked well?
        2. What didn't work as expected?
        3. Were there any errors or issues?
        4. How could the approach be improved?
        5. Is the task fully completed or are additional steps needed?
        
        Provide a detailed reflection and specific recommendations for improvement.
        """
        
        reflection = llm.predict(reflection_prompt)
        execution_record.append({"phase": f"reflection_{i+1}", "output": reflection})
        
        # Check if refinement is needed
        refinement_check_prompt = f"""
        Based on this reflection:
        {reflection}
        
        Does the task require additional work or refinement?
        Answer with YES or NO, followed by a brief explanation.
        """
        
        refinement_needed = llm.predict(refinement_check_prompt)
        
        if "YES" in refinement_needed:
            refinement_prompt = f"""
            Task: {task}
            
            Original result:
            {current_result}
            
            Reflection:
            {reflection}
            
            Based on this reflection, refine and improve the result.
            Provide a complete, revised version that addresses the identified issues.
            """
            
            refined_result = llm.predict(refinement_prompt)
            execution_record.append({"phase": f"refinement_{i+1}", "output": refined_result})
            current_result = refined_result
        else:
            break
    
    # Final result
    return {
        "final_result": current_result,
        "execution_record": execution_record
    }
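
A hypothetical usage sketch; tools is a plain name-to-callable mapping (only the names are shown to the model in this simplified version), reusing dummy_search and StubLLM from earlier:

tools = {"search": dummy_search}
outcome = plan_execute_reflect_pattern(StubLLM(), "Draft a study plan for learning Python.", tools)
print(outcome["final_result"])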

When to Use Plan-Execute-Reflect

Use Plan-Execute-Reflect for multi-step tasks where an explicit upfront plan and a post-hoc quality check improve reliability, such as research workflows and tool-heavy automation. The execution record it produces also serves as a useful audit trail.

2. Try-Catch-Retry Pattern

The Try-Catch-Retry pattern implements robust error handling by detecting failures, diagnosing issues, and attempting alternative approaches.

Try-Catch-Retry Pattern Structure:

  1. Try: Attempt to execute a task or operation
  2. Catch: Detect and diagnose failures or errors
  3. Retry: Implement an alternative approach based on the error diagnosis
def try_catch_retry_pattern(llm, task, tools, max_retries=3):
    """
    Implement the Try-Catch-Retry pattern for robust task execution.
    
    Args:
        llm: Language model for generation
        task: Task description
        tools: Dictionary of available tools
        max_retries: Maximum number of retry attempts
        
    Returns:
        Result and execution record
    """
    execution_record = []
    
    # Initial attempt
    try_prompt = f"""
    Task: {task}
    
    Available tools:
    {', '.join(tools.keys())}
    
    Execute this task using the appropriate tools.
    Be thorough and careful in your approach.
    """
    
    attempt_result = llm.predict(try_prompt)
    execution_record.append({"phase": "initial_attempt", "output": attempt_result})
    
    # Check for success
    success_check_prompt = f"""
    Task: {task}
    
    Result:
    {attempt_result}
    
    Was this task completed successfully? Answer with:
    - SUCCESS if the task was completed correctly and completely
    - FAILURE if there were any errors, issues, or incomplete aspects
    
    Followed by a brief explanation of your assessment.
    """
    
    success_assessment = llm.predict(success_check_prompt)
    execution_record.append({"phase": "success_check", "output": success_assessment})
    
    # If successful, return the result
    if "SUCCESS" in success_assessment:
        return {
            "status": "success",
            "result": attempt_result,
            "execution_record": execution_record
        }
    
    # Retry loop
    current_result = attempt_result
    for retry_num in range(max_retries):
        # Catch: Diagnose the issue
        diagnosis_prompt = f"""
        Task: {task}
        
        Previous attempt:
        {current_result}
        
        Success assessment:
        {success_assessment}
        
        Diagnose the specific issues or errors in the previous attempt.
        Be detailed and precise about what went wrong and why.
        """
        
        diagnosis = llm.predict(diagnosis_prompt)
        execution_record.append({"phase": f"diagnosis_{retry_num+1}", "output": diagnosis})
        
        # Retry: Attempt an alternative approach
        retry_prompt = f"""
        Task: {task}
        
        Previous attempt:
        {current_result}
        
        Diagnosis of issues:
        {diagnosis}
        
        This is retry attempt #{retry_num+1}.
        
        Based on the diagnosis, implement an alternative approach to complete the task.
        Address all identified issues and be more careful in problematic areas.
        Provide a complete solution, not just the fixes.
        """
        
        retry_result = llm.predict(retry_prompt)
        execution_record.append({"phase": f"retry_{retry_num+1}", "output": retry_result})
        
        # Check if retry was successful
        retry_check_prompt = f"""
        Task: {task}
        
        Latest attempt:
        {retry_result}
        
        Was this task completed successfully? Answer with:
        - SUCCESS if the task was completed correctly and completely
        - FAILURE if there were any errors, issues, or incomplete aspects
        
        Followed by a brief explanation of your assessment.
        """
        
        retry_assessment = llm.predict(retry_check_prompt)
        execution_record.append({"phase": f"retry_check_{retry_num+1}", "output": retry_assessment})
        
        # If successful, return the result
        if "SUCCESS" in retry_assessment:
            return {
                "status": "success_after_retry",
                "retry_count": retry_num + 1,
                "result": retry_result,
                "execution_record": execution_record
            }
        
        # Update current result for next iteration
        current_result = retry_result
        success_assessment = retry_assessment
    
    # If we've exhausted retries, return the best attempt
    best_attempt_prompt = f"""
    Task: {task}
    
    After {max_retries} retry attempts, the task could not be completed perfectly.
    
    Execution record:
    {execution_record}
    
    Which attempt produced the best result, even if imperfect?
    Provide the attempt number and a brief explanation.
    """
    
    best_attempt_assessment = llm.predict(best_attempt_prompt)
    
    return {
        "status": "partial_success",
        "result": current_result,  # Last attempt
        "best_attempt_assessment": best_attempt_assessment,
        "execution_record": execution_record
    }
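
A hypothetical usage sketch with the earlier stubs; with a real model, the SUCCESS/FAILURE keyword check is what drives the retry loop:

outcome = try_catch_retry_pattern(
    StubLLM(),
    "Extract the totals from this sales summary.",
    {"search": dummy_search},
    max_retries=1
)
print(outcome["status"])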

When to Use Try-Catch-Retry

Try-Catch-Retry is appropriate whenever failures are likely but recoverable: flaky tools, ambiguous tasks, or outputs that must pass an explicit correctness check. Bound max_retries, since each retry adds both a diagnosis and a fresh attempt.

3. Progressive Refinement Pattern

The Progressive Refinement pattern creates outputs through iterative improvement, starting with a rough draft and gradually enhancing it.

Progressive Refinement Pattern Structure:

  1. Initial Draft: Create a basic version focusing on structure and core content
  2. Targeted Improvements: Enhance specific aspects in focused iterations
  3. Integration: Combine improvements into a cohesive whole
  4. Polishing: Make final adjustments for quality and coherence
def progressive_refinement_pattern(llm, task, refinement_aspects, max_iterations=4):
    """
    Implement the Progressive Refinement pattern for high-quality content creation.
    
    Args:
        llm: Language model for generation
        task: Task description (e.g., "Write an article about climate change")
        refinement_aspects: List of aspects to refine (e.g., ["clarity", "evidence", "structure"])
        max_iterations: Maximum number of refinement iterations
        
    Returns:
        Final refined content and refinement history
    """
    refinement_history = []
    
    # Step 1: Create initial draft
    draft_prompt = f"""
    Task: {task}
    
    Create an initial draft that focuses on:
    1. Basic structure and organisation
    2. Core content and main points
    3. Overall flow and coherence
    
    This is a first draft that will be refined, so focus on getting the fundamentals right
    rather than perfection.
    """
    
    initial_draft = llm.predict(draft_prompt)
    refinement_history.append({"stage": "initial_draft", "content": initial_draft})
    
    current_version = initial_draft
    
    # Step 2: Targeted improvements for each aspect
    for i, aspect in enumerate(refinement_aspects):
        if i >= max_iterations:
            break
            
        refinement_prompt = f"""
        Task: {task}
        
        Current version:
        {current_version}
        
        Focus on improving this specific aspect: {aspect}
        
        Guidelines for this refinement:
        1. Maintain the overall structure and content
        2. Make targeted improvements related to {aspect}
        3. Be specific and detailed in your changes
        4. Ensure changes integrate well with the existing content
        
        Provide a complete revised version with improvements to {aspect}.
        """
        
        refined_version = llm.predict(refinement_prompt)
        refinement_history.append({"stage": f"refine_{aspect}", "content": refined_version})
        
        # Update current version
        current_version = refined_version
    
    # Step 3: Final polishing
    polish_prompt = f"""
    Task: {task}
    
    Current version after targeted refinements:
    {current_version}
    
    Perform a final polish to:
    1. Ensure consistency throughout the content
    2. Improve transitions between sections
    3. Enhance overall flow and readability
    4. Fix any remaining issues or awkward phrasing
    5. Ensure the content fully addresses the original task
    
    Provide the final polished version.
    """
    
    final_version = llm.predict(polish_prompt)
    refinement_history.append({"stage": "final_polish", "content": final_version})
    
    return {
        "final_content": final_version,
        "refinement_history": refinement_history
    }
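
A hypothetical usage sketch; the refinement_aspects list determines both the number and the focus of the improvement passes:

outcome = progressive_refinement_pattern(
    StubLLM(),
    "Write an article about climate change adaptation.",
    refinement_aspects=["clarity", "evidence", "structure"]
)
print(outcome["final_content"])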

When to Use Progressive Refinement

Progressive Refinement works best for long-form or high-stakes content where quality dimensions such as clarity, evidence, and structure can be improved in separate passes. It trades extra model calls for a markedly more polished final output.

Interaction Design Patterns

Interaction design patterns focus on how agents communicate with users and other systems.

1. Guided Conversation Pattern

The Guided Conversation pattern structures interactions to lead users through complex processes with appropriate guidance and context.

Guided Conversation Pattern Structure:

class GuidedConversationAgent:
    def __init__(self, llm):
        self.llm = llm
        self.conversation_state = {
            "stage": "initial",
            "context": {},
            "history": [],
            "next_steps": []
        }
    
    def process_message(self, user_message):
        """Process a user message using the Guided Conversation pattern."""
        # Add user message to history
        self.conversation_state["history"].append({
            "role": "user",
            "content": user_message
        })
        
        # Analyse message and update state
        self._update_conversation_state(user_message)
        
        # Generate response based on current state
        response = self._generate_response()
        
        # Add response to history
        self.conversation_state["history"].append({
            "role": "agent",
            "content": response
        })
        
        return response
    
    def _update_conversation_state(self, user_message):
        """Update the conversation state based on user message and current stage."""
        current_stage = self.conversation_state["stage"]
        context = self.conversation_state["context"]
        history = self.conversation_state["history"]
        
        # Analyse the message in context
        analysis_prompt = f"""
        Current conversation stage: {current_stage}
        
        Conversation context:
        {context}
        
        Recent conversation history:
        {history[-5:] if len(history) > 5 else history}
        
        User message:
        {user_message}
        
        Analyse this message to:
        1. Determine what information was provided
        2. Identify what the user needs next
        3. Decide if we should move to a different conversation stage
        
        Format your response as JSON with these fields:
        {{
            "extracted_info": {{}},
            "next_stage": "string",
            "missing_info": ["string"],
            "next_steps": ["string"]
        }}
        """
        
        analysis_response = self.llm.predict(analysis_prompt)
        
        # In a real implementation, parse the JSON response
        # For simplicity, we'll use a mock analysis
        if current_stage == "initial":
            analysis = {
                "extracted_info": {"topic": "Extract from message"},
                "next_stage": "information_gathering",
                "missing_info": ["specific requirements", "timeline"],
                "next_steps": ["ask about requirements", "explain process"]
            }
        elif current_stage == "information_gathering":
            analysis = {
                "extracted_info": {"requirements": "Extract from message"},
                "next_stage": "solution_presentation",
                "missing_info": [],
                "next_steps": ["present solution options", "ask for preferences"]
            }
        else:
            analysis = {
                "extracted_info": {"preferences": "Extract from message"},
                "next_stage": "conclusion",
                "missing_info": [],
                "next_steps": ["summarise decisions", "outline next actions"]
            }
        
        # Update conversation state
        self.conversation_state["stage"] = analysis["next_stage"]
        self.conversation_state["next_steps"] = analysis["next_steps"]
        
        # Update context with extracted information
        for key, value in analysis["extracted_info"].items():
            self.conversation_state["context"][key] = value
    
    def _generate_response(self):
        """Generate a response based on the current conversation state."""
        stage = self.conversation_state["stage"]
        context = self.conversation_state["context"]
        history = self.conversation_state["history"]
        next_steps = self.conversation_state["next_steps"]
        
        # Generate response based on current stage and next steps
        response_prompt = f"""
        Current conversation stage: {stage}
        
        Conversation context:
        {context}
        
        Recent conversation history:
        {history[-5:] if len(history) > 5 else history}
        
        Next steps to guide the conversation:
        {next_steps}
        
        Generate a response that:
        1. Acknowledges the user's message
        2. Provides relevant information for the current stage
        3. Guides the conversation according to the next steps
        4. Maintains a helpful, conversational tone
        
        Your response should feel natural while subtly directing the conversation.
        """
        
        return self.llm.predict(response_prompt)
    
    def start_conversation(self, topic):
        """Start a new guided conversation on a specific topic."""
        # Reset conversation state
        self.conversation_state = {
            "stage": "initial",
            "context": {"topic": topic},
            "history": [],
            "next_steps": ["introduce purpose", "explain process", "ask initial questions"]
        }
        
        # Generate initial message
        initial_prompt = f"""
        You're starting a guided conversation on: {topic}
        
        Create an opening message that:
        1. Introduces the purpose of the conversation
        2. Briefly explains how the process will work
        3. Asks 1-2 initial questions to get started
        4. Sets a helpful, conversational tone
        
        The message should be welcoming while establishing a clear structure.
        """
        
        initial_message = self.llm.predict(initial_prompt)
        
        # Add to history
        self.conversation_state["history"].append({
            "role": "agent",
            "content": initial_message
        })
        
        return initial_message
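
A hypothetical usage sketch with the StubLLM placeholder; a conversation is started explicitly, and each user turn then advances the stage machine:

guide = GuidedConversationAgent(StubLLM())
print(guide.start_conversation("planning a product launch"))
print(guide.process_message("We want to launch in Q3 on a small budget."))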

When to Use Guided Conversation

Guided Conversation suits multi-step processes users cannot be expected to navigate alone: onboarding, troubleshooting, requirements gathering, and form-like data collection. The explicit stage model keeps the dialogue on track without feeling scripted.

2. Adaptive Response Pattern

The Adaptive Response pattern tailors agent responses based on user characteristics, context, and interaction history.

Adaptive Response Pattern Structure:

class AdaptiveResponseAgent:
    def __init__(self, llm):
        self.llm = llm
        
        # Initialise user model with default values
        self.user_model = {
            "expertise_level": "unknown",  # novice, intermediate, expert
            "communication_preference": "unknown",  # concise, detailed, visual
            "tone_preference": "unknown",  # formal, casual, technical
            "previous_topics": [],
            "interaction_patterns": {
                "response_to_technical": "unknown",
                "response_to_suggestions": "unknown",
                "typical_question_length": "unknown"
            }
        }
        
        self.conversation_history = []
    
    def process_message(self, user_message):
        """Process a user message using the Adaptive Response pattern."""
        # Add message to history
        self.conversation_history.append({
            "role": "user",
            "content": user_message
        })
        
        # Update user model based on new message
        self._update_user_model(user_message)
        
        # Generate adaptive response
        response = self._generate_adaptive_response(user_message)
        
        # Add response to history
        self.conversation_history.append({
            "role": "agent",
            "content": response
        })
        
        # Update model based on full interaction
        self._update_interaction_patterns(user_message, response)
        
        return response
    
    def _update_user_model(self, user_message):
        """Update the user model based on the latest message."""
        # Analyse message for user characteristics
        analysis_prompt = f"""
        Analyse this user message:
        "{user_message}"
        
        Current user model:
        {self.user_model}
        
        Identify:
        1. Indicators of expertise level (terminology, concepts, questions)
        2. Communication preferences (detail level, format, structure)
        3. Tone preferences (formal/casual, technical/simple)
        4. Topics mentioned or referenced
        
        Format your response as JSON with updates to the user model.
        Only include fields where you have new information.
        """
        
        analysis_response = self.llm.predict(analysis_prompt)
        
        # In a real implementation, parse the JSON response and update the model
        # For simplicity, we'll use a mock update based on message length
        if len(user_message) < 50:
            self.user_model["communication_preference"] = "concise"
        elif len(user_message) > 200:
            self.user_model["communication_preference"] = "detailed"
        
        # In a real implementation, topics extracted from the analysis
        # response would be appended to self.user_model["previous_topics"].
    
    def _generate_adaptive_response(self, user_message):
        """Generate a response adapted to the user model."""
        # Select appropriate response parameters
        expertise_level = self.user_model["expertise_level"]
        communication_pref = self.user_model["communication_preference"]
        tone_pref = self.user_model["tone_preference"]
        
        # Default values for unknown preferences
        if expertise_level == "unknown":
            expertise_level = "intermediate"
        if communication_pref == "unknown":
            communication_pref = "balanced"
        if tone_pref == "unknown":
            tone_pref = "casual"
        
        # Generate adaptive response
        response_prompt = f"""
        User message: "{user_message}"
        
        User model:
        - Expertise level: {expertise_level}
        - Communication preference: {communication_pref}
        - Tone preference: {tone_pref}
        - Previous topics: {self.user_model["previous_topics"]}
        
        Generate a response that is adapted to this user by:
        
        1. Matching their expertise level:
           - For novice: Explain concepts simply, avoid jargon
           - For intermediate: Balance explanations with advanced content
           - For expert: Use technical terminology, focus on nuance
        
        2. Matching their communication preference:
           - For concise: Be brief and to the point
           - For detailed: Provide comprehensive information
           - For visual: Describe visual elements or diagrams
        
        3. Matching their tone preference:
           - For formal: Use professional, structured language
           - For casual: Use conversational, friendly language
           - For technical: Focus on precise, technical language
        
        4. Referencing relevant previous topics when appropriate
        
        Your response:
        """
        
        return self.llm.predict(response_prompt)
    
    def _update_interaction_patterns(self, user_message, agent_response):
        """Update interaction patterns based on the full interaction."""
        # In a real implementation, this would analyse patterns over time
        # For simplicity, we'll use a mock update
        
        # Update typical question length
        if "?" in user_message:
            words = len(user_message.split())
            if words < 10:
                self.user_model["interaction_patterns"]["typical_question_length"] = "short"
            elif words < 30:
                self.user_model["interaction_patterns"]["typical_question_length"] = "medium"
            else:
                self.user_model["interaction_patterns"]["typical_question_length"] = "long"
        
        # Check if this was a technical response
        if "technical" in agent_response.lower() or "code" in agent_response.lower():
            # In a real implementation, would analyse user's next message for satisfaction
            self.user_model["interaction_patterns"]["response_to_technical"] = "positive"
    
    def get_user_model(self):
        """Return the current user model."""
        return self.user_model
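
A hypothetical usage sketch; in this simplified version the communication preference is inferred purely from message length:

assistant = AdaptiveResponseAgent(StubLLM())
print(assistant.process_message("What's a vector database?"))
print(assistant.get_user_model()["communication_preference"])  # "concise" for this short message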

When to Use Adaptive Response

Adaptive Response pays off in long-running assistants that serve users with varying expertise and preferences, where a one-size-fits-all tone either patronises experts or loses novices. Treat the user model as a set of evolving hypotheses rather than fixed facts.

Implementing Design Patterns in Practice

Successfully implementing design patterns requires careful consideration of your specific use case and requirements.

Pattern Selection Framework

How to Choose the Right Patterns:

  1. Identify Core Requirements: Determine the fundamental needs of your agent
  2. Analyse Task Characteristics: Consider complexity, domain, and interaction model
  3. Evaluate Pattern Tradeoffs: Assess benefits and limitations for your context
  4. Consider Pattern Combinations: Determine how patterns can work together
  5. Prototype and Test: Implement simplified versions to validate approach

Pattern Selection Matrix

If you need...             | Consider these patterns
---------------------------|-------------------------------------------------------------
Improved reasoning quality | Reflection, Tree-of-Thought, Verification
Complex task handling      | Controller-Worker, Hierarchical Agent, Plan-Execute-Reflect
Diverse perspectives       | Multi-Agent System, Tree-of-Thought
Error resilience           | Try-Catch-Retry, Verification, Reflection
Content quality            | Progressive Refinement, Reflection, Verification
User adaptation            | Adaptive Response, Guided Conversation
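
The matrix can be encoded directly in code. This is a minimal, hypothetical lookup helper (the requirement keys are our own labels); a real selector might score and rank patterns against weighted requirements instead:

PATTERN_MATRIX = {
    "reasoning_quality": ["Reflection", "Tree-of-Thought", "Verification"],
    "complex_tasks": ["Controller-Worker", "Hierarchical Agent", "Plan-Execute-Reflect"],
    "diverse_perspectives": ["Multi-Agent System", "Tree-of-Thought"],
    "error_resilience": ["Try-Catch-Retry", "Verification", "Reflection"],
    "content_quality": ["Progressive Refinement", "Reflection", "Verification"],
    "user_adaptation": ["Adaptive Response", "Guided Conversation"],
}

def suggest_patterns(needs):
    """Return de-duplicated candidate patterns for a list of requirement keys."""
    suggestions = []
    for need in needs:
        for pattern in PATTERN_MATRIX.get(need, []):
            if pattern not in suggestions:
                suggestions.append(pattern)
    return suggestions

print(suggest_patterns(["complex_tasks", "error_resilience"]))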

Pattern Composition

Design patterns are most powerful when combined to address complex requirements.

Common Pattern Combinations:

class CompositePatternAgent:
    def __init__(self, llm):
        self.llm = llm
        
        # Components for different patterns
        self.controller = ControllerComponent(llm)
        self.reflection = ReflectionComponent(llm)
        self.error_handler = ErrorHandlingComponent(llm)
        self.user_adapter = UserAdaptationComponent(llm)
    
    def process_request(self, user_request):
        """Process a request using multiple composed patterns."""
        # Adaptive Response: Analyse user and adapt approach
        user_profile = self.user_adapter.analyse_user(user_request)
        
        try:
            # Controller-Worker: Break down task and delegate
            execution_plan = self.controller.create_plan(user_request, user_profile)
            raw_result = self.controller.execute_plan(execution_plan)
            
            # Reflection: Improve the initial result
            improved_result = self.reflection.improve_result(
                user_request, raw_result, user_profile
            )
            
            # Final adaptive formatting
            final_response = self.user_adapter.format_for_user(
                improved_result, user_profile
            )
            
            return final_response
            
        except Exception as e:
            # Try-Catch-Retry: Handle errors
            return self.error_handler.handle_error(e, user_request, user_profile)

# Example component implementations
class ControllerComponent:
    def __init__(self, llm):
        self.llm = llm
        self.workers = {
            "research": self._research_worker,
            "writing": self._writing_worker,
            "calculation": self._calculation_worker
        }
    
    def create_plan(self, user_request, user_profile):
        """Create an execution plan based on the request and user profile."""
        # Implementation details omitted for brevity
        return {"steps": [{"worker": "research", "task": "Find information"}]}
    
    def execute_plan(self, plan):
        """Execute the plan by delegating to workers."""
        # Implementation details omitted for brevity
        return "Raw result from executing the plan"
    
    # Worker implementations omitted for brevity
    def _research_worker(self, task):
        return "Research results"
    
    def _writing_worker(self, task):
        return "Written content"
    
    def _calculation_worker(self, task):
        return "Calculation results"

class ReflectionComponent:
    def __init__(self, llm):
        self.llm = llm
    
    def improve_result(self, original_request, raw_result, user_profile):
        """Improve a result through reflection and refinement."""
        # Implementation details omitted for brevity
        return "Improved result after reflection"

class ErrorHandlingComponent:
    def __init__(self, llm):
        self.llm = llm
    
    def handle_error(self, error, original_request, user_profile):
        """Handle errors with appropriate retry strategies."""
        # Implementation details omitted for brevity
        return "Error recovery response"

class UserAdaptationComponent:
    def __init__(self, llm):
        self.llm = llm
    
    def analyse_user(self, user_request):
        """Analyse the user request to build a user profile."""
        # Implementation details omitted for brevity
        return {"expertise": "intermediate", "preferences": "detailed"}
    
    def format_for_user(self, content, user_profile):
        """Format content according to user preferences."""
        # Implementation details omitted for brevity
        return "User-adapted final response"

Next Steps in Your AI Journey

Now that you understand the key design patterns for AI agents, you're ready to build your own agents from scratch, applying these patterns to create sophisticated, reliable systems.

Key Takeaways from This Section:

  1. Design patterns give agent development a shared vocabulary and proven structures
  2. Cognitive patterns (Reflection, Tree-of-Thought, Verification) improve how agents reason
  3. Architectural patterns (Controller-Worker, Hierarchical, Multi-Agent) organise how agents are structured
  4. Execution patterns (Plan-Execute-Reflect, Try-Catch-Retry, Progressive Refinement) make task execution more reliable
  5. Interaction patterns (Guided Conversation, Adaptive Response) shape how agents communicate with users
  6. Patterns are most powerful when selected deliberately and composed to fit your requirements

In the next section, we'll dive into Building AI Agents from Scratch, where you'll learn how to implement these patterns in complete, functional agent systems.

Continue to Building an AI Agent from Scratch →