Basic Coding for AI

Essential programming skills to enhance your AI agent development capabilities

Why Learn to Code for AI Development?

While no-code platforms offer impressive capabilities, understanding basic coding unlocks a new level of customisation, control, and capability in AI agent development. Even minimal coding skills can dramatically expand what you can accomplish.

Key Insight

You don't need to become a full-stack developer to leverage code in AI development. A focused approach on specific AI-relevant coding skills yields the highest return on your learning investment.

The No-Code to Code Spectrum

Where You Should Focus Based on Your Goals:

  • Pure No-Code: using platforms as designed with existing integrations. Best for quick implementation and standard use cases.
  • Light Coding: basic API calls, simple scripts, minor customisations. Best for extending platform capabilities and custom integrations.
  • Moderate Coding: custom tools, data processing, advanced integrations. Best for specialised agents, unique workflows, and data manipulation.
  • Advanced Coding: custom models, complex systems, full-stack applications. Best for novel applications, research, and enterprise solutions.

This section focuses on the "Light to Moderate" coding skills that provide the highest ROI for most AI agent developers. These skills allow you to extend no-code platforms while avoiding the steep learning curve of advanced AI development.

Python: The Essential Language for AI

Python has emerged as the dominant language for AI development due to its readability, extensive libraries, and strong community support. Even basic Python skills can significantly enhance your AI capabilities.

Why Python for AI?

  • Readable syntax that is approachable even for beginners
  • An extensive ecosystem of libraries (requests, openai, pandas) covering most AI-related tasks
  • Strong community support, with abundant AI-focused tutorials, examples, and help

Python Fundamentals for AI Development

Focus on these core Python concepts that are most relevant to AI agent development:

1. Variables and Data Types

# Basic variable assignment
user_query = "Tell me about AI agents"
max_tokens = 500
temperature = 0.7
include_sources = True

# Common data types
# Strings - for text
model_name = "gpt-4"

# Integers and floats - for numerical values
max_results = 5
confidence_threshold = 0.85

# Booleans - for true/false values
is_verified = False

# Lists - for ordered collections
search_results = ["result1", "result2", "result3"]

# Dictionaries - for key-value pairs (extremely important for API work)
parameters = {
    "model": "gpt-4",
    "temperature": 0.7,
    "max_tokens": 500,
    "top_p": 1.0
}

2. Control Structures

# Conditional logic
if confidence_score > 0.9:
    response = "I'm highly confident that..."
elif confidence_score > 0.7:
    response = "I believe that..."
else:
    response = "I'm not sure, but..."

# Loops for iteration
for result in search_results:
    print(f"Processing result: {result}")
    
# While loops for condition-based iteration
# (make_api_call() is a placeholder for your own API helper)
api_calls, max_attempts, success = 0, 3, False
while api_calls < max_attempts and not success:
    try:
        response = make_api_call()
        success = True
    except Exception as e:
        api_calls += 1
        print(f"Attempt {api_calls} failed: {e}")

3. Functions

# Basic function definition
def generate_response(prompt, model="gpt-3.5-turbo", temperature=0.7):
    """
    Generate a response using an LLM.
    
    Args:
        prompt (str): The input prompt
        model (str): The model to use
        temperature (float): Controls randomness (0.0-1.0)
        
    Returns:
        str: The generated response
    """
    # Function implementation here
    response = "This is a dummy response."
    return response

# Using the function
answer = generate_response(
    prompt="Explain AI agents simply", 
    temperature=0.5
)

Python Learning Shortcut

Instead of taking a comprehensive Python course, focus on learning through AI-specific examples. Start with simple scripts that call AI APIs, then gradually add complexity as you build practical projects.

Recommended approach: Learn enough Python to understand and modify existing AI code examples, then expand your knowledge as needed for specific projects.

API Integration: The Core Skill

The most valuable coding skill for AI agent development is the ability to integrate with APIs (Application Programming Interfaces). This allows your agents to communicate with AI models and other services.

Understanding API Basics

Key API Concepts:

  • Endpoints: URLs that accept requests for specific services
  • HTTP Methods: GET, POST, PUT, DELETE for different operations
  • Headers: Metadata for requests, including authentication
  • Request Body: Data sent to the API (often in JSON format)
  • Response: Data returned by the API (typically JSON)
  • Status Codes: Numeric codes indicating success or failure (200 OK, 404 Not Found, etc.)
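As a rough sketch of how these concepts map onto Python code, the following builds (without sending) a request using the requests library; api.example.com is a placeholder endpoint, not a real service:

```python
import requests

# Map each concept above onto a prepared (unsent) request.
req = requests.Request(
    method="POST",                                     # HTTP method
    url="https://api.example.com/generate",            # endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # headers (authentication)
    json={"prompt": "Explain AI agents"},              # request body (JSON)
)
prepared = req.prepare()
print(prepared.method, prepared.url)

# Sending it with requests.Session().send(prepared) would return a Response
# object carrying the status code (.status_code) and response body (.json()).
```

Preparing without sending is handy for inspecting exactly what your code would transmit before pointing it at a live API.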

Making API Calls in Python

The requests library is the standard for making HTTP requests in Python:

import requests
import os  # needed for os.getenv below

# Fetch the API key securely, e.g. from an environment variable
api_key = os.getenv("YOUR_API_KEY", "default_key_if_not_set")

# Basic GET request
try:
    response = requests.get('https://api.example.com/data')
    response.raise_for_status() # Raises an HTTPError for bad responses (4XX or 5XX)
    data = response.json()
    print(data)
except requests.exceptions.RequestException as e:
    print(f"Request failed: {e}")

# POST request with JSON data
headers = {
    'Authorization': f'Bearer {api_key}',
    'Content-Type': 'application/json'
}

data = {
    'prompt': 'Explain AI agents',
    'max_tokens': 500,
    'temperature': 0.7
}

try:
    response = requests.post(
        'https://api.example.com/generate',
        headers=headers,
        json=data
    )
    response.raise_for_status() # Check for HTTP errors
    result = response.json()
    # Safely access nested keys
    generated_text = result.get('choices', [{}])[0].get('text', 'No text found')
    print(generated_text)
except requests.exceptions.RequestException as e:
    print(f"Request failed: {e}")
except (KeyError, IndexError) as e:
    print(f"Failed to parse response: {e}")

Working with OpenAI's API

The OpenAI API is one of the most commonly used for AI agent development. Here's how to use it with the official Python library:

import openai
import os
from dotenv import load_dotenv

# Load API key from .env file (best practice for security)
load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

def get_completion(prompt, model="gpt-3.5-turbo", temperature=0.7):
    """
    Get a completion from the OpenAI API
    """
    try:
        messages = [{"role": "user", "content": prompt}]
        response = openai.ChatCompletion.create(
            model=model,
            messages=messages,
            temperature=temperature,
        )
        return response.choices[0].message["content"]
    except openai.error.OpenAIError as e:
        print(f"OpenAI API error: {e}")
        return None
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
        return None

# Example usage
prompt = "Explain how AI agents work in simple terms."
response = get_completion(prompt)
if response:
    print(response)

# More complex conversation with system message
def chat_completion(messages, model="gpt-3.5-turbo", temperature=0.7):
    """
    Get a chat completion from the OpenAI API
    """
    try:
        response = openai.ChatCompletion.create(
            model=model,
            messages=messages,
            temperature=temperature,
        )
        return response.choices[0].message["content"]
    except openai.error.OpenAIError as e:
        print(f"OpenAI API error: {e}")
        return None
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
        return None

# Example chat usage
messages = [
    {"role": "system", "content": "You are a helpful AI assistant specialised in explaining technical concepts simply."},
    {"role": "user", "content": "What is the difference between supervised and unsupervised learning?"}
]

response = chat_completion(messages)
if response:
    print(response)
    # Add the response to the conversation history for follow-up
    messages.append({"role": "assistant", "content": response})
    # Example follow-up question
    messages.append({"role": "user", "content": "Give me an example of unsupervised learning."})
    follow_up_response = chat_completion(messages)
    if follow_up_response:
        print(follow_up_response)

Security Alert: API Keys

Never hardcode API keys directly into your scripts. Use environment variables (like shown with os.getenv() and dotenv) or secure key management systems.

Working with Data (JSON)

AI development heavily involves exchanging data with APIs, typically in JSON (JavaScript Object Notation) format. Python's built-in json library makes this easy.

Key JSON Concepts

  • Objects: key-value pairs in braces ({}), which map to Python dictionaries
  • Arrays: ordered lists in brackets ([]), which map to Python lists
  • Values: strings, numbers, booleans (true/false), null, or nested objects and arrays

Parsing and Generating JSON in Python

import json

# Example JSON string received from an API
json_string = '''{
    "agent_name": "Research Assistant",
    "status": "active",
    "capabilities": ["web_search", "summarization", "email_report"],
    "parameters": {
        "max_search_results": 10,
        "summary_length": "concise"
    }
}'''

# Parse JSON string into a Python dictionary
data = json.loads(json_string)
print(f"Agent Name: {data['agent_name']}")
print(f"Capabilities: {data['capabilities']}")
print(f"Summary Length: {data['parameters']['summary_length']}")

# Modify the data
data['status'] = "inactive"
data['parameters']['summary_length'] = "detailed"

# Convert Python dictionary back into a JSON string
updated_json_string = json.dumps(data, indent=4) # indent for pretty printing
print("\nUpdated JSON:")
print(updated_json_string)

Handling Potential Errors

When accessing data from JSON, always anticipate that keys might be missing or have unexpected types. Use .get() for safer dictionary access and include error handling (try-except blocks).

# Safer dictionary access
agent_name = data.get('agent_name', 'Unknown Agent') # Provides default value

try:
    # Access potentially missing key
    report_format = data['settings']['report_format']
except KeyError:
    print("Report format setting not found, using default.")
    report_format = "standard"
except TypeError:
    print("Settings format incorrect, using default report format.")
    report_format = "standard"

Essential Python Libraries for AI

Beyond built-in features, specific Python libraries greatly simplify common AI tasks:

  • requests: making HTTP requests (API calls); simplifies interacting with web services
  • openai: interacting with OpenAI APIs; the official library handles the complexity for you
  • dotenv: managing environment variables; securely handles API keys and configuration
  • json: working with JSON data; the standard format for API communication
  • pandas: data manipulation and analysis; handles structured data, useful for RAG
  • BeautifulSoup4 / Scrapy: web scraping; extracting information from websites

For basic agent development extending no-code platforms, focusing on requests, openai, dotenv, and json is often sufficient.

Practical Example: A Simple Python Agent Tool

Let's create a simple Python script that could be used as a custom tool within an agent workflow (e.g., called by a no-code platform that supports custom code execution).

Objective: Currency Converter Tool

A tool that takes an amount, a source currency, and a target currency, then returns the converted amount using a free currency conversion API.

import requests
import json

def convert_currency(amount, from_currency, to_currency):
    """
    Converts an amount from one currency to another using a free API.
    Note: Uses exchangerate-api.com which offers a free tier.
    Requires sign-up for an API key.
    Replace YOUR_API_KEY with your actual key or use environment variable.
    """
    # In a real application, get API key securely (e.g., os.getenv)
    api_key = "YOUR_API_KEY" 
    base_url = f"https://v6.exchangerate-api.com/v6/{api_key}/latest/{from_currency}"
    
    try:
        response = requests.get(base_url)
        response.raise_for_status() # Check for HTTP errors
        data = response.json()
        
        if data['result'] == 'success':
            conversion_rate = data['conversion_rates'].get(to_currency)
            if conversion_rate:
                converted_amount = amount * conversion_rate
                return {
                    "success": True,
                    "from_currency": from_currency,
                    "to_currency": to_currency,
                    "original_amount": amount,
                    "converted_amount": round(converted_amount, 2),
                    "rate": conversion_rate
                }
            else:
                return {"success": False, "error": f"Target currency '{to_currency}' not found."}
        else:
            # Handle API-specific errors if documentation provides details
            error_type = data.get('error-type', 'Unknown API error')
            return {"success": False, "error": f"API Error: {error_type}"}
            
    except requests.exceptions.RequestException as e:
        return {"success": False, "error": f"Network error: {e}"}
    except json.JSONDecodeError:
        return {"success": False, "error": "Failed to decode API response."}
    except Exception as e:
        # Catch any other unexpected errors
        return {"success": False, "error": f"An unexpected error occurred: {e}"}

# Example usage:
result = convert_currency(100, "USD", "GBP")
print(json.dumps(result, indent=4))

result_error = convert_currency(100, "USD", "XYZ") # Invalid currency
print(json.dumps(result_error, indent=4))

# This script can be saved as a .py file and potentially called by an agent.
# Input parameters (amount, from_currency, to_currency) would typically be 
# passed to the script, and the JSON output would be returned to the agent flow.

Integration with No-Code Platforms

Platforms like Zapier, Make.com, or even some specialized agent builders allow executing custom Python or JavaScript code. You could deploy this function (e.g., as a serverless function) and have your no-code agent call its endpoint, or paste the script directly if the platform supports it.

Next Steps: Understanding LLMs

With a foundation in basic coding and API interaction, you're ready to dive deeper into the core technology powering modern AI agents: Large Language Models (LLMs).

Key Takeaways from This Section:

  • Basic Python coding significantly enhances AI agent development capabilities
  • Focus on Python fundamentals, API integration (using requests), and JSON handling
  • Securely manage API keys using environment variables
  • Key libraries include requests, openai, dotenv, and json
  • Simple Python scripts can act as custom tools within larger agent workflows

The next section explores LLM essentials, providing the knowledge needed to effectively leverage language models in your agents, whether built with code or no-code platforms.

Continue to LLM Essentials →

API Key Security

Never hardcode API keys in your scripts or commit them to version control! Use environment variables or secure vaults to store sensitive credentials.

The .env file approach shown above is a simple method for local development:

  1. Create a file named .env in your project directory
  2. Add your API keys: OPENAI_API_KEY=sk-your-key-here
  3. Add .env to your .gitignore file
  4. Use the python-dotenv package to load the variables

Working with JSON

JSON (JavaScript Object Notation) is the standard data format for most AI APIs. Understanding how to work with JSON is essential for AI development.

Working with JSON in Python

import json

# Converting Python dictionary to JSON string
data = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me about AI agents."}
    ],
    "temperature": 0.7,
    "max_tokens": 500
}

json_string = json.dumps(data, indent=2)
print(json_string)

# Converting JSON string back to Python dictionary
parsed_data = json.loads(json_string)
print(parsed_data["messages"][1]["content"])

# Reading JSON from a file
with open('config.json', 'r') as file:
    config = json.load(file)
    
# Writing JSON to a file
with open('output.json', 'w') as file:
    json.dump(data, file, indent=2)

Handling Complex JSON Responses

AI APIs often return complex nested JSON structures. Here's how to navigate and extract the data you need:

# Example of parsing a complex OpenAI API response
response_json = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1677858242,
    "model": "gpt-3.5-turbo-0613",
    "usage": {
        "prompt_tokens": 13,
        "completion_tokens": 7,
        "total_tokens": 20
    },
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "This is the response text."
            },
            "finish_reason": "stop",
            "index": 0
        }
    ]
}

# Extracting specific information
response_text = response_json["choices"][0]["message"]["content"]
token_usage = response_json["usage"]["total_tokens"]
model_used = response_json["model"]

print(f"Response: {response_text}")
print(f"Token usage: {token_usage}")
print(f"Model: {model_used}")

# Safely extracting data with get() to avoid KeyError
# This returns None if the key doesn't exist
function_call = response_json.get("choices", [{}])[0].get("message", {}).get("function_call")

# Or with a default value
function_call = response_json.get("choices", [{}])[0].get("message", {}).get("function_call", "No function call")

JSON Debugging Tip

When working with complex JSON structures, use json.dumps(data, indent=2) to print the data in a readable format for debugging. This simple technique can save hours of troubleshooting.
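A minimal, self-contained demonstration of the tip:

```python
import json

# Pretty-print a nested structure to inspect its shape while debugging.
nested = {"choices": [{"message": {"role": "assistant", "content": "Hi"}}]}
pretty = json.dumps(nested, indent=2)
print(pretty)
```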

Error Handling Best Practices

Robust error handling is critical for AI agents, as they often interact with external services that can fail in various ways.

Basic Try-Except Pattern

import time
import openai

def get_completion(prompt):
    try:
        # Code that might raise an exception
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message["content"]
    except openai.error.RateLimitError:
        # Handle rate limiting
        print("Rate limit exceeded. Waiting before retry...")
        time.sleep(60)  # Wait 60 seconds
        return get_completion(prompt)  # Retry
    except openai.error.APIError as e:
        # Handle API error
        print(f"OpenAI API error: {e}")
        return "I'm having trouble connecting to my knowledge source."
    except Exception as e:
        # Catch-all for unexpected errors
        print(f"Unexpected error: {e}")
        return "I encountered an unexpected error."

Advanced Error Handling with Retries

import time
import random  # used for the jitter term in the backoff delay
import openai
from functools import wraps

def retry_with_exponential_backoff(
    initial_delay=1,
    exponential_base=2,
    jitter=True,
    max_retries=10
):
    """Retry a function with exponential backoff."""
    
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            # Initialize variables
            num_retries = 0
            delay = initial_delay
            
            # Loop until a successful response or max_retries
            while True:
                try:
                    return func(*args, **kwargs)
                
                # Retry on specific errors
                except (openai.error.APIError,
                        openai.error.RateLimitError,
                        openai.error.ServiceUnavailableError,
                        openai.error.Timeout) as e:
                    
                    # Check if max retries reached
                    num_retries += 1
                    if num_retries > max_retries:
                        raise Exception(f"Maximum number of retries ({max_retries}) exceeded.")
                    
                    # Increment the delay
                    delay *= exponential_base * (1 + jitter * random.random())
                    
                    # Log the retry
                    print(f"Error: {e}. Retrying in {delay:.2f} seconds...")
                    
                    # Sleep and retry
                    time.sleep(delay)
                
                # Raise exceptions we don't want to retry
                except Exception as e:
                    raise e
                    
        return wrapper
    
    return decorator

# Usage
@retry_with_exponential_backoff()
def get_completion_with_retries(prompt, model="gpt-3.5-turbo"):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message["content"]

Common API Errors to Handle

  • Rate limit errors: too many requests in a short window; wait and retry with backoff
  • Authentication errors: missing or invalid API key; check your credentials and environment variables
  • Timeouts: the request took too long to complete; retry, ideally with exponential backoff
  • Service unavailable errors: the provider is temporarily down; retry later
  • General API errors: server-side problems; log the error and fail gracefully

Building a Complete AI Agent in Python

Let's put everything together to build a simple but functional AI agent in Python. This example demonstrates how to create a research assistant that can search for information and provide summaries.

Project Structure

research_agent/
├── .env                  # API keys and configuration
├── agent.py              # Main agent code
├── tools/
│   ├── __init__.py
│   ├── search.py         # Web search functionality
│   └── summarize.py      # Text summarization
└── requirements.txt      # Dependencies

1. Setting Up Dependencies

First, let's create our requirements.txt file:

openai==0.28.0
requests==2.28.2
python-dotenv==1.0.0
duckduckgo-search==3.8.3

Install the dependencies:

pip install -r requirements.txt

2. Environment Configuration (.env)

OPENAI_API_KEY=your_openai_api_key_here

3. Creating the Search Tool (tools/search.py)

from duckduckgo_search import DDGS

def web_search(query, num_results=5):
    """
    Search the web for information using DuckDuckGo.
    
    Args:
        query (str): The search query
        num_results (int): Number of results to return
        
    Returns:
        list: List of search results with title, link, and snippet
    """
    try:
        with DDGS() as ddgs:
            results = list(ddgs.text(query, max_results=num_results))
            return results
    except Exception as e:
        print(f"Search error: {e}")
        return []

4. Creating the Summarization Tool (tools/summarize.py)

import openai
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

def summarize_text(text, max_tokens=150):
    """
    Summarize text using OpenAI's GPT model.
    
    Args:
        text (str): The text to summarize
        max_tokens (int): Maximum length of the summary
        
    Returns:
        str: The summarized text
    """
    try:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a helpful assistant that summarizes information accurately and concisely."},
                {"role": "user", "content": f"Please summarize the following text in about {max_tokens} words:\n\n{text}"}
            ],
            max_tokens=max_tokens,
            temperature=0.5
        )
        return response.choices[0].message["content"]
    except Exception as e:
        print(f"Summarization error: {e}")
        return "Failed to generate summary."

5. Building the Main Agent (agent.py)

import openai
import os
import json
from dotenv import load_dotenv
from tools.search import web_search
from tools.summarize import summarize_text

# Load environment variables
load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

class ResearchAgent:
    def __init__(self):
        self.conversation_history = []
        self.add_message("system", "You are a helpful research assistant. You can search for information and provide summaries on various topics.")
    
    def add_message(self, role, content):
        """Add a message to the conversation history."""
        self.conversation_history.append({"role": role, "content": content})
    
    def get_completion(self, temperature=0.7):
        """Get a completion from the OpenAI API based on the conversation history."""
        try:
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=self.conversation_history,
                temperature=temperature
            )
            return response.choices[0].message["content"]
        except Exception as e:
            print(f"Error getting completion: {e}")
            return "I encountered an error while processing your request."
    
    def search_and_summarize(self, query, num_results=3):
        """Search for information and provide a summary."""
        # Inform the user that we're searching
        self.add_message("assistant", f"Searching for information about '{query}'...")
        
        # Perform the search
        search_results = web_search(query, num_results)
        
        if not search_results:
            self.add_message("assistant", "I couldn't find any information on that topic.")
            return "I couldn't find any information on that topic."
        
        # Compile the search results
        compiled_info = f"Search results for '{query}':\n\n"
        for i, result in enumerate(search_results, 1):
            compiled_info += f"{i}. {result['title']}\n"
            compiled_info += f"   URL: {result['href']}\n"
            compiled_info += f"   {result['body']}\n\n"
        
        # Summarize the information
        summary_prompt = f"Based on the following search results, provide a comprehensive summary about '{query}':\n\n{compiled_info}"
        self.add_message("user", summary_prompt)
        
        summary = self.get_completion()
        self.add_message("assistant", summary)
        
        return summary
    
    def process_query(self, user_query):
        """Process a user query and determine the appropriate action."""
        self.add_message("user", user_query)
        
        # Check if this is a research question
        if any(keyword in user_query.lower() for keyword in ["search", "find", "look up", "research", "information about", "tell me about"]):
            # Extract the search query
            search_query = user_query.split("about", 1)[-1].strip() if "about" in user_query else user_query
            return self.search_and_summarize(search_query)
        else:
            # For non-research questions, just use the standard completion
            response = self.get_completion()
            self.add_message("assistant", response)
            return response

# Example usage
if __name__ == "__main__":
    agent = ResearchAgent()
    
    while True:
        user_input = input("\nYour question (or 'exit' to quit): ")
        if user_input.lower() == 'exit':
            break
            
        response = agent.process_query(user_input)
        print(f"\nAgent: {response}")

Running the Agent

To run the agent, execute the agent.py file:

python agent.py

Example Interaction:

Your question: Tell me about AI agents and their applications

Agent: Searching for information about 'AI agents and their applications'...

Agent: AI agents are autonomous or semi-autonomous software entities that can perceive their environment, make decisions, and take actions to achieve specific goals. They combine artificial intelligence technologies like machine learning, natural language processing, and decision-making algorithms to perform tasks with varying degrees of independence.

Key applications of AI agents include:

  1. Customer Service: AI agents serve as virtual assistants and chatbots that handle customer inquiries, provide support, and process requests across various industries.
  2. Personal Assistants: Virtual assistants like Siri, Alexa, and Google Assistant help users manage schedules, answer questions, control smart home devices, and perform various tasks.
  3. Business Process Automation: AI agents automate repetitive tasks in areas like data entry, document processing, and workflow management, increasing efficiency and reducing errors.
  4. Healthcare: Medical AI agents assist with patient triage, appointment scheduling, medication management, and even preliminary diagnoses.
  5. Finance: AI agents perform fraud detection, algorithmic trading, personalized financial advice, and automated accounting tasks.
  6. E-commerce: Recommendation systems, inventory management, and personalized shopping assistants enhance the online shopping experience.

The most advanced AI agents incorporate multiple AI technologies and can handle complex tasks with minimal human intervention. They typically feature capabilities like natural language understanding, learning from interactions, integrating with various systems, and making decisions based on available data.

As AI technology continues to evolve, we're seeing the emergence of more sophisticated multi-agent systems where multiple AI agents collaborate to solve complex problems, further expanding their potential applications across industries.

Extending Your AI Agent Capabilities

Once you've mastered the basics, you can extend your agent with additional capabilities:

1. Adding Memory

import json
import os

class AgentMemory:
    def __init__(self, memory_file="agent_memory.json"):
        self.memory_file = memory_file
        self.memory = self.load_memory()
    
    def load_memory(self):
        """Load memory from file or create new if doesn't exist."""
        if os.path.exists(self.memory_file):
            try:
                with open(self.memory_file, 'r') as f:
                    return json.load(f)
            except (json.JSONDecodeError, IOError):
                # Corrupt or unreadable file: start with fresh memory
                return {"conversations": {}, "facts": {}}
        else:
            return {"conversations": {}, "facts": {}}
    
    def save_memory(self):
        """Save memory to file."""
        with open(self.memory_file, 'w') as f:
            json.dump(self.memory, f, indent=2)
    
    def store_conversation(self, user_id, conversation):
        """Store a conversation history."""
        if user_id not in self.memory["conversations"]:
            self.memory["conversations"][user_id] = []
        
        self.memory["conversations"][user_id].append(conversation)
        self.save_memory()
    
    def get_conversations(self, user_id, limit=5):
        """Get recent conversations for a user."""
        if user_id in self.memory["conversations"]:
            return self.memory["conversations"][user_id][-limit:]
        return []
    
    def store_fact(self, category, key, value):
        """Store a fact in memory."""
        if category not in self.memory["facts"]:
            self.memory["facts"][category] = {}
        
        self.memory["facts"][category][key] = value
        self.save_memory()
    
    def get_fact(self, category, key):
        """Retrieve a fact from memory."""
        if category in self.memory["facts"] and key in self.memory["facts"][category]:
            return self.memory["facts"][category][key]
        return None
    
    def get_facts_by_category(self, category):
        """Get all facts in a category."""
        if category in self.memory["facts"]:
            return self.memory["facts"][category]
        return {}

2. Adding More Tools

# Weather tool example
import os
import requests

def get_weather(location):
    """
    Get current weather for a location.
    
    Args:
        location (str): City name or coordinates
        
    Returns:
        dict: Weather information
    """
    api_key = os.getenv("WEATHER_API_KEY")
    url = f"https://api.openweathermap.org/data/2.5/weather?q={location}&appid={api_key}&units=metric"
    
    try:
        response = requests.get(url)
        data = response.json()
        
        if response.status_code == 200:
            weather_info = {
                "location": data["name"],
                "country": data["sys"]["country"],
                "temperature": data["main"]["temp"],
                "feels_like": data["main"]["feels_like"],
                "condition": data["weather"][0]["main"],
                "description": data["weather"][0]["description"],
                "humidity": data["main"]["humidity"],
                "wind_speed": data["wind"]["speed"]
            }
            return weather_info
        else:
            return {"error": f"Error: {data.get('message', 'Unknown error')}"}
    except Exception as e:
        return {"error": f"Failed to get weather: {str(e)}"}

3. Implementing Tool Selection Logic

import openai

def select_tool(query, available_tools):
    """
    Select the appropriate tool based on the user query.
    
    Args:
        query (str): User's query
        available_tools (dict): Dictionary of available tools
        
    Returns:
        tuple: (tool_name, tool_function)
    """
    # Create a prompt for the LLM to select the appropriate tool
    tools_description = "\n".join([f"- {name}: {func.__doc__.strip().split('.')[0]}" 
                                  for name, func in available_tools.items()])
    
    prompt = f"""
    Based on the user query, select the most appropriate tool from the following options:
    
    {tools_description}
    
    User query: "{query}"
    
    Respond with just the tool name, nothing else.
    """
    
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
        max_tokens=20
    )
    
    selected_tool = response.choices[0].message["content"].strip().lower()
    
    # Match the selected tool with available tools
    for tool_name, tool_func in available_tools.items():
        if tool_name.lower() in selected_tool:
            return tool_name, tool_func
    
    # Default to search if no match
    return "search", available_tools.get("search")

Next Steps in Your AI Journey

Now that you've learned the essential coding skills for AI agent development, you're ready to explore more advanced topics in the Core Technologies Phase.

In the next section, we'll dive into LLM Essentials, where you'll learn how to get the most out of large language models for your AI agents.
