This cookbook shows how to add Maxim observability and tracing to your LlamaIndex-based applications. You'll learn how to instrument LlamaIndex agents, create function tools, handle multi-modal requests, and coordinate multi-agent workflows, with full observability in the Maxim dashboard.

Prerequisites

Before starting, you'll need a Maxim account with an API key and a log repository ID, an OpenAI API key, and a Python environment with the Maxim SDK, LlamaIndex (including its OpenAI LLM integration), and python-dotenv installed.

1. Set Up Environment Variables

Create a .env file in your project root:
# Maxim API Configuration
MAXIM_API_KEY=your_maxim_api_key_here
MAXIM_LOG_REPO_ID=your_maxim_repo_id_here

# OpenAI Configuration
OPENAI_API_KEY=your_openai_api_key_here

2. Import Required Libraries

import os
import asyncio
from dotenv import load_dotenv
from maxim import Config, Maxim
from maxim.logger import LoggerConfig
from maxim.logger.llamaindex import instrument_llamaindex
from llama_index.core.agent import FunctionAgent
from llama_index.core.agent.workflow import AgentWorkflow
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

3. Initialize Maxim and Instrument LlamaIndex

# Load environment variables from .env file
load_dotenv()

# Initialize Maxim logger
maxim = Maxim(Config(api_key=os.getenv("MAXIM_API_KEY")))
logger = maxim.logger(LoggerConfig(id=os.getenv("MAXIM_LOG_REPO_ID")))

# Instrument LlamaIndex with Maxim observability
# Set debug=True to see detailed logs during development
instrument_llamaindex(logger, debug=True)

print("✅ Maxim instrumentation enabled for LlamaIndex")

4. Create Simple Function Agent

Build a calculator agent with custom tools:
# Define calculator tools
def add_numbers(a: float, b: float) -> float:
    """Add two numbers together."""
    return a + b

def multiply_numbers(a: float, b: float) -> float:
    """Multiply two numbers together."""
    return a * b

def divide_numbers(a: float, b: float) -> float:
    """Divide first number by second number."""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

# Create function tools
add_tool = FunctionTool.from_defaults(fn=add_numbers)
multiply_tool = FunctionTool.from_defaults(fn=multiply_numbers)
divide_tool = FunctionTool.from_defaults(fn=divide_numbers)

# Initialize LLM
llm = OpenAI(model="gpt-4o-mini", temperature=0)

# Create FunctionAgent
agent = FunctionAgent(
    tools=[add_tool, multiply_tool, divide_tool],
    llm=llm,
    verbose=True,
    system_prompt="""You are a helpful calculator assistant.
    Use the provided tools to perform mathematical calculations.
    Always explain your reasoning step by step."""
)
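Before the agent exercises these tools through the LLM, it's worth confirming the functions themselves behave as expected. The snippet below repeats the three functions so it runs standalone, then checks the same arithmetic the agent query in the next step will perform via tool calls:

```python
# Sanity-check the tool functions directly, before handing them to the agent.
# (Redefined here so this snippet runs on its own.)
def add_numbers(a: float, b: float) -> float:
    """Add two numbers together."""
    return a + b

def multiply_numbers(a: float, b: float) -> float:
    """Multiply two numbers together."""
    return a * b

def divide_numbers(a: float, b: float) -> float:
    """Divide first number by second number."""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

assert add_numbers(15, 25) == 40
assert multiply_numbers(40, 2) == 80
assert divide_numbers(80, 8) == 10.0

# The zero-divisor guard should raise, not return:
try:
    divide_numbers(1, 0)
except ValueError as e:
    print(f"Guard works: {e}")
```

Catching bad inputs here is cheaper than debugging them through an LLM trace later.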

5. Test Function Agent

async def test_function_agent():
    print("🔍 Testing FunctionAgent with Maxim observability...")
    
    query = "What is (15 + 25) multiplied by 2, then divided by 8?"
    
    print(f"\n📝 Query: {query}")
    
    # This will be automatically logged by Maxim instrumentation
    response = await agent.run(query)
    
    print(f"\n🤖 Response: {response}")
    print("\n✅ Check your Maxim dashboard for detailed trace information!")

# Run the async function
await test_function_agent()
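The top-level `await` above assumes a notebook environment. In a plain Python script there is no running event loop, so wrap the entry point with `asyncio.run` instead. A minimal sketch, with a stand-in coroutine in place of the actual `agent.run` call so it runs anywhere:

```python
import asyncio

async def run_query(query: str) -> str:
    # Stand-in for `await agent.run(query)` so this snippet is self-contained.
    await asyncio.sleep(0)  # yield to the event loop, as a real agent call would
    return f"handled: {query}"

async def main():
    response = await run_query("What is (15 + 25) multiplied by 2, then divided by 8?")
    print(response)

if __name__ == "__main__":
    asyncio.run(main())
```

In your script, replace `run_query` with the real `agent.run` and keep the `asyncio.run(main())` entry point.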

6. Multi-Modal Agent Setup

Create an agent that can handle both text and images:
from llama_index.core.llms import ChatMessage, ImageBlock, TextBlock
import requests
from PIL import Image
import io
import base64

# Tool for image analysis
def describe_image_content(description: str) -> str:
    """Analyze and describe what's in an image based on the model's vision."""
    return f"Image analysis complete: {description}"

# Math tools for the agent
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

def multiply(a: int, b: int) -> int:
    """Multiply two numbers together."""
    return a * b

# Create multi-modal agent with vision-capable model
multimodal_llm = OpenAI(model="gpt-4o-mini")  # Vision-capable model

multimodal_agent = FunctionAgent(
    tools=[add, multiply, describe_image_content],  # plain callables are accepted and wrapped as tools
    llm=multimodal_llm,
    system_prompt="You are a helpful assistant that can analyze images and perform calculations."
)
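The section imports `base64` and `io` for working with local images. One portable option is to inline local image bytes as a `data:` URL wherever an image URL is expected; whether a given model accepts data URLs varies, and LlamaIndex's `ImageBlock` can also take a local path, so treat this as one approach rather than the required route. The bytes below are a stand-in, not a real image:

```python
import base64

# Stand-in bytes (a PNG magic header plus padding), not a real image file.
image_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16

def to_data_url(image_bytes: bytes, mimetype: str = "image/png") -> str:
    """Encode raw image bytes as a data: URL usable where an image URL is expected."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mimetype};base64,{encoded}"

url = to_data_url(image_bytes)
print(url[:30])  # prints the prefix: data:image/png;base64,...
```

In practice you'd read the bytes with `open(path, "rb").read()` and pass the resulting URL into `ImageBlock(url=...)`.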

7. Test Multi-Modal Agent

async def test_multimodal_agent():
    print("🔍 Testing Multi-Modal Agent with Maxim observability...")

    try:
        # Example with image URL
        msg = ChatMessage(
            role="user",
            blocks=[
                TextBlock(text="What do you see in this image? If there are numbers, perform calculations."),
                ImageBlock(url="https://example.com/math-equation.jpg"),  # Replace with actual image URL
            ],
        )
        response = await multimodal_agent.run(msg)
        print(f"\n🤖 Multi-Modal Response: {response}")

    except Exception as e:
        print(f"Note: Multi-modal features require actual image files. Error: {e}")
        print("The agent structure is set up correctly for when you have images to process!")

    print("\n✅ Check Maxim dashboard for multi-modal agent traces!")

# Run the test
await test_multimodal_agent()

8. Multi-Agent Workflow

Create a sophisticated multi-agent system for research and analysis:
# Research agent tools
def research_topic(topic: str) -> str:
    """Research a given topic and return key findings."""
    # Mock research results - in production, this would call real APIs
    research_data = {
        "climate change": "Climate change refers to long-term shifts in global temperatures and weather patterns, primarily caused by human activities since the 1800s.",
        "renewable energy": "Renewable energy comes from sources that are naturally replenishing like solar, wind, hydro, and geothermal power.",
        "artificial intelligence": "AI involves creating computer systems that can perform tasks typically requiring human intelligence.",
        "sustainability": "Sustainability involves meeting present needs without compromising the ability of future generations to meet their needs."
    }

    topic_lower = topic.lower()
    for key, info in research_data.items():
        if key in topic_lower:
            return f"Research findings on {topic}: {info} Additional context includes recent developments and policy implications."

    return f"Research completed on {topic}. This is an emerging area requiring further investigation and analysis."

# Analysis agent tools
def analyze_data(research_data: str) -> str:
    """Analyze research data and provide insights."""
    if "climate change" in research_data.lower():
        return "Analysis indicates climate change requires immediate action through carbon reduction, renewable energy adoption, and international cooperation."
    elif "renewable energy" in research_data.lower():
        return "Analysis shows renewable energy is becoming cost-competitive with fossil fuels and offers long-term economic and environmental benefits."
    elif "artificial intelligence" in research_data.lower():
        return "Analysis reveals AI has transformative potential across industries but requires careful consideration of ethical implications and regulation."
    else:
        return "Analysis suggests this topic has significant implications requiring strategic planning and stakeholder engagement."

# Report writing agent tools
def write_report(analysis: str, topic: str) -> str:
    """Write a comprehensive report based on analysis."""
    return f"""
═══════════════════════════════════════
COMPREHENSIVE RESEARCH REPORT: {topic.upper()}
═══════════════════════════════════════

EXECUTIVE SUMMARY:
{analysis}

KEY FINDINGS:
- Evidence-based analysis indicates significant implications
- Multiple stakeholder perspectives must be considered
- Implementation requires coordinated approach
- Long-term monitoring and evaluation necessary

RECOMMENDATIONS:
1. Develop comprehensive strategy framework
2. Engage key stakeholders early in process
3. Establish clear metrics and milestones
4. Create feedback mechanisms for continuous improvement
5. Allocate appropriate resources and timeline

NEXT STEPS:
- Schedule stakeholder consultations
- Develop detailed implementation plan
- Establish monitoring and evaluation framework
- Begin pilot program if applicable

This report provides a foundation for informed decision-making and strategic planning.
"""

9. Create Multi-Agent Workflow

# Initialize LLM
llm = OpenAI(model="gpt-4o-mini", temperature=0)

# Create individual agents
research_agent = FunctionAgent(
    name="research_agent",
    description="This agent researches a given topic and returns key findings.",
    tools=[FunctionTool.from_defaults(fn=research_topic)],
    llm=llm,
    system_prompt="You are a research specialist. Use the research tool to gather comprehensive information on requested topics."
)

analysis_agent = FunctionAgent(
    name="analysis_agent",
    description="This agent analyzes research data and provides actionable insights.",
    tools=[FunctionTool.from_defaults(fn=analyze_data)],
    llm=llm,
    system_prompt="You are a data analyst. Analyze research findings and provide actionable insights."
)

report_agent = FunctionAgent(
    name="report_agent",
    description="This agent creates comprehensive, well-structured reports based on analysis.",
    tools=[FunctionTool.from_defaults(fn=write_report)],
    llm=llm,
    system_prompt="You are a report writer. Create comprehensive, well-structured reports based on analysis."
)

# Create AgentWorkflow
multi_agent_workflow = AgentWorkflow(
    agents=[research_agent, analysis_agent, report_agent],
    root_agent="research_agent"
)

10. Test Multi-Agent Workflow

async def test_agent_workflow():
    print("🔍 Testing AgentWorkflow with Maxim observability...")
    
    query = """I need a comprehensive report on renewable energy.
    Please research the current state of renewable energy,
    analyze the key findings, and create a structured report
    with recommendations for implementation."""
    
    print(f"\n📝 Query: {query}")
    print("🔄 This will coordinate multiple agents...")
    
    # This will create a complex trace showing:
    # - Multi-agent coordination
    # - Agent handoffs and communication
    # - Sequential tool execution
    # - Individual agent performances
    response = await multi_agent_workflow.run(query)
    
    print(f"\n🤖 Multi-Agent Response:\n{response}")
    print("\n✅ Check Maxim dashboard for comprehensive multi-agent workflow traces!")

# Run the async function
await test_agent_workflow()

11. Advanced Configuration

Custom System Prompts

# Create specialized agents with custom prompts
financial_agent = FunctionAgent(
    name="financial_agent",
    description="Specialized in financial analysis and calculations.",
    tools=[add_tool, multiply_tool, divide_tool],
    llm=llm,
    system_prompt="""You are a financial analyst with expertise in:
    - Financial calculations and projections
    - Risk assessment and analysis
    - Market trend evaluation
    - Investment recommendations
    
    Always provide detailed explanations and consider multiple scenarios."""
)

Error Handling

async def safe_agent_run(agent, query):
    """Run agent with error handling and logging."""
    try:
        response = await agent.run(query)
        return response
    except Exception as e:
        print(f"Error running agent: {e}")
        return f"Error: {str(e)}"

# Usage
response = await safe_agent_run(agent, "Calculate 100 / 0")
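For transient failures such as rate limits or timeouts, a retry wrapper with exponential backoff is often more useful than returning an error string. A minimal sketch, using a stand-in flaky coroutine in place of `agent.run`:

```python
import asyncio

async def run_with_retries(coro_factory, attempts: int = 3, base_delay: float = 0.01):
    """Retry an async operation with exponential backoff between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return await coro_factory()
        except Exception as e:
            if attempt == attempts:
                raise  # out of attempts: surface the last error
            delay = base_delay * (2 ** (attempt - 1))
            print(f"Attempt {attempt} failed ({e}); retrying in {delay:.2f}s")
            await asyncio.sleep(delay)

# Demo: fails twice, then succeeds.
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(asyncio.run(run_with_retries(flaky)))  # → ok
```

To use it with the agent, pass `lambda: agent.run(query)` as the factory. Note that every retried attempt still produces its own trace in Maxim, which is useful for spotting flaky upstream calls.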

12. Production Considerations

Resource Cleanup

async def main():
    try:
        # Your agent operations here
        response = await multi_agent_workflow.run("Your query here")
        print(response)
    finally:
        # Ensure proper cleanup
        maxim.cleanup()

# Run the main function
await main()

Environment Validation

def validate_environment():
    """Validate that all required environment variables are set."""
    required_vars = ["MAXIM_API_KEY", "MAXIM_LOG_REPO_ID", "OPENAI_API_KEY"]
    
    for var in required_vars:
        if not os.getenv(var):
            raise ValueError(f"{var} environment variable is required")
    
    print("✅ All environment variables are set correctly")

# Call validation
validate_environment()
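A variant that reports every missing variable at once, rather than failing on the first, gives faster feedback when several are unset. The demo at the bottom uses throwaway variable names so it can run without real credentials:

```python
import os

def missing_env_vars(required):
    """Return the subset of required environment variables that are unset or empty."""
    return [var for var in required if not os.getenv(var)]

def validate_environment():
    required = ["MAXIM_API_KEY", "MAXIM_LOG_REPO_ID", "OPENAI_API_KEY"]
    missing = missing_env_vars(required)
    if missing:
        raise ValueError(f"Missing environment variables: {', '.join(missing)}")
    print("✅ All environment variables are set correctly")

# Demo with variables we control rather than real credentials:
os.environ["DEMO_VAR"] = "set"
print(missing_env_vars(["DEMO_VAR", "DEFINITELY_NOT_SET_123"]))
```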

13. Observability with Maxim

All agent interactions, tool calls, and multi-agent workflows are automatically traced and can be visualized in your Maxim dashboard. This provides deep insights into:
  • Agent Performance: Monitor individual agent execution times and success rates
  • Tool Usage: Track which tools are used most frequently and their effectiveness
  • Multi-Agent Coordination: Visualize agent handoffs and workflow orchestration
  • Error Analysis: Identify and debug issues across the entire agent pipeline
  • Cost Optimization: Monitor token usage and optimize for cost efficiency

14. Visualize in Maxim Dashboard

After running your LlamaIndex application:
  • Log in to your Maxim Dashboard
  • Navigate to your repository
  • View detailed traces including:
    • Function agent interactions and tool calls
    • Multi-modal request processing
    • Multi-agent workflow orchestration
    • Performance metrics and costs
    • Error logs and debugging information

For more details, see the LlamaIndex documentation and the Maxim Python SDK documentation.
