Monitor Your First Agent

A step-by-step guide to instrumenting your first AI agent with AgenticAnts in under 10 minutes.

Prerequisites

Before you begin, make sure you have:

  • AgenticAnts account - Sign up at agenticants.ai
  • API Key - Get your API key from the dashboard
  • Python 3.8+ or Node.js 16+
  • Basic knowledge of AI frameworks (LangChain, OpenAI, etc.)
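
Before running the examples, export the API key from your dashboard as AGENTICANTS_API_KEY; every SDK snippet in this guide reads it from that environment variable. A quick sanity check you can drop at the top of a script:

import os

# The SDK examples in this guide read the key from this environment variable
if not os.getenv("AGENTICANTS_API_KEY"):
    raise RuntimeError("Set AGENTICANTS_API_KEY before running the examples")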

Step 1: Install AgenticAnts SDK

Python Installation

pip install agenticants

Node.js Installation

npm install @agenticants/sdk

Step 2: Initialize AgenticAnts

Python Setup

import os
from agenticants import AgenticAnts
 
# Initialize AgenticAnts
ants = AgenticAnts(api_key=os.getenv('AGENTICANTS_API_KEY'))
 
print("AgenticAnts initialized successfully!")

Node.js Setup

import { AgenticAnts } from '@agenticants/sdk'
 
// Initialize AgenticAnts
const ants = new AgenticAnts({
  apiKey: process.env.AGENTICANTS_API_KEY
})
 
console.log('AgenticAnts initialized successfully!')

Step 3: Create Your First Agent

Let's create a simple customer support agent using LangChain:

Python Example

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from agenticants.integrations import langchain
 
# Enable automatic instrumentation
langchain.auto_instrument(ants)
 
# Create your agent
llm = OpenAI(temperature=0.7)
 
prompt = PromptTemplate(
    input_variables=["question"],
    template="You are a helpful customer support agent. Answer this question: {question}"
)
 
def customer_support_agent(question: str):
    # Create a trace for this interaction
    trace = ants.trace.create(
        name="customer-support",
        input=question,
        metadata={
            "agent_type": "customer_support",
            "model": "gpt-3.5-turbo",
            "temperature": 0.7
        }
    )
    
    try:
        # Process the question
        response = llm(prompt.format(question=question))
        
        # Complete the trace
        trace.complete(
            output=response,
            metadata={
                "response_length": len(response),
                "success": True
            }
        )
        
        return response
        
    except Exception as error:
        # Log the error
        trace.error(error=str(error))
        raise error
 
# Test your agent
response = customer_support_agent("How do I reset my password?")
print(f"Agent response: {response}")

Node.js Example

import { OpenAI } from 'langchain/llms/openai'
import { PromptTemplate } from 'langchain/prompts'
import { AgenticAntsCallbackHandler } from '@agenticants/langchain'
 
// Initialize OpenAI with AgenticAnts callback
const llm = new OpenAI({
  temperature: 0.7,
  callbacks: [new AgenticAntsCallbackHandler(ants)]
})
 
const prompt = PromptTemplate.fromTemplate(
  "You are a helpful customer support agent. Answer this question: {question}"
)
 
async function customerSupportAgent(question: string) {
  // Create a trace for this interaction
  const trace = await ants.trace.create({
    name: 'customer-support',
    input: question,
    metadata: {
      agentType: 'customer_support',
      model: 'gpt-3.5-turbo',
      temperature: 0.7
    }
  })
  
  try {
    // Process the question
    const response = await llm.call(await prompt.format({ question }))
    
    // Complete the trace
    await trace.complete({
      output: response,
      metadata: {
        responseLength: response.length,
        success: true
      }
    })
    
    return response
    
  } catch (error) {
    // Log the error
    await trace.error({ error: error.message })
    throw error
  }
}
 
// Test your agent
const response = await customerSupportAgent("How do I reset my password?")
console.log(`Agent response: ${response}`)

Step 4: View Results in Dashboard

  1. Open AgenticAnts Dashboard - Go to dashboard.agenticants.ai

  2. View Traces - Click on "Traces" to see your agent interactions

  3. Analyze Performance - Check metrics like:

    • Response time
    • Token usage
    • Success rate
    • Error rate

  4. Monitor Costs - View cost breakdown by:

    • Model used
    • Token consumption
    • Time period
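
All of these numbers are also available programmatically through the SDK's metrics API, covered in Step 7. For a quick check from code:

# Pull the same metrics shown in the dashboard (see Step 7 for details)
metrics = ants.metrics.get_agent_metrics(
    agent_name="customer-support",
    period="last_7_days"
)
print(f"Requests: {metrics.total_requests}, error rate: {metrics.error_rate}%")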

Step 5: Add More Monitoring

Track User Interactions

from datetime import datetime

def enhanced_customer_support_agent(question: str, user_id: str, session_id: str):
    trace = ants.trace.create(
        name="customer-support",
        input=question,
        metadata={
            "user_id": user_id,
            "session_id": session_id,
            "agent_type": "customer_support",
            "timestamp": datetime.now().isoformat()
        }
    )
    
    # Add spans for different steps
    with trace.span("question_analysis") as span:
        # Analyze the question
        question_type = analyze_question_type(question)
        span.set_metadata({"question_type": question_type})
    
    with trace.span("response_generation") as span:
        # Generate response
        response = llm(prompt.format(question=question))
        span.set_metadata({"response_length": len(response)})
    
    with trace.span("quality_check") as span:
        # Check response quality
        quality_score = check_response_quality(response)
        span.set_metadata({"quality_score": quality_score})
    
    trace.complete(
        output=response,
        metadata={
            "question_type": question_type,
            "quality_score": quality_score,
            "success": True
        }
    )
    
    return response
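
The example above calls analyze_question_type and check_response_quality, which are not part of the SDK. Minimal placeholder implementations like the following keep the snippet runnable; swap in your own classification and evaluation logic:

def analyze_question_type(question: str) -> str:
    # Naive keyword-based classification, for illustration only
    lowered = question.lower()
    if "password" in lowered or "login" in lowered:
        return "account_access"
    if "refund" in lowered or "billing" in lowered:
        return "billing"
    return "general"

def check_response_quality(response: str) -> float:
    # Crude heuristic: longer, non-empty answers score higher (0.0 to 1.0)
    if not response:
        return 0.0
    return min(len(response) / 500, 1.0)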

Add Error Handling

async function robustCustomerSupportAgent(question: string, userId: string) {
  const trace = await ants.trace.create({
    name: 'customer-support',
    input: question,
    metadata: {
      userId,
      agentType: 'customer_support'
    }
  })
  
  try {
    // Validate input
    if (!question || question.length < 3) {
      throw new Error('Question too short')
    }
    
    // Generate response
    const response = await llm.call(await prompt.format({ question }))
    
    // Validate response
    if (!response || response.length < 10) {
      throw new Error('Response too short')
    }
    
    await trace.complete({
      output: response,
      metadata: {
        success: true,
        responseLength: response.length
      }
    })
    
    return response
    
  } catch (error) {
    await trace.error({
      error: error.message,
      metadata: {
        errorType: error.constructor.name,
        success: false
      }
    })
    
    // Return fallback response
    return "I'm sorry, I encountered an issue. Please try again or contact support."
  }
}

Step 6: Set Up Alerts

Create Performance Alerts

# Set up alerts for your agent
ants.alerts.create({
    "name": "High Error Rate",
    "condition": "error_rate > 5%",
    "window": "5m",
    "channels": ["email", "slack"],
    "severity": "warning"
})
 
ants.alerts.create({
    "name": "Slow Response Time",
    "condition": "p95_latency > 3000ms",
    "window": "10m",
    "channels": ["email"],
    "severity": "info"
})

Create Cost Alerts

// Set up cost alerts
await ants.alerts.create({
  name: 'High Daily Cost',
  condition: 'daily_cost > 100',
  window: '1d',
  channels: ['email', 'slack'],
  severity: 'warning'
})
 
await ants.alerts.create({
  name: 'Monthly Budget Alert',
  condition: 'monthly_cost > 2000',
  window: '1d',
  channels: ['email'],
  severity: 'critical'
})

Step 7: Monitor and Optimize

View Key Metrics

# Get performance metrics for your agent
metrics = ants.metrics.get_agent_metrics(
    agent_name="customer-support",
    period="last_7_days"
)
 
print(f"Total requests: {metrics.total_requests}")
print(f"Average response time: {metrics.avg_response_time}ms")
print(f"Error rate: {metrics.error_rate}%")
print(f"Total cost: ${metrics.total_cost}")
print(f"Success rate: {metrics.success_rate}%")

Analyze Trends

// Get performance trends
const trends = await ants.metrics.getTrends({
  agent: 'customer-support',
  period: 'last_30_days',
  granularity: 'daily'
})
 
console.log('Performance trends:')
trends.forEach(day => {
  console.log(`${day.date}: ${day.requests} requests, ${day.avgLatency}ms avg`)
})

Common Issues and Solutions

Issue: "API Key not found"

Solution: Make sure your API key is set in environment variables:

export AGENTICANTS_API_KEY="your_api_key_here"

Issue: "No traces appearing in dashboard"

Solution: Check that you're calling trace.complete() or trace.error() to finish the trace.
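
One way to guarantee this is to structure your agent so every code path finishes the trace. A minimal sketch, reusing the ants, llm, and prompt objects from Steps 2 and 3:

def traced_agent(question: str):
    trace = ants.trace.create(name="customer-support", input=question)
    try:
        response = llm(prompt.format(question=question))
    except Exception as error:
        # Finish the trace even on failure so it still appears in the dashboard
        trace.error(error=str(error))
        raise
    else:
        trace.complete(output=response)
        return response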

Issue: "High latency"

Solution:

  • Check your network connection
  • Consider using a different model
  • Implement caching for repeated queries (see the sketch below)
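
For the caching suggestion, a simple in-process cache keyed on the question avoids repeating the model call (and the trace) for identical queries. A minimal sketch using functools.lru_cache around the Step 3 agent; a production setup would more likely use a shared cache such as Redis with an expiry policy:

from functools import lru_cache

@lru_cache(maxsize=1024)
def cached_customer_support_agent(question: str) -> str:
    # Identical questions return the cached answer instead of triggering a new model call
    return customer_support_agent(question)

# First call hits the model; the second returns instantly from the cache
print(cached_customer_support_agent("How do I reset my password?"))
print(cached_customer_support_agent("How do I reset my password?"))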

Issue: "High costs"

Solution:

  • Use smaller models for simple tasks
  • Implement response caching
  • Optimize prompts to reduce token usage (see the sketch below)
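
For the prompt optimization suggestion, a terser template combined with a cap on completion length cuts token usage on both sides of the call. A rough sketch; the template wording and max_tokens value are illustrative:

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Cap completion length and keep the instruction terse to reduce token usage
lean_llm = OpenAI(temperature=0.7, max_tokens=150)

lean_prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer this customer support question briefly: {question}"
)

def lean_agent(question: str) -> str:
    return lean_llm(lean_prompt.format(question=question))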

Next Steps

Now that you have your first agent monitored, here's what to explore next:

  1. Advanced Monitoring - Learn about multi-agent systems
  2. Cost Optimization - Reduce costs with our optimization guide
  3. Production Deployment - Deploy to production with best practices
  4. Integration Guides - Connect with LangChain or other frameworks

Congratulations! 🎉 You've successfully instrumented your first AI agent with AgenticAnts. You now have complete visibility into your agent's performance, costs, and behavior.