LangChain Integration
Integrate AgenticAnts with LangChain for automatic tracing of your LLM applications.
Overview
LangChain is one of the most popular frameworks for building LLM applications. AgenticAnts provides seamless integration through callbacks and auto-instrumentation.
Features
- Automatic tracing of all LangChain operations
- Chain execution tracking (Sequential, Router, etc.)
- LLM call monitoring with token usage and costs
- Tool usage tracking
- Memory and context tracking
- Multi-agent workflows
- Retrieval operations (vector stores, documents)
Installation
npm install @agenticants/sdk @agenticants/langchain langchain
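The Python examples on this page use the companion Python package. The package name below is an assumption; check your AgenticAnts onboarding guide for the exact name:
pip install agenticants langchain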
Quick Start
Auto-Instrumentation
The easiest way to get started:
import { AgenticAnts } from '@agenticants/sdk'
import { autoInstrument } from '@agenticants/langchain'
const ants = new AgenticAnts({ apiKey: process.env.AGENTICANTS_API_KEY })
// Enable auto-instrumentation
autoInstrument(ants)
// Now all LangChain calls are automatically traced!
import { ChatOpenAI } from 'langchain/chat_models/openai'
import { HumanMessage } from 'langchain/schema'
const llm = new ChatOpenAI({ modelName: 'gpt-4' })
const response = await llm.call([new HumanMessage('What is AI?')])
// Automatically traced in AgenticAnts!
Callback Handler
For more control, use the callback handler:
import { AgenticAnts } from '@agenticants/sdk'
import { AgenticAntsCallbackHandler } from '@agenticants/langchain'
import { ChatOpenAI } from 'langchain/chat_models/openai'
import { HumanMessage } from 'langchain/schema'
const ants = new AgenticAnts({ apiKey: process.env.AGENTICANTS_API_KEY })
const llm = new ChatOpenAI({
  modelName: 'gpt-4',
  callbacks: [new AgenticAntsCallbackHandler(ants, {
    traceName: 'customer-support',
    metadata: {
      environment: 'production',
      version: '1.0.0'
    }
  })]
})
const response = await llm.call([new HumanMessage('Help me with my order')])
Advanced Usage
Chains
Track complex chains:
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.vectorstores import Pinecone
from langchain.embeddings import OpenAIEmbeddings
from agenticants.integrations.langchain import AgenticAntsCallbackHandler
# Set up components
llm = ChatOpenAI()
vectorstore = Pinecone.from_existing_index('my-index', OpenAIEmbeddings())
# Create chain with AgenticAnts callback (`ants` is your initialized AgenticAnts client)
handler = AgenticAntsCallbackHandler(ants, trace_name='rag-chain')
chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    callbacks=[handler]
)
# Run chain - fully traced!
result = chain({
    "question": "What are the product features?",
    "chat_history": []
})
# AgenticAnts captures:
# - Question asked
# - Documents retrieved
# - LLM prompt and response
# - Total tokens and cost
# - Execution time per step
Agents
Monitor LangChain agents:
import { AgentExecutor, ZeroShotAgent } from 'langchain/agents'
import { ChatOpenAI } from 'langchain/chat_models/openai'
import { SerpAPI } from 'langchain/tools'
import { AgenticAntsCallbackHandler } from '@agenticants/langchain'
const llm = new ChatOpenAI({ temperature: 0 })
const tools = [new SerpAPI()]
const handler = new AgenticAntsCallbackHandler(ants, {
  traceName: 'research-agent',
  metadata: { agentType: 'zero-shot' }
})
const agent = ZeroShotAgent.fromLLMAndTools(llm, tools)
const executor = new AgentExecutor({
  agent,
  tools,
  callbacks: [handler]
})
const result = await executor.call({
  input: "What's the weather in San Francisco?"
})
// AgenticAnts tracks:
// - Agent planning steps
// - Tool calls (SerpAPI)
// - LLM reasoning
// - Final answer
Memory
Track conversation memory:
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain
memory = ConversationBufferMemory()
chain = ConversationChain(
    llm=llm,
    memory=memory,
    callbacks=[handler]
)
# Each conversation turn is traced
chain.predict(input="Hi, I'm looking for a laptop")
chain.predict(input="What are some good options under $1000?")
chain.predict(input="Tell me more about the first one")
# AgenticAnts shows:
# - Full conversation history
# - Memory usage
# - Context provided to LLM
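For TypeScript projects, the same wiring looks like this; BufferMemory is LangChain JS's counterpart to ConversationBufferMemory, and `llm` and `handler` are reused from the earlier examples:
import { ConversationChain } from 'langchain/chains'
import { BufferMemory } from 'langchain/memory'
// Memory is attached to the chain; the handler traces each turn
const chain = new ConversationChain({
  llm,
  memory: new BufferMemory(),
  callbacks: [handler]
})
await chain.call({ input: "Hi, I'm looking for a laptop" })
await chain.call({ input: 'What are some good options under $1000?' })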
LangGraph Integration
For LangGraph (stateful multi-actor applications):
from typing import TypedDict
from langgraph.graph import StateGraph
from agenticants.integrations.langchain import AgenticAntsCallbackHandler
# Define your graph state (example shape) and build the graph
class AgentState(TypedDict):
    topic: str
    draft: str
workflow = StateGraph(AgentState)
# Add nodes (each is a function that takes the state and returns updates)
workflow.add_node("researcher", research_node)
workflow.add_node("writer", write_node)
workflow.add_node("editor", edit_node)
# Add edges and set entry/finish points (required before compiling)
workflow.set_entry_point("researcher")
workflow.add_edge("researcher", "writer")
workflow.add_edge("writer", "editor")
workflow.set_finish_point("editor")
# Compile with AgenticAnts tracing
app = workflow.compile()
handler = AgenticAntsCallbackHandler(ants, trace_name='content-pipeline')
# Run with tracing
result = app.invoke(
    {"topic": "AI agents"},
    config={"callbacks": [handler]}
)
# AgenticAnts visualizes:
# - Complete graph execution
# - Each node's input/output
# - State transitions
# - Timing per node
What Gets Tracked
LLM Calls
{
  type: 'llm',
  model: 'gpt-4',
  prompt: 'What is AI?',
  completion: 'AI is...',
  tokens: {
    prompt: 15,
    completion: 150,
    total: 165
  },
  cost: 0.00495,
  latency: 1250, // ms
  temperature: 0.7,
  maxTokens: 500
}
Chain Execution
{
    'type': 'chain',
    'chain_type': 'ConversationalRetrievalChain',
    'input': 'What are the features?',
    'output': 'The features include...',
    'steps': [
        {'name': 'retrieval', 'docs': 5, 'duration': 200},
        {'name': 'llm', 'tokens': 350, 'duration': 1500}
    ]
}
Tool Usage
{
  type: 'tool',
  name: 'search',
  input: 'weather in SF',
  output: '68°F and sunny',
  duration: 450 // ms
}
Best Practices
1. Name Your Traces
handler = AgenticAntsCallbackHandler(
    ants,
    trace_name='customer-support-bot',  # Descriptive name
    metadata={
        'channel': 'web',
        'version': '2.1.0'
    }
)
2. Add Rich Metadata
const handler = new AgenticAntsCallbackHandler(ants, {
  metadata: {
    userId: 'user_123',
    sessionId: 'session_abc',
    customerTier: 'enterprise',
    feature: 'qa-bot'
  }
})
3. Handle Errors
try:
    result = chain.run(query, callbacks=[handler])
except Exception as error:
    # Error is automatically captured by AgenticAnts
    logger.error(f"Chain failed: {error}")
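The same practice in TypeScript; this sketch reuses the `executor` and `handler` from the agents example above, and `query` stands in for your input:
try {
  const result = await executor.call({ input: query })
  console.log(result.output)
} catch (error) {
  // The handler attached to the executor reports the failure to AgenticAnts
  console.error('Agent run failed:', error)
  throw error
}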
4. Sample Appropriately
// Only trace in production for high-value requests
const shouldTrace = (request) => {
  if (process.env.NODE_ENV !== 'production') return false
  if (request.userTier === 'enterprise') return true
  return Math.random() < 0.1
}
if (shouldTrace(request)) {
  llm.callbacks = [new AgenticAntsCallbackHandler(ants)]
}
Example Projects
RAG System
Complete retrieval-augmented generation:
from langchain.chains import RetrievalQA
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
# Load and process documents
loader = TextLoader('docs.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000)
docs = text_splitter.split_documents(documents)
# Create vector store
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(docs, embeddings)
# Create QA chain with tracing
handler = AgenticAntsCallbackHandler(ants, trace_name='doc-qa')
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(),
    retriever=vectorstore.as_retriever(),
    callbacks=[handler]
)
# Query with full observability
answer = qa.run("What is mentioned about pricing?")
Multi-Agent System
Collaborative agents:
import { AgentExecutor } from 'langchain/agents'
const researcher = new AgentExecutor({
  agent: researchAgent,
  tools: researchTools,
  callbacks: [new AgenticAntsCallbackHandler(ants, {
    traceName: 'researcher-agent'
  })]
})
const writer = new AgentExecutor({
  agent: writerAgent,
  tools: writerTools,
  callbacks: [new AgenticAntsCallbackHandler(ants, {
    traceName: 'writer-agent'
  })]
})
// Coordinate agents
const research = await researcher.call({ input: topic })
const article = await writer.call({
  input: `Write about: ${research.output}`
})
// AgenticAnts shows both agents' work
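To view the two traces as one pipeline, one option is to give both handlers a shared metadata value and filter on it; this assumes metadata fields are searchable in the AgenticAnts dashboard, and `pipelineId` is an illustrative key, not a reserved field:
const pipelineId = `pipeline_${Date.now()}`
const researchHandler = new AgenticAntsCallbackHandler(ants, {
  traceName: 'researcher-agent',
  metadata: { pipelineId }
})
const writerHandler = new AgenticAntsCallbackHandler(ants, {
  traceName: 'writer-agent',
  metadata: { pipelineId }
})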
Troubleshooting
Callbacks Not Working
# Make sure callbacks are passed correctly
chain.run(query, callbacks=[handler])  # Correct
# Not this
chain.callbacks = [handler]
chain.run(query)  # May not work
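The TypeScript equivalent is to pass the handler in the per-call config rather than assigning it to the chain afterwards; this sketch assumes a LangChain JS version whose invoke accepts a config object:
// Correct: the handler is scoped to this call
const result = await chain.invoke({ input: query }, { callbacks: [handler] })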
Missing Token Counts
// Ensure you're using a model that reports usage
const llm = new ChatOpenAI({
  modelName: 'gpt-4', // Reports usage
  callbacks: [handler]
})
High Credit Usage
# Use sampling to reduce costs
import random
if random.random() < 0.1:  # 10% sampling
    callbacks = [handler]
else:
    callbacks = []
chain.run(query, callbacks=callbacks)
Next Steps
- Python SDK - Direct Python integration
- LlamaIndex - Another popular framework
- Guides - End-to-end tutorials