OpenRouter, LangChain, CrewAI, AutoGen & MCP
Integration guides for OpenAI-compatible providers, LangChain, CrewAI, AutoGen, and MCP.
OpenRouter & OpenAI-Compatible Providers
agentguard includes provider presets for popular OpenAI-compatible APIs, so switching providers is a one-line change.
python
from agentguard.integrations import Provider
# OpenRouter — access hundreds of models
provider = Provider.openrouter(api_key="sk-or-...")
tools = provider.to_tools([search_web, get_weather])
response = provider.chat(
model="anthropic/claude-sonnet-4",
messages=[{"role": "user", "content": "What's the weather?"}],
tools=tools,
)
Real Cost Tracking for Compatible Providers
When you also want response-based spend tracking, wrap the client with guard_openai_compatible_client. Usage extraction is backed by an internal plugin registry, so provider-specific response formats can be supported without replacing the default extractor.
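The registry pattern behind that extraction can be sketched in plain Python. All names below are illustrative only, not agentguard's actual internals or API; the point is that provider-specific extractors are consulted before an OpenAI-style default:

```python
from typing import Any, Callable, Optional

# An extractor takes a raw response dict and returns token counts, or None
# if it doesn't recognize the format.
Extractor = Callable[[dict], Optional[dict]]

_extractors: list[Extractor] = []

def register_extractor(fn: Extractor) -> Extractor:
    """Register a provider-specific extractor; tried before the default."""
    _extractors.append(fn)
    return fn

def _default_extractor(response: dict) -> Optional[dict]:
    # OpenAI-style: {"usage": {"prompt_tokens": ..., "completion_tokens": ...}}
    usage = response.get("usage")
    if usage and "prompt_tokens" in usage:
        return {"input": usage["prompt_tokens"], "output": usage["completion_tokens"]}
    return None

def extract_usage(response: dict) -> Optional[dict]:
    for fn in _extractors:            # plugins get first refusal
        result = fn(response)
        if result is not None:
            return result
    return _default_extractor(response)  # fall back to the OpenAI-style shape

@register_extractor
def _nested_usage(response: dict) -> Optional[dict]:
    # Hypothetical provider that nests usage under a vendor key.
    meta = response.get("x_vendor", {}).get("usage")
    if meta:
        return {"input": meta["input_tokens"], "output": meta["output_tokens"]}
    return None
```

Because the default extractor is only a fallback, adding support for a new provider is a matter of registering one function rather than forking the extraction logic.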
Provider Presets
python
# Groq — ultra-fast inference
provider = Provider.groq(api_key="gsk_...")
# Together AI
provider = Provider.together(api_key="...")
# Fireworks AI
provider = Provider.fireworks(api_key="...")
# Any OpenAI-compatible endpoint
provider = Provider.custom(
base_url="https://your-api.com/v1",
api_key="...",
)
LangChain
Use guarded tools as LangChain tools:
python
from agentguard import guard
from agentguard.integrations.langchain import as_langchain_tool
from langchain.agents import AgentExecutor, create_openai_tools_agent
@guard(validate_input=True, detect_hallucination=True)
def calculator(expression: str) -> float:
"""Evaluate a math expression."""
    return eval(expression)  # simplified example; never eval untrusted input
# Convert to LangChain tool
lc_tool = as_langchain_tool(calculator)
# Use in a LangChain agent
agent = create_openai_tools_agent(llm, [lc_tool], prompt)
executor = AgentExecutor(agent=agent, tools=[lc_tool])
result = executor.invoke({"input": "What is 42 * 17?"})
CrewAI
python
from agentguard import guard, RateLimitConfig
from agentguard.integrations.crewai import as_crewai_tool
from crewai import Agent, Task, Crew
@guard(validate_input=True, rate_limit=RateLimitConfig(calls_per_minute=30))
def search_web(query: str) -> str:
"""Search the web."""
return search_api.search(query)
researcher = Agent(
role="Researcher",
goal="Find accurate information",
tools=[as_crewai_tool(search_web)],
)
AutoGen
python
from agentguard import guard
from agentguard.integrations.autogen import register_guarded_tools
import autogen
@guard(validate_input=True)
def get_stock_price(symbol: str) -> float:
"""Get current stock price."""
return finance_api.get_price(symbol)
assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent("user", code_execution_config=False)
# Register guarded tools with AutoGen
register_guarded_tools(user_proxy, [get_stock_price])
Model Context Protocol (MCP)
Serve guarded tools as an MCP server:
python
from agentguard import guard, BudgetConfig
from agentguard.integrations.mcp import create_mcp_server
@guard(validate_input=True, detect_hallucination=True)
def read_file(path: str) -> str:
"""Read a file from disk."""
    with open(path) as f:
        return f.read()
@guard(validate_input=True, budget=BudgetConfig(max_calls_per_session=50))
def write_file(path: str, content: str) -> bool:
"""Write content to a file."""
with open(path, "w") as f:
f.write(content)
return True
# Create and run MCP server
server = create_mcp_server([read_file, write_file])
server.run(transport="stdio")
✅ MCP + Guards = Safe tool serving
When serving tools via MCP, guards run on the server side. Every tool call from any connected client gets validated, rate-limited, and monitored automatically.
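The per-client enforcement described above can be pictured with a minimal sliding-window rate limiter. This is a plain-Python sketch of the idea, not agentguard's implementation; the class and parameter names are made up for illustration:

```python
import time
from collections import defaultdict, deque
from typing import Optional

class PerClientRateLimiter:
    """Allow at most `max_calls` per client within a sliding `window` (seconds)."""

    def __init__(self, max_calls: int, window: float = 60.0):
        self.max_calls = max_calls
        self.window = window
        self._calls: dict = defaultdict(deque)  # client_id -> call timestamps

    def allow(self, client_id: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        calls = self._calls[client_id]
        while calls and now - calls[0] >= self.window:
            calls.popleft()               # drop timestamps outside the window
        if len(calls) >= self.max_calls:
            return False                  # over the limit: reject this call
        calls.append(now)
        return True
```

Because the limiter keys on the connected client's identity, one misbehaving MCP client exhausts only its own quota; the server-side guard decides before the tool body ever runs.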