Obot AI + AxonFlow Integration
How to use this doc:
- Understanding the problem? Read sections 1–4
- Ready to implement? Jump to Integration Pattern
What Obot Does Well
Obot AI is an open-source MCP Gateway and AI platform. Its strengths are compelling for teams building MCP-based AI systems:
MCP Gateway: Central hub for Model Context Protocol servers. Connect to databases, APIs, and enterprise systems through a unified protocol.
Server Discovery: Find and configure MCP servers with role-based access. Manage which servers are available to which users.
OAuth 2.1 Authentication: Secure authentication for external services. Handle complex OAuth flows transparently.
Nanobot Framework: Build agents that leverage MCP tools. Agents can access any connected MCP server.
Multi-Tenant Deployment: Deploy for multiple teams with isolated configurations. Cloud or on-premises.
Open Source: Full transparency and community-driven development. Audit the code, contribute improvements.
What Obot Doesn't Try to Solve
Obot focuses on MCP management and agent hosting. These concerns are explicitly out of scope:
| Production Requirement | Obot's Position |
|---|---|
| Policy enforcement before LLM calls | Not provided—MCP requests flow through without governance |
| PII detection in MCP responses | Not addressed—MCP data passes through unchanged |
| Cross-system audit trails | Provides request logging—not unified AI governance |
| Per-agent cost attribution | Not tracked—requires external monitoring |
| SQL injection prevention | Not provided—MCP servers handle their own security |
| Token budget enforcement | Not provided—agents can consume unlimited tokens |
| Compliance-ready audit format | Basic logging—not designed for compliance |
This isn't a criticism—it's a design choice. Obot handles MCP orchestration. AI governance is a separate concern.
Where Teams Hit Production Friction
Based on real enterprise deployments, here are the blockers that appear after the prototype works:
1. The MCP Data Leak
An agent queries a database MCP server. The results include customer PII. The agent forwards this to the LLM as context. Obot facilitated the request—there's no PII filtering in the MCP path.
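If you had to close this gap yourself, the check would have to sit between the MCP result and the prompt, as in the sketch below. The regex patterns and redaction behavior here are illustrative placeholders only, not AxonFlow's detection engine; they show where a filter belongs, not how to build one.
// Hypothetical stand-in for the filter that is missing in the default MCP -> LLM path.
// The patterns are deliberately crude; real PII detection needs far broader coverage.
const PII_PATTERNS: Record<string, RegExp> = {
  email: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g,
  usSsn: /\b\d{3}-\d{2}-\d{4}\b/g,
  phone: /\b\d{3}[\s.-]\d{3}[\s.-]\d{4}\b/g,
};
interface PiiScanResult {
  clean: boolean;      // true if nothing matched
  redacted: string;    // text with matches masked
  matches: string[];   // names of the patterns that fired
}
// Scan an MCP result before it is appended to the LLM prompt.
function scanMcpResult(text: string): PiiScanResult {
  const matches: string[] = [];
  let redacted = text;
  for (const [name, pattern] of Object.entries(PII_PATTERNS)) {
    const next = redacted.replace(pattern, `[REDACTED:${name}]`);
    if (next !== redacted) {
      matches.push(name);
      redacted = next;
    }
  }
  return { clean: matches.length === 0, redacted, matches };
}
// Example: a database MCP server returns customer rows as text.
const mcpRows = "Acme Corp, jane.doe@acme.example, 555-867-5309";
const scan = scanMcpResult(mcpRows);
const llmContext = scan.clean ? mcpRows : scan.redacted; // only the redacted text reaches the LLM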
2. The "What Was Accessed?" Question
An audit of database access is requested. Obot logs MCP requests, but the logs alone can't answer:
- What data was returned?
- Who requested it?
- Was it then sent to an LLM?
- What was the LLM's response?
The full chain isn't captured in a single audit trail.
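Answering those four questions from one place requires an audit record that ties the MCP request, the data it returned, and any downstream LLM call to the same trace and identity. The shape below is only an illustration of that linkage, not AxonFlow's actual audit schema.
// Illustrative only: the fields a single linked audit record needs in order to answer
// "what was accessed, by whom, and what did the LLM do with it?"
interface UnifiedAuditRecord {
  traceId: string;           // ties the MCP request to the LLM call
  userId: string;            // who requested it
  agentId: string;           // which Nanobot agent acted
  timestamp: string;         // ISO 8601
  mcp: {
    server: string;          // e.g. "postgres" or "salesforce"
    tool: string;            // e.g. "salesforce_query"
    request: string;         // query or tool arguments
    resultSummary: string;   // what data was returned (summarized/redacted)
  };
  llm?: {                    // present only if the result was sent to an LLM
    provider: string;
    model: string;
    promptSummary: string;   // was the MCP data forwarded as context?
    responseSummary: string; // what the LLM returned
    totalTokens: number;
  };
}
When every MCP call and LLM call is stamped with the same traceId, the four questions above collapse into a single query over one trail.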
3. The Multi-Agent Cost Explosion
Five Nanobot agents run continuously. Each makes LLM calls. The monthly bill arrives. Which agent spent what? Obot doesn't track LLM costs per agent.
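Attribution only requires that every LLM call carry an agent identifier alongside its token usage. The sketch below is the kind of external bookkeeping teams end up bolting on; the per-token prices are made-up placeholders. AxonFlow captures the equivalent data through the tokenUsage and metadata fields of the audit call shown in the Integration Pattern.
// Illustrative external bookkeeping; the prices below are placeholders, not real rates.
const COST_PER_1K_TOKENS = { prompt: 0.03, completion: 0.06 };
interface UsageEvent {
  agentId: string;
  promptTokens: number;
  completionTokens: number;
}
const spendByAgent = new Map<string, number>();
// Record one LLM call's usage against the agent that made it.
function recordUsage(event: UsageEvent): void {
  const cost =
    (event.promptTokens / 1000) * COST_PER_1K_TOKENS.prompt +
    (event.completionTokens / 1000) * COST_PER_1K_TOKENS.completion;
  spendByAgent.set(event.agentId, (spendByAgent.get(event.agentId) ?? 0) + cost);
}
recordUsage({ agentId: "marketing-agent", promptTokens: 1200, completionTokens: 400 });
recordUsage({ agentId: "finance-agent", promptTokens: 800, completionTokens: 300 });
// The monthly bill can now be broken down by agent.
for (const [agent, usd] of spendByAgent) {
  console.log(`${agent}: $${usd.toFixed(4)}`);
}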
4. The Security Review Block
Security review: BLOCKED
- MCP requests can expose sensitive data
- No PII filtering between MCP and LLM
- Audit trail doesn't cover LLM decisions
- Cost controls missing
- Compliance format not supported
The MCP infrastructure works perfectly. Governance gaps block deployment.
5. The Server Access Control Gap
A marketing agent should only access CRM data. A finance agent should access accounting systems. Obot provides server-level access control, but nothing prevents an agent from sending CRM data to the wrong destination.
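Closing that gap means checking the intended data flow before the call happens, not just which servers the agent can see. The sketch below reuses the getPolicyApprovedContext call from the Integration Pattern further down, passing the source server and target tool as context; the agent context key and the deny policy that would reject a CRM-to-Slack flow are assumptions about how you might configure AxonFlow, shown only for illustration.
import { AxonFlow } from "@axonflow/sdk";
const axonflow = new AxonFlow({
  endpoint: process.env.AXONFLOW_ENDPOINT!,
  tenant: "obot-agent",
});
// The marketing agent is about to push CRM data into Slack.
// Server-level access alone allows this; a data-flow policy should not.
const approval = await axonflow.getPolicyApprovedContext({
  userToken: "user-123",
  query: "Post our top Salesforce accounts to #general",
  context: {
    framework: "obot",
    agent: "marketing-agent",          // assumed free-form context key
    mcp_servers: ["salesforce"],       // where the data comes from
    mcp_tools: ["slack_send_message"], // where it is about to go
  },
});
if (!approval.approved) {
  // An AxonFlow-side policy can reject this source/destination combination outright.
  console.warn(`Blocked by policy: ${approval.blockReason}`);
}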
How AxonFlow Plugs In
AxonFlow doesn't replace Obot. It sits beneath the Obot gateway, in the path to the LLM provider, providing the governance layer for all AI operations:
┌─────────────────┐
│   Nanobot App   │
└────────┬────────┘
         │
         v
┌─────────────────┐
│  Obot Gateway   │  <-- MCP Servers, OAuth, Discovery
└────────┬────────┘
         │
         v
┌─────────────────────────────────┐
│            AxonFlow             │
│  ┌───────────┐  ┌────────────┐  │
│  │  Policy   │  │   Audit    │  │
│  │  Enforce  │  │   Trail    │  │
│  └───────────┘  └────────────┘  │
│  ┌───────────┐  ┌────────────┐  │
│  │    PII    │  │    Cost    │  │
│  │ Detection │  │  Control   │  │
│  └───────────┘  └────────────┘  │
└────────────────┬────────────────┘
                 │
                 v
┌─────────────────┐
│  LLM Provider   │
└─────────────────┘
What this gives you:
- Every LLM call logged with MCP context
- PII detected and blocked in MCP data before LLM
- SQL injection attempts blocked in MCP queries
- Cost tracked per agent, per MCP server, per user
- Full audit trail across the complete stack
What stays the same:
- Your Obot configuration doesn't change
- MCP server connections work as before
- Nanobot agents run unchanged
Integration Pattern
Wrap Nanobot LLM calls with AxonFlow governance:
import { AxonFlow } from "@axonflow/sdk";
interface GovernedLLMConfig {
mcpServers: string[];
tools: string[];
}
class GovernedObotClient {
private axonflow: AxonFlow;
constructor() {
this.axonflow = new AxonFlow({
endpoint: process.env.AXONFLOW_ENDPOINT!,
tenant: "obot-agent",
});
}
async chat(
userToken: string,
messages: Array<{ role: string; content: string }>,
config: GovernedLLMConfig
): Promise<string> {
const startTime = Date.now();
const query =
messages.filter((m) => m.role === "user").pop()?.content || "";
// 1. Pre-check with MCP context
const approval = await this.axonflow.getPolicyApprovedContext({
userToken,
query,
context: {
framework: "obot",
mcp_servers: config.mcpServers,
mcp_tools: config.tools,
},
});
if (!approval.approved) {
throw new Error(`Blocked: ${approval.blockReason}`);
}
// 2. Make LLM call (your Obot/Nanobot logic)
const response = await this.callLLM(messages);
const latencyMs = Date.now() - startTime;
// 3. Audit
await this.axonflow.auditLLMCall({
contextId: approval.contextId,
responseSummary: response.slice(0, 200),
provider: "openai",
model: "gpt-4",
      tokenUsage: { promptTokens: 100, completionTokens: 50, totalTokens: 150 }, // placeholder counts; report the provider's actual usage
latencyMs,
metadata: {
mcp_servers: config.mcpServers,
mcp_tools: config.tools,
},
});
return response;
}
private async callLLM(
messages: Array<{ role: string; content: string }>
): Promise<string> {
    // Replace with your actual Obot/Nanobot LLM call
    return "Response from LLM"; // stub response for this example
}
}
// Usage
const client = new GovernedObotClient();
const response = await client.chat(
"user-123",
[
{ role: "system", content: "You are a helpful assistant with MCP tools." },
{ role: "user", content: "Find our top customers from Salesforce" },
],
{
mcpServers: ["salesforce", "slack"],
tools: ["salesforce_query", "slack_send_message"],
}
);
More Examples
| Pattern | Language | Link |
|---|---|---|
| MCP-Aware Policy Routing | TypeScript | obot/typescript |
| Task Governance | TypeScript | obot/tasks |
| Go SDK Integration | Go | obot/go |