Obot AI + AxonFlow Integration
Overview
Obot AI is an open-source MCP Gateway and AI platform that provides enterprise-ready management for Model Context Protocol (MCP) servers. With features like Server Discovery, Configuration Management, OAuth 2.1 authentication, and the Nanobot agent framework, Obot helps organizations build and deploy AI agents at scale.
AxonFlow adds real-time inference governance, policy enforcement, and comprehensive audit trails to ensure Obot-powered AI agents operate within enterprise compliance requirements.
Together, they create a complete enterprise AI infrastructure stack with MCP connectivity, governance, and observability.
Why Use AxonFlow with Obot?
Obot Strengths
- Open-source MCP Gateway and control plane
- Server Discovery with role-based access
- OAuth 2.1 authentication for external services
- Nanobot agent framework with MCP-UI support
- Multi-tenant deployment (cloud or on-prem)
- Audit logging and request filtering
AxonFlow Strengths
- Real-time inference governance (policy enforcement at request time)
- Cross-system audit correlation (unified logging across AI stack)
- Granular policy enforcement (per-agent, per-user, per-MCP-server)
- Cost control and allocation (budget limits, usage tracking)
- PII protection (automatic masking in prompts and responses)
- Model routing (intelligent routing between LLM providers)
The Perfect Combination
Obot handles: MCP server management, agent framework, OAuth flows, server discovery
AxonFlow handles: Inference governance, compliance, cost control, policy enforcement
Integration Architecture
AxonFlow integrates with Obot using Gateway Mode, which wraps LLM calls with policy pre-checks and audit logging:
[Nanobot Agent / Obot Chat]
|
v
[Obot MCP Gateway] --> MCP Servers (Salesforce, Slack, etc.)
|
v (LLM requests)
[AxonFlow Pre-Check] --> Policy Evaluation
|
v (if approved)
[LLM Provider (OpenAI / Anthropic / Bedrock)]
|
v
[AxonFlow Audit] --> Compliance Logging
|
v
[Response to Obot]
Note: AxonFlow uses its own API for governance, not an OpenAI-compatible endpoint. Integration requires calling AxonFlow's pre-check and audit endpoints around your LLM calls.
Quick Start
Prerequisites
- Obot MCP Gateway running (see Obot Documentation)
- AxonFlow running locally or deployed (see Getting Started)
- API keys for your LLM provider
- Node.js 18+ (for TypeScript SDK examples)
AxonFlow API Overview
AxonFlow Gateway Mode uses two main endpoints:
| Endpoint | Purpose |
|---|---|
| POST /api/policy/pre-check | Policy evaluation before the LLM call |
| POST /api/audit/llm-call | Audit logging after the LLM call completes |
Required Headers:
Content-Type: application/json
X-Client-Secret: your-client-secret
X-License-Key: your-license-key (optional, for enterprise features)
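For reference, the endpoints can also be called directly over HTTP without the SDK. The sketch below uses Node 18's built-in fetch; the request and response field names (userToken, query, approved, blockReason, contextId) are assumed to mirror the SDK shapes used throughout this guide, so check the AxonFlow API reference for the exact wire format.

// Minimal sketch of a direct pre-check call (field names assumed to mirror the SDK).
interface PreCheckResult {
  approved: boolean;
  blockReason?: string;
  contextId: string;
}

async function preCheckRaw(
  axonflowUrl: string,
  clientSecret: string,
  userToken: string,
  query: string
): Promise<PreCheckResult> {
  const res = await fetch(`${axonflowUrl}/api/policy/pre-check`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Client-Secret': clientSecret,
    },
    body: JSON.stringify({ userToken, query, context: {} }),
  });
  if (!res.ok) {
    throw new Error(`Pre-check request failed with status ${res.status}`);
  }
  return (await res.json()) as PreCheckResult;
}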
TypeScript SDK Integration
Use the AxonFlow TypeScript SDK for governed LLM calls in your Obot/Nanobot applications:
Install Dependencies
npm install @axonflow/sdk openai
Create Governed LLM Client
import { GatewayModeClient } from '@axonflow/sdk';
import OpenAI from 'openai';
interface GovernedLLMConfig {
axonflowUrl: string;
clientId: string;
clientSecret: string;
licenseKey?: string;
llmProvider: 'openai' | 'anthropic'; // this example wires up OpenAI only
llmApiKey: string;
}
class GovernedLLMClient {
private axonflow: GatewayModeClient;
private openai: OpenAI;
private clientId: string;
constructor(config: GovernedLLMConfig) {
this.axonflow = new GatewayModeClient({
agentUrl: config.axonflowUrl,
clientId: config.clientId,
clientSecret: config.clientSecret,
licenseKey: config.licenseKey,
});
this.openai = new OpenAI({
apiKey: config.llmApiKey,
});
this.clientId = config.clientId;
}
async chat(
userToken: string,
messages: Array<{ role: 'system' | 'user' | 'assistant'; content: string }>,
context?: Record<string, unknown>
): Promise<string> {
const startTime = Date.now();
const query = messages.filter(m => m.role === 'user').pop()?.content || '';
// 1. Pre-check with AxonFlow
const preCheck = await this.axonflow.preCheck({
userToken,
query,
context: {
...context,
agent_framework: 'obot',
},
});
if (!preCheck.approved) {
throw new Error(`Request blocked: ${preCheck.blockReason}`);
}
const contextId = preCheck.contextId;
try {
// 2. Make LLM call
const completion = await this.openai.chat.completions.create({
model: 'gpt-4',
messages,
});
const response = completion.choices[0]?.message?.content || '';
const latencyMs = Date.now() - startTime;
// 3. Audit the call
await this.axonflow.auditLLMCall({
contextId,
responseSummary: response.slice(0, 200),
provider: 'openai',
model: 'gpt-4',
tokenUsage: {
promptTokens: completion.usage?.prompt_tokens || 0,
completionTokens: completion.usage?.completion_tokens || 0,
totalTokens: completion.usage?.total_tokens || 0,
},
latencyMs,
});
return response;
} catch (error) {
// Audit even on error
const latencyMs = Date.now() - startTime;
await this.axonflow.auditLLMCall({
contextId,
responseSummary: `Error: ${error instanceof Error ? error.message : 'Unknown error'}`,
provider: 'openai',
model: 'gpt-4',
tokenUsage: { promptTokens: 0, completionTokens: 0, totalTokens: 0 },
latencyMs,
metadata: { error: true },
});
throw error;
}
}
}
// Usage with Obot
const governedClient = new GovernedLLMClient({
axonflowUrl: 'http://localhost:8080',
clientId: 'obot-agent-1',
clientSecret: process.env.AXONFLOW_CLIENT_SECRET!,
licenseKey: process.env.AXONFLOW_LICENSE_KEY,
llmProvider: 'openai',
llmApiKey: process.env.OPENAI_API_KEY!,
});
// Use in Nanobot handler
async function handleObotQuery(userToken: string, query: string): Promise<string> {
return governedClient.chat(userToken, [
{ role: 'system', content: 'You are a helpful assistant with access to MCP tools.' },
{ role: 'user', content: query },
], {
mcp_servers: ['salesforce', 'slack'],
});
}
Integration Patterns
Pattern 1: MCP Server-Aware Governance
Apply different policies based on which MCP servers the agent accesses:
import { GatewayModeClient } from '@axonflow/sdk';
interface MCPContext {
servers: string[];
tools: string[];
}
class MCPAwareGovernance {
private axonflow: GatewayModeClient;
// Policy mapping based on MCP server sensitivity
private static readonly SERVER_POLICIES: Record<string, string> = {
salesforce: 'crm-data-policy',
slack: 'communication-policy',
database: 'sensitive-data-policy',
github: 'code-access-policy',
snowflake: 'analytics-policy',
};
constructor(axonflow: GatewayModeClient) {
this.axonflow = axonflow;
}
private getMostRestrictivePolicy(servers: string[]): string {
// Pick the most restrictive mapped policy: sensitive-data-policy > crm-data-policy > default-policy
const policies = servers.map(s => MCPAwareGovernance.SERVER_POLICIES[s] || 'default-policy');
if (policies.includes('sensitive-data-policy')) return 'sensitive-data-policy';
if (policies.includes('crm-data-policy')) return 'crm-data-policy';
return 'default-policy';
}
async preCheckWithMCPContext(
userToken: string,
query: string,
mcpContext: MCPContext
): Promise<{ approved: boolean; contextId: string; policy: string }> {
const policy = this.getMostRestrictivePolicy(mcpContext.servers);
const result = await this.axonflow.preCheck({
userToken,
query,
context: {
policy_override: policy,
mcp_servers: mcpContext.servers,
mcp_tools: mcpContext.tools,
},
});
return {
approved: result.approved,
contextId: result.contextId,
policy,
};
}
}
// Usage
const mcpGovernance = new MCPAwareGovernance(axonflowClient);
const preCheck = await mcpGovernance.preCheckWithMCPContext(
userToken,
'Find our top 10 customers and message them on Slack',
{
servers: ['salesforce', 'slack'],
tools: ['salesforce_query', 'slack_send_message'],
}
);
if (!preCheck.approved) {
console.log(`Blocked by policy: ${preCheck.policy}`);
}
Pattern 2: Obot Task Governance
Govern Obot's scheduled tasks with AxonFlow:
import { GatewayModeClient } from '@axonflow/sdk';
interface TaskConfig {
name: string;
schedule: string; // cron expression
mcpServers: string[];
}
class GovernedObotTask {
private axonflow: GatewayModeClient;
private config: TaskConfig;
constructor(axonflow: GatewayModeClient, config: TaskConfig) {
this.axonflow = axonflow;
this.config = config;
}
async execute(query: string): Promise<string> {
const startTime = Date.now();
// Pre-check with task context
const preCheck = await this.axonflow.preCheck({
userToken: `task:${this.config.name}`,
query,
context: {
task_name: this.config.name,
task_schedule: this.config.schedule,
mcp_servers: this.config.mcpServers,
execution_type: 'scheduled',
},
});
if (!preCheck.approved) {
throw new Error(`Task blocked: ${preCheck.blockReason}`);
}
try {
// Execute the actual task (your Obot logic here)
const result = await this.runTask(query);
// Audit success
await this.axonflow.auditLLMCall({
contextId: preCheck.contextId,
responseSummary: result.slice(0, 200),
provider: 'openai',
model: 'gpt-4',
tokenUsage: { promptTokens: 100, completionTokens: 50, totalTokens: 150 }, // placeholder values; report real usage from your LLM response
latencyMs: Date.now() - startTime,
metadata: { task_name: this.config.name },
});
return result;
} catch (error) {
// Audit failure
await this.axonflow.auditLLMCall({
contextId: preCheck.contextId,
responseSummary: `Task error: ${error instanceof Error ? error.message : 'Unknown'}`,
provider: 'openai',
model: 'gpt-4',
tokenUsage: { promptTokens: 0, completionTokens: 0, totalTokens: 0 },
latencyMs: Date.now() - startTime,
metadata: { task_name: this.config.name, error: true },
});
throw error;
}
}
private async runTask(query: string): Promise<string> {
// Your actual Obot task implementation
return 'Task completed';
}
}
// Register governed tasks
const dailyReportTask = new GovernedObotTask(axonflowClient, {
name: 'daily-sales-report',
schedule: '0 8 * * *', // 8 AM daily
mcpServers: ['salesforce', 'snowflake'],
});
Pattern 3: Multi-Tenant Obot Deployment
Support multi-tenant deployments with isolated governance:
import { GatewayModeClient } from '@axonflow/sdk';
class MultiTenantGovernance {
private axonflowClients: Map<string, GatewayModeClient> = new Map();
private baseUrl: string;
private baseSecret: string;
constructor(axonflowUrl: string, baseSecret: string) {
this.baseUrl = axonflowUrl;
this.baseSecret = baseSecret;
}
private getClientForTenant(tenantId: string): GatewayModeClient {
if (!this.axonflowClients.has(tenantId)) {
this.axonflowClients.set(tenantId, new GatewayModeClient({
agentUrl: this.baseUrl,
clientId: `tenant-${tenantId}`,
clientSecret: this.baseSecret, // sketch only: prefer per-tenant secrets in production
}));
}
return this.axonflowClients.get(tenantId)!;
}
async governedCall(
tenantId: string,
userId: string,
query: string,
llmCall: () => Promise<{ response: string; promptTokens: number; completionTokens: number }>
): Promise<string> {
const client = this.getClientForTenant(tenantId);
const startTime = Date.now();
const preCheck = await client.preCheck({
userToken: userId,
query,
context: {
tenant_id: tenantId,
isolation_level: 'strict',
},
});
if (!preCheck.approved) {
throw new Error(`Request blocked for tenant ${tenantId}`);
}
try {
const result = await llmCall();
await client.auditLLMCall({
contextId: preCheck.contextId,
responseSummary: result.response.slice(0, 200),
provider: 'openai',
model: 'gpt-4',
tokenUsage: {
  promptTokens: result.promptTokens,
  completionTokens: result.completionTokens,
  totalTokens: result.promptTokens + result.completionTokens,
},
latencyMs: Date.now() - startTime,
metadata: { tenant_id: tenantId },
});
return result.response;
} catch (error) {
await client.auditLLMCall({
contextId: preCheck.contextId,
responseSummary: `Error: ${error instanceof Error ? error.message : 'Unknown'}`,
provider: 'openai',
model: 'gpt-4',
tokenUsage: { promptTokens: 0, completionTokens: 0, totalTokens: 0 },
latencyMs: Date.now() - startTime,
metadata: { tenant_id: tenantId, error: true },
});
throw error;
}
}
}
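A usage sketch follows; the tenant ID, user ID, and AxonFlow URL are illustrative, and the OpenAI client is instantiated as in the Quick Start:

import OpenAI from 'openai';

const openaiClient = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });
const governance = new MultiTenantGovernance(
  'http://localhost:8080',
  process.env.AXONFLOW_CLIENT_SECRET!
);

// Each tenant's traffic is pre-checked and audited under its own client identity.
const answer = await governance.governedCall('acme-corp', 'user-123', 'Summarize open tickets', async () => {
  const completion = await openaiClient.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Summarize open tickets' }],
  });
  return {
    response: completion.choices[0]?.message?.content ?? '',
    promptTokens: completion.usage?.prompt_tokens ?? 0,
    completionTokens: completion.usage?.completion_tokens ?? 0,
  };
});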
Go SDK Integration
For Go-based Obot extensions or custom MCP servers:
package main
import (
"context"
"fmt"
"time"
"github.com/getaxonflow/axonflow-go-sdk/axonflow"
"github.com/sashabaranov/go-openai"
)
type GovernedMCPHandler struct {
axonflow *axonflow.GatewayClient
openai *openai.Client
clientID string
}
func NewGovernedMCPHandler(axonflowURL, clientSecret, openaiKey string) *GovernedMCPHandler {
return &GovernedMCPHandler{
axonflow: axonflow.NewGatewayClient(axonflow.Config{
AgentURL: axonflowURL,
ClientID: "obot-mcp-handler",
ClientSecret: clientSecret,
}),
openai: openai.NewClient(openaiKey),
clientID: "obot-mcp-handler",
}
}
func (h *GovernedMCPHandler) HandleMCPRequest(
ctx context.Context,
userToken string,
query string,
mcpServers []string,
) (string, error) {
startTime := time.Now()
// 1. Pre-check with AxonFlow
preCheck, err := h.axonflow.PreCheck(ctx, axonflow.PreCheckRequest{
UserToken: userToken,
Query: query,
Context: map[string]interface{}{
"mcp_servers": mcpServers,
"agent_framework": "obot",
},
})
if err != nil {
return "", fmt.Errorf("pre-check failed: %w", err)
}
if !preCheck.Approved {
return "", fmt.Errorf("request blocked: %s", preCheck.BlockReason)
}
// 2. Make LLM call
resp, err := h.openai.CreateChatCompletion(ctx, openai.ChatCompletionRequest{
Model: openai.GPT4,
Messages: []openai.ChatCompletionMessage{
{Role: "system", Content: "You are an AI assistant with MCP tool access."},
{Role: "user", Content: query},
},
})
latencyMs := time.Since(startTime).Milliseconds()
if err != nil {
// Audit error
h.axonflow.AuditLLMCall(ctx, axonflow.AuditRequest{
ContextID: preCheck.ContextID,
ResponseSummary: fmt.Sprintf("Error: %v", err),
Provider: "openai",
Model: "gpt-4",
TokenUsage: axonflow.TokenUsage{},
LatencyMs: int(latencyMs),
Metadata: map[string]interface{}{"error": true},
})
return "", err
}
response := resp.Choices[0].Message.Content
// 3. Audit success
h.axonflow.AuditLLMCall(ctx, axonflow.AuditRequest{
ContextID: preCheck.ContextID,
ResponseSummary: truncate(response, 200),
Provider: "openai",
Model: "gpt-4",
TokenUsage: axonflow.TokenUsage{
PromptTokens: resp.Usage.PromptTokens,
CompletionTokens: resp.Usage.CompletionTokens,
TotalTokens: resp.Usage.TotalTokens,
},
LatencyMs: int(latencyMs),
})
return response, nil
}
// truncate returns at most maxLen bytes of s (note: may split multi-byte UTF-8 runes).
func truncate(s string, maxLen int) string {
if len(s) <= maxLen {
return s
}
return s[:maxLen]
}
AxonFlow Policy Configuration
Create policies that match your Obot/MCP use cases:
{
"policies": [
{
"name": "obot-crm-policy",
"description": "Policy for Obot agents accessing CRM via MCP",
"enabled": true,
"rules": [
{
"type": "pii_protection",
"config": {
"fields": ["email", "phone", "address"],
"action": "mask"
}
},
{
"type": "content_filter",
"config": {
"blocked_patterns": ["export all contacts", "bulk download"],
"action": "block"
}
}
]
},
{
"name": "obot-database-policy",
"description": "Policy for Obot agents accessing databases via MCP",
"enabled": true,
"rules": [
{
"type": "rate_limit",
"config": {
"requests_per_minute": 10,
"action": "throttle"
}
},
{
"type": "content_filter",
"config": {
"blocked_patterns": ["DROP", "DELETE FROM", "TRUNCATE"],
"action": "block"
}
}
]
},
{
"name": "obot-scheduled-tasks",
"description": "Policy for automated Obot tasks",
"enabled": true,
"rules": [
{
"type": "cost_limit",
"config": {
"daily_limit_usd": 50.0,
"action": "block"
}
}
]
}
]
}
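Once defined, these policies can be selected at request time, assuming policy names are resolvable via the policy_override context field shown in Pattern 1:

// Select a policy from the set above for a specific request (see Pattern 1).
const preCheck = await axonflow.preCheck({
  userToken,
  query,
  context: {
    policy_override: 'obot-crm-policy',
    mcp_servers: ['salesforce'],
  },
});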
Deployment
Docker Compose (Development)
version: '3.8'
services:
obot-gateway:
image: ghcr.io/obot-platform/obot:latest
ports:
- "8000:8000"
environment:
- OBOT_AUTH_SECRET=${OBOT_AUTH_SECRET}
      - DATABASE_URL=postgres://obot:obot@postgres:5432/obot
    depends_on:
      - postgres
axonflow:
image: axonflow/agent:latest
ports:
- "8080:8080"
environment:
- OPENAI_API_KEY=${OPENAI_API_KEY}
- CLIENT_SECRET=${AXONFLOW_CLIENT_SECRET}
depends_on:
- obot-gateway
postgres:
image: postgres:15
environment:
- POSTGRES_USER=obot
- POSTGRES_PASSWORD=obot
- POSTGRES_DB=obot
volumes:
- obot-data:/var/lib/postgresql/data
app:
build: .
environment:
- OBOT_URL=http://obot-gateway:8000
- AXONFLOW_URL=http://axonflow:8080
- OPENAI_API_KEY=${OPENAI_API_KEY}
depends_on:
- obot-gateway
- axonflow
volumes:
obot-data:
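After docker compose up, you can verify that the AxonFlow agent is reachable before sending governed traffic. A minimal readiness probe, assuming the /health endpoint (see Troubleshooting) returns HTTP 200 once the agent is up:

// Minimal readiness probe for the AxonFlow agent (assumes /health returns 200 when ready).
async function waitForAxonFlow(url: string, attempts = 10): Promise<void> {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(`${url}/health`);
      if (res.ok) return; // agent is healthy
    } catch {
      // not reachable yet; fall through to retry
    }
    await new Promise(resolve => setTimeout(resolve, 2000)); // wait 2s between attempts
  }
  throw new Error(`AxonFlow at ${url} did not become healthy after ${attempts} attempts`);
}

// Usage (ESM top-level await):
await waitForAxonFlow(process.env.AXONFLOW_URL ?? 'http://localhost:8080');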
Best Practices
1. Always Use Context IDs
The context_id from pre-check must be passed to audit for proper correlation:
const preCheck = await axonflow.preCheck({ userToken, query });
const contextId = preCheck.contextId; // Store this immediately
// ... make LLM call ...
await axonflow.auditLLMCall({ contextId, ... }); // Use same contextId
2. Handle Blocked Requests Gracefully
if (!preCheck.approved) {
// Log for debugging
console.log(`Request blocked: ${preCheck.blockReason}`);
// Return user-friendly message
return "I'm unable to help with that request due to policy restrictions.";
}
3. Always Audit, Even on Errors
try {
const response = await llmCall();
await axonflow.auditLLMCall({ contextId, responseSummary: response, ... });
} catch (error) {
await axonflow.auditLLMCall({ contextId, responseSummary: `Error: ${error}`, ... });
throw error;
}
4. Request ID Propagation
Use consistent request IDs across Obot and AxonFlow for end-to-end tracing:
import { v4 as uuidv4 } from 'uuid';
const requestId = uuidv4();
// Pass to both systems
const preCheck = await axonflow.preCheck({
userToken,
query,
context: { request_id: requestId },
});
// Include in Obot MCP calls
const mcpResponse = await obot.callTool(server, tool, {
...params,
_request_id: requestId,
});
Troubleshooting
Common Issues
Issue: Pre-check returns 401 Unauthorized
- Verify the X-Client-Secret header is correct
- Check the X-License-Key if using enterprise features
- Ensure the client_id is registered in AxonFlow
Issue: Audit calls failing
- Verify the context_id is from a valid pre-check (not expired)
- Check that the AxonFlow agent is healthy (/health endpoint)
Issue: High latency with dual gateway setup
- Consider co-locating AxonFlow and Obot
- Enable connection pooling in HTTP clients
- Use async requests where possible
Issue: MCP context not being applied to policies
- Ensure mcp_servers is passed in the context
- Verify that policy conditions match the context field names