Integration Overview
AxonFlow integrates with AI frameworks and agent runtimes to add governance, policy enforcement, and audit logging to your existing AI applications. Add compliance guardrails without rewriting your agent code.
Looking for:
- Governance for LangChain / LangGraph / CrewAI
- Audit trail for agent workflows
- PII/PHI redaction before LLM calls
- Policy enforcement for multi-agent systems
- Cost controls and token budgets for agents
- Human approval gates for agent actions
- SQL injection prevention in AI pipelines
→ You're in the right place.
Recent release highlights
AxonFlow v7.1.x added new governance features that matter directly to framework and plugin users:
- v7.1.0 Release Notes cover decision explainability, session overrides, workflow checkpoints, and SDK parity
- v7.1.1 Release Notes cover the post-release fixes that made those features behave consistently across plugin paths
If you use LangChain, LangGraph, CrewAI, AutoGen, LlamaIndex.TS, Lyzr, OpenClaw, Claude Code, Cursor, or Codex, the updated integration pages below now call out the direct impact of those releases.
Supported Frameworks
LLM Orchestration Frameworks
| Framework | Language | Integration Type | Best Fit |
|---|---|---|---|
| LangChain | Python | SDK + Raw HTTP | Most comprehensive guide |
| LangChainGo | Go | Go SDK | Native Go integration |
| LangGraph | TypeScript | TypeScript SDK | Graph-based workflows |
| LlamaIndex.TS | TypeScript | TypeScript SDK | Node.js/TypeScript apps |
| CrewAI | Python | Python SDK | Multi-agent crews |
| AutoGen | Python | Python SDK | Microsoft multi-agent |
| DSPy | Python | Python SDK | Programmatic LLM pipelines |
| Lyzr | Python | Python SDK | Enterprise AI agents |
AI Agent Runtimes
| Runtime | Integration Type | Use Case | Best Fit |
|---|---|---|---|
| OpenClaw | Plugin (@axonflow/openclaw) | Policy enforcement, approval gates, audit trails | AI agent gateway |
| Anthropic Computer Use | Python SDK (ComputerUseGovernor) | Governed desktop actions, bash command blocking | Desktop automation |
| Claude Agent SDK | TypeScript SDK | MCP tool governance patterns | Custom agent tooling |
AI Assistants & CLI Tools
| Tool | Integration Type | Use Case | Best Fit |
|---|---|---|---|
| Claude Code | HTTP Hooks | CLI governance | Agentic coding assistant |
| Cursor | IDE hooks + MCP | IDE governance | Agentic code editor |
| OpenAI Codex | Hooks + skills + MCP | Hybrid governance | Cloud coding agent |
Enterprise Platforms
| Platform | Integration Type | Use Case | Best Fit |
|---|---|---|---|
| Microsoft Copilot Studio | HTTP API | Low-code AI apps | Power Platform integration |
| Semantic Kernel | Java | Microsoft AI orchestration | Java enterprise apps |
| Obot | TypeScript SDK | MCP Gateway | MCP-based agents |
Integration Patterns
Tool-Level Governance (Python, v6.0.0+) — Recommended for Python frameworks
Wrap any LangChain BaseTool with input/output policy enforcement. Works with LangChain, CrewAI, AutoGen, LangGraph, and any framework that accepts BaseTool:
```python
from axonflow import AxonFlow
from axonflow.adapters import govern_tools

async with AxonFlow(
    endpoint="http://localhost:8080",
    client_id="your-client-id",
    client_secret="your-secret",
) as client:
    # search and calculator are your existing BaseTool instances
    governed = govern_tools([search, calculator], client)
    # Use with any framework — they're still BaseTool instances
```
Benefits:
- Input governance: block tool calls with PII/SQLi before execution
- Output governance: redact sensitive data in tool results before LLM sees them
- Framework-agnostic: one wrapper works across all Python frameworks
- Per-tool governance details
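To illustrate the wrapping pattern, here is a self-contained sketch of what input/output governance around a tool looks like. This is not AxonFlow's implementation: the regex blocklist and redaction rule are stand-ins for the real policy-engine calls, and `govern` is a hypothetical helper.

```python
import re
from typing import Callable

# Stand-in policy checks; the real wrapper delegates to the AxonFlow policy engine.
SQLI_PATTERN = re.compile(r"(?i)\b(drop\s+table|union\s+select)\b")
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def govern(tool: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap a tool with input blocking and output redaction."""
    def governed(query: str) -> str:
        # Input governance: refuse to execute on a policy hit
        if SQLI_PATTERN.search(query):
            raise ValueError("blocked by policy: SQL injection pattern")
        result = tool(query)
        # Output governance: redact PII before the LLM sees the tool result
        return SSN_PATTERN.sub("[REDACTED-SSN]", result)
    return governed

lookup = govern(lambda q: f"record for {q}: SSN 123-45-6789")
print(lookup("alice"))  # → record for alice: SSN [REDACTED-SSN]
```

The wrapped function has the same call signature as the original, which is why the governed objects remain drop-in replacements for the tools your framework already accepts.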
Gateway Mode
The standard pattern for LLM call governance across all SDKs:
Your Framework → AxonFlow Pre-check → LLM Provider → AxonFlow Audit
Benefits:
- Policy enforcement before LLM calls
- Complete audit trail of all operations
- Token usage and cost tracking
- Works with any LLM provider and any SDK language
Proxy Mode
For simpler integrations:
Your Framework → AxonFlow Proxy → LLM Provider
Benefits:
- Single API endpoint
- Automatic policy enforcement
- Simpler integration (one call)
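In proxy mode the only integration point is the proxy URL. A minimal standard-library sketch of building such a request follows; the `/v1/chat/completions` path and the bearer-token header are assumptions for illustration, so check your deployment's proxy documentation for the actual endpoint and auth scheme.

```python
import json
import urllib.request

# Assumed proxy path; substitute your deployment's real endpoint.
AXONFLOW_PROXY = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Your prompt here"}],
}
req = urllib.request.Request(
    AXONFLOW_PROXY,
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer your-client-secret",  # assumed auth scheme
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the call through the proxy,
# which enforces policy before forwarding to the LLM provider.
```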
SDK Coverage
| SDK | Frameworks Using It |
|---|---|
| Python SDK | LangChain, CrewAI, AutoGen, DSPy, Lyzr |
| Go SDK | LangChainGo |
| TypeScript SDK | LangGraph, LlamaIndex.TS, Obot |
| Java SDK | Semantic Kernel |
| Raw HTTP | All frameworks (Copilot Studio uses Power Automate HTTP) |
Quick Start by Framework
Python Frameworks (LangChain, CrewAI, Lyzr)
```python
from axonflow import AxonFlow

with AxonFlow.sync(
    endpoint="http://localhost:8080",
    client_id="your-client-id",
    client_secret="your-client-secret",
) as client:
    # Pre-check before the LLM call
    ctx = client.get_policy_approved_context(
        user_token="user-123",
        query="Your prompt here",
    )
    if ctx.approved:
        # Make your framework's LLM call here
        response = your_framework_llm_call(str(ctx.approved_data))
        # Audit the result
        client.audit_llm_call(
            context_id=ctx.context_id,
            provider="openai",
            model="gpt-4",
            response_summary=response[:200],
        )
```
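When the pre-check is denied, fail closed: surface the denial instead of calling the LLM. A minimal sketch of that branch, using a stand-in dataclass for the SDK's pre-check result (the `reason` field name and message format are assumptions, not the documented API):

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Stand-in for the SDK's pre-check result; field names are assumed.
@dataclass
class PolicyContext:
    approved: bool
    reason: Optional[str] = None
    approved_data: Optional[str] = None

def run_prompt(ctx: PolicyContext, llm_call: Callable[[str], str]) -> str:
    if not ctx.approved:
        # Fail closed: never call the LLM on a denied context
        return f"Request blocked by policy: {ctx.reason or 'no reason given'}"
    return llm_call(str(ctx.approved_data))

print(run_prompt(PolicyContext(approved=False, reason="PII detected"), str.upper))
# → Request blocked by policy: PII detected
```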
Go Frameworks (LangChainGo, Obot)
```go
import (
    "fmt"

    axonflow "github.com/getaxonflow/axonflow-sdk-go/v8"
)

client := axonflow.NewClient(axonflow.AxonFlowConfig{
    Endpoint:     "http://localhost:8080",
    ClientID:     "your-client-id",
    ClientSecret: "your-client-secret",
})

// Pre-check (GetPolicyApprovedContext handles the policy check)
result, err := client.GetPolicyApprovedContext("user-123", "Your prompt here", nil, nil)
if err != nil {
    // handle the error
}
if result.Approved {
    // Your framework's LLM call
    response := yourFrameworkLLMCall(fmt.Sprint(result.ApprovedData))

    // Truncate the summary safely (slicing past len(response) would panic)
    summary := response
    if len(summary) > 200 {
        summary = summary[:200]
    }

    // Audit
    _, _ = client.AuditLLMCall(
        result.ContextID,
        summary,
        "openai",
        "gpt-4",
        axonflow.TokenUsage{},
        0,
        nil,
    )
}
```
TypeScript Frameworks (LlamaIndex.TS)
```typescript
import { AxonFlow } from '@axonflow/sdk';

const client = new AxonFlow({
  endpoint: 'http://localhost:8080',
  clientId: process.env.AXONFLOW_CLIENT_ID,
  clientSecret: process.env.AXONFLOW_CLIENT_SECRET,
});

// Pre-check
const query = 'Your prompt here';
const ctx = await client.getPolicyApprovedContext({
  userToken: 'user-123',
  query,
});
if (ctx.approved) {
  // Your framework's LLM call
  const response = await yourFrameworkLLMCall(JSON.stringify(ctx.approvedData ?? query));
  // Audit
  await client.auditLLMCall({
    contextId: ctx.contextId,
    responseSummary: response.slice(0, 200),
    provider: 'openai',
    model: 'gpt-4',
  });
}
```
What AxonFlow Adds
| Capability | Description |
|---|---|
| PII Detection | 12+ PII types automatically detected and optionally redacted |
| SQL Injection Scanning | 37+ attack patterns blocked in prompts and responses |
| Policy Enforcement | Custom rules in Rego/OPA with single-digit ms evaluation |
| Audit Logging | Complete request/response logging with compliance retention |
| Cost Tracking | Token usage and cost per request |
| Multi-Model Routing | Route to different LLM providers based on policy |
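Cost tracking reduces to per-request token accounting. A sketch of the arithmetic follows; the per-1K-token prices are illustrative placeholders, not current provider rates, and the `request_cost` helper is hypothetical rather than part of the SDK.

```python
# Illustrative per-1K-token prices; real rates come from your provider.
PRICES = {"gpt-4": {"input": 0.03, "output": 0.06}}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request: tokens scaled by the per-1K price for each direction."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

print(round(request_cost("gpt-4", 1200, 500), 4))  # → 0.066
```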
Community vs Enterprise
| Feature | Community | Enterprise |
|---|---|---|
| Framework integrations | All | All |
| PII detection | ✅ | ✅ |
| SQL injection (basic) | ✅ | ✅ |
| SQL injection (advanced) | | ✅ |
| Audit logging | ✅ | ✅ |
| Policy enforcement | ✅ | ✅ |
| HITL approval queue | | ✅ |
| Compliance exports | | ✅ |
| Customer Portal | | ✅ |
Choosing a Framework Guide
| If you're using... | Start with... |
|---|---|
| Python + multi-agent orchestration | CrewAI Guide or AutoGen Guide |
| Python + RAG/chains | LangChain Guide |
| Python + programmatic pipelines | DSPy Guide |
| Python + enterprise agents | Lyzr Guide |
| Go backend | LangChainGo Guide |
| TypeScript + graph workflows | LangGraph Guide |
| TypeScript + RAG | LlamaIndex.TS Guide |
| TypeScript + MCP | Obot Guide |
| Java enterprise | Semantic Kernel Guide |
| Low-code/Power Platform | Copilot Studio Guide |
| Claude Code CLI | Claude Code Guide |
Need a Different Framework?
AxonFlow's SDKs and HTTP API work with any framework. Use the LangChain guide as a reference: it includes both SDK and raw HTTP examples that can be adapted to any framework.
For framework-specific integration help:
- Community: GitHub Discussions
- Enterprise: [email protected]
