Getting Started with AxonFlow
AxonFlow is an AI governance platform providing low-latency policy enforcement, multi-agent orchestration, and permission-aware data access for production AI systems.
If you prefer seeing how AxonFlow works before setting it up, this short demo shows runtime policy enforcement and execution control in action: Watch on YouTube
Opinionated Defaults, Configurable Enforcement
AxonFlow ships with secure-by-default behavior designed for regulated environments, but does not mandate enforcement.
AxonFlow explicitly separates:
- Detection — identifying PII, SQL injection, unsafe code, secrets, or other risks
- Policy — deciding what action to take when something is detected
Detection engines can run in audit-only, warn, redact, block, or require-approval modes depending on configuration.
All enforcement behavior is configurable:
- globally (environment variables)
- per connector
- per tenant (Enterprise)
- via time-bound policy overrides (Enterprise)
This allows teams to start in observe-only mode and progressively enforce controls as confidence grows.
For example, critical PII detection can be enabled while enforcement is configured to log-only during development and switched to block or require_approval in production.
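To make the detection/policy split concrete, here is a minimal, hypothetical sketch — not the AxonFlow API; names like `detect_pan`, `Mode`, and `enforce` are illustrative — showing one detector feeding a configurable enforcement mode:

```python
import re
from enum import Enum

class Mode(Enum):
    AUDIT = "audit"      # log only
    WARN = "warn"        # log and flag
    REDACT = "redact"    # mask the finding
    BLOCK = "block"      # reject the request

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, used to reduce false positives on card-number candidates."""
    total, double = 0, False
    for d in reversed(digits):
        n = int(d)
        if double:
            n = n * 2
            if n > 9:
                n -= 9
        total += n
        double = not double
    return total % 10 == 0

def detect_pan(text: str) -> list[str]:
    """Detection: find candidate card numbers. Says nothing about what to do."""
    return [m for m in re.findall(r"\b\d{13,19}\b", text) if luhn_valid(m)]

def enforce(text: str, mode: Mode) -> tuple[bool, str]:
    """Policy: decide the action for whatever detection found."""
    findings = detect_pan(text)
    if not findings:
        return True, text
    if mode is Mode.BLOCK:
        return False, ""
    if mode is Mode.REDACT:
        for f in findings:
            text = text.replace(f, "****REDACTED****")
        return True, text
    return True, text  # AUDIT / WARN: allow, but findings would be logged
```

The same detector output drives different actions per mode, which is what lets a team run in audit-only during development and flip to block in production without touching detection logic.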
Quick Start (5 Minutes)
Get AxonFlow running locally with Docker Compose:
```bash
# Clone the repository
git clone https://github.com/getaxonflow/axonflow.git
cd axonflow

# Set your OpenAI API key
export OPENAI_API_KEY=sk-your-key-here

# Start all services
docker compose up -d

# Check all services are healthy
docker compose ps

# Services available at:
# - Agent: http://localhost:8080
# - Orchestrator: http://localhost:8081
# - Grafana: http://localhost:3000
# - Prometheus: http://localhost:9090
```
That's it! You now have a fully functional AxonFlow deployment with:
- Agent + Orchestrator + PostgreSQL + Redis
- Full policy enforcement engine
- MCP connector support
- Grafana dashboards for monitoring
Verify Installation
Test that everything is working:
```bash
# Check agent health
curl http://localhost:8080/health
# Expected: {"service":"axonflow-agent","status":"healthy",...}

# Check orchestrator health
curl http://localhost:8081/health
# Expected: {"service":"axonflow-orchestrator","status":"healthy",...}

# Run the interactive demo
./examples/demo/demo.sh
```
The demo shows AxonFlow blocking SQL injection, detecting credit cards, and achieving single-digit millisecond latency:
```
Demo 1: SQL Injection Blocking
🛡️ BLOCKED - SQL Injection Detected

Demo 2: Safe Query (Allowed)
✓ ALLOWED - No policy violations

Demo 3: Credit Card Detection
🛡️ POLICY TRIGGERED - Credit Card Detected

Demo 4: Fast Policy Evaluation
⚡ Latency: single-digit ms
```
What You Can Build
AxonFlow enables you to add governance to any AI application:
Policy Enforcement
Define rules that control what your AI agents can do:
```yaml
# policies/customer-support.yaml
name: customer-support-policy
rules:
  - action: allow
    conditions:
      - field: user.role
        operator: in
        value: ["support", "admin"]
  - action: block
    conditions:
      - field: request.contains_pii
        operator: equals
        value: true
    message: "PII detected - request blocked"
```
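The rule shape above reads as ordered condition checks: the first rule whose conditions all match determines the action. As an illustration only — a toy evaluator, not AxonFlow's engine — the structure could be interpreted like this:

```python
# Toy evaluator for the rule shape above -- illustrative, not the real engine.
OPERATORS = {
    "equals": lambda actual, expected: actual == expected,
    "in": lambda actual, expected: actual in expected,
}

def get_field(request: dict, path: str):
    """Resolve a dotted field path like 'user.role' against a request dict."""
    value = request
    for part in path.split("."):
        value = value.get(part) if isinstance(value, dict) else None
    return value

def evaluate(rules: list[dict], request: dict) -> str:
    """Return the action of the first rule whose conditions all match."""
    for rule in rules:
        if all(
            OPERATORS[c["operator"]](get_field(request, c["field"]), c["value"])
            for c in rule["conditions"]
        ):
            return rule["action"]
    return "block"  # assumed default-deny when no rule matches

rules = [
    {"action": "allow", "conditions": [
        {"field": "user.role", "operator": "in", "value": ["support", "admin"]}]},
    {"action": "block", "conditions": [
        {"field": "request.contains_pii", "operator": "equals", "value": True}]},
]
```

Note that first-match semantics and default-deny are assumptions of this sketch; check the policy syntax documentation for how the real engine orders and combines rules.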
Multi-Agent Orchestration
Coordinate multiple AI agents working in parallel:
```python
from axonflow import AxonFlow, TokenUsage

async with AxonFlow(agent_url="http://localhost:8080") as client:
    # Get policy-approved context for your agent
    context = await client.get_policy_approved_context(
        user_token="user-123",
        query="What are the recent customer orders?"
    )

    if context.approved:
        # Your agent logic here
        result = await your_agent.run(context.data)

        # Audit the interaction
        await client.audit_llm_call(
            context_id=context.context_id,
            response_summary=result[:100],
            provider="openai",
            model="gpt-4",
            token_usage=TokenUsage(
                prompt_tokens=50,
                completion_tokens=100,
                total_tokens=150
            ),
            latency_ms=500
        )
```
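The example above only shows the approved path. By analogy with the TypeScript SDK's `blockReason` field, the Python context presumably exposes why a request was denied — the attribute name `block_reason` below is an assumption, so verify it against the Python SDK reference. A minimal sketch of failing loudly on the denied path:

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Stand-in for the SDK's context object; field names are assumptions
    mirrored from the TypeScript SDK's `approved` / `blockReason`."""
    approved: bool
    block_reason: str = ""

def require_approved(context: Context) -> None:
    """Surface the policy decision instead of silently skipping the agent."""
    if not context.approved:
        raise PermissionError(f"Blocked by policy: {context.block_reason}")
```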
MCP Connectors
Access external data sources with built-in permission controls. AxonFlow supports 15+ connectors, including PostgreSQL, MySQL, MongoDB, Redis, S3, and Snowflake.
See the MCP Connectors documentation for configuration and usage details.
Core Concepts
| Concept | Description |
|---|---|
| Agent | Policy enforcement engine - evaluates requests with single-digit ms latency |
| Orchestrator | Coordinates multi-agent workflows and manages state |
| Policy | YAML rules defining what actions are allowed/blocked |
| MCP Connector | Permission-aware interface to external data sources |
| Audit Log | Immutable record of all AI interactions |
Choose Your Integration Mode
AxonFlow offers two integration modes. Your choice depends on whether you're starting fresh or adding governance to an existing stack.
You can start directly with AxonFlow as your orchestration and governance layer — no other framework required. If you already use LangChain, CrewAI, or similar, gateway mode lets you adopt AxonFlow incrementally.
Proxy Mode (Recommended for New Projects)
AxonFlow handles the full request lifecycle: policy → planning → routing → audit.
```typescript
// Single call - everything handled automatically
const response = await axonflow.executeQuery({
  userToken: 'user-123',
  query: 'Analyze customer churn patterns',
  requestType: 'chat'
});
```
Why Proxy Mode:
- 100% automatic audit logging — no risk of missing calls
- Multi-Agent Planning (MAP) — only available in Proxy Mode
- Response filtering catches PII in LLM outputs
- Simpler code — one API call instead of three
Gateway Mode (For Existing Stacks)
If you're already using LangChain, CrewAI, LlamaIndex, Lyzr, or similar frameworks, Gateway Mode lets you add governance without rewriting your LLM integration.
```typescript
// 1. Pre-check policies
const ctx = await axonflow.getPolicyApprovedContext({ userToken, query });
if (!ctx.approved) throw new Error(ctx.blockReason);

// 2. Your existing LLM call (unchanged)
const response = await langchain.invoke(query);

// 3. Audit the call
await axonflow.auditLLMCall({
  contextId: ctx.contextId,
  responseSummary: response.slice(0, 100),
  provider: 'openai',
  model: 'gpt-4',
  tokenUsage: { promptTokens: 50, completionTokens: 100, totalTokens: 150 },
  latencyMs: 500
});
```
Why Gateway Mode:
- No changes to your existing LLM calls
- Incremental adoption — add governance today, evaluate deeper integration later
- Works with any framework or direct API calls
Migration Path
Many teams start with Gateway Mode to get governance in place quickly, then evaluate moving to Proxy Mode based on:
| Factor | Gateway Mode | Proxy Mode |
|---|---|---|
| Integration effort | Low (wrap existing calls) | Medium (replace LLM calls) |
| Governance coverage | Manual audit calls | Automatic, 100% coverage |
| Multi-Agent Planning | Not available | Full MAP support |
| Latency overhead | ~10ms (policy check only) | ~30ms (full lifecycle) |
SDKs vs HTTP APIs
For Go, Java, Python, and TypeScript applications, we recommend using the AxonFlow SDKs. All SDKs are thin wrappers over the same REST APIs, which remain fully supported for custom integrations.
| Integration | Recommended For |
|---|---|
| SDKs | Application code, services, strongly typed environments |
| HTTP APIs | Agents, automation, CLI tools, CI pipelines, languages without SDKs (Ruby, PHP, etc.) |
All features—policy enforcement, audit logging, MCP connectors—are available via both SDKs and direct HTTP calls.
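Because the SDKs wrap the same REST APIs, any HTTP client works. A minimal stdlib-only Python sketch against the agent health endpoint from the Quick Start (the response shape is taken from the expected output shown there):

```python
import json
import urllib.request

def parse_health(raw: bytes) -> bool:
    """True if a /health response body reports a healthy service."""
    body = json.loads(raw)
    return body.get("status") == "healthy"

def check_health(base_url: str = "http://localhost:8080") -> bool:
    """GET /health on the agent, mirroring the curl check above."""
    with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
        return parse_health(resp.read())
```

The same pattern applies from Ruby, PHP, or a CI script: plain HTTP calls, no SDK required.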
→ SDK Documentation · API Reference
Built-in Governance Policies
AxonFlow ships with 60+ built-in policies across multiple categories:
| Category | Community | Enterprise |
|---|---|---|
| Security | SQL injection (37 patterns), unsafe admin access, schema exposure | + Advanced SQLi, response scanning |
| Sensitive Data | PII detection (SSN, credit cards, PAN, Aadhaar, email, phone) | + Custom PII types, field-level redaction |
| Compliance | GDPR, PCI-DSS, HIPAA basic constraints | + EU AI Act, SEBI/RBI, MAS FEAT, DORA frameworks with retention and exports |
| Runtime Controls | Environment restrictions, basic approval gates | + HITL queues, multi-tenant isolation |
| Cost & Abuse | Per-user limits, anomalous usage detection | + Team/org budgets, compliance dashboards |
All policies are configurable. Teams typically start in observe-only mode and enable blocking once they trust the signal.
→ Full policy documentation · Community vs Enterprise
Project Structure
After cloning, you'll find:
```
axonflow/
├── docker-compose.yml     # Local deployment config
├── platform/
│   ├── agent/             # Policy enforcement engine (Go)
│   ├── orchestrator/      # Multi-agent coordinator (Go)
│   ├── connectors/        # MCP connector implementations
│   └── examples/
│       └── demo/          # Interactive demo script
├── examples/
│   ├── hello-world/       # Simple SDK usage examples
│   └── workflows/         # Multi-step workflow examples
├── sdk/
│   ├── golang/            # Go SDK
│   └── typescript/        # TypeScript SDK
├── migrations/            # Database migrations
└── docs/                  # Additional documentation
```
Next Steps
Learn the Basics
- Your First Agent - Build a policy-enforced AI agent
- Workflow Examples - Common patterns and recipes
- Policy Syntax - Write governance rules
Explore Examples
- Trip Planner - Multi-agent travel planning
- Customer Support - Support ticket automation
- Healthcare - HIPAA-compliant medical AI
- E-Commerce - Product recommendations
Integrate Your Stack
- Python SDK - Async-first Python client
- TypeScript SDK - Node.js and browser support
- Go SDK - Native Go client
- LangChain Integration - Use with LangChain agents
Go Deeper
- Architecture Overview - How AxonFlow works
- API Reference - Full API documentation
- Local Development - Development setup
System Requirements
| Requirement | Minimum | Recommended |
|---|---|---|
| Docker | 20.10+ | Latest |
| Docker Compose | 2.0+ | Latest |
| RAM | 4GB | 8GB |
| CPU | 2 cores | 4 cores |
| Disk | 10GB | 20GB |
Supported LLM Providers:
- OpenAI (GPT-4, GPT-4 Turbo)
- Anthropic (Claude 3)
- Local models via Ollama
Enterprise Deployment
Need production-grade deployment with high availability, auto-scaling, and enterprise support?
AxonFlow Enterprise offers:
- One-click AWS deployment via CloudFormation
- Multi-region high availability
- AWS Bedrock integration
- Industry compliance frameworks (HIPAA, SOC2, PCI-DSS)
- 24/7 premium support
Learn about Enterprise Features | AWS Marketplace
Get Help
- GitHub Issues: github.com/getaxonflow/axonflow/issues
- Documentation: docs.getaxonflow.com
- Email: [email protected]
