Getting Started with AxonFlow
AxonFlow is an AI governance platform providing low-latency policy enforcement, multi-agent orchestration, and permission-aware data access for production AI systems.
Quick Start (5 Minutes)
Get AxonFlow running locally with Docker Compose:
```bash
# Clone the repository
git clone https://github.com/getaxonflow/axonflow.git
cd axonflow

# Set your OpenAI API key
export OPENAI_API_KEY=sk-your-key-here

# Start all services
docker-compose up -d

# Check all services are healthy
docker-compose ps

# Services available at:
# - Agent:        http://localhost:8080
# - Orchestrator: http://localhost:8081
# - Grafana:      http://localhost:3000
# - Prometheus:   http://localhost:9090
```
That's it! You now have a fully functional AxonFlow deployment with:
- Agent + Orchestrator + PostgreSQL + Redis
- Full policy enforcement engine
- MCP connector support
- Grafana dashboards for monitoring
Verify Installation
Test that everything is working:
```bash
# Check agent health
curl http://localhost:8080/health
# Expected: {"service":"axonflow-agent","status":"healthy",...}

# Check orchestrator health
curl http://localhost:8081/health
# Expected: {"service":"axonflow-orchestrator","status":"healthy",...}

# Run the interactive demo
./platform/examples/demo/demo.sh
```
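These checks can also be scripted. Here is a minimal sketch using only Python's standard library; the URLs come from the Quick Start above, and the `status` field matches the expected responses shown in the comments. The `is_healthy` and `check_service` helpers are illustrative names, not part of the SDK:

```python
import json
import urllib.request

def is_healthy(health_json: str) -> bool:
    """True when a /health response body reports status "healthy"."""
    return json.loads(health_json).get("status") == "healthy"

def check_service(url: str, timeout: float = 5.0) -> bool:
    """Fetch a /health endpoint (e.g. http://localhost:8080/health) and parse it."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return is_healthy(resp.read().decode())

# The expected agent response shown above parses as healthy:
print(is_healthy('{"service":"axonflow-agent","status":"healthy"}'))  # True
```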
The demo shows AxonFlow blocking SQL injection, detecting credit cards, and achieving single-digit millisecond latency:
```text
Demo 1: SQL Injection Blocking
🛡️ BLOCKED - SQL Injection Detected

Demo 2: Safe Query (Allowed)
✓ ALLOWED - No policy violations

Demo 3: Credit Card Detection
🛡️ POLICY TRIGGERED - Credit Card Detected

Demo 4: Fast Policy Evaluation
⚡ Latency: single-digit ms
```
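For intuition about what the first three demos check, here is a purely illustrative sketch of pattern-based screening. The regexes and the `screen` helper are simplifications invented for this example, not AxonFlow's actual detection logic:

```python
import re

# Toy patterns for illustration only; a real policy engine is far more thorough.
SQLI = re.compile(r"('|--|;)\s*(OR|AND|DROP|UNION)\b|\bUNION\s+SELECT\b", re.I)
CREDIT_CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def screen(prompt: str) -> str:
    """Return a demo-style verdict for a prompt."""
    if SQLI.search(prompt):
        return "BLOCKED - SQL Injection Detected"
    if CREDIT_CARD.search(prompt):
        return "POLICY TRIGGERED - Credit Card Detected"
    return "ALLOWED - No policy violations"

print(screen("1' OR '1'='1"))             # BLOCKED - SQL Injection Detected
print(screen("What is my order status?"))  # ALLOWED - No policy violations
```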
What You Can Build
AxonFlow enables you to add governance to any AI application:
Policy Enforcement
Define rules that control what your AI agents can do:
```yaml
# policies/customer-support.yaml
name: customer-support-policy
rules:
  - action: allow
    conditions:
      - field: user.role
        operator: in
        value: ["support", "admin"]
  - action: block
    conditions:
      - field: request.contains_pii
        operator: equals
        value: true
    message: "PII detected - request blocked"
```
Multi-Agent Orchestration
Coordinate multiple AI agents working in parallel:
```python
from axonflow import AxonFlow

async with AxonFlow(base_url="http://localhost:8080") as client:
    # Get policy-approved context for your agent
    context = await client.get_policy_approved_context(
        user_id="user-123",
        action="query_customer_data",
        resource="orders",
    )

    if context.approved:
        # Your agent logic here
        prompt = context.data
        result = await your_agent.run(prompt)

        # Audit the interaction
        await client.audit_llm_call(
            user_id="user-123",
            prompt=prompt,
            response=result,
        )
```
MCP Connectors
Access external data sources with built-in permission controls. AxonFlow ships more than 15 connectors, including PostgreSQL, MySQL, MongoDB, Redis, S3, and Snowflake.
See the MCP Connectors documentation for configuration and usage details.
Core Concepts
| Concept | Description |
|---|---|
| Agent | Policy enforcement engine - evaluates requests with single-digit ms latency |
| Orchestrator | Coordinates multi-agent workflows and manages state |
| Policy | YAML rules defining what actions are allowed/blocked |
| MCP Connector | Permission-aware interface to external data sources |
| Audit Log | Immutable record of all AI interactions |
Project Structure
After cloning, you'll find:
```text
axonflow/
├── docker-compose.yml       # Local deployment config
├── platform/
│   ├── agent/               # Policy enforcement engine (Go)
│   ├── orchestrator/        # Multi-agent coordinator (Go)
│   ├── connectors/          # MCP connector implementations
│   └── examples/
│       └── demo/            # Interactive demo script
├── examples/
│   ├── hello-world/         # Simple SDK usage examples
│   └── workflows/           # Multi-step workflow examples
├── sdk/
│   ├── golang/              # Go SDK
│   └── typescript/          # TypeScript SDK
├── migrations/              # Database migrations
└── docs/                    # Additional documentation
```
Next Steps
Learn the Basics
- Your First Agent - Build a policy-enforced AI agent
- Workflow Examples - Common patterns and recipes
- Policy Syntax - Write governance rules
Explore Examples
- Trip Planner - Multi-agent travel planning
- Customer Support - Support ticket automation
- Healthcare - HIPAA-compliant medical AI
- E-Commerce - Product recommendations
Integrate Your Stack
- Python SDK - Async-first Python client
- TypeScript SDK - Node.js and browser support
- Go SDK - Native Go client
- LangChain Integration - Use with LangChain agents
Go Deeper
- Architecture Overview - How AxonFlow works
- API Reference - Full API documentation
- Local Development - Development setup
System Requirements
| Requirement | Minimum | Recommended |
|---|---|---|
| Docker | 20.10+ | Latest |
| Docker Compose | 2.0+ | Latest |
| RAM | 4GB | 8GB |
| CPU | 2 cores | 4 cores |
| Disk | 10GB | 20GB |
Supported LLM Providers:
- OpenAI (GPT-4, GPT-4 Turbo)
- Anthropic (Claude 3)
- Local models via Ollama
Enterprise Deployment
Need production-grade deployment with high availability, auto-scaling, and enterprise support?
AxonFlow Enterprise offers:
- One-click AWS deployment via CloudFormation
- Multi-region high availability
- AWS Bedrock integration
- Industry compliance frameworks (HIPAA, SOC2, PCI-DSS)
- 24/7 premium support
Learn about Enterprise Features | AWS Marketplace
Get Help
- GitHub Issues: github.com/getaxonflow/axonflow/issues
- Documentation: docs.getaxonflow.com
- Email: [email protected]