Getting Started with AxonFlow

AxonFlow is an AI governance platform providing low-latency policy enforcement, multi-agent orchestration, and permission-aware data access for production AI systems.

Quick Start (5 Minutes)

Get AxonFlow running locally with Docker Compose:

# Clone the repository
git clone https://github.com/getaxonflow/axonflow.git
cd axonflow

# Set your OpenAI API key
export OPENAI_API_KEY=sk-your-key-here

# Start all services
docker-compose up -d

# Check all services are healthy
docker-compose ps

# Services available at:
# - Agent: http://localhost:8080
# - Orchestrator: http://localhost:8081
# - Grafana: http://localhost:3000
# - Prometheus: http://localhost:9090

That's it! You now have a fully functional AxonFlow deployment with:

  • Agent + Orchestrator + PostgreSQL + Redis
  • Full policy enforcement engine
  • MCP connector support
  • Grafana dashboards for monitoring

Verify Installation

Test that everything is working:

# Check agent health
curl http://localhost:8080/health
# Expected: {"service":"axonflow-agent","status":"healthy",...}

# Check orchestrator health
curl http://localhost:8081/health
# Expected: {"service":"axonflow-orchestrator","status":"healthy",...}

# Run the interactive demo
./platform/examples/demo/demo.sh

The demo shows AxonFlow blocking SQL injection attempts, detecting credit card numbers, and evaluating policies with single-digit millisecond latency:

Demo 1: SQL Injection Blocking
🛡️ BLOCKED - SQL Injection Detected

Demo 2: Safe Query (Allowed)
✓ ALLOWED - No policy violations

Demo 3: Credit Card Detection
🛡️ POLICY TRIGGERED - Credit Card Detected

Demo 4: Fast Policy Evaluation
⚡ Latency: single-digit ms
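
If you prefer to script the health checks rather than read the curl output by hand, here is a minimal sketch using only the Python standard library; it assumes the default local ports and the health JSON shape shown above:

import json
import urllib.request

# Default local ports for the agent and orchestrator health endpoints
SERVICES = {
    "axonflow-agent": "http://localhost:8080/health",
    "axonflow-orchestrator": "http://localhost:8081/health",
}

for name, url in SERVICES.items():
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = json.load(resp)
    # Each endpoint reports its service name and a status field
    ok = body.get("status") == "healthy"
    print(f"{name}: {'OK' if ok else 'NOT HEALTHY'} ({body.get('status')})")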

What You Can Build

AxonFlow enables you to add governance to any AI application:

Policy Enforcement

Define rules that control what your AI agents can do:

# policies/customer-support.yaml
name: customer-support-policy
rules:
  - action: allow
    conditions:
      - field: user.role
        operator: in
        value: ["support", "admin"]
  - action: block
    conditions:
      - field: request.contains_pii
        operator: equals
        value: true
    message: "PII detected - request blocked"

Multi-Agent Orchestration

Coordinate multiple AI agents working in parallel:

from axonflow import AxonFlow

async def handle_request(prompt: str):
    async with AxonFlow(base_url="http://localhost:8080") as client:
        # Get policy-approved context for your agent
        context = await client.get_policy_approved_context(
            user_id="user-123",
            action="query_customer_data",
            resource="orders",
        )

        if context.approved:
            # Your agent logic here
            result = await your_agent.run(context.data)

            # Audit the interaction
            await client.audit_llm_call(
                user_id="user-123",
                prompt=prompt,
                response=result,
            )

MCP Connectors

Access external data sources with built-in permission controls. AxonFlow ships with 15+ connectors, including PostgreSQL, MySQL, MongoDB, Redis, S3, and Snowflake.

See the MCP Connectors documentation for configuration and usage details.
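
As a rough illustration of the pattern only: the query_connector method and its arguments below are placeholders, not the actual connector API, and the real interface is described in the MCP Connectors documentation. A policy-gated read through the PostgreSQL connector might look roughly like this:

import asyncio
from axonflow import AxonFlow

async def read_orders():
    async with AxonFlow(base_url="http://localhost:8080") as client:
        # First confirm the user is allowed to touch the "orders" resource
        context = await client.get_policy_approved_context(
            user_id="user-123",
            action="query_customer_data",
            resource="orders",
        )
        if context.approved:
            # Placeholder call: method and argument names will differ in the
            # real SDK; see the MCP Connectors documentation.
            return await client.query_connector(
                connector="postgresql",
                query="SELECT id, status FROM orders LIMIT 10",
            )

asyncio.run(read_orders())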

Core Concepts

Concept          Description
Agent            Policy enforcement engine - evaluates requests with single-digit ms latency
Orchestrator     Coordinates multi-agent workflows and manages state
Policy           YAML rules defining what actions are allowed/blocked
MCP Connector    Permission-aware interface to external data sources
Audit Log        Immutable record of all AI interactions

Project Structure

After cloning, you'll find:

axonflow/
├── docker-compose.yml # Local deployment config
├── platform/
│ ├── agent/ # Policy enforcement engine (Go)
│ ├── orchestrator/ # Multi-agent coordinator (Go)
│ ├── connectors/ # MCP connector implementations
│ └── examples/
│ └── demo/ # Interactive demo script
├── examples/
│ ├── hello-world/ # Simple SDK usage examples
│ └── workflows/ # Multi-step workflow examples
├── sdk/
│ ├── golang/ # Go SDK
│ └── typescript/ # TypeScript SDK
├── migrations/ # Database migrations
└── docs/ # Additional documentation

Next Steps

  • Learn the Basics
  • Explore Examples
  • Integrate Your Stack
  • Go Deeper

System Requirements

Requirement       Minimum    Recommended
Docker            20.10+     Latest
Docker Compose    2.0+       Latest
RAM               4GB        8GB
CPU               2 cores    4 cores
Disk              10GB       20GB

Supported LLM Providers:

  • OpenAI (GPT-4, GPT-4 Turbo)
  • Anthropic (Claude 3)
  • Local models via Ollama

Enterprise Deployment

Need production-grade deployment with high availability, auto-scaling, and enterprise support?

AxonFlow Enterprise offers:

  • One-click AWS deployment via CloudFormation
  • Multi-region high availability
  • AWS Bedrock integration
  • Industry compliance frameworks (HIPAA, SOC2, PCI-DSS)
  • 24/7 premium support

Learn about Enterprise Features | AWS Marketplace

Get Help