
Getting Started with AxonFlow

Want to skip local setup?

Try AxonFlow instantly at try.getaxonflow.com — no Docker, no installation. Register in 30 seconds and start testing with any SDK.

AxonFlow is a runtime control layer for production AI systems. It sits in the execution path and records why a model or tool action was allowed, blocked, paused, or resumed. In a local deployment, you get:

  • an Agent on :8080 for policy enforcement, gateway mode, MCP policy checks, and single-entry-point routing
  • an Orchestrator on :8081 for provider routing, planning, workflow execution, and management APIs
  • supporting services for state, metrics, and dashboards

This guide is grounded in the current repository layout and runtime behavior.

See it before you wire it in

Community Quickstart Demo (Code + Terminal, 2.5 min) — governed calls, PII block, Gateway Mode, and MAP from YAML: Watch on YouTube

Want the product/runtime view instead? Watch the Runtime Control Demo (Portal + Workflow, 3 min) — approvals, retry safety, execution state, and the audit viewer.

What You Will Run

For local development, AxonFlow defaults to community mode when DEPLOYMENT_MODE is unset or set to community.

That gives you:

  • local, self-hosted startup with docker compose
  • core policy enforcement, decision records, and audit plumbing
  • proxy mode via POST /api/request
  • gateway mode via POST /api/policy/pre-check and POST /api/audit/llm-call
  • MCP connector execution and policy checks on the Agent

AxonFlow is not a workflow engine. You keep your app or orchestrator and add runtime checks plus execution records around it.

Prerequisites

  • Docker Desktop or Docker Engine with Compose v2
  • At least one LLM provider credential if you want to exercise live model calls
  • curl for quick verification

System Requirements

| Requirement    | Minimum | Recommended |
| -------------- | ------- | ----------- |
| Docker         | 20.10+  | Latest      |
| Docker Compose | 2.0+    | Latest      |
| RAM            | 4 GB    | 8 GB        |
| CPU            | 2 cores | 4 cores     |
| Disk           | 10 GB   | 20 GB       |

Quick Start

# Clone the repository
git clone https://github.com/getaxonflow/axonflow.git
cd axonflow

# Create your local environment file
cp .env.example .env

# Add at least one provider key to .env
# OPENAI_API_KEY=...
# or ANTHROPIC_API_KEY=...
# or MISTRAL_API_KEY=...
# or GOOGLE_API_KEY=...

# Start the stack
docker compose up -d

Verify the Services

curl http://localhost:8080/health
curl http://localhost:8081/health
docker compose ps
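On a cold start the health endpoints can briefly refuse connections while containers come up. The sketch below polls both endpoints until they answer; the URLs come from this page, but the polling logic and defaults are our own illustration, not part of AxonFlow.

```python
import time
import urllib.request


def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True when the service answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, or timeout: not ready yet.
        return False


def wait_for_stack(urls, attempts: int = 30, delay: float = 2.0) -> bool:
    """Poll every health URL until all respond, or give up after `attempts` rounds."""
    for _ in range(attempts):
        if all(is_healthy(u) for u in urls):
            return True
        time.sleep(delay)
    return False


# With the stack running:
# wait_for_stack(["http://localhost:8080/health", "http://localhost:8081/health"])
```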

Expected local ports:

| Service      | URL                   | Purpose                                          |
| ------------ | --------------------- | ------------------------------------------------ |
| Agent        | http://localhost:8080 | Policy enforcement, gateway mode, MCP endpoints  |
| Orchestrator | http://localhost:8081 | LLM routing, plan/workflow APIs, management APIs |
| Grafana      | http://localhost:3000 | Dashboards                                       |
| Prometheus   | http://localhost:9090 | Metrics                                          |

Try the Three Core Paths

1. Proxy Mode

Use proxy mode when you want AxonFlow to handle policy evaluation, provider routing, and audit logging in one request.

curl -X POST http://localhost:8080/api/request \
-H "Content-Type: application/json" \
-d '{
"client_id": "local-dev",
"user_token": "demo-user",
"query": "Summarize why runtime governance matters for AI systems",
"request_type": "llm_chat",
"context": {
"provider": "openai"
}
}'

Use this path when:

  • you are starting a new app
  • you want AxonFlow to own model routing and audit capture
  • you want MAP and response-side governance behavior
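The same proxy-mode call can be made from application code. This is a minimal Python sketch of the curl example above, using only the standard library; the helper names are ours, and the shape of the Agent's response is not assumed here.

```python
import json
import urllib.request

AGENT_URL = "http://localhost:8080"


def build_proxy_request(query: str, provider: str = "openai") -> dict:
    """Assemble the proxy-mode payload from the curl example above."""
    return {
        "client_id": "local-dev",
        "user_token": "demo-user",
        "query": query,
        "request_type": "llm_chat",
        "context": {"provider": provider},
    }


def send_proxy_request(payload: dict) -> dict:
    """POST the payload to the Agent; needs the local stack to be running."""
    req = urllib.request.Request(
        AGENT_URL + "/api/request",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# With the stack running:
# send_proxy_request(build_proxy_request("Summarize why runtime governance matters"))
```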

2. Gateway Mode

Use gateway mode when you already have LLM calls in your app and want to add governance without replacing them.

curl -X POST http://localhost:8080/api/policy/pre-check \
-H "Content-Type: application/json" \
-d '{
"client_id": "local-dev",
"user_token": "demo-user",
"query": "Look up customer with SSN 123-45-6789"
}'

Then, after your application performs the LLM call, report the outcome back to AxonFlow:

curl -X POST http://localhost:8080/api/audit/llm-call \
-H "Content-Type: application/json" \
-d '{
"client_id": "local-dev",
"context_id": "replace-with-context-id",
"response_summary": "Handled customer request with redaction applied",
"provider": "openai",
"model": "gpt-4o",
"token_usage": {
"prompt_tokens": 42,
"completion_tokens": 87,
"total_tokens": 129
},
"latency_ms": 650
}'

Use this path when:

  • you already use OpenAI, Anthropic, LangChain, CrewAI, or another framework directly
  • you want policy checks before the call
  • you are willing to keep audit reporting explicit in your application
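The two gateway-mode calls above bracket your own LLM call. The sketch below glues them together around an existing `call_llm` function. It is an illustration, not an SDK: the helper names are ours, and the field that carries the context id in the pre-check response is an assumption to verify against your local Agent.

```python
import json
import urllib.request

AGENT_URL = "http://localhost:8080"


def post_json(path: str, payload: dict) -> dict:
    """POST JSON to the Agent and decode the JSON reply."""
    req = urllib.request.Request(
        AGENT_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def build_audit_payload(context_id: str, summary: str,
                        provider: str, model: str) -> dict:
    """Assemble the audit-report body from the curl example above."""
    return {
        "client_id": "local-dev",
        "context_id": context_id,
        "response_summary": summary,
        "provider": provider,
        "model": model,
    }


def governed_call(query: str, call_llm) -> str:
    """Pre-check the prompt, run your own LLM call, then report it for audit.

    `call_llm` is your existing function (OpenAI, Anthropic, LangChain, ...).
    The `context_id` key in the pre-check response is an assumption; inspect
    the actual response from your local Agent.
    """
    check = post_json("/api/policy/pre-check", {
        "client_id": "local-dev",
        "user_token": "demo-user",
        "query": query,
    })
    answer = call_llm(query)  # your LLM call stays in your own code
    post_json("/api/audit/llm-call", build_audit_payload(
        check.get("context_id", "replace-with-context-id"),
        answer[:200], "openai", "gpt-4o",
    ))
    return answer
```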

3. MCP Connectors

The local repo ships with a runtime config file at config/axonflow.yaml. For local development, it registers PostgreSQL-backed demo connectors such as postgres, database, analytics-db, and audit-store.

Check connector health:

curl http://localhost:8080/mcp/health
curl http://localhost:8080/mcp/connectors

If you update connector configuration at runtime, refresh the cache on the Agent:

curl -X POST http://localhost:8080/api/v1/connectors/refresh
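If you script against these endpoints, a small filter over the health response makes failing connectors easy to spot. The per-connector shape assumed below (`{"status": ...}` keyed by connector name) is a guess; check the actual `/mcp/health` output from your local Agent first.

```python
import json
import urllib.request


def unhealthy_connectors(health: dict) -> list:
    """Return names of connectors that are not reporting 'healthy'.

    Assumes a map of connector name -> {"status": ...}; verify against
    the real /mcp/health response before relying on this shape.
    """
    return sorted(
        name for name, info in health.items()
        if info.get("status") != "healthy"
    )


def fetch_mcp_health(base: str = "http://localhost:8080") -> dict:
    """Fetch the connector health map from the Agent (stack must be running)."""
    with urllib.request.urlopen(base + "/mcp/health") as resp:
        return json.load(resp)


# With the stack running:
# print(unhealthy_connectors(fetch_mcp_health()))
```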

Run the Demo Script

If you want a guided walkthrough instead of manual API calls:

./examples/demo/demo.sh

That script is the fastest way to see policy enforcement, connector usage, and audit behavior together in a working local stack.

Repository Map

After cloning, these are the most useful paths to know:

axonflow/
├── .env.example # Local provider/env configuration
├── docker-compose.yml # Local stack
├── config/axonflow.yaml # Local runtime connector configuration
├── platform/agent/ # Agent service
├── platform/orchestrator/ # Orchestrator service
├── platform/connectors/ # Built-in connector implementations
├── examples/ # Demo and integration examples
├── migrations/ # Core, enterprise, and industry migrations
├── ee/ # Enterprise-only implementation surfaces
└── docs/ # In-repo technical/reference docs

If you are working with multimodal LLMs (images):

  • Media Governance — OCR-based PII detection, format validation, and content safety for images sent to GPT-4o, Claude, Gemini

Moving Beyond Local Development

When you are ready for production deployment, continue with the deployment guides.

Enterprise deployments (AWS Marketplace, CloudFormation) are covered in the Enterprise Documentation Portal.

When Community Stops Being The Whole Story

Community is the right place to begin, but most teams hit a predictable next stage.

You are usually ready for Evaluation when you need:

  • more realistic staging-scale limits
  • organization-level governance instead of just local or team-level experimentation
  • approval, simulation, and evidence workflows for stakeholder review
  • a stronger internal case that AxonFlow is ready for a real rollout

If that is where you are headed next, request an Evaluation license.

What Enterprises Usually Need Next

Once AxonFlow is supporting more than a couple of isolated use cases, teams usually need more than “higher limits.” They need:

  • protected operational workflows
  • identity integration and admin controls
  • enterprise connector and provider operations
  • stronger governance guarantees across multiple teams
  • deployment and support expectations that match production risk

That is when the protected enterprise surface becomes the actual operating model, not just a teaser.

If you are already building the internal case, these pages are worth reading next: