# Deployment
AxonFlow Community is designed to run self-hosted. The default local deployment uses Docker Compose and brings up the same core runtimes engineers use in trial and early production:
- Agent on :8080 for inline policy enforcement and MCP access
- Orchestrator on :8081 for workflow execution, routing, and multi-agent control
- PostgreSQL for platform state and audit data
- Redis for runtime coordination and rate limiting
- Prometheus and Grafana for observability
Community is the fastest path to understanding the platform end to end. Evaluation and Enterprise add higher limits, identity, compliance, and enterprise deployment workflows once a team moves from pilot to production.
This page is intentionally community-first. If you are an engineer trying to understand whether AxonFlow can run as the governed control plane for your AI stack, this is the place to start. If you already know you need AWS-native enterprise rollout paths, use this page to understand the runtime shape first, then move into the protected deployment docs.
## Deployment Options
| Option | Description | Best For |
|---|---|---|
| Self-Hosted | Docker Compose deployment for local, trial, and smaller self-managed environments | Engineers validating the platform quickly |
| AWS Marketplace | Managed AWS deployment path | Enterprise rollout and procurement workflows |
| CloudFormation | AWS infrastructure-as-code deployment | Enterprise teams with custom VPC, networking, and controls |
For AWS Marketplace and CloudFormation deployment details, use the protected enterprise docs after licensing.
## How Most Teams Progress
The typical journey looks like this:
- start with Community Docker Compose to validate SDK integration, policies, MCP, and workflows
- use Evaluation when the team needs higher limits and more production-like governance features
- move to Enterprise when procurement, identity, compliance, and enterprise deployment workflows become part of the rollout
That progression is useful because it mirrors how serious AI products are usually adopted: first by engineers, then by platform teams, then by broader enterprise stakeholders.
## Quick Start

```bash
git clone https://github.com/getaxonflow/axonflow.git
cd axonflow
cp .env.example .env
# Add at least one provider key if you want proxy-mode / routed LLM features
# OPENAI_API_KEY=...
# or ANTHROPIC_API_KEY=...
docker compose up -d
```
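The services can take a few seconds to become healthy after docker compose up. A small retry helper avoids racing them; this is a sketch, and the health URLs assume the default ports listed in the service table:

```bash
# wait_until <attempts> <delay_seconds> <command...>
# Retries a command until it succeeds or attempts are exhausted.
wait_until() {
  attempts=$1
  delay=$2
  shift 2
  while [ "$attempts" -gt 0 ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    attempts=$((attempts - 1))
    sleep "$delay"
  done
  return 1
}

# Usage once the stack is starting (uncomment to poll the real endpoints):
# wait_until 30 2 curl -sf http://localhost:8080/health
# wait_until 30 2 curl -sf http://localhost:8081/health
```

The helper is deliberately generic: it retries any command, so the same loop works for the Agent, the Orchestrator, or a Grafana readiness probe.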
## What Starts in Community Docker Compose
| Service | Port | Purpose |
|---|---|---|
| Agent | 8080 | Policy enforcement, gateway APIs, MCP APIs |
| Orchestrator | 8081 | Workflow execution, routing, WCP, provider APIs |
| PostgreSQL | 5432 | Platform and audit data |
| Redis | 6379 | Cache and coordination |
| Prometheus | 9090 | Metrics scraping |
| Grafana | 3000 | Dashboards (admin / grafana_localdev456) |
## Verify the Installation

```bash
curl -s http://localhost:8080/health | jq .
curl -s http://localhost:8081/health | jq .
curl -s http://localhost:8080/prometheus | head
curl -s -o /dev/null -w "%{http_code}" http://localhost:3000
```
Expected health responses include status, service, version, and capability metadata. Prometheus scraping uses /prometheus; /metrics is a JSON endpoint kept for platform/debug flows.
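For scripted checks, you can wrap the health probe in a helper. This is a sketch: it assumes the health payload carries a top-level status field whose healthy value is the string "healthy" — verify both assumptions against the actual /health output of your installed version:

```bash
# check_health <json> — returns 0 when the payload reports a healthy status.
# The ".status" path and the "healthy" literal are assumptions; adjust them
# to match the real health response shape described above.
check_health() {
  [ "$(printf '%s' "$1" | jq -r '.status // empty')" = "healthy" ]
}

# Usage once the stack is running:
# check_health "$(curl -sf http://localhost:8080/health)" || echo "agent unhealthy" >&2
```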
## Readiness Checklist
- Add at least one LLM provider key if you need proxy mode, routed workflows, or MAP
- Review LLM Providers and Choosing a Mode
- Confirm MCP connector configuration if your workflow needs governed database or API access
- Verify Prometheus and Grafana so you can observe latency, blocked requests, and token/cost activity from day one
- Use the Deployment Mode Matrix and Capacity Planning and Sizing guides before you commit to a larger pilot or shared environment
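The first checklist item can be automated. A minimal sketch, checking the .env file for the provider key names shown in the Quick Start comments (extend the pattern if you use other providers):

```bash
# has_provider_key <env-file> — true if at least one supported provider key
# is set to a non-empty value. Key names come from the Quick Start .env comments.
has_provider_key() {
  grep -Eq '^(OPENAI_API_KEY|ANTHROPIC_API_KEY)=.+' "$1"
}

if has_provider_key .env 2>/dev/null; then
  echo "provider key found: proxy mode and routed workflows are available"
else
  echo "no provider key set: proxy mode, routed workflows, and MAP need one" >&2
fi
```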
## What a Staff Engineer Usually Wants to Prove
Before recommending AxonFlow for broader adoption, a senior or staff engineer usually wants to show:
- the local stack is easy to run repeatedly
- the request path is observable
- policies behave predictably
- the platform can support the multi-agent or connector-heavy workflows the team actually plans to build
That is why the deployment story should not stop at "containers started." It should end at "we ran a realistic governed workflow and know how it behaved."
## System Requirements

### Minimum for local trial
- 2 vCPU
- 4 GB RAM
- 10 GB free disk
- Docker Desktop or Docker Engine with Compose v2
### Recommended for serious team usage
- 4+ vCPU
- 8-16 GB RAM
- Persistent PostgreSQL storage
- Centralized logs plus Prometheus/Grafana retained outside a laptop
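The minimum numbers above can be checked before bringing up the stack. A Linux-only sketch (it reads nproc and /proc/meminfo, so it will not work as-is on macOS):

```bash
# meets_minimum <vcpus> <ram_gb> — true if the host meets the local-trial
# minimum stated above (2 vCPU, 4 GB RAM).
meets_minimum() {
  [ "$1" -ge 2 ] && [ "$2" -ge 4 ]
}

cpus=$(nproc)
# MemTotal is reported in kB; convert to whole GB (rounded down).
mem_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 ))

if meets_minimum "$cpus" "$mem_gb"; then
  echo "OK for local trial: ${cpus} vCPU, ${mem_gb} GB RAM"
else
  echo "below local-trial minimum: ${cpus} vCPU, ${mem_gb} GB RAM" >&2
fi
```

Disk space is not checked here; df -h on the Docker data directory covers the 10 GB requirement.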
## Production Direction
Community is enough to build and validate sophisticated governed AI systems. When teams need larger limits, enterprise identity, procurement-friendly deployment, and stronger governance guarantees, the natural next step is Evaluation or Enterprise.
Typical progression:
- Start with Community Docker Compose to validate workflows, SDK integration, and policy behavior.
- Move to Evaluation when the team needs higher limits and a more production-like rollout.
- Move to Enterprise for AWS-native deployment paths, stronger governance, identity, and commercial support.
If you are already mapping that journey, use these pages together:
- Evaluation Rollout Guide
- Community To Enterprise Migration
- Enterprise Rollout Checklist
- When Community Stops Being Enough
