# Deployment
Deploy AxonFlow in the environment that best fits your needs.
## Deployment Options
| Option | Description | Best For |
|---|---|---|
| Self-Hosted | Docker Compose deployment | Development, small teams |
| AWS Marketplace | One-click AWS deployment | Quick production setup |
| CloudFormation | Infrastructure as Code | Enterprise, custom VPC |
## Quick Start (Self-Hosted)

```bash
# Clone the repository
git clone https://github.com/getaxonflow/axonflow.git
cd axonflow

# Set your OpenAI API key (optional; needed for LLM features)
export OPENAI_API_KEY=sk-your-key-here

# Start all services
docker compose up

# Agent API:        http://localhost:8080
# Orchestrator API: http://localhost:8081
```
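To keep the stack running in the background instead, you can start it detached and check on it with standard Docker Compose commands:

```bash
# Start all services in the background
docker compose up -d

# Confirm every container is up
docker compose ps

# Follow the logs of a single service (for example, the agent)
docker compose logs -f agent
```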
## Service Ports (Self-Hosted)
| Service | Port | Description |
|---|---|---|
| Agent | 8080 | Policy enforcement engine (primary entry point) |
| Orchestrator | 8081 | Multi-agent coordination (internal) |
| Demo App UI | 3000 | React frontend (development) |
| Grafana | 3001 | Metrics dashboards (monitoring) |
| PostgreSQL | 5432 | Database (localhost only) |
Grafana runs on port 3001 (rather than its default of 3000) because port 3000 is used by the Demo App UI / Customer Portal. If you are not running the Demo App, you can remap Grafana to 3000 in your `docker-compose.yml`.
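One minimal way to do that is with a compose override file, sketched below. This assumes the service is named `grafana` in the stock `docker-compose.yml` and listens on Grafana's default container port 3000; check your compose file for the actual names before applying. The `!override` tag (Docker Compose v2.24+) replaces the original port mapping instead of appending to it.

```bash
# Sketch: publish Grafana on host port 3000 instead of 3001.
# Assumes a service named "grafana" listening on container port 3000.
cat > docker-compose.override.yml <<'EOF'
services:
  grafana:
    ports: !override
      - "3000:3000"
EOF

# Recreate only the Grafana container with the new mapping
docker compose up -d grafana
```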
## Verify Your Installation

After `docker compose up` completes, verify that all services are healthy:

```bash
# Check agent health (port 8080)
curl -s http://localhost:8080/health | jq .
# Expected: {"status":"healthy","version":"4.1.0"}

# Check orchestrator health (port 8081)
curl -s http://localhost:8081/health | jq .
# Expected: {"status":"healthy","version":"4.1.0"}

# Send a test query to the agent
curl -s -X POST http://localhost:8080/api/v1/query \
  -H "Content-Type: application/json" \
  -d '{"query": "hello", "context": {"user_id": "test"}}' | jq .

# Verify Grafana is accessible (port 3001)
curl -s -o /dev/null -w "%{http_code}" http://localhost:3001
# Expected: 200 (or 302 redirect to login)
```

If health checks fail, run `docker compose ps` to verify all containers are running, then check logs with `docker compose logs agent` or `docker compose logs orchestrator`.
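The individual checks above can also be rolled into a small script. This is a convenience sketch that assumes the same local ports and endpoints listed earlier:

```bash
#!/usr/bin/env bash
# Sweep the local AxonFlow services and report which health checks pass.
set -u

for entry in "agent|http://localhost:8080/health" \
             "orchestrator|http://localhost:8081/health" \
             "grafana|http://localhost:3001"; do
  name="${entry%%|*}"   # text before the "|"
  url="${entry##*|}"    # text after the "|"
  code=$(curl -s -o /dev/null -w "%{http_code}" "$url")
  if [ "$code" = "200" ] || [ "$code" = "302" ]; then
    echo "OK   ${name} (${url} -> ${code})"
  else
    echo "FAIL ${name} (${url} -> ${code})"
  fi
done
```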
## Deployment Guides
| Guide | Description |
|---|---|
| Self-Hosted | Complete Docker Compose setup |
| AWS Marketplace | Deploy via AWS Marketplace |
| CloudFormation | Deploy with CloudFormation templates |
| Bedrock Integration | Configure AWS Bedrock as LLM provider |
| Licensing | License tiers and activation |
| Post-Deployment | Configuration after deployment |
## Troubleshooting
| Guide | Description |
|---|---|
| Troubleshooting | Common deployment issues |
| Runtime Issues | Runtime debugging and recovery |
## Architecture Requirements

### Minimum (Development)

- 2 vCPU, 4 GB RAM
- Docker with 8 GB allocated (see the check below)
- 10 GB disk space
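The 8 GB figure is the memory available to the Docker daemon (on macOS/Windows, the Docker Desktop VM), not the host's total RAM. You can confirm it with:

```bash
# Look for "Total Memory" in the output; it should be at least 8 GB for development
docker info | grep -i "total memory"
```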
### Recommended (Production)

- 4+ vCPU, 16+ GB RAM
- PostgreSQL 14+ (RDS recommended; see the connectivity check below)
- Redis 6+ (ElastiCache recommended)
- Application Load Balancer
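Before pointing a production deployment at managed data stores, it is worth confirming that the application hosts can actually reach them. The endpoints below are placeholders; substitute your own RDS and ElastiCache hostnames:

```bash
# PostgreSQL reachability (placeholder RDS endpoint)
pg_isready -h axonflow-db.example.us-east-1.rds.amazonaws.com -p 5432

# Redis reachability (placeholder ElastiCache endpoint); expects "PONG"
redis-cli -h axonflow-cache.example.cache.amazonaws.com -p 6379 ping
```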
## Enterprise Deployment
Enterprise Edition provides production deployment options designed for regulated industries and multi-region requirements.
| Capability | Description |
|---|---|
| AWS Marketplace | One-click deploy via AWS Marketplace with pre-configured CloudFormation |
| CloudFormation One-Click | Full-stack IaC template: ECS Fargate, RDS Multi-AZ, ElastiCache, ALB, VPC endpoints |
| Multi-Region Deployment | Deploy agent and orchestrator across multiple AWS regions with cross-region RDS replication |
| Managed Upgrades | Zero-downtime rolling upgrades with automatic database migrations |
| Dedicated Support | SLA-backed support with deployment planning and architecture review |
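As an illustration of the CloudFormation path, the stack can also be launched from the AWS CLI once you have the template from your Marketplace subscription. The template URL, stack name, and parameters below are placeholders, not the actual AxonFlow values:

```bash
# Hypothetical example: launch the AxonFlow stack via CloudFormation.
# Replace the template URL and parameters with the ones provided to you.
aws cloudformation create-stack \
  --stack-name axonflow-prod \
  --template-url https://example-bucket.s3.amazonaws.com/axonflow.template.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameters ParameterKey=VpcId,ParameterValue=vpc-0123456789abcdef0

# Block until the stack has finished creating
aws cloudformation wait stack-create-complete --stack-name axonflow-prod
```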
## Next Steps
- Review Architecture for system design
- Configure LLM Providers after deployment
- Set up Monitoring for observability