Self-Hosted Deployment Guide

AxonFlow Community Edition is source-available software under the BSL 1.1 license. You can run the complete platform locally with Docker Compose, with no license server or authentication required.

Overview

Self-hosted mode provides:

  • ✅ Full AxonFlow platform (Agent + Orchestrator + Demo App)
  • ✅ PostgreSQL database with automatic migrations
  • ✅ No license validation or authentication
  • ✅ Same features as production deployment
  • ✅ Perfect for development, evaluation, and small-scale production

Quick Start

Prerequisites

  • Docker: Version 20.10 or later
  • Docker Compose: Version 2.0 or later
  • OpenAI API Key: For LLM features (optional for testing)
  • Memory: 4GB RAM minimum, 8GB recommended
  • Disk: 10GB free space

Installation

# 1. Clone the repository
git clone https://github.com/getaxonflow/axonflow.git
cd axonflow

# 2. Set your OpenAI API key (optional)
export OPENAI_API_KEY=sk-your-key-here

# 3. Start all services
docker compose up

# 4. Access Grafana monitoring (optional)
open http://localhost:3000 # Login: admin / grafana_localdev456

The first startup takes 2-3 minutes to:

  • Pull Docker images
  • Initialize PostgreSQL database
  • Run database migrations
  • Start all services

Optional: MCP Connector Configuration

By default, AxonFlow runs without MCP connectors (Community mode). To enable connectors:

  1. Configure connectors in your environment:

# Enable specific connectors (optional)
export ENABLED_CONNECTORS="amadeus,openai"

  2. Create connector secrets (only for enabled connectors):

# Example: Amadeus connector
export AMADEUS_API_KEY=your-key
export AMADEUS_API_SECRET=your-secret

Available Connectors:

  • amadeus - Travel API (flights, hotels)
  • salesforce - CRM integration
  • slack - Team messaging
  • snowflake - Data warehouse
  • openai - LLM provider
  • anthropic - LLM provider

Note: The platform works fully without connectors. Enable only what you need.
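If you script your startup, a pre-flight check can confirm that every enabled connector has its secrets set before you run docker compose up. A minimal TypeScript sketch; the secret names for connectors other than Amadeus are assumptions, so adjust the map to match your environment:

// check-connectors.ts - hedged sketch: validate that every connector in
// ENABLED_CONNECTORS has its secrets set. Secret names beyond the Amadeus
// example above are assumptions for illustration.
const requiredSecrets: Record<string, string[]> = {
  amadeus: ['AMADEUS_API_KEY', 'AMADEUS_API_SECRET'],
  openai: ['OPENAI_API_KEY'],
  anthropic: ['ANTHROPIC_API_KEY'],
};

const enabled = (process.env.ENABLED_CONNECTORS ?? '')
  .split(',')
  .map((c) => c.trim())
  .filter(Boolean);

for (const connector of enabled) {
  for (const name of requiredSecrets[connector] ?? []) {
    if (!process.env[name]) {
      console.error(`Missing ${name} for connector '${connector}'`);
      process.exitCode = 1;
    }
  }
}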

Services

The docker-compose deployment includes:

Service        Port   Description
Agent          8080   Policy enforcement engine
Orchestrator   8081   Multi-agent coordination
PostgreSQL     5432   Database (localhost only)
Redis          6379   Rate limiting and caching
Prometheus     9090   Metrics collection
Grafana        3000   Monitoring dashboard

Verification

Health Checks

# Check agent health
curl http://localhost:8080/health

# Check orchestrator health
curl http://localhost:8081/health

# Check Grafana dashboard (optional)
curl http://localhost:3000/api/health

Expected responses:

{
  "status": "healthy",
  "version": "1.0.0",
  "mode": "self-hosted"
}
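If you want to automate this check, a small poller can wait for every service to come up before your tests run. A minimal TypeScript sketch (Node 18+, which ships a global fetch); the endpoints match the services table above:

// wait-healthy.ts - hedged sketch: poll the health endpoints until every
// service responds, or give up after ~3 minutes (36 tries x 5 seconds).
const endpoints = [
  'http://localhost:8080/health',     // Agent
  'http://localhost:8081/health',     // Orchestrator
  'http://localhost:3000/api/health', // Grafana (optional)
];

async function waitHealthy(url: string, attempts = 36): Promise<void> {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url);
      if (res.ok) return;
    } catch {
      // service not up yet; keep polling
    }
    await new Promise((resolve) => setTimeout(resolve, 5000));
  }
  throw new Error(`Timed out waiting for ${url}`);
}

async function main() {
  await Promise.all(endpoints.map((url) => waitHealthy(url)));
  console.log('All services healthy');
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});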

Service Logs

# View all logs
docker compose logs -f

# View specific service
docker compose logs -f axonflow-agent
docker compose logs -f axonflow-orchestrator
docker compose logs -f postgres

Look for the self-hosted mode confirmation:

🏠 Self-hosted mode: Skipping authentication for client 'demo-client'

Configuration

Environment Variables

Create a .env file to customize your deployment:

# OpenAI API Key (required for LLM features)
OPENAI_API_KEY=sk-your-key-here

# Anthropic API Key (optional, for Claude models)
ANTHROPIC_API_KEY=

# Self-Hosted Mode (automatically enabled in docker-compose)
SELF_HOSTED_MODE=true

# Database Configuration (auto-configured by docker-compose)
DATABASE_URL=postgres://axonflow:demo123@postgres:5432/axonflow_demo?sslmode=disable

# Local LLM Endpoint (optional, for Ollama or other local models)
LOCAL_LLM_ENDPOINT=http://localhost:11434

# Internal Service Authentication (recommended for production)
# Shared secret for Agent-Orchestrator communication
AXONFLOW_INTERNAL_SERVICE_SECRET=your-secure-secret-at-least-32-characters

# === Security Detection Configuration (Issue #891) ===
# Philosophy: Block high-confidence threats, warn on heuristics, redact PII

# SQL Injection Scanner Mode: off, basic, advanced (default: basic)
SQLI_SCANNER_MODE=basic

# SQLI_ACTION: block|warn|log (default: block - high confidence attacks)
SQLI_ACTION=block

# PII_ACTION: block|warn|redact|log (default: redact - preserves UX)
PII_ACTION=redact

# SENSITIVE_DATA_ACTION: block|warn|log (default: warn - may have false positives)
SENSITIVE_DATA_ACTION=warn

# HIGH_RISK_ACTION: block|warn|log (default: warn - composite score needs tuning)
HIGH_RISK_ACTION=warn

# DANGEROUS_QUERY_ACTION: block|warn|log (default: block - DROP/TRUNCATE)
DANGEROUS_QUERY_ACTION=block

# LLM Provider Routing Configuration
# LLM_ROUTING_STRATEGY: weighted, round_robin, failover (default: weighted)
LLM_ROUTING_STRATEGY=weighted
# PROVIDER_WEIGHTS: comma-separated provider:weight pairs (e.g., openai:50,anthropic:30,bedrock:20)
PROVIDER_WEIGHTS=
# DEFAULT_LLM_PROVIDER: primary provider for failover strategy
DEFAULT_LLM_PROVIDER=

A template is provided at .env.example.

See SQL Injection Scanning for SQL injection scanner options.

See LLM Provider Routing for provider routing configuration.
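To make the weighted strategy concrete, here is a minimal TypeScript sketch of how a PROVIDER_WEIGHTS string such as openai:50,anthropic:30,bedrock:20 could drive proportional provider selection. It illustrates the configuration format only and is not the platform's internal implementation:

// Hedged sketch: parse PROVIDER_WEIGHTS and pick a provider in
// proportion to its weight.
function parseWeights(spec: string): Map<string, number> {
  const weights = new Map<string, number>();
  for (const pair of spec.split(',')) {
    const [provider, weight] = pair.split(':');
    weights.set(provider.trim(), Number(weight));
  }
  return weights;
}

function pickProvider(weights: Map<string, number>): string {
  const total = [...weights.values()].reduce((a, b) => a + b, 0);
  let roll = Math.random() * total;
  for (const [provider, weight] of weights) {
    roll -= weight;
    if (roll <= 0) return provider;
  }
  return [...weights.keys()][0]; // fallback for floating-point edge cases
}

const weights = parseWeights('openai:50,anthropic:30,bedrock:20');
console.log(pickProvider(weights)); // e.g. "openai" roughly half the time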

Internal Service Authentication

When running both Agent and Orchestrator, they communicate internally for MCP connector routing. For production deployments, configure a shared secret:

# Generate a secure secret
openssl rand -hex 32

# Add to your .env file
AXONFLOW_INTERNAL_SERVICE_SECRET=<generated-secret>

Requirements:

  • Secret must be at least 32 characters
  • Must be identical on both Agent and Orchestrator

Without a configured secret, development mode uses a fallback token and logs a [SECURITY] warning. Always configure the secret for production deployments.
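If you template your deployment, a startup guard of the following shape catches a missing or too-short secret before services talk to each other. A minimal sketch that mirrors the requirements above; it is not part of the SDK:

// Hedged sketch: fail fast if the internal service secret is missing
// or shorter than the required 32 characters.
const secret = process.env.AXONFLOW_INTERNAL_SERVICE_SECRET ?? '';

if (secret.length < 32) {
  throw new Error(
    'AXONFLOW_INTERNAL_SERVICE_SECRET must be set and at least 32 characters; ' +
      'generate one with `openssl rand -hex 32`'
  );
}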

Using Local LLMs

To use local models (Ollama, LM Studio, etc.) instead of OpenAI:

# 1. Start your local LLM server
ollama serve

# 2. Update .env
LOCAL_LLM_ENDPOINT=http://host.docker.internal:11434

# 3. Restart services
docker compose restart
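On the application side, Ollama (and many other local servers) exposes an OpenAI-compatible API, so client code needs only a base-URL change. A sketch, assuming Ollama's /v1 compatibility endpoint and a locally pulled model ('llama3' here is an example):

import OpenAI from 'openai';

// Point the OpenAI client at the local server instead of api.openai.com.
// Ollama serves an OpenAI-compatible API under /v1; the model name must
// match one you have pulled locally.
const local = new OpenAI({
  baseURL: 'http://localhost:11434/v1',
  apiKey: 'ollama', // required by the client library, ignored by Ollama
});

async function main() {
  const response = await local.chat.completions.create({
    model: 'llama3',
    messages: [{ role: 'user', content: 'Hello from a local model!' }],
  });
  console.log(response.choices[0].message.content);
}

main().catch(console.error);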

SDK Integration

TypeScript/JavaScript

import { AxonFlow } from '@axonflow/sdk';
import OpenAI from 'openai';

// Connect to self-hosted instance (no license key needed)
const axonflow = new AxonFlow({
  endpoint: 'http://localhost:8080'
  // No authentication required for localhost!
});

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Use Gateway Mode for self-hosted
const ctx = await axonflow.getPolicyApprovedContext({
  userToken: 'user-123',
  query: 'Hello, world!'
});

if (ctx.approved) {
  const start = Date.now();
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello, world!' }]
  });

  await axonflow.auditLLMCall({
    contextId: ctx.contextId,
    responseSummary: response.choices[0].message.content?.substring(0, 100) || '',
    provider: 'openai',
    model: 'gpt-4',
    latencyMs: Date.now() - start
  });

  console.log(response.choices[0].message.content);
}
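The branch above covers the happy path; when the policy engine rejects a request, ctx.approved is false. Continuing the example, a hedged sketch of the rejected path. The reason field is an assumption for illustration, so check the SDK types for the exact shape of a rejected context:

// Continuation of the example above: ctx comes from getPolicyApprovedContext.
if (!ctx.approved) {
  // The exact fields on a rejected context may differ; verify against
  // the @axonflow/sdk types. 'reason' is assumed here for illustration.
  console.warn('Request blocked by policy', {
    contextId: ctx.contextId,
    reason: (ctx as any).reason,
  });
  // Return a safe message to the user instead of calling the LLM.
}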

Go

package main

import (
    "fmt"

    "github.com/getaxonflow/axonflow-sdk-go"
)

func main() {
    // Connect to self-hosted instance (no license key needed)
    client := axonflow.NewClient(axonflow.AxonFlowConfig{
        Endpoint: "http://localhost:8080", // Agent runs on port 8080
        // No authentication required for localhost!
    })

    resp, err := client.ExecuteQuery(
        "user-token",
        "Hello, world!",
        "chat",
        map[string]interface{}{},
    )
    if err != nil {
        fmt.Println("query failed:", err)
        return
    }
    fmt.Printf("%+v\n", resp)
}

Testing

Run the automated integration test:

bash scripts/test_self_hosted.sh

This verifies:

  • ✅ All services build successfully
  • ✅ All containers are healthy
  • ✅ Agent accepts requests without license key
  • ✅ Self-hosted mode is active
  • ✅ Health checks passing

Production Use

When to Use Self-Hosted

Good for:

  • Development and testing
  • Small-scale production (<1000 req/day)
  • Air-gapped environments
  • Cost-sensitive deployments
  • Learning and evaluation

Not recommended for:

  • High availability requirements (>99.9%)
  • Large-scale production (>10K req/day)
  • Enterprise compliance needs
  • Multi-region deployments

For production workloads, consider Enterprise deployment.

Scaling Self-Hosted

To scale the self-hosted deployment, modify docker-compose.yml:

axonflow-agent:
  # ... existing config
  deploy:
    replicas: 3  # Run 3 agent instances

axonflow-orchestrator:
  # ... existing config
  deploy:
    replicas: 5  # Run 5 orchestrator instances

Then restart with:

docker compose up -d --scale axonflow-agent=3 --scale axonflow-orchestrator=5

Persistence

Database data is persisted in Docker volumes. To back up:

# Backup database
docker compose exec postgres pg_dump -U axonflow axonflow_demo > backup.sql

# Restore database
cat backup.sql | docker compose exec -T postgres psql -U axonflow axonflow_demo

Troubleshooting

Container Fails to Start

# Check logs
docker compose logs axonflow-agent

# Common issues:
# 1. Port already in use - Change ports in docker-compose.yml
# 2. Out of memory - Allocate more RAM to Docker
# 3. Database migration failed - Check postgres logs

Health Check Failing

# Wait for services to fully start (2-3 minutes)
docker compose ps

# If postgres is unhealthy:
docker compose restart postgres

Authentication Errors

If you see X-License-Key header required errors:

# Verify SELF_HOSTED_MODE is set in docker-compose.yml
grep SELF_HOSTED_MODE docker-compose.yml

# Should see:
# - SELF_HOSTED_MODE=true

Database Connection Issues

# Check database is running
docker compose ps postgres

# Test connection
docker compose exec postgres psql -U axonflow -d axonflow_demo -c "SELECT 1"

Cleanup

Stop and remove all services:

# Stop services (keeps data)
docker compose down

# Stop and remove data volumes
docker compose down -v

Migration to Production

When ready to move to production:

  1. Export Data:

    docker compose exec postgres pg_dump -U axonflow axonflow_demo > production-seed.sql
  2. Deploy to AWS Marketplace: Follow Enterprise Deployment Guide

  3. Import Data (if needed):

    # Connect to production RDS and import
    psql $PRODUCTION_DATABASE_URL < production-seed.sql
  4. Update SDK Configuration:

    // Development (self-hosted) - Agent runs on port 8080
    const axonflow = new AxonFlow({ endpoint: 'http://localhost:8080' });

    // Production (AWS Marketplace)
    const axonflow = new AxonFlow({
      licenseKey: process.env.AXONFLOW_LICENSE_KEY
    });
