LLM Provider Configuration File Setup

This guide shows how to configure LLM providers using YAML configuration files instead of environment variables. This is the recommended approach for production Community deployments.

Overview

AxonFlow supports a three-tier configuration priority:

  1. Database (Enterprise - Customer Portal managed)
  2. Config File (Community - YAML/JSON file)
  3. Environment Variables (Fallback)

This page covers the Config File approach for Community users.
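The three-tier priority above amounts to a first-match lookup across sources. A minimal sketch of that resolution order (illustrative only; the function and argument names here are hypothetical, not AxonFlow's actual internals):

```python
# Illustrative sketch of three-tier config resolution.
# Names are hypothetical; AxonFlow's internals may differ.

def resolve_provider_config(database: dict, config_file: dict, env: dict) -> dict:
    """Return provider settings from the highest-priority source that has any."""
    for source in (database, config_file, env):  # priority: DB > file > env vars
        if source:
            return source
    return {}

# Community deployment with no database tier configured:
# resolve_provider_config({}, file_cfg, env_cfg) returns file_cfg
```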

Quick Start

1. Create Configuration File

Create a file named axonflow.yaml:

version: "1.0"

llm_providers:
  openai:
    enabled: true
    credentials:
      api_key: ${OPENAI_API_KEY}

  anthropic:
    enabled: true
    credentials:
      api_key: ${ANTHROPIC_API_KEY}

2. Set Environment Variable

Tell AxonFlow where to find your config file:

export AXONFLOW_CONFIG_FILE=/path/to/axonflow.yaml

3. Start Orchestrator

The orchestrator will automatically load the configuration on startup:

./orchestrator
# [Config File] Config file loader initialized: /path/to/axonflow.yaml
# [LLM Config] Loaded 2 providers from config_file (tenant: default)

Environment Variables

| Variable | Description |
| --- | --- |
| AXONFLOW_CONFIG_FILE | Primary: path to unified config file |
| AXONFLOW_LLM_CONFIG_FILE | Alternative: path to LLM-specific config file |

The primary variable takes precedence if both are set.

Configuration File Format

Full Example

version: "1.0"

llm_providers:
  # OpenAI
  openai:
    enabled: true
    credentials:
      api_key: ${OPENAI_API_KEY}
    config:
      model: gpt-4o
      max_tokens: 4096
    priority: 10
    weight: 0.4

  # Anthropic
  anthropic:
    enabled: true
    credentials:
      api_key: ${ANTHROPIC_API_KEY}
    config:
      model: claude-sonnet-4-20250514
      max_tokens: 8192
    priority: 8
    weight: 0.4

  # AWS Bedrock (uses AWS credential chain)
  bedrock:
    enabled: true
    config:
      region: us-east-1
      model: anthropic.claude-sonnet-4-20250514-v1:0
    priority: 5
    weight: 0.1

  # Ollama (self-hosted)
  ollama:
    enabled: true
    config:
      endpoint: http://localhost:11434
      model: llama3.2:latest
    priority: 3
    weight: 0.1

Provider Configuration

Each provider supports the following fields:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| enabled | boolean | Yes | Whether the provider is active |
| credentials | map | Varies | Provider-specific credentials |
| config | map | Varies | Provider-specific configuration |
| priority | integer | No | Higher = preferred for failover |
| weight | float | No | Traffic distribution (0.0-1.0) |
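One way to picture how priority and weight interact: weights drive proportional traffic distribution, while priority breaks ties or takes over when no weights are set. A sketch under those assumptions (illustrative only; this is not AxonFlow's actual routing code):

```python
import random

def pick_provider(providers: dict[str, dict]) -> str:
    """Weighted random choice among enabled providers (sketch, not AxonFlow code)."""
    enabled = {name: p for name, p in providers.items() if p.get("enabled")}
    names = list(enabled)
    weights = [p.get("weight", 0.0) for p in enabled.values()]
    if not any(weights):
        # No weights configured: fall back to the highest-priority provider.
        return max(enabled, key=lambda n: enabled[n].get("priority", 0))
    return random.choices(names, weights=weights, k=1)[0]
```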

Environment Variable Expansion

Use ${VAR_NAME} syntax to reference environment variables:

credentials:
  api_key: ${OPENAI_API_KEY}
config:
  endpoint: ${OLLAMA_ENDPOINT:-http://localhost:11434}

The :- syntax provides default values if the variable is not set.

Provider-Specific Configuration

OpenAI

openai:
  enabled: true
  credentials:
    api_key: ${OPENAI_API_KEY}
  config:
    model: gpt-4o        # Optional, defaults to gpt-4o
    max_tokens: 4096     # Optional

Anthropic

anthropic:
  enabled: true
  credentials:
    api_key: ${ANTHROPIC_API_KEY}
  config:
    model: claude-sonnet-4-20250514   # Optional
    max_tokens: 8192                  # Optional

AWS Bedrock

Bedrock uses the AWS credential chain (environment, IAM role, etc.):

bedrock:
  enabled: true
  config:
    region: us-east-1                                # Required
    model: anthropic.claude-sonnet-4-20250514-v1:0   # Required
Warning: Both region and model are required for Bedrock. The provider will be disabled if either is missing.

Ollama (Self-Hosted)

ollama:
  enabled: true
  config:
    endpoint: http://localhost:11434   # Required
    model: llama3.2:latest             # Optional
Warning: The endpoint is required for Ollama. The provider will be disabled without it.

Hot Reloading

Config changes are picked up automatically through cache invalidation. The default cache TTL is 30 seconds, meaning any changes to your config file will take effect within 30 seconds.

How Hot Reload Works

  1. The orchestrator caches the parsed config file in memory with a 30-second TTL.
  2. On the next LLM request after the cache expires, the config file is re-read from disk.
  3. If the file contents have changed, the new configuration is parsed and applied atomically.
  4. Existing in-flight requests continue with the old configuration; only new requests use the updated config.
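The TTL-based reload in steps 1-2 can be sketched as a small cache wrapper (illustrative only; the class name and structure are hypothetical, not AxonFlow's implementation):

```python
import time

class TTLConfigCache:
    """Re-read a config file at most once per TTL window (sketch of the mechanism)."""

    def __init__(self, path: str, ttl: float = 30.0):
        self.path, self.ttl = path, ttl
        self._cached: str | None = None
        self._loaded_at = 0.0

    def get(self) -> str:
        now = time.monotonic()
        if self._cached is None or now - self._loaded_at > self.ttl:
            with open(self.path) as f:   # re-read from disk after the TTL expires
                self._cached = f.read()
            self._loaded_at = now
        return self._cached
```

Requests served within the TTL window see the cached config; the first request after expiry triggers the re-read, which matches the "up to 30 seconds" delays in the table below.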

Timing Expectations

| Scenario | Maximum Delay |
| --- | --- |
| Add a new provider | Up to 30 seconds |
| Change model or priority | Up to 30 seconds |
| Disable a provider | Up to 30 seconds |
| Update API key (via env var) | Requires orchestrator restart (env vars are read at startup) |
Tip: No manual refresh is needed. Simply edit your config file and the orchestrator will pick up changes on the next request after the cache expires. To force an immediate reload, restart the orchestrator: docker compose restart orchestrator.

Configuration Validation

The config loader validates your file on every load (startup and hot reload). Validation errors are logged but do not prevent startup; the orchestrator falls back to environment variables for any provider that fails validation.

What Is Validated

  • File exists and is accessible
  • File is not a directory
  • YAML syntax is valid
  • Required provider fields are present
  • Provider type is recognized
  • Credential references resolve (environment variables exist)

Example Validation Errors

Missing API key environment variable:

[WARN] Config validation: provider "openai" credentials.api_key references
unset environment variable OPENAI_API_KEY — provider will be disabled

Invalid YAML syntax:

[ERROR] Failed to parse config file: yaml: line 12: did not find expected key

Missing required field:

[WARN] Config validation: provider "bedrock" requires both region and model
in config — provider will be disabled

You can validate your config file before deploying by starting the orchestrator with debug logging:

AXONFLOW_LOG_LEVEL=debug AXONFLOW_CONFIG_FILE=./axonflow.yaml ./orchestrator

Common Errors

| Error Message | Cause | Solution |
| --- | --- | --- |
| Config file not found | File path incorrect | Check AXONFLOW_CONFIG_FILE path |
| Permission denied | File not readable | Check file permissions |
| Config path is a directory | Path points to a folder | Use the path to the YAML file |
| Failed to parse config file | Invalid YAML | Validate YAML syntax |
| Bedrock provider requires both region and model | Missing config | Add both region and model |
| Ollama provider requires endpoint | Missing endpoint | Add endpoint to config |

Docker Deployment

Mount your config file into the container:

# docker-compose.yaml
services:
  orchestrator:
    image: axonflow/orchestrator:latest
    environment:
      - AXONFLOW_CONFIG_FILE=/config/axonflow.yaml
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
    volumes:
      - ./axonflow.yaml:/config/axonflow.yaml:ro

Kubernetes Deployment

Use a ConfigMap for your configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: axonflow-config
data:
  axonflow.yaml: |
    version: "1.0"
    llm_providers:
      openai:
        enabled: true
        credentials:
          api_key: ${OPENAI_API_KEY}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orchestrator
spec:
  template:
    spec:
      containers:
        - name: orchestrator
          env:
            - name: AXONFLOW_CONFIG_FILE
              value: /config/axonflow.yaml
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: llm-credentials
                  key: openai-api-key
          volumeMounts:
            - name: config
              mountPath: /config
      volumes:
        - name: config
          configMap:
            name: axonflow-config

Migrating from Environment Variables

If you're currently using environment variables, migration is straightforward:

Before (Environment Variables)

export OPENAI_API_KEY=sk-xxx
export ANTHROPIC_API_KEY=sk-ant-xxx
export BEDROCK_REGION=us-east-1
export BEDROCK_MODEL=anthropic.claude-sonnet-4-20250514-v1:0

After (Config File)

# axonflow.yaml
version: "1.0"

llm_providers:
  openai:
    enabled: true
    credentials:
      api_key: ${OPENAI_API_KEY}   # Still uses env var for secret

  anthropic:
    enabled: true
    credentials:
      api_key: ${ANTHROPIC_API_KEY}

  bedrock:
    enabled: true
    config:
      region: us-east-1
      model: anthropic.claude-sonnet-4-20250514-v1:0

export AXONFLOW_CONFIG_FILE=/path/to/axonflow.yaml
export OPENAI_API_KEY=sk-xxx            # Secrets still in env vars
export ANTHROPIC_API_KEY=sk-ant-xxx
Tip: Keep API keys in environment variables and reference them with ${VAR_NAME}. This is more secure than hardcoding credentials in config files.

See Also