# LLM Provider Configuration File Setup
This guide shows how to configure LLM providers using YAML configuration files instead of environment variables. It is a strong default for production self-hosted deployments because it keeps provider configuration explicit, reviewable, and easy to evolve.
## Overview
AxonFlow resolves provider configuration in three tiers, from highest to lowest priority:

1. **Database** (Enterprise - Customer Portal managed)
2. **Config File** (Community - YAML/JSON file)
3. **Environment Variables** (Fallback)

This page covers the Config File approach for Community users.
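If you are ever unsure which tier is in effect, the startup log names the source. A quick sanity check (a sketch reusing the log line shown in the Quick Start below; the path is a placeholder):

```bash
# Start the orchestrator and check which source the providers came from.
AXONFLOW_CONFIG_FILE=/path/to/axonflow.yaml ./orchestrator 2>&1 \
  | grep "\[LLM Config\] Loaded"
# [LLM Config] Loaded 2 providers from config_file (tenant: default)
```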
## Quick Start

### 1. Create Configuration File

Create a file named `axonflow.yaml`:

```yaml
version: "1.0"
llm_providers:
  openai:
    enabled: true
    credentials:
      api_key: ${OPENAI_API_KEY}
  anthropic:
    enabled: true
    credentials:
      api_key: ${ANTHROPIC_API_KEY}
```
### 2. Set Environment Variable

Tell AxonFlow where to find your config file:

```bash
export AXONFLOW_CONFIG_FILE=/path/to/axonflow.yaml
```

### 3. Start Orchestrator

The orchestrator loads the configuration automatically on startup:

```bash
./orchestrator
# [Config File] Config file loader initialized: /path/to/axonflow.yaml
# [LLM Config] Loaded 2 providers from config_file (tenant: default)
```
## Environment Variables

| Variable | Description |
|---|---|
| `AXONFLOW_CONFIG_FILE` | Primary: path to the unified config file |
| `AXONFLOW_LLM_CONFIG_FILE` | Alternative: path to an LLM-specific config file |

If both are set, `AXONFLOW_CONFIG_FILE` takes precedence.
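For example, when both are set, only the unified file is loaded (paths here are illustrative):

```bash
export AXONFLOW_CONFIG_FILE=/etc/axonflow/axonflow.yaml
export AXONFLOW_LLM_CONFIG_FILE=/etc/axonflow/llm.yaml   # ignored: primary wins
./orchestrator
# [Config File] Config file loader initialized: /etc/axonflow/axonflow.yaml
```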
## Configuration File Format

### Full Example

```yaml
version: "1.0"
llm_providers:
  # OpenAI
  openai:
    enabled: true
    credentials:
      api_key: ${OPENAI_API_KEY}
    config:
      model: gpt-4o
      max_tokens: 4096
    priority: 10
    weight: 0.4

  # Anthropic
  anthropic:
    enabled: true
    credentials:
      api_key: ${ANTHROPIC_API_KEY}
    config:
      model: claude-sonnet-4-20250514
      max_tokens: 8192
    priority: 8
    weight: 0.4

  # Gemini
  gemini:
    enabled: true
    credentials:
      api_key: ${GOOGLE_API_KEY}
    config:
      model: gemini-2.0-flash
    priority: 5
    weight: 0.1

  # Mistral AI
  mistral:
    enabled: true
    credentials:
      api_key: ${MISTRAL_API_KEY}
    config:
      model: mistral-small-latest
    priority: 5
    weight: 0.1

  # Ollama (self-hosted)
  ollama:
    enabled: true
    config:
      endpoint: http://localhost:11434
      model: llama3.2:latest
    priority: 3
    weight: 0.1
```
### Provider Configuration

Each provider supports the following fields:

| Field | Type | Required | Description |
|---|---|---|---|
| `enabled` | boolean | Yes | Whether the provider is active |
| `credentials` | map | Varies | Provider-specific credentials |
| `config` | map | Varies | Provider-specific configuration |
| `priority` | integer | No | Higher = preferred for failover |
| `weight` | float | No | Traffic distribution (0.0-1.0) |
| `timeout_seconds` | integer | No | Per-provider request timeout in seconds; overrides the global default |
| `rate_limit` | integer | No | Maximum requests per second to this provider |
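Putting the optional tuning fields together, a fully tuned provider entry might look like this (values are illustrative, not recommendations):

```yaml
openai:
  enabled: true
  credentials:
    api_key: ${OPENAI_API_KEY}
  config:
    model: gpt-4o
    max_tokens: 4096
  priority: 10          # tried first during failover
  weight: 0.5           # share of distributed traffic
  timeout_seconds: 60   # overrides the global request timeout
  rate_limit: 20        # at most 20 requests/second to this provider
```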
### Environment Variable Expansion

Use `${VAR_NAME}` syntax to reference environment variables:

```yaml
credentials:
  api_key: ${OPENAI_API_KEY}
config:
  endpoint: ${OLLAMA_ENDPOINT:-http://localhost:11434}
```

The `:-` syntax provides a default value when the variable is not set.
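For example, with the Ollama entry above, leaving `OLLAMA_ENDPOINT` unset makes the value after `:-` apply (a sketch):

```bash
unset OLLAMA_ENDPOINT          # no value in the environment
./orchestrator
# endpoint expands to http://localhost:11434 (the :- default)
```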
## Provider-Specific Configuration

### OpenAI

```yaml
openai:
  enabled: true
  credentials:
    api_key: ${OPENAI_API_KEY}
  config:
    model: gpt-4o      # Optional, defaults to gpt-4o
    max_tokens: 4096   # Optional
```
### Anthropic

```yaml
anthropic:
  enabled: true
  credentials:
    api_key: ${ANTHROPIC_API_KEY}
  config:
    model: claude-sonnet-4-20250514   # Optional
    max_tokens: 8192                  # Optional
```
### Google Gemini

```yaml
gemini:
  enabled: true
  credentials:
    api_key: ${GOOGLE_API_KEY}
  config:
    model: gemini-2.0-flash   # Optional, defaults to gemini-2.0-flash
```
### Mistral AI

```yaml
mistral:
  enabled: true
  credentials:
    api_key: ${MISTRAL_API_KEY}
  config:
    model: mistral-small-latest   # Optional, defaults to mistral-small-latest
```
### Azure OpenAI

```yaml
azure-openai:
  enabled: true
  credentials:
    api_key: ${AZURE_OPENAI_API_KEY}
  config:
    endpoint: ${AZURE_OPENAI_ENDPOINT}
    deployment_name: ${AZURE_OPENAI_DEPLOYMENT_NAME}
    api_version: ${AZURE_OPENAI_API_VERSION:-2024-08-01-preview}
```
### AWS Bedrock

Bedrock is Enterprise-only and uses the AWS credential chain (environment variables, IAM role, instance profile, or equivalent):

```yaml
bedrock:
  enabled: true
  config:
    region: us-east-1                                # Required
    model: anthropic.claude-sonnet-4-20250514-v1:0   # Required
```
When Bedrock is configured through the config file (or any runtime-managed path), both `region` and `model` are required. Only the env-var bootstrap path can fall back to a default `BEDROCK_MODEL`, so set both fields explicitly here.
### Ollama (Self-Hosted)

```yaml
ollama:
  enabled: true
  config:
    endpoint: http://localhost:11434   # Required
    model: llama3.2:latest             # Optional
```
The `endpoint` field is required for Ollama; the provider will be disabled without it.
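Before enabling the provider, it can help to confirm the endpoint is reachable. A quick check against Ollama's standard model-listing API (an Ollama endpoint, not AxonFlow-specific):

```bash
# A JSON list of local models confirms Ollama is up at this endpoint.
curl -s http://localhost:11434/api/tags
```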
## Provider Availability Reminder

- **Community:** OpenAI, Anthropic, Gemini, Mistral, Azure OpenAI, Ollama
- **Enterprise:** everything above, plus Bedrock and custom providers
## Hot Reloading

Config changes are picked up automatically through cache invalidation. The default cache TTL is 30 seconds, so edits to your config file take effect within 30 seconds.
### How Hot Reload Works

1. The orchestrator caches the parsed config file in memory with a 30-second TTL.
2. On the next LLM request after the cache expires, the config file is re-read from disk.
3. If the file contents have changed, the new configuration is parsed and applied atomically.
4. Existing in-flight requests continue with the old configuration; only new requests use the updated config.

You can observe a reload in the logs, as sketched below.
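A minimal sketch (the `sed` edit, the `orchestrator.log` path, and the provider count are illustrative; the log-line format follows the Quick Start example):

```bash
# Disable gemini in place (edit the file however you prefer).
sed -i '/gemini:/,/enabled:/ s/enabled: true/enabled: false/' axonflow.yaml

# Within the 30-second TTL, the next LLM request re-reads the file.
tail -f orchestrator.log | grep "\[LLM Config\]"
# [LLM Config] Loaded 4 providers from config_file (tenant: default)
```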
### Timing Expectations
| Scenario | Maximum Delay |
|---|---|
| Add a new provider | Up to 30 seconds |
| Change model or priority | Up to 30 seconds |
| Disable a provider | Up to 30 seconds |
| Update API key (via env var) | Requires orchestrator restart (env vars are read at startup) |
No manual refresh is needed: edit the config file and the orchestrator picks up the change on the next request after the cache expires. To force an immediate reload, restart the orchestrator: `docker compose restart orchestrator`.
## Configuration Validation

At startup, AxonFlow verifies that the configured file exists, is readable, and parses as valid YAML before installing the loader.

Provider-specific checks happen later, when the runtime loads config-file values into live provider configuration. In practice, missing required values (an Ollama `endpoint`, Azure OpenAI `deployment_name`, or Bedrock `region` and `model`) cause that provider to be skipped.
### What Is Checked

- File exists and is accessible
- File path is a file, not a directory
- YAML syntax is valid
- Provider-specific required values are present when the runtime loads them
- Unset environment variables referenced in the file expand to empty strings unless a `:-default` value is provided
### Example Validation Errors

Invalid YAML syntax:

```text
[ERROR] Failed to parse config file: yaml: line 12: did not find expected key
```

Missing required Bedrock fields at runtime:

```text
[LLM Config] WARNING: Bedrock provider requires both region and model.
Got region="us-east-1", model="" - provider disabled
```

You can validate your config file before deploying by starting the orchestrator with debug logging:

```bash
AXONFLOW_LOG_LEVEL=debug AXONFLOW_CONFIG_FILE=./axonflow.yaml ./orchestrator
```
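To catch YAML syntax errors without starting the orchestrator at all, any standalone YAML parser will do; for example, assuming Python with PyYAML is installed:

```bash
# ${VAR} placeholders are plain YAML scalars, so this checks syntax only.
python3 -c "import yaml; yaml.safe_load(open('axonflow.yaml')); print('YAML OK')"
```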
### Common Errors

| Error Message | Cause | Solution |
|---|---|---|
| `Config file not found` | Incorrect file path | Check the `AXONFLOW_CONFIG_FILE` path |
| `Permission denied` | File not readable | Check file permissions |
| `Config path is a directory` | Path points to a folder | Use the path to the YAML file |
| `Failed to parse config file` | Invalid YAML | Validate the YAML syntax |
| `Bedrock provider requires region` | Missing `region` | Add `region` to the Bedrock config |
| `Ollama provider requires endpoint` | Missing `endpoint` | Add `endpoint` to the config |
| `Azure OpenAI provider requires endpoint, deployment_name, and api_key` | Incomplete Azure OpenAI config | Set all three values before enabling the provider |
## Docker Deployment

Mount your config file into the container:

```yaml
# docker-compose.yaml
services:
  orchestrator:
    image: axonflow/orchestrator:latest
    environment:
      - AXONFLOW_CONFIG_FILE=/config/axonflow.yaml
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
    volumes:
      - ./axonflow.yaml:/config/axonflow.yaml:ro
```
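Then bring the stack up and confirm the file was loaded (the grep pattern matches the startup log shown in the Quick Start):

```bash
docker compose up -d
docker compose logs orchestrator | grep "Config File"
# [Config File] Config file loader initialized: /config/axonflow.yaml
```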
## Kubernetes Deployment

Use a ConfigMap for your configuration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: axonflow-config
data:
  axonflow.yaml: |
    version: "1.0"
    llm_providers:
      openai:
        enabled: true
        credentials:
          api_key: ${OPENAI_API_KEY}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orchestrator
spec:
  # selector and matching pod labels are required for a valid Deployment
  selector:
    matchLabels:
      app: orchestrator
  template:
    metadata:
      labels:
        app: orchestrator
    spec:
      containers:
        - name: orchestrator
          env:
            - name: AXONFLOW_CONFIG_FILE
              value: /config/axonflow.yaml
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: llm-credentials
                  key: openai-api-key
          volumeMounts:
            - name: config
              mountPath: /config
      volumes:
        - name: config
          configMap:
            name: axonflow-config
```
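To deploy, create the Secret referenced by `secretKeyRef`, then apply the manifests (a sketch; it assumes both manifests above are saved as `axonflow-k8s.yaml`):

```bash
# Secret name and key must match the Deployment's secretKeyRef.
kubectl create secret generic llm-credentials \
  --from-literal=openai-api-key="sk-xxx"

kubectl apply -f axonflow-k8s.yaml
kubectl logs deployment/orchestrator | grep "Config File"
```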
## Migrating from Environment Variables

If you're currently using environment variables, migration is straightforward:

### Before (Environment Variables)

```bash
export OPENAI_API_KEY=sk-xxx
export ANTHROPIC_API_KEY=sk-ant-xxx
export BEDROCK_REGION=us-east-1
export BEDROCK_MODEL=anthropic.claude-sonnet-4-20250514-v1:0
```
### After (Config File)

```yaml
# axonflow.yaml
version: "1.0"
llm_providers:
  openai:
    enabled: true
    credentials:
      api_key: ${OPENAI_API_KEY}   # Still uses env var for the secret
  anthropic:
    enabled: true
    credentials:
      api_key: ${ANTHROPIC_API_KEY}
  bedrock:
    enabled: true
    config:
      region: us-east-1
      model: anthropic.claude-sonnet-4-20250514-v1:0
```

```bash
export AXONFLOW_CONFIG_FILE=/path/to/axonflow.yaml
export OPENAI_API_KEY=sk-xxx        # Secrets stay in env vars
export ANTHROPIC_API_KEY=sk-ant-xxx
```

Keep API keys in environment variables and reference them with `${VAR_NAME}`; this is more secure than hardcoding credentials in config files.
## See Also

- LLM Providers Overview - Provider comparison and selection guide
- AWS Bedrock Setup - Detailed Bedrock configuration
- Ollama Setup - Self-hosted LLM deployment
