Provider And Credential Matrix
AxonFlow supports several LLM providers, but the naming and credential models are not identical across every surface. This page exists to remove the most common sources of confusion:
- runtime provider type versus portal-managed provider name
- model name versus deployment name
- environment-variable setup versus portal-managed secret references
- community-available providers versus paid-tier providers
If your team is building a multi-provider AI system, this page should save you from the “we configured the wrong provider name in the wrong place” class of mistakes.
Provider Matrix
| Runtime provider type | Portal-managed name | Community runtime | Paid tier runtime | Typical credential model | Common model naming note |
|---|---|---|---|---|---|
| openai | openai | Yes | Yes | API key | model names such as gpt-4o |
| anthropic | anthropic | Yes | Yes | API key | model names such as claude-sonnet-4-20250514 |
| gemini | not currently portal-managed | Yes | Yes | Google API key | model names such as gemini-2.5-pro |
| azure-openai | not currently portal-managed | Yes | Yes | endpoint, API key, deployment name, API version | deployment name is often more important than base model name |
| ollama | ollama | Yes | Yes | no API key by default, endpoint-based | local/self-hosted model tags such as llama3.2:latest |
| bedrock | bedrock | No | Yes | AWS credentials / role | model IDs such as anthropic.claude-sonnet-4-20250514-v1:0 |
| custom | custom | No | Yes | implementation-specific | your adapter decides the naming contract |
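For illustration, the availability columns in the matrix can be encoded as a small lookup table. The names and structure below are hypothetical, not an actual AxonFlow data structure:

```python
# Hypothetical encoding of the matrix above; not a real AxonFlow structure.
PROVIDER_MATRIX = {
    "openai":       {"portal_managed": True,  "community": True,  "paid": True},
    "anthropic":    {"portal_managed": True,  "community": True,  "paid": True},
    "gemini":       {"portal_managed": False, "community": True,  "paid": True},
    "azure-openai": {"portal_managed": False, "community": True,  "paid": True},
    "ollama":       {"portal_managed": True,  "community": True,  "paid": True},
    "bedrock":      {"portal_managed": True,  "community": False, "paid": True},
    "custom":       {"portal_managed": True,  "community": False, "paid": True},
}

def available_on_community(provider: str) -> bool:
    """True if the provider type runs on the community runtime tier."""
    return PROVIDER_MATRIX[provider]["community"]
```

A pre-deployment check like this can catch "we picked a paid-tier-only provider on a community runtime" before anything is wired up.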
What The Codebase Currently Does
In the runtime, the current provider-type constants are:
openai, anthropic, bedrock, ollama, gemini, azure-openai, custom
In the current customer portal provider-management API, the accepted managed provider names are narrower:
bedrock, ollama, openai, anthropic, custom
That means the runtime can support more providers than the current portal CRUD surface actively manages. gemini and azure-openai are runtime-only and not yet portal-managed.
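The gap between the two lists can be expressed as a set difference. The constant names here are illustrative, not symbols from the codebase:

```python
# Illustrative constants mirroring the two lists above; not real AxonFlow symbols.
RUNTIME_PROVIDER_TYPES = {
    "openai", "anthropic", "bedrock", "ollama", "gemini", "azure-openai", "custom",
}
PORTAL_MANAGED_NAMES = {"bedrock", "ollama", "openai", "anthropic", "custom"}

# Providers the runtime supports but the portal CRUD surface does not manage.
runtime_only = RUNTIME_PROVIDER_TYPES - PORTAL_MANAGED_NAMES
```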
Credential Patterns By Provider
OpenAI
- primary credential: API key
- common env vars: OPENAI_API_KEY, OPENAI_MODEL
- best fit: fast cloud start, proxy mode, routed workflows
Anthropic
- primary credential: API key
- common env vars: ANTHROPIC_API_KEY, ANTHROPIC_MODEL
- best fit: long-context reasoning, high-quality text workflows
Gemini
- primary credential: Google API key
- common env vars: GOOGLE_API_KEY, GOOGLE_MODEL
- best fit: Google ecosystem and multimodal-heavy workloads
Azure OpenAI
- primary credentials: endpoint plus API key
- common env vars: AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, AZURE_OPENAI_DEPLOYMENT_NAME, AZURE_OPENAI_API_VERSION
- best fit: Azure-first enterprises that already govern model access through Azure
Ollama
- primary credential model: endpoint access, usually no remote API key
- common env vars: OLLAMA_ENDPOINT, OLLAMA_MODEL
- best fit: local inference, air-gapped setups, or self-hosted model control
Bedrock
- primary credential model: AWS role or AWS credentials
- best fit: paid-tier regulated AWS estates and managed foundation-model access
Custom
- primary credential model: implementation-specific
- best fit: paid-tier teams that need a proprietary or private model gateway inside AxonFlow
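A preflight check can confirm the common env vars listed above are actually set before the runtime starts. This is a hedged sketch built only from the variable names on this page, not an AxonFlow utility:

```python
import os

# Hypothetical mapping of provider type -> the env vars this page lists as common.
# Optional model-selection vars are omitted; only credential/endpoint vars are checked.
REQUIRED_ENV_VARS = {
    "openai": ["OPENAI_API_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "gemini": ["GOOGLE_API_KEY"],
    "azure-openai": [
        "AZURE_OPENAI_ENDPOINT",
        "AZURE_OPENAI_API_KEY",
        "AZURE_OPENAI_DEPLOYMENT_NAME",
        "AZURE_OPENAI_API_VERSION",
    ],
    "ollama": ["OLLAMA_ENDPOINT"],
    # bedrock and custom are excluded: AWS roles and adapter-specific
    # credentials are not plain env-var lookups.
}

def missing_env_vars(provider: str) -> list[str]:
    """Return the common env vars for `provider` that are not set."""
    return [v for v in REQUIRED_ENV_VARS.get(provider, []) if not os.environ.get(v)]
```

Running this once at startup turns a vague "provider not configured" failure into a named list of missing variables.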
The Naming Pitfalls That Cause Real Problems
Azure OpenAI
The runtime provider type is azure-openai, but the model call often uses your Azure deployment name, not only a raw OpenAI model string. Engineers frequently set the provider correctly and still fail because the deployment name is wrong.
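The reason the deployment name dominates is visible in Azure OpenAI's documented request shape: the URL path is addressed by your deployment name, not the base model name. The helper below is a sketch with placeholder values, following Azure's published URL pattern:

```python
# Sketch of why the deployment name matters for azure-openai: the request path
# is keyed on your *deployment* name. Endpoint/deployment values are placeholders.
def azure_chat_url(endpoint: str, deployment: str, api_version: str) -> str:
    """Build the chat-completions URL for an Azure OpenAI deployment."""
    return (
        f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )
```

If the deployment name is misspelled, the provider type, endpoint, and API key can all be correct and the call still fails, because Azure cannot resolve the path.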
Portal management versus runtime support
A provider can be runtime-supported without being fully exposed through the current portal CRUD layer. That is why this matrix separates runtime provider type from portal-managed name.
Bedrock and custom providers
Both require a paid tier in the current license-gating code path. Evaluation is good for proving many advanced workflows, but it does not unlock every paid-provider scenario.
Recommended Reading Path
Start with:
- LLM Providers Overview
- Provider Routing
- the provider-specific setup page you actually plan to deploy
Then use Deployment Mode Matrix and Community vs Evaluation vs Enterprise to decide whether your planned provider mix fits the tier and operating model you want.
