Provider And Credential Matrix

AxonFlow supports several LLM providers, but the naming and credential models are not identical across every surface. This page exists to remove the most common sources of confusion:

  • runtime provider type versus portal-managed provider name
  • model name versus deployment name
  • environment-variable setup versus portal-managed secret references
  • community-available providers versus paid-tier providers

If your team is building a multi-provider AI system, this page should save you from the “we configured the wrong provider name in the wrong place” class of mistakes.

Provider Matrix

| Runtime provider type | Portal-managed name | Community runtime | Paid tier runtime | Typical credential model | Common model naming note |
|---|---|---|---|---|---|
| openai | openai | Yes | Yes | API key | model names such as gpt-4o |
| anthropic | anthropic | Yes | Yes | API key | model names such as claude-sonnet-4-20250514 |
| gemini | not currently portal-managed | Yes | Yes | Google API key | model names such as gemini-2.5-pro |
| azure-openai | not currently portal-managed | Yes | Yes | endpoint, API key, deployment name, API version | deployment name is often more important than base model name |
| ollama | ollama | Yes | Yes | no API key by default, endpoint-based | local/self-hosted model tags such as llama3.2:latest |
| bedrock | bedrock | No | Yes | AWS credentials / role | model IDs such as anthropic.claude-sonnet-4-20250514-v1:0 |
| custom | custom | No | Yes | implementation-specific | your adapter decides the naming contract |

What The Codebase Currently Does

In the runtime, the current provider-type constants are:

  • openai
  • anthropic
  • bedrock
  • ollama
  • gemini
  • azure-openai
  • custom

In the current customer portal provider-management API, the accepted managed provider names are narrower:

  • bedrock
  • ollama
  • openai
  • anthropic
  • custom

That means the runtime can support more providers than the current portal CRUD surface actively manages. gemini and azure-openai are runtime-only and not yet portal-managed.
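As a quick sanity check before wiring anything up, you can compare a provider name against both lists. The helper below is hypothetical (it is not an AxonFlow CLI command); the two lists simply mirror the runtime and portal names documented on this page.

```shell
# Hypothetical sanity-check helper; the lists mirror this page, not an API.
runtime_providers="openai anthropic bedrock ollama gemini azure-openai custom"
portal_providers="bedrock ollama openai anthropic custom"

check_provider() {
  p="$1"
  case " $runtime_providers " in
    *" $p "*) echo "$p: runtime-supported" ;;
    *)        echo "$p: not a runtime provider type"; return 1 ;;
  esac
  case " $portal_providers " in
    *" $p "*) echo "$p: portal-managed" ;;
    *)        echo "$p: runtime-only, not yet portal-managed" ;;
  esac
}

check_provider gemini   # runtime-supported, but runtime-only
check_provider openai   # runtime-supported and portal-managed
```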

Credential Patterns By Provider

OpenAI

  • primary credential: API key
  • common env vars: OPENAI_API_KEY, OPENAI_MODEL
  • best fit: fast cloud start, proxy mode, routed workflows

Anthropic

  • primary credential: API key
  • common env vars: ANTHROPIC_API_KEY, ANTHROPIC_MODEL
  • best fit: long-context reasoning, high-quality text workflows

Gemini

  • primary credential: Google API key
  • common env vars: GOOGLE_API_KEY, GOOGLE_MODEL
  • best fit: Google ecosystem and multimodal-heavy workloads
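The three API-key providers above follow the same shape; a combined illustrative setup using the environment variables listed in this section (the key values are placeholders, and the model values are the examples from the matrix):

```shell
# Placeholder keys; substitute your real credentials via a secret manager.
export OPENAI_API_KEY="sk-..."
export OPENAI_MODEL="gpt-4o"

export ANTHROPIC_API_KEY="sk-ant-..."
export ANTHROPIC_MODEL="claude-sonnet-4-20250514"

export GOOGLE_API_KEY="AIza..."
export GOOGLE_MODEL="gemini-2.5-pro"
```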

Azure OpenAI

  • primary credentials: endpoint plus API key
  • common env vars:
    • AZURE_OPENAI_ENDPOINT
    • AZURE_OPENAI_API_KEY
    • AZURE_OPENAI_DEPLOYMENT_NAME
    • AZURE_OPENAI_API_VERSION
  • best fit: Azure-first enterprises that already govern model access through Azure
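An illustrative setup for the four variables above. The endpoint, deployment name, and API version shown are placeholders; substitute the values from your own Azure OpenAI resource, and note that the deployment name is whatever you named the deployment, not the base model string.

```shell
# Placeholder values; copy the real ones from your Azure OpenAI resource.
export AZURE_OPENAI_ENDPOINT="https://my-resource.openai.azure.com"
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_DEPLOYMENT_NAME="prod-gpt4o"   # your deployment's name
export AZURE_OPENAI_API_VERSION="2024-06-01"       # example API version
```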

Ollama

  • primary credential model: endpoint access, usually no remote API key
  • common env vars:
    • OLLAMA_ENDPOINT
    • OLLAMA_MODEL
  • best fit: local inference, air-gapped setups, or self-hosted model control
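A minimal local setup, assuming a default Ollama install (Ollama listens on port 11434 by default) and the example model tag from the matrix:

```shell
# No API key needed for a default local Ollama install.
export OLLAMA_ENDPOINT="http://localhost:11434"
export OLLAMA_MODEL="llama3.2:latest"
```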

Bedrock

  • primary credential model: AWS role or AWS credentials
  • best fit: paid-tier regulated AWS estates and managed foundation-model access
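This page does not define Bedrock-specific environment variables; Bedrock clients typically resolve credentials through the standard AWS SDK credential chain. The variables below are the standard AWS SDK ones with placeholder values, shown only as an illustration; in regulated estates an IAM role (instance profile, ECS task role, or IRSA) is usually preferable to static keys.

```shell
# Standard AWS SDK variables (placeholders) -- prefer an IAM role in production.
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"
```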

Custom

  • primary credential model: implementation-specific
  • best fit: paid-tier teams that need a proprietary or private model gateway inside AxonFlow

The Naming Pitfalls That Cause Real Problems

Azure OpenAI

The runtime provider type is azure-openai, but the model call often uses your Azure deployment name, not only a raw OpenAI model string. Engineers frequently set the provider correctly and still fail because the deployment name is wrong.
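A concrete sketch of the failure mode, using hypothetical deployment names: the base model string only coincides with the deployment name if you happened to name the deployment after the model.

```shell
# Wrong (usually): "gpt-4o" is a base model name, which only works if your
# Azure deployment happens to be named exactly that.
export AZURE_OPENAI_DEPLOYMENT_NAME="gpt-4o"

# Right: use the name you gave the deployment in your Azure OpenAI resource
# (hypothetical value below).
export AZURE_OPENAI_DEPLOYMENT_NAME="prod-gpt4o"
```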

Portal management versus runtime support

A provider can be runtime-supported without being fully exposed through the current portal CRUD layer. That is why this matrix separates runtime provider type from portal-managed name.

Bedrock and custom providers

Both require a paid tier in the current license-gating code path. An evaluation license is good for proving out many advanced workflows, but it does not unlock every paid-provider scenario.

If you are choosing or configuring a provider, start with:

  1. LLM Providers Overview
  2. Provider Routing
  3. the provider-specific setup page you actually plan to deploy

Then use Deployment Mode Matrix and Community vs Evaluation vs Enterprise to decide whether your planned provider mix fits the tier and operating model you want.