Enterprise Provider Features
Enterprise unlocks Bedrock, custom providers, runtime provider operations, and the governance and operational controls teams usually need once AxonFlow moves from pilot to company-wide deployment.
This page is intentionally public because it helps engineering leaders understand what changes when they move from community experimentation to a real production rollout.
What Changes at Enterprise
| Capability | Community | Enterprise |
|---|---|---|
| OpenAI, Anthropic, Gemini, Azure OpenAI, Ollama | ✅ | ✅ |
| AWS Bedrock | ❌ | ✅ |
| Custom providers | ❌ | ✅ |
| YAML and env-based provider config | ✅ | ✅ |
| Database-backed runtime provider config | ❌ | ✅ |
| Customer portal operations for providers | ❌ | ✅ |
| Runtime credential handling and rotation workflows | ❌ | ✅ |
| Enterprise routing and production operations | Limited | ✅ |
Why This Matters in Practice
Community is enough to prove the governance model and build strong early applications. Enterprise becomes important when the deployment starts looking like this:
- multiple teams or business units share the platform
- provider credentials must be handled operationally, not just as static env vars
- regulated workloads need Bedrock or stricter cloud controls
- platform teams need runtime changes without relying on image rebuilds or ad hoc host access
- procurement and risk teams need stronger governance guarantees
Typical Enterprise Operating Model
Enterprise teams usually progress through three stages:
- one or two providers configured in YAML or env vars
- a wider multi-provider production rollout
- runtime-managed providers, credential operations, and enterprise-only routing
That pattern is common in large-company platform groups because the technical integration is only the first step. Operating the provider estate safely becomes the harder problem.
AWS Bedrock
Enterprise adds AWS Bedrock as a governed provider. The Bedrock integration supports:
- Region-specific deployment -- configure a specific AWS region (e.g., `us-east-1`, `eu-west-1`) to keep inference traffic within data residency boundaries.
- Inference profile support -- use Bedrock cross-region inference profiles (e.g., `eu.anthropic.claude-sonnet-4-5-20250929-v1:0`) for cost-optimized or latency-optimized routing within AWS.
- Model family auto-detection -- the provider automatically detects the model family (Anthropic, Amazon, Meta, Mistral) from the model ID to handle request and response format differences. This can be overridden for custom or private models.
- IAM-based credential management -- Bedrock uses AWS IAM roles and SDK credentials rather than API keys, which fits the credential management model most regulated AWS deployments already use.
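To make the auto-detection idea concrete, here is a minimal sketch of how a model family might be inferred from a Bedrock model ID, including the region prefix used by cross-region inference profiles. The function name and parsing rule are illustrative assumptions, not AxonFlow's actual implementation.

```python
# Hypothetical sketch: infer a Bedrock model family from a model ID.
# The family list and parsing rule are assumptions for illustration only.
KNOWN_FAMILIES = {"anthropic", "amazon", "meta", "mistral"}

def detect_model_family(model_id: str):
    """Return the model family for a Bedrock model ID, or None if unknown.

    Handles both plain IDs ("anthropic.claude-sonnet-4-5-...") and
    cross-region inference profile IDs ("eu.anthropic.claude-...").
    Returning None lets callers fall back to an explicit override,
    as the docs note for custom or private models.
    """
    parts = model_id.split(".")
    # Skip a leading region prefix such as "eu" or "us" in profile IDs.
    if len(parts) > 1 and parts[0] not in KNOWN_FAMILIES and parts[1] in KNOWN_FAMILIES:
        parts = parts[1:]
    return parts[0] if parts and parts[0] in KNOWN_FAMILIES else None
```

In this sketch, an unrecognized ID yields `None` rather than a guess, which is what makes a manual override path necessary.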
For more on setup, see AWS Bedrock Setup.
Custom Providers
Enterprise supports registering custom LLM providers for organizations that operate internal model gateways, fine-tuned model endpoints, or non-standard provider APIs. Custom providers are configured through the portal's LLM provider management interface with a provider name, endpoint configuration, and credential reference.
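As an illustrative sketch only, a custom provider registration might carry information like the following. The field names are assumptions, not AxonFlow's actual portal schema; the endpoint and ARN values are placeholders.

```yaml
# Hypothetical shape of a custom provider entry -- illustrative only.
name: internal-gateway
endpoint:
  base_url: https://llm-gateway.internal.example.com/v1
  api_format: openai-compatible
credentials:
  secret_ref: arn:aws:secretsmanager:us-east-1:123456789012:secret:llm-gateway-key
```

The key point is the three-part shape the docs describe: a provider name, endpoint configuration, and a credential reference rather than an inline secret.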
Runtime Provider Management
In community, providers are configured through YAML files and environment variables. Enterprise adds database-backed provider configuration that can be managed at runtime through the customer portal.
This means platform teams can:
- Add, update, or disable providers without redeploying the platform or editing configuration files on hosts.
- Set per-provider cost rates (`cost_per_1k_input_tokens`, `cost_per_1k_output_tokens`) for accurate cost tracking and budget enforcement.
- Configure provider priority and weight for routing decisions when multiple providers are available.
- Store credentials via AWS Secrets Manager ARN rather than environment variables, with rotation support.
- Monitor provider health through the portal, including health status, last health check time, and last error.
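The per-1k-token rates above imply a simple cost formula. As a sketch (the function is illustrative; the rate values are made up for the example):

```python
# Illustrative cost calculation from per-1k-token rates.
def request_cost(input_tokens: int, output_tokens: int,
                 cost_per_1k_input_tokens: float,
                 cost_per_1k_output_tokens: float) -> float:
    """Return the cost of one request, in the same currency unit as the rates."""
    return ((input_tokens / 1000) * cost_per_1k_input_tokens
            + (output_tokens / 1000) * cost_per_1k_output_tokens)

# Example: 1,200 input tokens at 0.003/1k plus 300 output tokens at 0.015/1k
cost = request_cost(1200, 300, 0.003, 0.015)
```

Because the rates live in the provider record rather than in code, updating pricing is a runtime change through the portal, not a redeploy.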
Each provider configuration is tenant-scoped, so different tenants on a shared platform can have different provider configurations, cost rates, and routing weights.
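One plausible way priority and weight could combine for routing is "highest priority tier wins, ties broken by weighted random choice." This is a hedged sketch of that idea, not AxonFlow's actual routing algorithm; the field names mirror the concepts above but the semantics (higher priority preferred) are an assumption.

```python
import random

# Hypothetical priority-then-weight provider selection -- an assumption,
# not the platform's actual routing logic.
def pick_provider(providers, rng=random):
    """Pick among healthy providers: highest priority wins,
    ties are broken by weighted random choice."""
    healthy = [p for p in providers if p.get("healthy", True)]
    if not healthy:
        raise RuntimeError("no healthy providers available")
    top = max(p["priority"] for p in healthy)
    tier = [p for p in healthy if p["priority"] == top]
    weights = [p.get("weight", 1) for p in tier]
    return rng.choices(tier, weights=weights, k=1)[0]
```

Because provider records are tenant-scoped, each tenant's routing would see only its own priorities and weights in a scheme like this.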
Who Usually Cares About This Page
- Staff and principal engineers designing the target architecture
- platform and security teams deciding how provider operations should work
- engineering leaders who need to justify why community is great for buildout but not enough for scaled internal adoption
