Enterprise Provider Features
Enterprise Feature
Runtime provider management via the Customer Portal is available in AxonFlow Enterprise Edition.
Overview
Enterprise customers get additional LLM provider capabilities through the Customer Portal:
| Feature | Community | Enterprise |
|---|---|---|
| All LLM Providers (OpenAI, Anthropic, Bedrock, Gemini, Ollama) | ✅ | ✅ |
| Multi-Provider Routing | ✅ | ✅ |
| Automatic Failover | ✅ | ✅ |
| Circuit Breaker | ✅ | ✅ |
| YAML Configuration | ✅ | ✅ |
| Customer Portal UI | ❌ | ✅ |
| Runtime Configuration | ❌ | ✅ |
| Secure Credential Storage | ❌ | ✅ |
| Per-Provider Metrics | ❌ | ✅ |
| Cost Tracking Dashboard | ❌ | ✅ |
| API Key Rotation | ❌ | ✅ |
Customer Portal UI
Enterprise customers can manage LLM providers through a web interface:
Provider Configuration
- Add/remove providers without code changes
- Update API keys securely
- Adjust routing weights in real time
- Changes take effect within 30 seconds
Real-Time Monitoring
- Per-provider request counts
- Latency percentiles (P50, P95, P99)
- Error rates and types
- Cost per provider/model
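The same numbers shown in the Portal UI can be pulled programmatically. The sketch below assumes a read-only metrics endpoint under the provider path; the exact path, query parameter, and response fields are illustrative, not documented API.

```bash
# Hypothetical read-only metrics endpoint -- the path, the "window" parameter,
# and the response fields are assumptions for illustration
curl -s "https://api.getaxonflow.com/v1/providers/openai/metrics?window=1h" \
  -H "Authorization: Bearer $AXONFLOW_API_KEY" | jq .
```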
Credential Management
- Secure API key storage (encrypted at rest)
- Key rotation without downtime
- Audit log for credential changes
- Role-based access control
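Key rotation can also be scripted against the same `PATCH /v1/providers/{name}` endpoint used in the Runtime Configuration section below. The `api_key` field name is an assumption for illustration; check the Portal API reference for the exact payload.

```bash
# Rotate the OpenAI key without downtime -- the "api_key" payload field is an
# assumed name; the PATCH endpoint itself is shown in the next section
curl -X PATCH https://api.getaxonflow.com/v1/providers/openai \
  -H "Authorization: Bearer $AXONFLOW_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"api_key": "'"$NEW_OPENAI_API_KEY"'"}'
```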
Runtime Configuration
Change provider settings without redeployment:
Via Customer Portal API
```bash
# Update provider weight
curl -X PATCH https://api.getaxonflow.com/v1/providers/openai \
  -H "Authorization: Bearer $AXONFLOW_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"weight": 0.3}'

# Disable a provider temporarily
curl -X PATCH https://api.getaxonflow.com/v1/providers/anthropic \
  -H "Authorization: Bearer $AXONFLOW_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"enabled": false}'

# Update routing strategy
curl -X PUT https://api.getaxonflow.com/v1/routing \
  -H "Authorization: Bearer $AXONFLOW_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"strategy": "cost-optimized"}'
```
Changes Propagate Automatically
1. The configuration change is saved in the Customer Portal.
2. The orchestrator polls for changes (30-second interval).
3. The new configuration is applied atomically.
4. No restarts or downtime are required.
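To confirm that a change has propagated, you can read the provider list back after the poll interval. The `GET /v1/providers` read endpoint below is an assumption inferred from the PATCH endpoints above, not a documented call.

```bash
# Assumed read endpoint -- check the updated weight/enabled values after the
# ~30-second poll interval has elapsed
curl -s https://api.getaxonflow.com/v1/providers \
  -H "Authorization: Bearer $AXONFLOW_API_KEY" | jq .
```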
Cost Tracking
Enterprise includes detailed cost analytics:
Per-Provider Costs
| Provider | Requests | Tokens | Cost |
|---|---|---|---|
| OpenAI | 45,000 | 12.3M | $234.56 |
| Anthropic | 32,000 | 8.1M | $123.45 |
| Bedrock | 23,000 | 5.6M | $78.90 |
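If you need this breakdown outside the dashboard, a cost export call along the lines of the sketch below is the likely shape; the `GET /v1/costs` endpoint and its query parameters are assumptions, shown only to illustrate pulling the table above into your own reporting.

```bash
# Hypothetical cost export -- endpoint and query parameters are assumptions
curl -s "https://api.getaxonflow.com/v1/costs?group_by=provider&period=30d" \
  -H "Authorization: Bearer $AXONFLOW_API_KEY" \
  -o provider-costs.json
```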
Cost Optimization Recommendations
The Customer Portal analyzes usage patterns and provides recommendations:
- "Route 20% more traffic to Bedrock to save $50/month"
- "Enable Ollama for development queries to reduce costs"
- "Consider Llama 3.1 for simple completions"
SLA Management
Set per-provider SLOs:
```yaml
# Enterprise configuration
providers:
  openai:
    slo:
      latency_p99: 5s
      error_rate: 0.1%
      availability: 99.9%
    alerts:
      - type: latency_threshold
        threshold: 5s
        channel: pagerduty
```
Alerting Integration
- PagerDuty
- Slack
- OpsGenie
- Custom webhooks
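For custom webhooks, your receiver gets an HTTP POST when an SLO threshold is breached. The payload below is a hypothetical shape used only to show how you might exercise a receiver locally; the real field names come from the Portal's webhook documentation.

```bash
# Simulate an alert delivery to a local receiver -- the JSON fields here are
# hypothetical, not the documented AxonFlow webhook schema
curl -X POST http://localhost:8080/axonflow-alerts \
  -H "Content-Type: application/json" \
  -d '{"provider": "openai", "alert": "latency_threshold", "observed_p99": "6.2s", "threshold": "5s"}'
```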
Getting Started
Step 1: Upgrade to Enterprise
Contact our sales team:
- Email: [email protected]
- Demo: getaxonflow.com/demo
Step 2: Access Customer Portal
After upgrading:
- Log in to app.getaxonflow.com
- Navigate to Settings > LLM Providers
- Configure providers via the UI
Step 3: Migrate from YAML (Optional)
Existing YAML configurations can be imported:
```bash
axonctl providers import --file axonflow.yaml
```
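After the import, it is worth confirming what the Portal now holds. The `providers list` subcommand below is an assumed part of `axonctl`, shown for illustration; fall back to Settings > LLM Providers in the UI if it differs.

```bash
# "providers list" is an assumed verification subcommand, not documented here
axonctl providers list
```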
Pricing
Enterprise LLM features are included in all paid tiers:
SaaS (AxonFlow-Hosted)
| Tier | Monthly | Included Requests |
|---|---|---|
| Starter | $5,000 | 500K/month |
| Professional | $15,000 | 3M/month |
| Enterprise | $50,000 | 10M/month |
In-VPC (Self-Hosted)
| Tier | Monthly | Max Nodes |
|---|---|---|
| Professional | $20,000 | 10 |
| Enterprise | $60,000 | 50 |
| Enterprise Plus | Custom | Unlimited |
Next Steps
- LLM Providers Overview - All supported providers
- AWS Bedrock Setup - HIPAA-compliant deployment
- Custom Provider SDK - Build custom providers