Enterprise Provider Features

Enterprise Feature

Runtime provider management via Customer Portal is available in AxonFlow Enterprise Edition.

Contact Sales | Request Demo

Overview

Enterprise customers get additional LLM provider capabilities through the Customer Portal:

| Feature | Community | Enterprise |
|---|---|---|
| All LLM Providers (OpenAI, Anthropic, Bedrock, Gemini, Ollama) | ✓ | ✓ |
| Multi-Provider Routing | ✓ | ✓ |
| Automatic Failover | ✓ | ✓ |
| Circuit Breaker | ✓ | ✓ |
| YAML Configuration | ✓ | ✓ |
| Customer Portal UI | ✗ | ✓ |
| Runtime Configuration | ✗ | ✓ |
| Secure Credential Storage | ✗ | ✓ |
| Per-Provider Metrics | ✗ | ✓ |
| Cost Tracking Dashboard | ✗ | ✓ |
| API Key Rotation | ✗ | ✓ |

Customer Portal UI

Enterprise customers can manage LLM providers through a web interface:

Provider Configuration

  • Add/remove providers without code changes
  • Update API keys securely
  • Adjust routing weights in real-time
  • Changes take effect within 30 seconds

Real-Time Monitoring

  • Per-provider request counts
  • Latency percentiles (P50, P95, P99)
  • Error rates and types
  • Cost per provider/model

Credential Management

  • Secure API key storage (encrypted at rest)
  • Key rotation without downtime
  • Audit log for credential changes
  • Role-based access control
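Key rotation can also be scripted. The sketch below is illustrative only: the `/credentials` endpoint, its field names, and the `DRY_RUN` wrapper are assumptions, not documented AxonFlow API, patterned after the PATCH examples later on this page:

```shell
# Hypothetical key-rotation call (endpoint and fields are assumptions).
# With DRY_RUN set, the request target is printed instead of sent, so the
# call can be inspected without touching real credentials.
rotate_key() {
  local provider="$1" new_key="$2"
  if [ -n "$DRY_RUN" ]; then
    # Never echo the key itself; only show where it would be sent.
    echo "PUT https://api.getaxonflow.com/v1/providers/$provider/credentials"
  else
    curl -X PUT "https://api.getaxonflow.com/v1/providers/$provider/credentials" \
      -H "Authorization: Bearer $AXONFLOW_API_KEY" \
      -d "{\"api_key\": \"$new_key\"}"
  fi
}
```

Because the portal swaps credentials atomically, a rotation scripted this way needs no coordinated restart of the orchestrator.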

Runtime Configuration

Change provider settings without redeployment:

Via Customer Portal API

# Update provider weight
curl -X PATCH https://api.getaxonflow.com/v1/providers/openai \
  -H "Authorization: Bearer $AXONFLOW_API_KEY" \
  -d '{"weight": 0.3}'

# Disable a provider temporarily
curl -X PATCH https://api.getaxonflow.com/v1/providers/anthropic \
  -H "Authorization: Bearer $AXONFLOW_API_KEY" \
  -d '{"enabled": false}'

# Update routing strategy
curl -X PUT https://api.getaxonflow.com/v1/routing \
  -H "Authorization: Bearer $AXONFLOW_API_KEY" \
  -d '{"strategy": "cost-optimized"}'

Changes Propagate Automatically

  1. Configuration change saved to Customer Portal
  2. Orchestrator polls for changes (30-second interval)
  3. New configuration applied atomically
  4. No restarts or downtime required
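The propagation cycle above can be sketched as a compare-and-apply loop. This is conceptual only; `check_and_apply` and the version strings are illustrative, not orchestrator internals:

```shell
# Conceptual sketch of steps 2-3: compare the portal's config version
# with the running one and apply only when it has changed.
check_and_apply() {
  local running="$1" portal="$2"
  if [ "$portal" != "$running" ]; then
    echo "apply $portal"    # step 3: swap in the new config atomically
  else
    echo "unchanged"        # nothing to do until the next poll
  fi
}

# The orchestrator effectively repeats this every 30 seconds:
#   while true; do check_and_apply "$running" "$(fetch_portal_version)"; sleep 30; done
```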

Cost Tracking

Enterprise includes detailed cost analytics:

Per-Provider Costs

| Provider | Requests | Tokens | Cost |
|---|---|---|---|
| OpenAI | 45,000 | 12.3M | $234.56 |
| Anthropic | 32,000 | 8.1M | $123.45 |
| Bedrock | 23,000 | 5.6M | $78.90 |

Cost Optimization Recommendations

The Customer Portal analyzes usage patterns and provides recommendations:

  • "Route 20% more traffic to Bedrock to save $50/month"
  • "Enable Ollama for development queries to reduce costs"
  • "Consider Llama 3.1 for simple completions"
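A recommendation like the first one can be applied through the same PATCH endpoint used for runtime configuration. The small dry-run wrapper below is illustrative (`DRY_RUN`, the echo format, and the 0.4 weight are not part of any AxonFlow tooling); it makes the change inspectable before anything is sent:

```shell
# Apply a cost recommendation by shifting routing weight, using the
# documented PATCH /v1/providers/{name} endpoint. The weight value
# here is illustrative.
set_weight() {
  local provider="$1" weight="$2"
  if [ -n "$DRY_RUN" ]; then
    echo "PATCH https://api.getaxonflow.com/v1/providers/$provider weight=$weight"
  else
    curl -X PATCH "https://api.getaxonflow.com/v1/providers/$provider" \
      -H "Authorization: Bearer $AXONFLOW_API_KEY" \
      -d "{\"weight\": $weight}"
  fi
}
```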

SLA Management

Set per-provider SLOs:

# Enterprise configuration
providers:
  openai:
    slo:
      latency_p99: 5s
      error_rate: 0.1%
      availability: 99.9%
    alerts:
      - type: latency_threshold
        threshold: 5s
        channel: pagerduty

Alerting Integration

  • PagerDuty
  • Slack
  • OpsGenie
  • Custom webhooks
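A custom webhook channel could be configured alongside the per-provider alerts. The fields below are an illustrative sketch extrapolated from the SLO example above, not a documented schema:

```yaml
# Illustrative: route error-rate alerts to a custom webhook receiver.
# Field names are assumptions; the URL is a placeholder for your endpoint.
alerts:
  - type: error_rate_threshold
    threshold: 1%
    channel: webhook
    url: https://example.com/axonflow-alerts
```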

Getting Started

Step 1: Upgrade to Enterprise

Contact our sales team using the Contact Sales or Request Demo links at the top of this page.

Step 2: Access Customer Portal

After upgrading:

  1. Log in to app.getaxonflow.com
  2. Navigate to Settings > LLM Providers
  3. Configure providers via the UI

Step 3: Migrate from YAML (Optional)

Existing YAML configurations can be imported:

axonctl providers import --file axonflow.yaml
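For reference, an importable axonflow.yaml might look like the following. The schema shown is illustrative, extrapolated from the SLO example and the `weight` field used by the runtime API on this page; consult your existing configuration for the authoritative format:

```yaml
# Illustrative provider configuration for import (not a complete schema).
providers:
  openai:
    api_key: ${OPENAI_API_KEY}
    weight: 0.5
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    weight: 0.5
```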

Pricing

Enterprise LLM features are included in all paid tiers:

SaaS (AxonFlow-Hosted)

| Tier | Monthly | Included Requests |
|---|---|---|
| Starter | $5,000 | 500K/month |
| Professional | $15,000 | 3M/month |
| Enterprise | $50,000 | 10M/month |

In-VPC (Self-Hosted)

| Tier | Monthly | Max Nodes |
|---|---|---|
| Professional | $20,000 | 10 |
| Enterprise | $60,000 | 50 |
| Enterprise Plus | Custom | Unlimited |