# Mistral AI Setup
Mistral is available in AxonFlow Community and Enterprise. As a leading European AI company based in France, Mistral offers high-performance models with competitive pricing and EU data residency by default.
## Runtime Defaults

AxonFlow's Mistral provider defaults to:

- Provider name: `mistral`
- Default model: `mistral-small-latest`
- Default endpoint: `https://api.mistral.ai`
## Environment Variables

```bash
export MISTRAL_API_KEY=your-mistral-api-key
export MISTRAL_MODEL=mistral-small-latest
export MISTRAL_TIMEOUT_SECONDS=120             # Request timeout in seconds (default: 120)
export MISTRAL_ENDPOINT=https://api.mistral.ai # Custom endpoint (e.g., self-hosted)
```
## YAML Configuration

```yaml
version: "1.0"
llm_providers:
  mistral:
    enabled: true
    credentials:
      api_key: ${MISTRAL_API_KEY}
    config:
      model: ${MISTRAL_MODEL:-mistral-small-latest}
```
## Models

AxonFlow accepts any model alias that the Mistral API supports. The `-latest` aliases automatically resolve to the newest version.
| Model alias | Tier | AxonFlow default |
|---|---|---|
| `mistral-small-latest` | Small (fast, cost-effective) | ✅ |
| `mistral-medium-latest` | Medium (balanced) | |
| `mistral-large-latest` | Large (most capable) | |
| `codestral-latest` | Code generation | |
| `ministral-8b-latest` | Lightweight, low latency | |
Mistral updates model versions and pricing frequently. For the current model list, pricing, and context windows, see the Mistral models documentation.

Use `-latest` aliases (e.g., `mistral-small-latest`) rather than pinned version IDs unless you need reproducibility. The aliases automatically pick up new model releases without config changes.
## Proxy Mode Example

### cURL

```bash
curl -X POST http://localhost:8080/api/request \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Explain the EU AI Act in two sentences.",
    "context": {
      "provider": "mistral"
    }
  }'
```
### Python

```python
import asyncio

from axonflow import AxonFlow  # v6.0.0+

async def main() -> None:
    async with AxonFlow(
        endpoint="http://localhost:8080",
        client_id="community",
    ) as client:
        response = await client.proxy_llm_call(
            query="Explain the EU AI Act in two sentences.",
            context={"provider": "mistral"},
        )
        print(response.data)

asyncio.run(main())
```
### Go

```go
import (
    "fmt"
    "log"

    axonflow "github.com/getaxonflow/axonflow-sdk-go/v5"
)

client := axonflow.NewClient(axonflow.AxonFlowConfig{
    Endpoint: "http://localhost:8080",
    ClientID: "community",
})
resp, err := client.ProxyLLMCall(axonflow.ProxyLLMCallRequest{
    Query:   "Explain the EU AI Act in two sentences.",
    Context: map[string]interface{}{"provider": "mistral"},
})
if err != nil {
    log.Fatal(err)
}
fmt.Println(resp.Data)
```
### TypeScript

```typescript
import { AxonFlow } from '@axonflow/sdk'; // v5.0.0+

const client = new AxonFlow({
  endpoint: 'http://localhost:8080',
  clientId: 'community',
});
const resp = await client.proxyLLMCall({
  query: 'Explain the EU AI Act in two sentences.',
  context: { provider: 'mistral' },
});
console.log(resp.data);
```
## Gateway Mode Example
Gateway mode lets you call Mistral directly while AxonFlow handles policy evaluation and audit logging.
```bash
# 1. Pre-check with AxonFlow
PRECHECK=$(curl -s -X POST http://localhost:8080/api/policy/pre-check \
  -H "Content-Type: application/json" \
  -d '{"client_id": "community", "query": "Analyze customer data"}')
CONTEXT_ID=$(echo "$PRECHECK" | jq -r '.context_id')

# 2. Call Mistral directly
RESPONSE=$(curl -s https://api.mistral.ai/v1/chat/completions \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral-small-latest",
    "messages": [{"role": "user", "content": "Analyze customer data"}],
    "max_tokens": 500
  }')

# 3. Audit the call
curl -X POST http://localhost:8080/api/audit/llm-call \
  -H "Content-Type: application/json" \
  -d "{
    \"client_id\": \"community\",
    \"context_id\": \"$CONTEXT_ID\",
    \"provider\": \"mistral\",
    \"model\": \"mistral-small-latest\",
    \"latency_ms\": 500,
    \"token_usage\": {\"prompt_tokens\": 10, \"completion_tokens\": 50, \"total_tokens\": 60}
  }"
```
## Multi-Provider Routing

Use Mistral alongside other providers with weighted routing:

```bash
export MISTRAL_API_KEY=...
export OPENAI_API_KEY=...
export LLM_ROUTING_STRATEGY=weighted
```
AxonFlow distributes requests across healthy providers based on configured weights. See Provider Routing for details.
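Conceptually, weighted routing amounts to sampling a provider in proportion to its weight, restricted to the currently healthy set. A minimal sketch (the weights and the `pick_provider` helper are illustrative assumptions, not AxonFlow's actual implementation):

```python
import random

def pick_provider(weights: dict[str, float], healthy: set[str],
                  rng: random.Random) -> str:
    """Sample a healthy provider with probability proportional to its weight."""
    candidates = {name: w for name, w in weights.items() if name in healthy}
    names = list(candidates)
    return rng.choices(names, weights=[candidates[n] for n in names], k=1)[0]

weights = {"mistral": 0.7, "openai": 0.3}
rng = random.Random(42)
picks = [pick_provider(weights, {"mistral", "openai"}, rng) for _ in range(10_000)]
print(round(picks.count("mistral") / len(picks), 2))  # close to 0.70
```

If a provider drops out of the healthy set, its traffic shifts to the remaining providers automatically, which is the failover behavior the routing strategies build on.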
## Current Capabilities

- Chat completions (text-in, text-out)
- Streaming (SSE)
- Code generation (via Codestral)
- Cost estimation
- Health monitoring

Planned (not yet implemented):

- Function calling / tool use
- Vision (Pixtral multimodal)
- JSON mode (`response_format`)
## Getting an API Key

1. Go to console.mistral.ai
2. Sign up or sign in
3. Navigate to API Keys in the left sidebar
4. Click Create new key
5. Copy the key and set it as `MISTRAL_API_KEY`
## Troubleshooting

| Error | Cause | Solution |
|---|---|---|
| 401 Unauthorized | Invalid or expired API key | Regenerate at console.mistral.ai |
| 429 Too Many Requests | Rate limit exceeded | Reduce request frequency or upgrade your Mistral plan |
| Provider not in health check | `MISTRAL_API_KEY` not set | Set the environment variable before starting AxonFlow |
| Provider limit exceeded | Community mode limits to 2 providers | Use an evaluation license or set `LLM_PROVIDERS=mistral` to prioritize |
| Timeout errors | Model overloaded | Increase `MISTRAL_TIMEOUT_SECONDS` or use `mistral-small-latest` |
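For 429 responses, a common client-side mitigation is to retry with exponential backoff rather than hammering the API. A hedged sketch (the `RateLimited` exception and `call_with_backoff` helper are illustrative, not part of any SDK):

```python
import time

class RateLimited(Exception):
    """Raised when the provider answers 429 Too Many Requests."""

def call_with_backoff(call, max_retries: int = 5, base_delay: float = 1.0,
                      sleep=time.sleep):
    """Retry `call` with exponential backoff when it raises RateLimited."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimited:
            if attempt == max_retries - 1:
                raise
            sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# Succeeds on the third attempt; `sleep` is injected so nothing waits here.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RateLimited()
    return "ok"

print(call_with_backoff(flaky, sleep=lambda s: None))  # → ok
```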
## Related Resources
- Provider Routing — Weighted, failover, and round-robin strategies
- Proxy Mode — Full proxy mode documentation
- Gateway Mode — Gateway mode documentation
- Getting Started — Quick start guide
