# Anthropic Claude Setup

Anthropic is available in AxonFlow Community and Enterprise. It is a strong option for long-context workflows, reasoning-heavy agent tasks, and teams that prefer Claude as the primary model family.
## Runtime Defaults

AxonFlow's Anthropic provider defaults to:

- Provider name: `anthropic`
- Default model: `claude-sonnet-4-20250514`
- Default base URL: `https://api.anthropic.com`

The runtime maps older Claude model constants onto the current-generation Claude 4 family via aliases, but it is still better to pin the exact model you want in configuration rather than rely on that mapping.
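The alias behavior can be pictured with a sketch like the following. The map contents and the `resolveModel` helper are illustrative only, not the runtime's actual table or API:

```typescript
// Illustrative only: older Claude model constants resolve to the
// current-generation default before a request is sent. The entries
// below are hypothetical, not AxonFlow's real alias table.
const MODEL_ALIASES: Record<string, string> = {
  'claude-3-5-sonnet-latest': 'claude-sonnet-4-20250514', // hypothetical entry
};

function resolveModel(requested: string): string {
  // Unknown names pass through unchanged, which is why pinning an exact
  // model string in configuration always wins.
  return MODEL_ALIASES[requested] ?? requested;
}
```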
## Environment Variables

```shell
export ANTHROPIC_API_KEY=sk-ant-your-api-key
export ANTHROPIC_MODEL=claude-sonnet-4-20250514
export ANTHROPIC_TIMEOUT_SECONDS=120                # Request timeout in seconds (default: 120)
export ANTHROPIC_ENDPOINT=https://api.anthropic.com # Custom endpoint for proxies
```
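Taken together, the variables above resolve roughly as sketched below. `resolveAnthropicConfig` is a hypothetical helper mirroring the documented defaults, not part of the AxonFlow SDK:

```typescript
// Hypothetical sketch of provider-setting resolution, mirroring the
// documented defaults; not the actual AxonFlow implementation.
function resolveAnthropicConfig(env: Record<string, string | undefined>) {
  return {
    model: env.ANTHROPIC_MODEL ?? 'claude-sonnet-4-20250514',
    endpoint: env.ANTHROPIC_ENDPOINT ?? 'https://api.anthropic.com',
    // Configured in seconds, converted to milliseconds for use in code.
    timeoutMs: Number(env.ANTHROPIC_TIMEOUT_SECONDS ?? '120') * 1000,
  };
}

// With no variables set, the documented defaults apply:
resolveAnthropicConfig({});
// → { model: 'claude-sonnet-4-20250514', endpoint: 'https://api.anthropic.com', timeoutMs: 120000 }
```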
## YAML Configuration

```yaml
version: "1.0"

llm_providers:
  anthropic:
    enabled: true
    credentials:
      api_key: ${ANTHROPIC_API_KEY}
    config:
      model: ${ANTHROPIC_MODEL:-claude-sonnet-4-20250514}
```
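The `${ANTHROPIC_MODEL:-claude-sonnet-4-20250514}` form follows shell-style defaulting: the literal after `:-` is used when the variable is unset or empty. In a shell the same expansion behaves as:

```shell
unset ANTHROPIC_MODEL
# Falls back to the literal default when the variable is unset or empty:
echo "${ANTHROPIC_MODEL:-claude-sonnet-4-20250514}"

export ANTHROPIC_MODEL=claude-opus-4-20250514
# Uses the variable's value once it is set:
echo "${ANTHROPIC_MODEL:-claude-sonnet-4-20250514}"
```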
## Good Fits
- Research and reasoning workflows
- Long-context enterprise copilots
- Teams that want a safety-focused Claude default in community mode
- Knowledge assistants, analysis pipelines, and review workflows that need Claude capability with production governance
## Proxy Mode

```typescript
import { AxonFlow } from '@axonflow/sdk';

const axonflow = new AxonFlow({
  endpoint: 'http://localhost:8080',
  clientId: process.env.AXONFLOW_CLIENT_ID,
  clientSecret: process.env.AXONFLOW_CLIENT_SECRET,
});

const response = await axonflow.proxyLLMCall({
  userToken: 'user-123',
  query: 'Explain the security tradeoffs of this architecture.',
  requestType: 'chat',
  context: {
    provider: 'anthropic',
    model: 'claude-sonnet-4-20250514',
  },
});

console.log(response.data);
```
## Gateway Mode

```typescript
import { AxonFlow } from '@axonflow/sdk';
import Anthropic from '@anthropic-ai/sdk';

const prompt = 'Explain the security tradeoffs of this architecture.';

const axonflow = new AxonFlow({
  endpoint: 'http://localhost:8080',
  clientId: process.env.AXONFLOW_CLIENT_ID,
  clientSecret: process.env.AXONFLOW_CLIENT_SECRET,
});

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// Ask AxonFlow to policy-check the prompt before it reaches Anthropic.
const ctx = await axonflow.getPolicyApprovedContext({
  userToken: 'user-123',
  query: prompt,
});

if (!ctx.approved) {
  throw new Error(`Blocked: ${ctx.blockReason}`);
}

// Use the policy-approved query if one was returned, otherwise the original.
const approvedPrompt =
  typeof ctx.approvedData?.query === 'string' ? ctx.approvedData.query : prompt;

const start = Date.now();
const message = await anthropic.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [{ role: 'user', content: approvedPrompt }],
});
const latencyMs = Date.now() - start;

// Claude responses are a list of content blocks; take the first text block.
const firstBlock = message.content.find(block => block.type === 'text');
const output = firstBlock?.text ?? '';

// Report the call back to AxonFlow for audit logging.
await axonflow.auditLLMCall({
  contextId: ctx.contextId,
  responseSummary: output.slice(0, 200),
  provider: 'anthropic',
  model: 'claude-sonnet-4-20250514',
  tokenUsage: {
    promptTokens: message.usage.input_tokens,
    completionTokens: message.usage.output_tokens,
    totalTokens: message.usage.input_tokens + message.usage.output_tokens,
  },
  latencyMs,
});
```
## Notes for Production Teams

- The Anthropic provider supports streaming in the runtime.
- For request-level routing, `context.provider = "anthropic"` is a preference unless strict pinning is enabled for that request.
- If your organization standardizes on Claude but needs runtime routing, combine Anthropic with a secondary provider and configure failover or weighted routing at the platform level.
## Troubleshooting

| Issue | Cause | Fix |
|---|---|---|
| `401 Authentication failed` | Invalid API key | Verify `ANTHROPIC_API_KEY` |
| `429 Rate limited` | Concurrent request limit | Reduce concurrency or contact Anthropic |
| Overloaded error | Anthropic capacity issues | Retry with backoff |
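For the 429 and overloaded cases, a small backoff wrapper keeps retry logic out of call sites. This is a generic sketch; the `withBackoff` helper and its limits are our own, not part of the AxonFlow SDK:

```typescript
// Generic exponential-backoff wrapper for transient failures (429, overloaded).
// `call` can be any async operation, e.g. an axonflow.proxyLLMCall invocation.
async function withBackoff<T>(
  call: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await call();
    } catch (err) {
      lastError = err;
      if (attempt === maxAttempts - 1) break;
      // Exponential delay with a little jitter to avoid synchronized retries.
      const delayMs = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```

In production you would typically retry only on retryable status codes and respect any `retry-after` hint the provider returns, rather than retrying every error blindly.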
