# OpenAI Setup
OpenAI is available in AxonFlow Community and Enterprise. It is the simplest cloud provider to start with for Proxy Mode, MAP, and Gateway Mode examples.
## Runtime Defaults
AxonFlow's OpenAI provider defaults to:
- Provider name: `openai`
- Default model: `gpt-4o`
- Default endpoint: `https://api.openai.com`
If you want a different model, set it explicitly in environment variables or YAML.
## Environment Variables
```bash
export OPENAI_API_KEY=sk-your-api-key
export OPENAI_MODEL=gpt-4o
export OPENAI_TIMEOUT_SECONDS=120             # Request timeout in seconds (default: 120)
export OPENAI_ENDPOINT=https://api.openai.com # Custom endpoint for proxies or compatible APIs
```
## YAML Configuration
```yaml
version: "1.0"
llm_providers:
  openai:
    enabled: true
    credentials:
      api_key: ${OPENAI_API_KEY}
    config:
      model: ${OPENAI_MODEL:-gpt-4o}
```

The `${OPENAI_MODEL:-gpt-4o}` syntax falls back to `gpt-4o` when `OPENAI_MODEL` is unset.
## Good Fits
- General-purpose chat and agent tasks
- Strong default for early community trials
- Teams that want broad SDK support across TypeScript, Python, Go, and Java
- Internal copilots, customer support agents, and application backends that need OpenAI governance without rewriting model-call logic
## Proxy Mode

### TypeScript
```typescript
import { AxonFlow } from '@axonflow/sdk';

const axonflow = new AxonFlow({
  endpoint: 'http://localhost:8080',
  clientId: process.env.AXONFLOW_CLIENT_ID,
  clientSecret: process.env.AXONFLOW_CLIENT_SECRET,
});

const response = await axonflow.proxyLLMCall({
  userToken: 'user-123',
  query: 'Summarize the main risks in this API design.',
  requestType: 'chat',
  context: {
    provider: 'openai',
    model: 'gpt-4o',
  },
});

console.log(response.data);
```
### Python
```python
import asyncio

from axonflow import AxonFlow


async def main() -> None:
    # async with is only valid inside a coroutine, so the client
    # setup is wrapped in an async entry point.
    async with AxonFlow(
        endpoint="http://localhost:8080",
        client_id="demo-client",
        client_secret="demo-secret",
    ) as client:
        response = await client.proxy_llm_call(
            user_token="user-123",
            query="Summarize the main risks in this API design.",
            request_type="chat",
            context={"provider": "openai", "model": "gpt-4o"},
        )
        print(response.data)


asyncio.run(main())
```
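Both examples pin the provider and model per request. If you rely on the runtime defaults above instead, the `context` block can be dropped entirely; a minimal sketch, assuming `context` is an optional parameter of `proxyLLMCall` (the examples on this page always pass it, so confirm against your SDK version):

```typescript
import { AxonFlow } from '@axonflow/sdk';

const axonflow = new AxonFlow({
  endpoint: 'http://localhost:8080',
  clientId: process.env.AXONFLOW_CLIENT_ID,
  clientSecret: process.env.AXONFLOW_CLIENT_SECRET,
});

// No context: the request is routed with the configured defaults
// (provider openai, model gpt-4o). Omitting context is an assumption.
const response = await axonflow.proxyLLMCall({
  userToken: 'user-123',
  query: 'Summarize the main risks in this API design.',
  requestType: 'chat',
});
console.log(response.data);
```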
## Gateway Mode
Use Gateway Mode when you want AxonFlow to approve and audit the request but your application to call OpenAI directly.
```typescript
import { AxonFlow } from '@axonflow/sdk';
import OpenAI from 'openai';

const prompt = 'Summarize the main risks in this API design.';

const axonflow = new AxonFlow({
  endpoint: 'http://localhost:8080',
  clientId: process.env.AXONFLOW_CLIENT_ID,
  clientSecret: process.env.AXONFLOW_CLIENT_SECRET,
});
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Ask AxonFlow to approve the request before any model call happens.
const ctx = await axonflow.getPolicyApprovedContext({
  userToken: 'user-123',
  query: prompt,
});

if (!ctx.approved) {
  throw new Error(`Blocked: ${ctx.blockReason}`);
}

// Prefer the policy-rewritten prompt if AxonFlow returned one.
const approvedPrompt =
  typeof ctx.approvedData.query === 'string' ? ctx.approvedData.query : prompt;

const startedAt = Date.now();
const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: approvedPrompt }],
});
const output = completion.choices[0].message.content ?? '';

// Report the outcome back to AxonFlow for the audit trail.
await axonflow.auditLLMCall({
  contextId: ctx.contextId,
  responseSummary: output.slice(0, 200),
  provider: 'openai',
  model: 'gpt-4o',
  tokenUsage: {
    promptTokens: completion.usage?.prompt_tokens ?? 0,
    completionTokens: completion.usage?.completion_tokens ?? 0,
    totalTokens: completion.usage?.total_tokens ?? 0,
  },
  latencyMs: Date.now() - startedAt,
});
```
## Notes for Production Teams
- AxonFlow uses the configured OpenAI provider for Proxy Mode and MAP.
- Request-level `context.provider = "openai"` is a routing hint unless you also set `context.strict_provider = true` (see the sketch after this list).
- If your OpenAI account has limited model access, keep `OPENAI_MODEL` or `config.model` aligned with what the account can actually use.
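A sketch of strict provider pinning, reusing the Proxy Mode call shape from above; the exact behavior when the pinned provider is unavailable (hard failure rather than fallback) is an assumption:

```typescript
import { AxonFlow } from '@axonflow/sdk';

const axonflow = new AxonFlow({
  endpoint: 'http://localhost:8080',
  clientId: process.env.AXONFLOW_CLIENT_ID,
  clientSecret: process.env.AXONFLOW_CLIENT_SECRET,
});

// strict_provider makes the provider binding; without it, the
// provider field is only a routing hint (see the note above).
const response = await axonflow.proxyLLMCall({
  userToken: 'user-123',
  query: 'Summarize the main risks in this API design.',
  requestType: 'chat',
  context: {
    provider: 'openai',
    strict_provider: true,
    model: 'gpt-4o',
  },
});
console.log(response.data);
```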
## Troubleshooting
| Issue | Cause | Fix |
|---|---|---|
| 401 Unauthorized | Invalid or expired API key | Verify `OPENAI_API_KEY` (see the check below) |
| 429 Rate Limited | Too many requests | Reduce concurrency or upgrade your OpenAI plan |
| Model not found | Model ID not accessible | Check model access in the OpenAI dashboard |
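For 401s and model-access issues, it can help to take AxonFlow out of the loop and query OpenAI's models endpoint directly; a minimal Node sketch (Node 18+ ships a global `fetch`):

```typescript
// Lists the models your key can access; a 401 here means the key itself is bad.
const res = await fetch('https://api.openai.com/v1/models', {
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
});
if (!res.ok) throw new Error(`OpenAI returned ${res.status}`);
const { data } = await res.json();
console.log(data.map((m: { id: string }) => m.id));
```

If `gpt-4o` is missing from this list, align `OPENAI_MODEL` or `config.model` with a model the account can actually use.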
