# Azure OpenAI Setup
Azure OpenAI is available in AxonFlow Community and Enterprise. It is usually the right fit for teams already running on Azure and for organizations that want Azure deployment controls while keeping the same AxonFlow governance flow used with other providers.
## Required Configuration
```bash
export AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
export AZURE_OPENAI_API_KEY=your-azure-openai-key
export AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4o-mini
export AZURE_OPENAI_API_VERSION=2024-08-01-preview
```
AxonFlow uses your deployment name as the model identifier for Azure OpenAI requests.
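Because the deployment name doubles as the model identifier, it ends up in the request path rather than the request body. A minimal sketch of how that URL is assembled (`chatCompletionsUrl` is a hypothetical helper for illustration, not part of the SDK):

```typescript
// Builds the Azure OpenAI chat-completions URL. Note that the deployment
// name, not a model name, appears in the path.
function chatCompletionsUrl(
  endpoint: string,
  deployment: string,
  apiVersion: string,
): string {
  return `${endpoint}/openai/deployments/${deployment}/chat/completions?api-version=${apiVersion}`;
}
```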
## Authentication Behavior
The runtime auto-detects authentication mode from the endpoint:
| Endpoint Pattern | Auth Method |
|---|---|
| `*.cognitiveservices.azure.com` | `Authorization: Bearer <token>` |
| `*.openai.azure.com` | `api-key: <key>` |
If you are migrating between Azure AI Foundry-style and classic Azure OpenAI endpoints, make sure the endpoint and credential type stay matched.
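The detection rule in the table above can be sketched as a small function. This is an illustration of the rule, not the runtime's actual implementation, and `buildAuthHeaders` is a hypothetical name:

```typescript
// Selects the auth header from the endpoint host: Foundry-style
// *.cognitiveservices.azure.com endpoints get a bearer token, classic
// *.openai.azure.com endpoints get the api-key header.
function buildAuthHeaders(
  endpoint: string,
  credential: string,
): Record<string, string> {
  const host = new URL(endpoint).hostname.toLowerCase();
  if (host.endsWith('.cognitiveservices.azure.com')) {
    return { Authorization: `Bearer ${credential}` };
  }
  return { 'api-key': credential };
}
```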
## YAML Configuration
```yaml
version: "1.0"
llm_providers:
  azure-openai:
    enabled: true
    credentials:
      api_key: ${AZURE_OPENAI_API_KEY}
    config:
      endpoint: ${AZURE_OPENAI_ENDPOINT}
      deployment_name: ${AZURE_OPENAI_DEPLOYMENT_NAME}
      api_version: ${AZURE_OPENAI_API_VERSION:-2024-08-01-preview}
```
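The `${VAR:-default}` placeholders follow shell-style semantics: the environment value wins, and the text after `:-` is the fallback. A sketch of that substitution, assuming shell-style expansion (`expandPlaceholders` is a hypothetical helper, not AxonFlow's config loader):

```typescript
// Expands ${VAR} and ${VAR:-default} placeholders against an env map,
// returning the env value, else the fallback, else an empty string.
function expandPlaceholders(
  value: string,
  env: Record<string, string | undefined>,
): string {
  return value.replace(
    /\$\{([A-Z0-9_]+)(?::-([^}]*))?\}/g,
    (_, name, fallback) => env[name] ?? fallback ?? '',
  );
}
```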
## Good Fits
- Azure-first enterprise deployments
- Teams standardizing on Azure networking, identity, and compliance boundaries
- Organizations that want Azure-hosted OpenAI models without changing their AxonFlow integration pattern
- Financial services, healthcare, and Microsoft-centric internal platforms that need Azure-compatible AI governance
## Proxy Mode
```typescript
import { AxonFlow } from '@axonflow/sdk';

const axonflow = new AxonFlow({
  endpoint: 'http://localhost:8080',
  clientId: process.env.AXONFLOW_CLIENT_ID,
  clientSecret: process.env.AXONFLOW_CLIENT_SECRET,
});

const response = await axonflow.proxyLLMCall({
  userToken: 'user-123',
  query: 'Summarize the architecture review for this Azure platform team.',
  requestType: 'chat',
  context: {
    provider: 'azure-openai',
    model: process.env.AZURE_OPENAI_DEPLOYMENT_NAME ?? 'gpt-4o-mini',
  },
});

console.log(response.data);
```
## Gateway Mode
```typescript
import { AxonFlow } from '@axonflow/sdk';

const prompt = 'Summarize the architecture review for this Azure platform team.';
const endpoint = process.env.AZURE_OPENAI_ENDPOINT!;
const apiKey = process.env.AZURE_OPENAI_API_KEY!;
const deployment = process.env.AZURE_OPENAI_DEPLOYMENT_NAME!;
const apiVersion = process.env.AZURE_OPENAI_API_VERSION ?? '2024-08-01-preview';

const axonflow = new AxonFlow({
  endpoint: 'http://localhost:8080',
  clientId: process.env.AXONFLOW_CLIENT_ID,
  clientSecret: process.env.AXONFLOW_CLIENT_SECRET,
});

// Run the prompt through policy checks before it reaches Azure OpenAI.
const ctx = await axonflow.getPolicyApprovedContext({
  userToken: 'user-123',
  query: prompt,
});

if (!ctx.approved) {
  throw new Error(`Blocked: ${ctx.blockReason}`);
}

// Prefer the policy-rewritten query when one is returned.
const approvedPrompt =
  typeof ctx.approvedData.query === 'string' ? ctx.approvedData.query : prompt;

// Match the credential type to the endpoint style (see Authentication Behavior).
const headers: Record<string, string> = {
  'Content-Type': 'application/json',
};
if (endpoint.toLowerCase().includes('.cognitiveservices.azure.com')) {
  headers.Authorization = `Bearer ${apiKey}`;
} else {
  headers['api-key'] = apiKey;
}

const started = Date.now();
const completion = await fetch(
  `${endpoint}/openai/deployments/${deployment}/chat/completions?api-version=${apiVersion}`,
  {
    method: 'POST',
    headers,
    body: JSON.stringify({
      messages: [{ role: 'user', content: approvedPrompt }],
      max_tokens: 500,
    }),
  },
).then(res => res.json());

const output = completion.choices?.[0]?.message?.content ?? '';

// Record the call against the approved context for the audit trail.
await axonflow.auditLLMCall({
  contextId: ctx.contextId,
  responseSummary: output.slice(0, 200),
  provider: 'azure-openai',
  model: deployment,
  tokenUsage: {
    promptTokens: completion.usage?.prompt_tokens ?? 0,
    completionTokens: completion.usage?.completion_tokens ?? 0,
    totalTokens: completion.usage?.total_tokens ?? 0,
  },
  latencyMs: Date.now() - started,
});
```
## Notes for Production Teams
- The deployment name is the runtime model identifier in Azure OpenAI flows.
- Azure OpenAI is community-available in AxonFlow and does not require the enterprise build.
- Because it ships in Community, it is a practical evaluation path for organizations that already know they will standardize on Azure.
