Build Your First AxonFlow Agent in 10 Minutes
This tutorial gets you from a running AxonFlow endpoint to a governed LLM request using the public SDKs.
What You'll Build
A simple agent flow that:
- sends a request through AxonFlow Proxy Mode
- evaluates built-in system policies automatically
- returns the LLM response when allowed
- records the request in the audit path for your deployment
Time to complete: 10 minutes
Difficulty: Beginner
Prerequisite: A running AxonFlow Agent endpoint such as http://localhost:8080
Before You Start
You need:
- A running AxonFlow deployment
  - For local/community usage, the quickest path is Getting Started with Docker Compose
  - For shared or managed environments, use the Agent endpoint your team already exposes
- Client credentials
  - `clientId` is typically your tenant or organization identifier
  - `clientSecret` is optional in many community/self-hosted setups and common in shared or enterprise environments
- One SDK environment
  - TypeScript: Node.js 18+
  - Go: Go 1.21+
  - Python: Python 3.10+
- An LLM provider configured in AxonFlow
  - OpenAI, Anthropic, Gemini, Ollama, and other supported providers are covered in LLM Overview
Step 1: Install an SDK
Choose one language.
TypeScript
```bash
mkdir my-first-agent
cd my-first-agent
npm init -y
npm install @axonflow/sdk
npm install --save-dev typescript @types/node tsx
```
Go
```bash
mkdir my-first-agent
cd my-first-agent
go mod init my-first-agent
go get github.com/getaxonflow/axonflow-sdk-go/v5
```
Python
```bash
mkdir my-first-agent
cd my-first-agent
python3 -m venv venv
source venv/bin/activate
pip install axonflow
```
Step 2: Configure the Client
Use environment variables so the example stays production-safe.
```bash
export AXONFLOW_ENDPOINT="http://localhost:8080"
export AXONFLOW_CLIENT_ID="my-tenant"

# Optional in many self-hosted community setups
export AXONFLOW_CLIENT_SECRET="your-client-secret"
```
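Before wiring up a client, it can help to fail fast when a required variable is unset. The guard below is plain Python, not part of the AxonFlow SDK; only the variable names from the exports above are taken from this tutorial:

```python
import os

def require_env(*names: str) -> dict:
    """Return the named environment variables, raising early if any is unset."""
    missing = [name for name in names if not os.getenv(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in names}

# AXONFLOW_CLIENT_SECRET is deliberately left out of the required list,
# since many community/self-hosted setups run without it:
#   config = require_env("AXONFLOW_ENDPOINT", "AXONFLOW_CLIENT_ID")
```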
TypeScript
Create index.ts:
```typescript
import { AxonFlow } from '@axonflow/sdk';

const axonflow = new AxonFlow({
  endpoint: process.env.AXONFLOW_ENDPOINT!,
  clientId: process.env.AXONFLOW_CLIENT_ID!,
  clientSecret: process.env.AXONFLOW_CLIENT_SECRET,
  debug: true,
});

async function main() {
  const health = await axonflow.healthCheck();
  console.log('Connected to AxonFlow:', health);
}

main().catch((error) => {
  console.error('Connection failed:', error);
  process.exit(1);
});
```
Run it:
```bash
npx tsx index.ts
```
Go
Create main.go:
```go
package main

import (
	"log"
	"os"

	axonflow "github.com/getaxonflow/axonflow-sdk-go/v5"
)

func main() {
	client := axonflow.NewClient(axonflow.AxonFlowConfig{
		Endpoint:     os.Getenv("AXONFLOW_ENDPOINT"),
		ClientID:     os.Getenv("AXONFLOW_CLIENT_ID"),
		ClientSecret: os.Getenv("AXONFLOW_CLIENT_SECRET"),
		Debug:        true,
	})

	if err := client.HealthCheck(); err != nil {
		log.Fatal(err)
	}

	log.Println("Connected to AxonFlow")
}
```
Run it:
```bash
go run main.go
```
Python
Create main.py:
```python
import os

from axonflow import AxonFlow

client = AxonFlow.sync(
    endpoint=os.environ["AXONFLOW_ENDPOINT"],
    client_id=os.environ["AXONFLOW_CLIENT_ID"],
    client_secret=os.getenv("AXONFLOW_CLIENT_SECRET"),
    debug=True,
)

health = client.health_check()
print(f"Connected to AxonFlow: {health}")
```
Run it:
```bash
python main.py
```
Step 3: Understand the Default Governance Baseline
AxonFlow already enforces a built-in baseline before you add your own custom policies.
Today that baseline includes:
- 73 pattern-based system policies on the Agent
- 10 condition-based system policies on the Orchestrator
- 83 total built-in system policies
That gives you out-of-the-box coverage for things like:
- SQL injection detection
- global and regional PII detection
- code secret and unsafe-code detection
- runtime governance checks in proxy flows
You can add tenant or organization policies later, but you do not need to build a policy catalog just to get started.
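The pattern-based policies screen request text against known-bad shapes before anything reaches a provider. As a rough illustration only (these patterns are invented for this sketch and are not AxonFlow's actual rule set), a SQL-injection pattern check conceptually works like this:

```python
import re

# Illustrative patterns only; the real system policies are broader
# and maintained inside AxonFlow itself.
SQLI_PATTERNS = [
    re.compile(r"(?i)\bunion\s+select\b"),
    re.compile(r"(?i)\bdrop\s+table\b"),
    re.compile(r"(?i)'\s*or\s+'1'\s*=\s*'1"),
]

def looks_like_sql_injection(text: str) -> bool:
    """Return True if any illustrative pattern matches the input text."""
    return any(pattern.search(text) for pattern in SQLI_PATTERNS)
```

The real baseline applies many such checks (plus condition-based ones on the Orchestrator) on every governed request, which is why you get protection before writing a single custom policy.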
Step 4: Send Your First Governed Request
Use Proxy Mode so AxonFlow handles policy evaluation, provider routing, and audit logging in one call.
TypeScript
Update index.ts:
```typescript
import { AxonFlow } from '@axonflow/sdk';

const axonflow = new AxonFlow({
  endpoint: process.env.AXONFLOW_ENDPOINT!,
  clientId: process.env.AXONFLOW_CLIENT_ID!,
  clientSecret: process.env.AXONFLOW_CLIENT_SECRET,
  debug: true,
});

async function main() {
  const response = await axonflow.proxyLLMCall({
    userToken: 'user-123',
    query: 'What is the capital of France?',
    requestType: 'chat',
    context: {
      provider: 'openai',
      model: 'gpt-4o',
    },
  });

  if (response.blocked) {
    console.log('Blocked:', response.blockReason);
    return;
  }

  console.log('Response:', response.data);
  console.log('Policy info:', response.policyInfo);
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
```
Go
Update main.go:
```go
package main

import (
	"fmt"
	"log"
	"os"

	axonflow "github.com/getaxonflow/axonflow-sdk-go/v5"
)

func main() {
	client := axonflow.NewClient(axonflow.AxonFlowConfig{
		Endpoint:     os.Getenv("AXONFLOW_ENDPOINT"),
		ClientID:     os.Getenv("AXONFLOW_CLIENT_ID"),
		ClientSecret: os.Getenv("AXONFLOW_CLIENT_SECRET"),
		Debug:        true,
	})

	response, err := client.ProxyLLMCall(
		"user-123",
		"What is the capital of France?",
		"chat",
		map[string]interface{}{
			"provider": "openai",
			"model":    "gpt-4o",
		},
	)
	if err != nil {
		log.Fatal(err)
	}

	if response.Blocked {
		fmt.Println("Blocked:", response.BlockReason)
		return
	}

	fmt.Println("Response:", response.Data)
	fmt.Printf("Policy info: %+v\n", response.PolicyInfo)
}
```
Python
Update main.py:
```python
import os

from axonflow import AxonFlow

client = AxonFlow.sync(
    endpoint=os.environ["AXONFLOW_ENDPOINT"],
    client_id=os.environ["AXONFLOW_CLIENT_ID"],
    client_secret=os.getenv("AXONFLOW_CLIENT_SECRET"),
    debug=True,
)

response = client.proxy_llm_call(
    user_token="user-123",
    query="What is the capital of France?",
    request_type="chat",
    context={
        "provider": "openai",
        "model": "gpt-4o",
    },
)

if response.blocked:
    print(f"Blocked: {response.block_reason}")
else:
    print(f"Response: {response.data}")
    print(f"Policy info: {response.policy_info}")
```
Expected Outcome
For a normal request, you should see:
- a successful LLM response
- policy metadata in the response
- an audit record in the runtime logs or audit store
For a clearly unsafe request, AxonFlow should block or warn depending on the matching policy.
Step 5: Inspect Audit Output
Every governed request flows through the audit path. How you inspect that depends on your deployment:
- Local/community Docker Compose: `docker compose logs agent` and `docker compose logs orchestrator`
- Self-managed Kubernetes or containers: use your normal log pipeline
- AWS deployments: inspect the relevant CloudWatch log groups
Example local commands:
```bash
docker compose logs agent --tail=100
docker compose logs orchestrator --tail=100
```
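If your deployment emits structured (JSON) log lines, a small filter can surface just the audit entries from that output. The field names below (`event`, `"audit"`) are assumptions for illustration; adjust them to whatever your log format actually uses:

```python
import json

def audit_entries(lines, event_field="event", event_value="audit"):
    """Yield parsed JSON log lines whose event field marks an audit record."""
    for line in lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines (startup banners, stack traces)
        if record.get(event_field) == event_value:
            yield record
```

You could feed it from stdin (`docker compose logs agent | python filter_audit.py`) or from a log file, depending on your pipeline.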
What You Just Validated
You now have proof that:
- the SDK can reach your AxonFlow runtime
- Proxy Mode is working end to end
- built-in system policies are active
- governed LLM calls return policy metadata
- requests are entering the audit path
That is already enough to start validating a real application integration.
Next Steps
From here, most teams branch into one of these paths:
- Use your existing framework
  - If you already have LangChain, LangGraph, CrewAI, AutoGen, or Semantic Kernel flows, start with Integration Overview
- Add custom policies
  - Use Policy Overview and System Policy API
- Connect governed tools and data
  - Start with MCP Overview
- Build multi-agent workflows
  - Continue with Orchestration Overview
- Review production-fit features
Troubleshooting
Connection refused or timeout
- Verify `AXONFLOW_ENDPOINT`
- Confirm the Agent is healthy
- If you use a reverse proxy or load balancer, verify it routes to the Agent correctly
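A surprising share of connection failures come down to a malformed endpoint value rather than a dead service. A quick sanity check, plain Python and independent of the SDK:

```python
from urllib.parse import urlparse

def check_endpoint(value: str) -> list:
    """Return a list of problems with an endpoint URL; empty means it looks sane."""
    problems = []
    parsed = urlparse(value)
    if parsed.scheme not in ("http", "https"):
        problems.append(f"unexpected scheme: {parsed.scheme!r}")
    if not parsed.netloc:
        problems.append("missing host (did you forget http://?)")
    if value.endswith("/"):
        problems.append("trailing slash may double up with SDK-built paths")
    return problems
```

Running it against `http://localhost:8080` returns no problems, while a bare `localhost:8080` is flagged immediately.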
Authentication failed
- Confirm `clientId` and `clientSecret`
- If you are in community/self-hosted mode, verify your environment really allows requests without a secret
- Check any proxy or gateway auth layer in front of AxonFlow
Request blocked unexpectedly
- Inspect `policyInfo` in the response
- Review the relevant system policy category in System Policies Reference
- Test with a simpler prompt to confirm whether a policy matched
Slow response
- Separate AxonFlow policy-evaluation latency from provider latency
- Check Agent and Orchestrator health
- Review database and network latency in your environment
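A first step in separating those latencies is timing the full round trip from the client side. The helper below is a generic sketch; the lambda is a stand-in, so substitute your real `proxy_llm_call` invocation and compare the elapsed time against your provider's own latency metrics:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds) using a monotonic clock."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Stand-in call for illustration; in practice you would time something like:
#   response, elapsed = timed(client.proxy_llm_call, user_token=..., query=...)
result, elapsed = timed(lambda: "ok")
```

If the round trip is slow but the provider reports fast completions, the overhead is in AxonFlow or the network path in front of it; if both are slow, look at the provider first.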
