Workflow Examples
This tutorial collects the workflow patterns that teams usually need right after the first-request experience. It is intentionally practical: each pattern maps to a current Agent or SDK surface that engineers can build on today.
Use this page when you want to move from "the SDK connects" to "we can now design a governed AI workflow for production."
What This Page Covers
- governed LLM requests through the Agent
- governed MCP data access for SQL and connector workflows
- Multi-Agent Planning (MAP) with the current generate-then-execute lifecycle
- decision points for when to stay in community and when to ask for evaluation or enterprise features
Choosing the Right Workflow Pattern
The most common source of confusion is choosing the wrong AxonFlow surface for the problem. The quickest way to avoid that is to decide first whether AxonFlow is only governing a request, governing a data access step, or coordinating a full workflow.
| Goal | Best fit | Why |
|---|---|---|
| Govern one LLM request in an app | Proxy / Agent request path | Lowest-friction governed request path |
| Govern access to a database or tool | MCP query or MCP execution | Direct connector and policy surface |
| Let AxonFlow coordinate a plan | MAP | Built-in orchestration with stored plans |
| Keep your own orchestrator but add gates | WCP (Workflow Control Plane) | External workflow stays in control |
This distinction matters for senior engineers because it changes how identity, cost, auditability, and recovery are designed. A support copilot, a shopping assistant, and a multi-agent research workflow may all use the same product, but they should not all be built on the same API shape.
1. Governed LLM Request
The simplest workflow is a governed LLM call through the Agent on 8080. AxonFlow evaluates built-in system policies, applies tenant policies when present, routes the request to the configured provider, and records the request in the audit path.
TypeScript

```typescript
import { AxonFlow } from '@axonflow/sdk';

const client = new AxonFlow({
  endpoint: process.env.AXONFLOW_ENDPOINT!,
  clientId: process.env.AXONFLOW_CLIENT_ID!,
  clientSecret: process.env.AXONFLOW_CLIENT_SECRET,
});

const response = await client.proxyLLMCall({
  userToken: 'support-agent-123',
  query: 'Summarize the latest support issues for premium users.',
  requestType: 'chat',
  context: {
    provider: 'openai',
    model: 'gpt-4o',
    department: 'support',
    priority: 'high',
  },
});

if (response.blocked) {
  console.log('Blocked:', response.blockReason);
} else {
  console.log(response.data);
}
```
Python

```python
from axonflow import AxonFlow

with AxonFlow.sync(
    endpoint="http://localhost:8080",
    client_id="support-app",
    client_secret="secret",
) as client:
    response = client.proxy_llm_call(
        user_token="support-agent-123",
        query="Summarize the latest support issues for premium users.",
        request_type="chat",
        context={
            "provider": "openai",
            "model": "gpt-4o",
            "department": "support",
            "priority": "high",
        },
    )
    print(response.blocked, response.data)
```
Use this pattern when the application already owns the workflow and just needs AxonFlow to govern the requests.
Typical uses:
- agent-assist copilots
- internal search and summarization tools
- support drafting or analyst copilots
- governed frontend or backend LLM features
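When the application owns the workflow, it also needs one consistent way to surface blocked requests. A minimal sketch, assuming only the `blocked` / `blockReason` / `data` shape shown in the examples above; the `renderGovernedResult` helper and its fallback wording are illustrative, not part of the SDK:

```typescript
// Minimal view of a governed response; field names match the
// proxyLLMCall examples above.
interface GovernedResponse {
  blocked: boolean;
  blockReason?: string;
  data?: unknown;
}

// Map a governed response to a single string the app can display or log.
// The block message wording is an application choice, not an SDK contract.
function renderGovernedResult(response: GovernedResponse): string {
  if (response.blocked) {
    // Keep the policy reason for operators; avoid echoing it raw to end users.
    return `Request blocked by policy: ${response.blockReason ?? 'unspecified'}`;
  }
  return typeof response.data === 'string'
    ? response.data
    : JSON.stringify(response.data);
}
```

Centralizing this mapping keeps block handling uniform across every governed call site in the app.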
2. Governed MCP Query
When the workflow needs governed access to a database or other MCP-backed system, use the direct MCP query surface instead of pretending the connector is just another chat call.
TypeScript

```typescript
const result = await client.mcpQuery({
  connector: 'postgres',
  statement: 'SELECT id, email, status FROM customers WHERE status = $1 LIMIT 20',
  options: {
    parameters: ['active'],
  },
});

console.log(result.redacted);
console.log(result.policy_info?.policies_evaluated);
console.log(result.data);
```
Go

```go
ctx := context.Background()

result, err := client.MCPQuery(ctx, axonflow.MCPQueryRequest{
    Connector: "postgres",
    Statement: "SELECT id, email, status FROM customers WHERE status = $1 LIMIT 20",
    Options: map[string]interface{}{
        "parameters": []interface{}{"active"},
    },
})
if err != nil {
    log.Fatal(err)
}

fmt.Println(result.Redacted)
fmt.Println(result.PolicyInfo)
```
This is the right public/community pattern for teams building governed data access, internal copilots, and agent-tool retrieval flows.
Typical uses:
- SQL-backed support or operations assistants
- governed access to customer, order, or case data
- connector-backed retrieval in agent frameworks
- MCP tools inside larger multi-agent systems
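Connector-backed retrieval is often wrapped as a small builder so agent frameworks can call the governed query surface uniformly. A sketch assuming the `mcpQuery` request shape shown above; the `buildCustomerLookup` helper, its filters, and the clamp limit are illustrative:

```typescript
// Request shape matching the mcpQuery example above.
interface MCPQueryRequest {
  connector: string;
  statement: string;
  options?: { parameters?: unknown[] };
}

// Build a governed customer lookup that an agent tool can submit.
// The status filter stays parameterized ($1); the LIMIT is a clamped
// integer, so no user text is ever spliced into the SQL.
function buildCustomerLookup(status: string, limit: number): MCPQueryRequest {
  const pageSize = Math.max(1, Math.min(limit, 100)); // clamp to a sane page size
  return {
    connector: 'postgres',
    statement: `SELECT id, email, status FROM customers WHERE status = $1 LIMIT ${pageSize}`,
    options: { parameters: [status] },
  };
}
```

Keeping statement construction in one place also gives you a single spot to tighten columns or limits as policies evolve.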
3. Multi-Agent Planning (MAP)
MAP is now a two-step lifecycle:
- generate the plan
- execute the stored plan
That matters because real teams often want to inspect steps, estimate cost, or attach approvals before execution starts.
TypeScript

```typescript
const plan = await client.generatePlan(
  'Research vendors for a customer support knowledge assistant',
  'generic'
);

console.log(plan.planId);
console.log(plan.steps);

const execution = await client.executePlan(plan.planId);

console.log(execution.status);
console.log(execution.result);
```
When MAP is the right choice
- the workflow can be expressed as agents, steps, and dependencies
- you want AxonFlow to own orchestration instead of just governing an external orchestrator
- you want a plan lifecycle with execution history, cancellation, resumption, or cost-estimation hooks
For deeper MAP guidance, continue with Getting Started with MAP and Planning Patterns.
MAP is especially useful when the workflow needs:
- explicit step structure
- reusable agent definitions
- execution visibility across many steps
- plan review before execution
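Plan review before execution can be as simple as a gate between `generatePlan` and `executePlan`. A sketch under stated assumptions: the `PlanStep` field names, the step-count threshold, and the deny-list of agents are application choices, not SDK contracts:

```typescript
// Minimal view of a generated plan step; field names are illustrative.
interface PlanStep {
  agent: string;
  action: string;
}

// Decide whether a stored plan should pause for human review before
// executePlan is called. The policy here (size threshold plus a
// deny-list of sensitive agents) is an example, not an SDK rule.
function needsReview(steps: PlanStep[], sensitiveAgents: Set<string>): boolean {
  if (steps.length > 10) return true; // large plans get a second look
  return steps.some((step) => sensitiveAgents.has(step.agent));
}
```

Because the plan is stored, the gate can run asynchronously: generate, inspect, and only call `executePlan` once the check passes.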
4. Workflow Governance for Existing Orchestrators
If the workflow already runs in LangGraph, CrewAI, Temporal, or a custom engine, the right pattern is Workflow Control Plane rather than MAP. In that model, your orchestrator still runs the work while AxonFlow evaluates each step through allow, block, or require_approval.
That path is covered in the Workflow Control Plane documentation.
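The allow / block / require_approval contract above can be sketched as a per-step gate that the external orchestrator consults. The `Decision` type and handler names below are assumptions based only on the three verbs named in the text:

```typescript
// The three step decisions named above; the exact wire format is an assumption.
type Decision = 'allow' | 'block' | 'require_approval';

// What the external orchestrator does with each decision.
// runStep and queueForApproval are placeholders for your engine's hooks.
function dispatchStep(
  decision: Decision,
  runStep: () => string,
  queueForApproval: () => string
): string {
  switch (decision) {
    case 'allow':
      return runStep(); // orchestrator proceeds as normal
    case 'require_approval':
      return queueForApproval(); // park the step until a human approves
    case 'block':
      return 'step blocked by policy'; // record the outcome and skip the step
  }
}
```

The key property is that the orchestrator keeps ownership of execution; AxonFlow only supplies the decision at each step boundary.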
What a Staff Engineer Usually Wants to Validate
Most serious reviewers are not just asking whether the API works. They are usually trying to answer questions like:
- where does policy evaluation happen in the request path?
- how do MCP data-access controls differ from LLM request controls?
- what gets audited automatically, and what still belongs in our application?
- when should we use community only, and when do approval queues or protected operations surfaces become necessary?
This tutorial is intentionally organized around those questions so it can work both as a hands-on guide and as architectural orientation.
Community, Evaluation, and Enterprise
Community is enough to validate the core engineering questions:
- can we govern LLM requests?
- can we govern MCP connector access?
- can we build multi-agent or workflow-driven applications on this surface?
Evaluation and enterprise become more important when you need:
- approval queues and portal-driven human review
- organization-tier policy management
- protected operational dashboards and customer portal workflows
- advanced connectors or deeper compliance operations
That upgrade story matters because good documentation does two jobs at once: it helps engineers ship something real in community, and it makes clear why evaluation or enterprise becomes the next step as the project moves toward production scale.
