Workflow Examples
Complete code examples for common AxonFlow patterns. Copy, paste, and customize them for your use case.
Overview
This guide provides production-ready code examples for common AxonFlow workflows. Each example is complete and runnable - just update the configuration and deploy.
What You'll Learn
- Simple query execution patterns
- Multi-Agent Parallel (MAP) execution
- Policy enforcement patterns
- MCP connector integration
- Error handling and retry logic
- Performance optimization
Prerequisites
Before running any example, set these environment variables:
export AXONFLOW_ENDPOINT="http://localhost:8080" # Agent endpoint
export AXONFLOW_CLIENT_ID="your-client-id"
export AXONFLOW_CLIENT_SECRET="your-client-secret" # Optional for community mode
# For MCP connector examples (Section 4):
export OPENAI_API_KEY="sk-..." # Or use Ollama as provider
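In Node/TypeScript projects, a small guard can fail fast when one of these variables is missing instead of sending requests with undefined credentials. The `requireEnv` and `loadConfig` helpers below are illustrative, not part of the AxonFlow SDK:

```typescript
// Illustrative helpers (not part of the AxonFlow SDK): read required
// configuration from the environment and fail fast when something is missing.
function requireEnv(name: string, fallback?: string): string {
  const value = process.env[name] ?? fallback;
  if (value === undefined) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

function loadConfig() {
  return {
    endpoint: requireEnv('AXONFLOW_ENDPOINT'),
    clientId: requireEnv('AXONFLOW_CLIENT_ID'),
    // Optional in community mode, so default to an empty string
    clientSecret: requireEnv('AXONFLOW_CLIENT_SECRET', ''),
  };
}
```

Call `loadConfig()` once at startup so a misconfigured deployment fails immediately with a clear message.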
Table of Contents
- Simple Query Execution
- Multi-Agent Parallel Execution (MAP)
- Policy Enforcement Patterns
- MCP Connector Integration
- Error Handling
- Performance Optimization
1. Simple Query Execution
1.1 Basic Query
The simplest AxonFlow query with policy enforcement.
TypeScript
import { AxonFlow } from '@axonflow/sdk';
const client = new AxonFlow({
endpoint: process.env.AXONFLOW_ENDPOINT!,
clientId: process.env.AXONFLOW_CLIENT_ID!,
clientSecret: process.env.AXONFLOW_CLIENT_SECRET!
});
async function simpleQuery() {
const response = await client.executeQuery({
query: 'What is the weather in San Francisco?',
context: {
provider: 'openai',
model: 'gpt-4',
}
});
console.log('Response:', response.result);
console.log('Latency:', response.metadata.latency_ms + 'ms');
}
Expected Output:
{
"success": true,
"data": "AI governance refers to the frameworks and processes...",
"policy_info": {
"allowed": true,
"risk_score": 0.1
},
"processing_time": "245ms"
}
Go
package main
import (
"fmt"
"log"
"os"
"github.com/getaxonflow/axonflow-sdk-go/v3"
)
func simpleQuery() {
client := axonflow.NewClient(axonflow.AxonFlowConfig{
Endpoint: os.Getenv("AXONFLOW_ENDPOINT"),
ClientID: os.Getenv("AXONFLOW_CLIENT_ID"),
ClientSecret: os.Getenv("AXONFLOW_CLIENT_SECRET"),
})
response, err := client.ProxyLLMCall(
"user-123",
"What is the weather in San Francisco?",
"chat",
map[string]interface{}{
"provider": "openai",
"model": "gpt-4",
},
)
if err != nil {
log.Fatal(err)
}
fmt.Println("Response:", response.Result)
fmt.Printf("Latency: %dms\n", response.Metadata.LatencyMS)
}
Python
from axonflow import AxonFlow
import os
client = AxonFlow(
endpoint=os.environ["AXONFLOW_ENDPOINT"],
client_id=os.environ["AXONFLOW_CLIENT_ID"],
client_secret=os.environ.get("AXONFLOW_CLIENT_SECRET", ""),
)
def simple_query():
response = client.proxy_llm_call(
query="What is the weather in San Francisco?",
context={"provider": "openai", "model": "gpt-4"},
)
print(f"Response: {response.result}")
print(f"Latency: {response.metadata.latency_ms}ms")
Expected Output:
Response: The current weather in San Francisco is...
Latency: 4ms
Verify: Confirm the query was processed and a policy decision was returned:
curl -s http://localhost:8080/api/v1/health | jq .
# Expected: {"status":"healthy","version":"..."}
# Check audit log for the query execution:
curl -s http://localhost:8081/api/v1/executions?limit=1 | jq '.[0].policy_info'
# Expected: {"allowed": true, "risk_score": ...}
1.2 Query with Context
Add user context for audit trails and policy decisions.
TypeScript
async function queryWithContext(req: { ip: string; session: { id: string } }) {
const response = await client.executeQuery({
query: 'Get customer data for user 12345',
context: {
user_id: 'user-789',
user_role: 'customer_service',
department: 'support',
timestamp: new Date().toISOString(),
ip_address: req.ip,
session_id: req.session.id
}
});
return response;
}
Context fields like user_role and department are available to server-side policies for role-based access decisions. Configure context-aware policies via the Policy API or the Customer Portal.
1.3 Query with LLM Integration
Connect to AWS Bedrock, OpenAI, or Anthropic Claude.
TypeScript
async function queryWithLLM() {
const response = await client.executeQuery({
query: 'Generate a product description for wireless headphones with noise cancellation',
llm: {
provider: 'aws-bedrock',
model: 'anthropic.claude-sonnet-4-20250514-v1:0',
temperature: 0.7,
max_tokens: 500
},
context: {
user_id: 'marketing-team',
purpose: 'product_description'
}
});
console.log('Generated Description:', response.result);
}
Supported LLM Providers
| Provider | Configuration |
|---|---|
| AWS Bedrock | provider: 'aws-bedrock', model: 'anthropic.claude-sonnet-4-...' |
| OpenAI | provider: 'openai', model: 'gpt-4' |
| Anthropic | provider: 'anthropic', model: 'claude-opus-4-20250514' |
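As a client-side sketch, the configurations in the table can be captured in a typed preset map so an unknown provider fails at compile time rather than at request time. The model identifiers are copied from the table above and may differ in your account:

```typescript
// Illustrative preset map; provider and model strings come from the table
// above and should be verified against your provider account.
type LLMProvider = 'aws-bedrock' | 'openai' | 'anthropic';

interface LLMConfig {
  provider: LLMProvider;
  model: string;
}

const LLM_PRESETS: Record<LLMProvider, LLMConfig> = {
  'aws-bedrock': { provider: 'aws-bedrock', model: 'anthropic.claude-sonnet-4-20250514-v1:0' },
  openai: { provider: 'openai', model: 'gpt-4' },
  anthropic: { provider: 'anthropic', model: 'claude-opus-4-20250514' },
};

// The union type makes llmConfig('openai') valid and llmConfig('foo') a compile error
function llmConfig(provider: LLMProvider): LLMConfig {
  return LLM_PRESETS[provider];
}
```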
2. Multi-Agent Parallel Execution (MAP)
Execute multiple agents in parallel for up to 40x faster results.
2.1 Basic Parallel Execution
TypeScript
async function parallelExecution() {
const response = await client.executeParallel([
{
query: 'Search flights from SFO to Paris',
mcp: { connector: 'amadeus', operation: 'search_flights' }
},
{
query: 'Search hotels in Paris city center',
mcp: { connector: 'amadeus', operation: 'search_hotels' }
},
{
query: 'Get weather forecast for Paris next week',
mcp: { connector: 'weather', operation: 'forecast' }
}
]);
console.log('Flights:', response[0].result);
console.log('Hotels:', response[1].result);
console.log('Weather:', response[2].result);
console.log('Total Time:', response[0].metadata.total_time_ms + 'ms');
}
Go
package main
import (
"context"
"fmt"
"log"
"os"
"github.com/getaxonflow/axonflow-sdk-go/v3"
)
func parallelExecution() {
client := axonflow.NewClient(axonflow.AxonFlowConfig{
Endpoint: os.Getenv("AXONFLOW_ENDPOINT"),
ClientID: os.Getenv("AXONFLOW_CLIENT_ID"),
ClientSecret: os.Getenv("AXONFLOW_CLIENT_SECRET"),
})
requests := []*axonflow.QueryRequest{
{
Query: "Search flights from SFO to Paris",
MCP: &axonflow.MCPConfig{Connector: "amadeus", Operation: "search_flights"},
},
{
Query: "Search hotels in Paris city center",
MCP: &axonflow.MCPConfig{Connector: "amadeus", Operation: "search_hotels"},
},
{
Query: "Get weather forecast for Paris next week",
MCP: &axonflow.MCPConfig{Connector: "weather", Operation: "forecast"},
},
}
responses, err := client.ExecuteParallel(context.Background(), requests)
if err != nil {
log.Fatal(err)
}
fmt.Println("Flights:", responses[0].Result)
fmt.Println("Hotels:", responses[1].Result)
fmt.Println("Weather:", responses[2].Result)
fmt.Printf("Total Time: %dms\n", responses[0].Metadata.TotalTimeMS)
}
Expected Output:
{
"results": [
{
"index": 0,
"success": true,
"data": "Found 12 flights from SFO to CDG...",
"policy_info": { "allowed": true }
},
{
"index": 1,
"success": true,
"data": "Found 8 hotels in Paris city center...",
"policy_info": { "allowed": true }
},
{
"index": 2,
"success": true,
"data": "Paris forecast: 18-22C, partly cloudy...",
"policy_info": { "allowed": true }
}
],
"metadata": {
"total_time_ms": 5120,
"parallel": true,
"agent_count": 3
}
}
Verify: Confirm parallel execution completed successfully:
# Check that all 3 executions were logged:
curl -s http://localhost:8081/api/v1/executions?limit=3 | jq 'length'
# Expected: 3
# Verify parallel execution timing (total should be close to max single query, not sum):
curl -s http://localhost:8081/api/v1/executions?limit=3 | jq '.[].processing_time_ms'
Performance:
- Sequential: 3 queries × 5 seconds = 15 seconds
- Parallel (MAP): Max(5s, 5s, 5s) = 5 seconds
- Speedup: 3x (scales to 40x with more agents)
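The arithmetic above is ordinary promise semantics, which you can verify without any AxonFlow dependency. This self-contained sketch uses simulated 50 ms "agents" as stand-ins for real queries: sequential awaits sum the latencies, while `Promise.all` takes roughly the slowest one:

```typescript
// Simulated agent call: resolves with its own latency after `ms` milliseconds
const fakeAgent = (ms: number): Promise<number> =>
  new Promise(resolve => setTimeout(() => resolve(ms), ms));

async function compareExecutionModes() {
  // Sequential: total time is roughly the SUM of individual latencies
  const t0 = Date.now();
  await fakeAgent(50);
  await fakeAgent(50);
  await fakeAgent(50);
  const sequentialMs = Date.now() - t0;

  // Parallel: total time is roughly the MAX of individual latencies
  const t1 = Date.now();
  await Promise.all([fakeAgent(50), fakeAgent(50), fakeAgent(50)]);
  const parallelMs = Date.now() - t1;

  return { sequentialMs, parallelMs };
}
```

The same overlap is what `executeParallel` provides, with policy checks still applied to each request.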
2.2 Real-World Example: Trip Planning
Complete trip planning with parallel execution.
TypeScript
import { AxonFlow } from '@axonflow/sdk';
interface TripPlanRequest {
destination: string;
origin: string;
dates: { departure: string; return: string };
travelers: number;
budget: 'economy' | 'business' | 'luxury';
}
async function planTrip(request: TripPlanRequest) {
const client = new AxonFlow({
endpoint: process.env.AXONFLOW_ENDPOINT!,
clientId: process.env.AXONFLOW_CLIENT_ID!,
clientSecret: process.env.AXONFLOW_CLIENT_SECRET!
});
console.log(`🛫 Planning trip to ${request.destination}...`);
const startTime = Date.now();
// Execute 5 queries in parallel
const responses = await client.executeParallel([
// 1. Flight search
{
query: `Search ${request.budget} class flights from ${request.origin} to ${request.destination} for ${request.travelers} travelers departing ${request.dates.departure} returning ${request.dates.return}`,
mcp: { connector: 'amadeus', operation: 'search_flights' },
context: { budget: request.budget, type: 'flights' }
},
// 2. Hotel search
{
query: `Search ${request.budget} hotels in ${request.destination} for ${request.travelers} guests from ${request.dates.departure} to ${request.dates.return}`,
mcp: { connector: 'amadeus', operation: 'search_hotels' },
context: { budget: request.budget, type: 'hotels' }
},
// 3. Activities
{
query: `Recommend top activities in ${request.destination}`,
llm: { provider: 'aws-bedrock', model: 'anthropic.claude-sonnet-4-20250514-v1:0' },
context: { budget: request.budget, type: 'activities' }
},
// 4. Weather forecast
{
query: `Get weather forecast for ${request.destination} from ${request.dates.departure} to ${request.dates.return}`,
mcp: { connector: 'weather', operation: 'forecast' },
context: { type: 'weather' }
},
// 5. Restaurant recommendations
{
query: `Recommend ${request.budget} restaurants in ${request.destination}`,
llm: { provider: 'aws-bedrock', model: 'anthropic.claude-sonnet-4-20250514-v1:0' },
context: { budget: request.budget, type: 'restaurants' }
}
]);
const totalTime = Date.now() - startTime;
// Compile results
const tripPlan = {
flights: responses[0].result,
hotels: responses[1].result,
activities: responses[2].result,
weather: responses[3].result,
restaurants: responses[4].result,
performance: {
total_time_ms: totalTime,
speedup: '5x (parallel execution)',
policy_latency_avg: responses.reduce((sum, r) => sum + r.metadata.latency_ms, 0) / 5
}
};
console.log(`✅ Trip planned in ${totalTime}ms`);
return tripPlan;
}
// Usage
const trip = await planTrip({
destination: 'Paris',
origin: 'San Francisco',
dates: { departure: '2026-06-01', return: '2026-06-07' },
travelers: 2,
budget: 'luxury'
});
Output:
🛫 Planning trip to Paris...
✅ Trip planned in 6,234ms
Performance:
- 5 queries executed in parallel
- Average policy latency: 4ms
- Total time: 6.2 seconds (would be 30+ seconds sequential)
- Speedup: 5x
3. Policy Enforcement Patterns
The policies shown below are configured server-side in AxonFlow via the Policy API or the Customer Portal. They are not sent inline with SDK requests. AxonFlow evaluates your configured policies automatically on every request.
3.1 PII Detection and Redaction
Automatically detect and redact sensitive information.
Policy
package axonflow.policy
import future.keywords.contains
import future.keywords.if
# Default deny
default allow = false
# Redact PII from the query before processing. Rego variables are
# immutable, so each replacement binds a new name.
redacted_query := q3 if {
q1 := regex.replace(input.query, `\b\d{3}-\d{2}-\d{4}\b`, "***-**-****") # SSN
q2 := regex.replace(q1, `\b\d{16}\b`, "****-****-****-****") # Credit card
q3 := regex.replace(q2, `\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b`, "***@***.***") # Email
}
# Allow when no PII was detected
allow if {
redacted_query == input.query
}
# Allow when PII was detected but successfully redacted
allow if {
redacted_query != input.query
}
# Detect PII violations
pii_violations[violation] {
regex.match(`\b\d{3}-\d{2}-\d{4}\b`, input.query)
violation := {"type": "SSN", "message": "Social Security Number detected"}
}
pii_violations[violation] {
regex.match(`\b\d{16}\b`, input.query)
violation := {"type": "credit_card", "message": "Credit card number detected"}
}
TypeScript
async function queryWithPIIProtection() {
// This query contains a fake SSN
const response = await client.executeQuery({
query: 'Get customer info for SSN 123-45-6789'
});
// Policy automatically redacts: "Get customer info for SSN ***-**-****"
console.log('Original query redacted:', response.metadata.policy_decision);
console.log('Safe query executed');
}
Expected Output:
{
"success": true,
"data": "Customer info retrieved (redacted query)",
"policy_info": {
"allowed": true,
"redacted": true,
"redacted_fields": ["ssn"],
"original_query_hash": "a1b2c3...",
"pii_violations_detected": 1,
"pii_types": ["SSN"]
},
"metadata": {
"latency_ms": 6,
"policy_engine": "shared_policy_engine"
}
}
Verify: Confirm PII was detected and redacted:
# Query the audit log to verify redaction occurred:
curl -s http://localhost:8081/api/v1/executions?limit=1 | jq '.[0].policy_info.redacted'
# Expected: true
# Verify the original SSN does NOT appear in logs:
curl -s http://localhost:8081/api/v1/executions?limit=1 | jq '.[0].query'
# Expected: "Get customer info for SSN ***-**-****"
3.2 Rate Limiting
Enforce rate limits per user or organization.
Policy
package axonflow.policy
import future.keywords.if
# Default allow
default allow = true
# Rate limit: 100 requests per hour per user
deny["Rate limit exceeded"] if {
user_request_count := count_user_requests(input.context.user_id)
user_request_count > 100
}
# Rate limit: 10,000 requests per hour per organization
deny["Organization rate limit exceeded"] if {
org_request_count := count_org_requests(input.context.organization_id)
org_request_count > 10000
}
# Helper functions (implement with external data)
count_user_requests(user_id) := n if {
# Query Redis or a database for the request count in the last hour.
# Note: "count" is a Rego built-in, so bind the result to another name.
n := http.send({
"method": "GET",
"url": sprintf("https://api.internal/rate-limit/user/%s", [user_id])
}).body.count
}
count_org_requests(org_id) := n if {
n := http.send({
"method": "GET",
"url": sprintf("https://api.internal/rate-limit/org/%s", [org_id])
}).body.count
}
3.3 Role-Based Access Control (RBAC)
Control access based on user roles.
Policy
package axonflow.policy
import future.keywords.if
# Default deny
default allow = false
# Admin can do anything
allow if {
input.context.user_role == "admin"
}
# Customer service can view customer data
allow if {
input.context.user_role == "customer_service"
is_customer_query(input.query)
}
# Analysts can run reports
allow if {
input.context.user_role == "analyst"
is_report_query(input.query)
}
# Regular users can only query their own data
allow if {
input.context.user_role == "user"
is_own_data_query(input.query, input.context.user_id)
}
# Helper functions
is_customer_query(query) if {
contains(lower(query), "customer")
}
is_report_query(query) if {
contains(lower(query), "report")
contains(lower(query), "analytics")
}
is_own_data_query(query, user_id) if {
contains(query, user_id)
}
4. MCP Connector Integration
4.1 Salesforce Query
Query Salesforce CRM data with permission checks.
TypeScript
async function querySalesforce() {
const response = await client.executeQuery({
query: 'Get all opportunities closing this quarter',
mcp: {
connector: 'salesforce',
operation: 'query',
parameters: {
object: 'Opportunity',
fields: ['Id', 'Name', 'Amount', 'CloseDate', 'StageName'],
where: 'CloseDate >= THIS_QUARTER'
}
},
context: {
user_id: 'sales-rep-123',
user_role: 'sales_representative'
}
});
console.log('Opportunities:', response.result);
}
Salesforce Policy
package axonflow.policy
# Sales reps can query their own opportunities
allow {
input.context.user_role == "sales_representative"
input.mcp.connector == "salesforce"
input.mcp.parameters.object == "Opportunity"
check_ownership(input.context.user_id, input.mcp.parameters)
}
# Sales managers can query all opportunities
allow {
input.context.user_role == "sales_manager"
input.mcp.connector == "salesforce"
input.mcp.parameters.object == "Opportunity"
}
check_ownership(user_id, params) {
# Policies cannot rewrite the request; instead require that the query
# is already scoped to the requesting user's records
contains(params.where, sprintf("OwnerId = '%s'", [user_id]))
}
4.2 Snowflake Data Access
Query Snowflake data warehouse with column-level security.
TypeScript
async function querySnowflake() {
const response = await client.executeQuery({
query: 'Get customer revenue by region for Q4 2024',
mcp: {
connector: 'snowflake',
operation: 'query',
parameters: {
database: 'ANALYTICS',
schema: 'PUBLIC',
query: `
SELECT
region,
SUM(revenue) as total_revenue,
COUNT(DISTINCT customer_id) as customer_count
FROM customer_revenue
WHERE quarter = 'Q4_2024'
GROUP BY region
ORDER BY total_revenue DESC
`
}
},
context: {
user_id: 'analyst-456',
user_role: 'data_analyst',
clearance_level: 3
}
});
console.log('Revenue by Region:', response.result);
}
Snowflake Policy
package axonflow.policy
import future.keywords.in
# Analysts can query specific schemas based on clearance level
allow {
input.context.user_role == "data_analyst"
input.mcp.connector == "snowflake"
input.context.clearance_level >= required_clearance_level(input.mcp.parameters.schema)
not contains_sensitive_columns(input.mcp.parameters.query)
}
required_clearance_level(schema) := 3 {
schema == "PUBLIC"
}
required_clearance_level(schema) := 5 {
schema == "SENSITIVE"
}
# Block queries containing sensitive columns
contains_sensitive_columns(query) {
sensitive_columns := ["ssn", "credit_card", "salary", "password"]
lower_query := lower(query)
some column in sensitive_columns
contains(lower_query, column)
}
4.3 Slack Notifications
Send Slack notifications with approval workflows.
TypeScript
async function sendSlackAlert() {
const response = await client.executeQuery({
query: 'Send alert to #engineering channel about production incident',
mcp: {
connector: 'slack',
operation: 'send_message',
parameters: {
channel: '#engineering',
message: '🚨 Production incident detected: High error rate',
severity: 'critical',
incident_id: 'INC-12345'
}
},
context: {
user_id: 'oncall-engineer',
user_role: 'sre',
incident_severity: 'critical'
}
});
console.log('Slack message sent:', response.result);
}
Slack Policy
package axonflow.policy
import future.keywords.in
# SREs can send critical alerts anytime
allow {
input.context.user_role == "sre"
input.mcp.parameters.severity == "critical"
input.mcp.connector == "slack"
}
# Developers can send info/warning alerts during business hours
allow {
input.context.user_role == "developer"
input.mcp.parameters.severity in ["info", "warning"]
is_business_hours(input.context.timestamp)
}
# Block spam (max 5 messages per hour per user)
deny["Slack rate limit exceeded"] {
user_message_count := count_slack_messages(input.context.user_id)
user_message_count > 5
}
is_business_hours(timestamp) {
# timestamp is an RFC3339 string (e.g. context.timestamp); parse to ns first
hour := time.clock(time.parse_rfc3339_ns(timestamp))[0]
hour >= 9
hour < 17
}
5. Error Handling
5.1 Retry Logic with Exponential Backoff
TypeScript
async function queryWithRetry(
request: QueryRequest,
maxRetries: number = 3
): Promise<QueryResponse> {
let lastError: Error;
for (let attempt = 0; attempt < maxRetries; attempt++) {
try {
const response = await client.executeQuery(request);
return response;
} catch (error) {
lastError = error as Error;
// Don't retry on policy violations or invalid requests
const code = (error as { code?: string }).code;
if (code === 'POLICY_DENIED' || code === 'INVALID_REQUEST') {
throw error;
}
// Exponential backoff: 1s, 2s, 4s
const delayMs = Math.pow(2, attempt) * 1000;
console.log(`Retry attempt ${attempt + 1}/${maxRetries} after ${delayMs}ms`);
await new Promise(resolve => setTimeout(resolve, delayMs));
}
}
throw new Error(`Failed after ${maxRetries} retries: ${lastError.message}`);
}
// Usage
try {
const response = await queryWithRetry({
query: 'Get customer data'
});
console.log('Success:', response.result);
} catch (error) {
console.error('Failed:', error);
}
5.2 Graceful Degradation
TypeScript
async function queryWithFallback(query: string) {
try {
// Try primary query with MCP connector
const response = await client.executeQuery({
query: query,
mcp: { connector: 'snowflake', operation: 'query' }
});
return { data: response.result, source: 'snowflake' };
} catch (error) {
console.warn('Primary query failed, trying fallback:', error);
try {
// Fallback to LLM-generated response
const response = await client.executeQuery({
query: query,
llm: { provider: 'aws-bedrock', model: 'anthropic.claude-sonnet-4-20250514-v1:0' }
});
return { data: response.result, source: 'llm' };
} catch (llmError) {
console.error('Fallback also failed:', llmError);
// Final fallback: static response
return {
data: 'Service temporarily unavailable. Please try again later.',
source: 'static'
};
}
}
}
5.3 Circuit Breaker Pattern
TypeScript
class CircuitBreaker {
private failures: number = 0;
private lastFailureTime: number = 0;
private state: 'closed' | 'open' | 'half-open' = 'closed';
constructor(
private threshold: number = 5,
private timeout: number = 60000 // 1 minute
) {}
async execute<T>(fn: () => Promise<T>): Promise<T> {
if (this.state === 'open') {
if (Date.now() - this.lastFailureTime > this.timeout) {
this.state = 'half-open';
} else {
throw new Error('Circuit breaker is OPEN');
}
}
try {
const result = await fn();
this.onSuccess();
return result;
} catch (error) {
this.onFailure();
throw error;
}
}
private onSuccess() {
this.failures = 0;
this.state = 'closed';
}
private onFailure() {
this.failures++;
this.lastFailureTime = Date.now();
if (this.failures >= this.threshold) {
this.state = 'open';
console.error('Circuit breaker opened due to failures');
}
}
}
// Usage
const breaker = new CircuitBreaker(5, 60000);
async function reliableQuery(query: string) {
return breaker.execute(async () => {
return await client.executeQuery({
query: query
});
});
}
6. Performance Optimization
6.1 Policy Caching
Cache compiled policies to reduce latency.
TypeScript
import * as fs from 'fs';
import * as crypto from 'crypto';
class PolicyCache {
private cache = new Map<string, string>();
load(policyPath: string): string {
// Key the cache by path so each policy file is read from disk only once
const cached = this.cache.get(policyPath);
if (cached !== undefined) {
return cached;
}
const content = fs.readFileSync(policyPath, 'utf-8');
const hash = crypto.createHash('sha256').update(content).digest('hex');
this.cache.set(policyPath, content);
console.log('Policy loaded and cached:', hash);
return content;
}
clear() {
this.cache.clear();
}
}
const policyCache = new PolicyCache();
async function optimizedQuery() {
// Policies are enforced server-side (see Section 3); loading them locally
// applies only if your deployment syncs policy files to AxonFlow
const policy = policyCache.load('./policies/main.rego');
const response = await client.executeQuery({
query: 'Get customer data'
});
return response;
}
6.2 Connection Pooling
Reuse client connections for better performance.
TypeScript
class AxonFlowConnectionPool {
private static instance: AxonFlow;
static getClient(): AxonFlow {
if (!this.instance) {
this.instance = new AxonFlow({
endpoint: process.env.AXONFLOW_ENDPOINT!,
clientId: process.env.AXONFLOW_CLIENT_ID!,
clientSecret: process.env.AXONFLOW_CLIENT_SECRET!,
poolSize: 10, // Connection pool size
keepAlive: true
});
console.log('✅ AxonFlow connection pool initialized');
}
return this.instance;
}
}
// Usage - reuse same client across requests
const client = AxonFlowConnectionPool.getClient();
const response = await client.executeQuery({ query });
6.3 Batch Query Execution
Execute multiple queries efficiently.
TypeScript
async function batchQueries(queries: string[]) {
const batchSize = 10; // Process 10 queries at a time
const results = [];
for (let i = 0; i < queries.length; i += batchSize) {
const batch = queries.slice(i, i + batchSize);
// Execute batch in parallel
const batchResults = await Promise.all(
batch.map(query =>
client.executeQuery({
query: query
})
)
);
results.push(...batchResults);
console.log(`Processed ${Math.min(i + batchSize, queries.length)}/${queries.length} queries`);
}
return results;
}
// Usage
const queries = [
'Get customer 1',
'Get customer 2',
// ... 100 more queries
];
const results = await batchQueries(queries);
console.log(`Processed ${results.length} queries`);
Community vs Enterprise
All examples on this page work with AxonFlow Community. Enterprise unlocks:
| Capability | Community | Enterprise |
|---|---|---|
| System policies (63 built-in) | ✅ | ✅ |
| Custom tenant policies | ✅ 30 limit | ✅ Unlimited |
| MAP orchestration | ✅ Basic | ✅ Advanced (nested coordinators, sagas) |
| MCP connectors | ✅ PostgreSQL, Redis | ✅ + Salesforce, Snowflake, Amadeus, Jira |
| LLM providers | ✅ OpenAI, Anthropic, Gemini, Ollama | ✅ + AWS Bedrock (HIPAA) |
| Workflow Control Plane (WCP) | ❌ | ✅ |
| Compliance frameworks | ❌ | ✅ |
Compare Editions | Request Demo
Next Steps
Explore Examples
- Healthcare AI Assistant - HIPAA-compliant medical assistant
- E-commerce Recommendations - Product recommendation engine
- Customer Support Chatbot - Automated support agent
- Trip Planner - Multi-agent travel planning
Learn More
- Policy Syntax Reference - Complete policy language guide
- MCP Connectors - Available data connectors
- API Reference - Complete API documentation
- Security Best Practices - Production security guide
Get Help
- Documentation: https://docs.getaxonflow.com
- Email: [email protected]
- GitHub: https://github.com/getaxonflow/axonflow
All examples tested with AxonFlow v4.2.0
