Getting Started with Multi-Agent Planning
This guide walks you through setting up your first multi-agent workflow using AxonFlow's Multi-Agent Planning (MAP) system.
Prerequisites
- AxonFlow running locally or deployed (see Local Development)
- At least one LLM provider configured. Current runtime provider names include openai, anthropic, gemini, azure-openai, bedrock, and ollama.
- Basic understanding of YAML configuration
Quick Start
1. Start AxonFlow
# Clone and start AxonFlow
git clone https://github.com/getaxonflow/axonflow.git
cd axonflow
docker compose up -d
Verify services are running:
# Check Agent health
curl http://localhost:8080/health
# Check Orchestrator health
curl http://localhost:8081/health
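If you script your setup, you may want to block until the services come up instead of eyeballing curl output. A minimal polling sketch; the wait_until_healthy helper and its injectable fetch parameter are illustrative conveniences, not part of AxonFlow:

```python
import time
import urllib.request

def wait_until_healthy(url, attempts=10, delay=1.0, fetch=None):
    """Poll a health endpoint until it returns HTTP 200 or attempts run out."""
    if fetch is None:
        fetch = lambda u: urllib.request.urlopen(u, timeout=5).status
    for _ in range(attempts):
        try:
            if fetch(url) == 200:
                return True
        except OSError:
            pass  # service not listening yet; retry after the delay
        time.sleep(delay)
    return False
```

For example, wait_until_healthy("http://localhost:8080/health") before sending your first request.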
2. Define Your First Agent
Create an agent configuration file. Agents use a Kubernetes-style YAML format; the example below includes all required fields plus commonly used optional ones:
# config/agents/research-agent.yaml
apiVersion: axonflow.io/v1 # Required: API version
kind: AgentConfig # Required: resource type
metadata:
name: research-agent # Required: unique identifier (lowercase, hyphens)
domain: generic # Required: domain grouping
labels: # Optional: key-value labels for organization
environment: development
team: platform
spec:
type: specialist # Required: specialist or coordinator
description: Research and summarize information on any topic # Required
capabilities: # Required: used by planning engine to select agents
- research
- summarization
- analysis
llm: # Required for llm-call step type
provider: openai # openai, anthropic, gemini, azure-openai, bedrock, ollama
model: gpt-4 # model identifier
temperature: 0.7 # sampling temperature (0.0-2.0)
maxTokens: 2000 # maximum response tokens
timeout: 60s # Optional: execution timeout (default: 60s)
retryPolicy: # Optional: retry on transient failures
maxRetries: 2
initialDelay: 1s
maxDelay: 10s
backoffMultiplier: 2.0
retryableErrors:
- timeout
- rate_limit
promptTemplate: |
You are a research assistant. Your task is to research and provide
comprehensive information about the given topic.
Topic: {{input.query}}
Provide a well-structured response with key findings.
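The retryPolicy fields above describe an exponential backoff: each retry waits initialDelay multiplied by backoffMultiplier per attempt, capped at maxDelay. A minimal sketch of what that schedule works out to; this illustrates the config semantics, not AxonFlow's actual retry implementation:

```python
def retry_schedule(max_retries, initial_delay, backoff_multiplier, max_delay):
    """Delay in seconds before each retry, growing geometrically and capped."""
    delays = []
    delay = initial_delay
    for _ in range(max_retries):
        delays.append(min(delay, max_delay))
        delay *= backoff_multiplier
    return delays
```

With the config above (maxRetries: 2, initialDelay: 1s, backoffMultiplier: 2.0, maxDelay: 10s), retry_schedule(2, 1.0, 2.0, 10.0) yields delays of 1s and then 2s.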
See Agent Configuration for the full schema reference including all fields, connector configuration, and prompt template variables.
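Prompt templates interpolate dotted placeholders such as {{input.query}} (and, for coordinators, {{steps.flight-search.output}}). A rough sketch of that substitution, assuming simple lookup-and-replace semantics over a nested context dict; the real template engine may differ:

```python
import re

def render_template(template, context):
    """Replace {{dotted.path}} placeholders with values from a nested dict."""
    def lookup(match):
        value = context
        for key in match.group(1).split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\{\{\s*([\w.-]+)\s*\}\}", lookup, template)
```

For example, render_template("Topic: {{input.query}}", {"input": {"query": "remote work"}}) produces "Topic: remote work".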
3. Load Agents
Community Edition loads file-based agent definitions from the orchestrator's agent directory. Place your configuration where the orchestrator can read it:
# Copy agent to config directory (adjust path for your setup)
cp config/agents/research-agent.yaml /path/to/axonflow/config/agents/
Or mount the directory in docker-compose:
# docker-compose.override.yaml
services:
orchestrator:
volumes:
- ./config/agents:/etc/axonflow/agents:ro
Restart the orchestrator to load agents:
docker compose restart orchestrator
You can verify that the orchestrator loaded your agents through the Agent entry point:
curl http://localhost:8080/api/v1/agents
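The planning engine matches requests to agents through their declared capabilities. A simplified sketch of capability-based selection over the agent list that endpoint returns; the field names here are assumed to mirror the YAML config above:

```python
def agents_with_capability(agents, capability):
    """Return names of agents that declare a given capability."""
    return [a["name"] for a in agents if capability in a.get("capabilities", [])]
```

With the research-agent loaded, agents_with_capability(agents, "research") would select it for research-flavored queries.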
4. Generate a Plan
AxonFlow's current MAP flow is a two-step lifecycle:
- generate and store a plan
- execute the stored plan when you are ready
Route MAP requests through the Agent on port 8080: plan generation and execution depend on the authenticated context the Agent forwards to the Orchestrator. The examples below therefore use the Agent, which is the public entry point for Community Edition.
Send your request through the Agent's /api/request endpoint:
curl -X POST http://localhost:8080/api/request \
-H "Content-Type: application/json" \
-d '{
"query": "Research the benefits of remote work for software teams",
"request_type": "multi-agent-plan",
"context": {
"domain": "generic"
}
}'
Response:
{
"success": true,
"plan_id": "plan_1765851929_abc123",
"steps": [
{
"id": "step_1",
"name": "research-benefits",
"type": "llm-call",
"agent": "research-agent"
}
],
"metadata": {
"execution_mode": "sequential"
}
}
Separating generation from execution lets you inspect the steps, estimate cost, run approvals, or update the plan before it starts consuming provider calls and connector actions.
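The two request bodies differ only in request_type and context. A small sketch of helpers that build them; the helper names are hypothetical, but the payload shapes follow the curl examples in this guide:

```python
def plan_request(query, domain="generic"):
    """Body for generating and storing a plan."""
    return {"query": query, "request_type": "multi-agent-plan",
            "context": {"domain": domain}}

def execute_request(plan_id):
    """Body for executing a previously stored plan (query stays empty)."""
    return {"query": "", "request_type": "execute-plan",
            "context": {"plan_id": plan_id}}
```

POST either body as JSON to the Agent's /api/request endpoint, generating first and executing once you have reviewed the returned steps.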
5. Execute the Stored Plan
After you have reviewed the generated steps, execute the plan with its plan_id:
curl -X POST http://localhost:8080/api/request \
-H "Content-Type: application/json" \
-d '{
"query": "",
"request_type": "execute-plan",
"context": {
"plan_id": "plan_1765851929_abc123"
}
}'
Response:
{
"success": true,
"data": {
"plan_id": "plan_1765851929_abc123",
"status": "completed",
"result": "## Benefits of Remote Work for Software Teams\n\n### 1. Increased Productivity\n- Fewer office distractions...",
"steps": [
{
"id": "step_1",
"name": "research-benefits",
"type": "llm-call",
"agent": "research-agent",
"status": "completed"
}
]
},
"metadata": {
"execution_time_ms": 2340,
"tasks_executed": 1
}
}
Multi-Step Example
Here's a more complex example with multiple agents working together:
Define Multiple Agents
# config/agents/travel-agents.yaml
apiVersion: axonflow.io/v1
kind: AgentConfig
metadata:
name: flight-search
domain: travel
spec:
type: specialist
description: Search for flight options
capabilities:
- flight_search
- fare_comparison
llm:
provider: openai
model: gpt-4
promptTemplate: |
Search for flights based on:
- Origin: {{input.origin}}
- Destination: {{input.destination}}
- Date: {{input.date}}
Return top 3 flight options with prices.
---
apiVersion: axonflow.io/v1
kind: AgentConfig
metadata:
name: hotel-search
domain: travel
spec:
type: specialist
description: Search for hotel accommodations
capabilities:
- hotel_search
- rate_comparison
llm:
provider: openai
model: gpt-4
promptTemplate: |
Find hotels in {{input.destination}} for {{input.dates}}.
Budget: {{input.budget}}
Return top 3 hotel options.
---
apiVersion: axonflow.io/v1
kind: AgentConfig
metadata:
name: trip-planner
domain: travel
spec:
type: coordinator
description: Coordinate travel planning
capabilities:
- trip_planning
- itinerary_creation
delegatesTo:
- flight-search
- hotel-search
llm:
provider: openai
model: gpt-4
promptTemplate: |
Create a complete travel itinerary combining:
- Flights: {{steps.flight-search.output}}
- Hotels: {{steps.hotel-search.output}}
Provide a summary with total estimated cost.
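The coordinator's template consumes upstream results through steps.&lt;name&gt;.output. A sketch of assembling that context from completed steps; the per-step status and result fields are assumptions based on the execution responses shown in this guide:

```python
def steps_context(completed_steps):
    """Map step name -> {'output': result} for coordinator template interpolation."""
    return {"steps": {s["name"]: {"output": s.get("result", "")}
                      for s in completed_steps
                      if s.get("status") == "completed"}}
```

The trip-planner's {{steps.flight-search.output}} placeholder would then resolve against this dict once both search steps finish.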
Generate the Multi-Agent Plan
With multiple agents defined, send your travel planning request through the Agent:
curl -X POST http://localhost:8080/api/request \
-H "Content-Type: application/json" \
-d '{
"query": "Plan a 3-day trip to Mumbai from Delhi, December 20-23",
"request_type": "multi-agent-plan",
"context": {
"domain": "travel"
}
}'
Response:
The generated plan shows the dependency structure. Steps with the same dependency level can run in parallel after execution starts:
{
"success": true,
"plan_id": "plan_travel_xyz789",
"steps": [
{
"id": "step_1",
"name": "flight-search",
"type": "llm-call",
"agent": "flight-search"
},
{
"id": "step_2",
"name": "hotel-search",
"type": "llm-call",
"agent": "hotel-search"
},
{
"id": "step_3",
"name": "create-itinerary",
"type": "llm-call",
"agent": "trip-planner",
"depends_on": ["step_1", "step_2"]
}
],
"metadata": {
"execution_mode": "auto"
}
}
The orchestrator automatically detects that flight-search and hotel-search have no dependencies on each other and can run them in parallel once execution begins. The create-itinerary step waits for both to complete before executing.
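That scheduling rule can be sketched as grouping steps into waves, where each wave holds the steps whose depends_on entries have all completed. This is an illustration of the dependency semantics, not the orchestrator's actual scheduler:

```python
def execution_waves(steps):
    """Group plan steps into waves that can run in parallel."""
    pending = {s["id"]: set(s.get("depends_on", [])) for s in steps}
    done, waves = set(), []
    while pending:
        ready = sorted(sid for sid, deps in pending.items() if deps <= done)
        if not ready:
            raise ValueError("dependency cycle in plan")
        waves.append(ready)
        done.update(ready)
        for sid in ready:
            del pending[sid]
    return waves
```

Applied to the travel plan above, this yields two waves: step_1 and step_2 together, then step_3.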
Execute the Travel Plan
curl -X POST http://localhost:8080/api/request \
-H "Content-Type: application/json" \
-d '{
"query": "",
"request_type": "execute-plan",
"context": {
"plan_id": "plan_travel_xyz789"
}
}'
Using the SDK
For production applications, use the AxonFlow SDKs, which handle authentication and routing automatically. The current SDKs mirror the same lifecycle: generate first, then execute.
TypeScript SDK
import { AxonFlow } from '@axonflow/sdk';
const client = new AxonFlow({
endpoint: 'http://localhost:8080', // Agent URL
// For self-hosted: no license key needed if SELF_HOSTED_MODE=true
// For cloud: licenseKey: 'your-license-key'
});
// Generate a plan
const plan = await client.generatePlan(
'Research AI governance best practices',
'generic' // domain hint
);
console.log(`Plan ID: ${plan.planId}`);
console.log(`Steps: ${plan.steps.length}`);
for (const step of plan.steps) {
console.log(` - ${step.name} (${step.type})`);
}
// Execute the stored plan
const execution = await client.executePlan(plan.planId);
console.log(`Execution status: ${execution.status}`);
console.log(`Execution metadata:`, execution.metadata);
Python SDK
from axonflow import AxonFlow
async with AxonFlow(
endpoint="http://localhost:8080",
# For self-hosted: no license_key needed if SELF_HOSTED_MODE=true
# For cloud: license_key="your-license-key"
) as client:
# Generate a plan
plan = await client.generate_plan(
query="Research AI governance best practices",
domain="generic"
)
print(f"Plan ID: {plan.plan_id}")
print(f"Generated steps: {len(plan.steps)}")
for step in plan.steps:
print(f" - {step.name} ({step.type})")
# Execute the stored plan
execution = await client.execute_plan(plan.plan_id)
print(f"Execution status: {execution.status}")
print(f"Result preview: {execution.result[:200]}...")
Next Steps
Now that you have a basic multi-agent workflow running:
- Agent Configuration - Learn the full agent YAML schema
- Step Types - Use connectors, conditionals, and more
- Planning Patterns - Advanced orchestration patterns
- LLM Overview - Configure providers, routing, and runtime behavior
- API Reference - Complete API documentation
Troubleshooting
Agent Not Found
If you get "agent not found" errors:
- Check that the agent file is in the correct directory
- Verify that the YAML syntax is valid
- Restart the orchestrator to reload agents
- Check the orchestrator logs:
docker compose logs orchestrator
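To catch schema mistakes before restarting the orchestrator, you can sanity-check a parsed config against the required fields from step 2. This validator is a local convenience sketch, not an official AxonFlow tool, and only covers the fields shown in this guide:

```python
import re

def check_agent_config(doc):
    """Return a list of problems with a parsed AgentConfig document."""
    problems = []
    if doc.get("apiVersion") != "axonflow.io/v1":
        problems.append("apiVersion must be axonflow.io/v1")
    if doc.get("kind") != "AgentConfig":
        problems.append("kind must be AgentConfig")
    meta = doc.get("metadata", {})
    if not re.fullmatch(r"[a-z][a-z0-9-]*", meta.get("name", "")):
        problems.append("metadata.name must be lowercase with hyphens")
    if not meta.get("domain"):
        problems.append("metadata.domain is required")
    spec = doc.get("spec", {})
    if spec.get("type") not in ("specialist", "coordinator"):
        problems.append("spec.type must be specialist or coordinator")
    if not spec.get("capabilities"):
        problems.append("spec.capabilities must list at least one capability")
    return problems
```

Load the YAML with any parser, pass the resulting dict in, and an empty list means the required fields look sound.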
Plan Generation Fails
If plan generation fails:
- Verify the domain matches your agent's domain
- Check that the LLM provider is configured and has valid credentials
- Review the query for clarity
Step Execution Timeout
If steps time out:
- Increase timeout in agent config:
spec.timeout: 120s
- Check LLM provider status
- Simplify the prompt template
