# Workflow Control Plane

> "LangChain runs the workflow. AxonFlow decides when it's allowed to move forward."

The Workflow Control Plane provides governance gates for external orchestrators such as LangChain, LangGraph, and CrewAI. Instead of modifying your orchestrator's code, you add checkpoint calls to AxonFlow before each step executes.
## Overview

External orchestrators (LangChain, LangGraph, CrewAI) excel at workflow execution, but enterprises need governance controls on top. The Workflow Control Plane provides:

- **Step Gates** - Policy checkpoints before each workflow step
- **Decision Types** - Allow, block, or require approval
- **Policy Integration** - Reuses AxonFlow's policy engine
- **Audit Trail** - Every step decision is recorded
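
The checkpoint pattern is orchestrator-agnostic: before each step runs, ask AxonFlow for a decision and act on it. Here is a minimal sketch of that loop, where a stub `check_gate` stands in for the real HTTP call and the step shapes are illustrative, not part of the SDK:

```python
# Sketch of the gate-before-step loop. `check_gate` stands in for the
# real AxonFlow gate call; here it blocks gpt-4 steps as an example policy.

def check_gate(step: dict) -> str:
    if step.get("model") == "gpt-4":
        return "block"
    return "allow"

def run_workflow(steps: list[dict]) -> list[str]:
    executed = []
    for step in steps:
        if check_gate(step) == "allow":
            executed.append(step["name"])  # the orchestrator runs the step
        else:
            break                          # blocked: abort the workflow
    return executed
```

In the real integration, `check_gate` is the `/gate` endpoint or the SDK's `step_gate` call shown below; the loop structure stays the same.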
## Quick Start

### 1. Start AxonFlow

```bash
docker compose up -d
```

### 2. Create a Workflow

```bash
curl -X POST http://localhost:8080/api/v1/workflows \
  -H "Content-Type: application/json" \
  -d '{
    "workflow_name": "code-review-pipeline",
    "source": "langgraph",
    "total_steps": 3
  }'
```

Response:

```json
{
  "workflow_id": "wf_abc123",
  "workflow_name": "code-review-pipeline",
  "status": "in_progress"
}
```
### 3. Check the Step Gate

Before executing each step, check whether it is allowed:

```bash
curl -X POST http://localhost:8080/api/v1/workflows/wf_abc123/steps/step-1/gate \
  -H "Content-Type: application/json" \
  -d '{
    "step_name": "Generate Code",
    "step_type": "llm_call",
    "model": "gpt-4",
    "provider": "openai"
  }'
```

Response (allowed):

```json
{
  "decision": "allow",
  "step_id": "step-1"
}
```

Response (blocked):

```json
{
  "decision": "block",
  "step_id": "step-1",
  "reason": "GPT-4 not allowed in production",
  "policy_ids": ["policy_gpt4_block"]
}
```
### 4. Complete the Workflow

```bash
curl -X POST http://localhost:8080/api/v1/workflows/wf_abc123/complete
```
## SDK Integration

### Python

```python
from axonflow import AxonFlow
from axonflow.workflow import (
    CreateWorkflowRequest,
    StepGateRequest,
    StepType,
    GateDecision,
)

async with AxonFlow(endpoint="http://localhost:8080") as client:
    # Create the workflow
    workflow = await client.create_workflow(
        CreateWorkflowRequest(
            workflow_name="code-review-pipeline",
            source="langgraph",
        )
    )

    # Check the gate before each step
    gate = await client.step_gate(
        workflow_id=workflow.workflow_id,
        step_id="step-1",
        request=StepGateRequest(
            step_name="Generate Code",
            step_type=StepType.LLM_CALL,
            model="gpt-4",
        ),
    )

    if gate.is_allowed():
        # Execute your step
        result = execute_step()
        await client.mark_step_completed(workflow.workflow_id, "step-1")
    elif gate.is_blocked():
        print(f"Blocked: {gate.reason}")

    # Complete the workflow
    await client.complete_workflow(workflow.workflow_id)
```
### LangGraph Adapter

For LangGraph workflows, use the specialized adapter:

```python
from axonflow import AxonFlow
from axonflow.adapters import AxonFlowLangGraphAdapter

async with AxonFlow(endpoint="http://localhost:8080") as client:
    adapter = AxonFlowLangGraphAdapter(client, "my-workflow")

    # Start the workflow
    await adapter.start_workflow(total_steps=3)

    # Before each LangGraph node
    if await adapter.check_gate("generate", "llm_call", model="gpt-4"):
        result = await generate_code(state)
        await adapter.step_completed("generate")

    # Complete the workflow
    await adapter.complete_workflow()
```
### Go

```go
client := axonflow.NewClient(axonflow.AxonFlowConfig{
    Endpoint: "http://localhost:8080",
})

// Create the workflow
workflow, _ := client.CreateWorkflow(axonflow.CreateWorkflowRequest{
    WorkflowName: "code-review-pipeline",
    Source:       axonflow.WorkflowSourceLangGraph,
})

// Check the gate
gate, _ := client.StepGate(workflow.WorkflowID, "step-1", axonflow.StepGateRequest{
    StepName: "Generate Code",
    StepType: axonflow.StepTypeLLMCall,
    Model:    "gpt-4",
})

if gate.IsAllowed() {
    // Execute the step
    client.MarkStepCompleted(workflow.WorkflowID, "step-1", nil)
}

client.CompleteWorkflow(workflow.WorkflowID)
```
### TypeScript

```typescript
import { AxonFlow } from "@axonflow/sdk";

const axonflow = new AxonFlow({ endpoint: "http://localhost:8080" });

// Create the workflow
const workflow = await axonflow.createWorkflow({
  workflowName: "code-review-pipeline",
  source: "langgraph",
});

// Check the gate
const gate = await axonflow.stepGate(workflow.workflowId, "step-1", {
  stepName: "Generate Code",
  stepType: "llm_call",
  model: "gpt-4",
});

if (gate.decision === "allow") {
  // Execute the step
  await axonflow.markStepCompleted(workflow.workflowId, "step-1");
}

await axonflow.completeWorkflow(workflow.workflowId);
```
### Java

```java
AxonFlow client = AxonFlow.create(AxonFlowConfig.builder()
    .endpoint("http://localhost:8080")
    .build());

// Create the workflow
CreateWorkflowResponse workflow = client.createWorkflow(
    CreateWorkflowRequest.builder()
        .workflowName("code-review-pipeline")
        .source(WorkflowSource.LANGGRAPH)
        .build()
);

// Check the gate
StepGateResponse gate = client.stepGate(
    workflow.getWorkflowId(),
    "step-1",
    StepGateRequest.builder()
        .stepName("Generate Code")
        .stepType(StepType.LLM_CALL)
        .model("gpt-4")
        .build()
);

if (gate.isAllowed()) {
    // Execute the step
    client.markStepCompleted(workflow.getWorkflowId(), "step-1", null);
}

client.completeWorkflow(workflow.getWorkflowId());
```
## Gate Decisions

| Decision | Description | Action |
|---|---|---|
| `allow` | Step is allowed to proceed | Execute the step |
| `block` | Step is blocked by policy | Skip or abort the workflow |
| `require_approval` | Human approval required | Wait for approval (Enterprise) |
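
The three decisions can be dispatched in a single branch at each gate. A minimal sketch, using a plain dict that mirrors the `/gate` response payload (the returned labels are illustrative, not SDK values):

```python
# Map each gate decision to the action described in the table above.
# In real code you would execute the step, abort the workflow, or park
# the step until approval arrives.

def handle_gate(gate: dict) -> str:
    decision = gate["decision"]
    if decision == "allow":
        return "execute"
    if decision == "block":
        return f"abort: {gate.get('reason', 'blocked by policy')}"
    if decision == "require_approval":
        return "wait_for_approval"
    raise ValueError(f"unknown gate decision: {decision}")
```

Raising on an unknown decision is deliberate: failing closed is safer than silently executing a step the policy engine never approved.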
## Step Types

| Type | Description | Example |
|---|---|---|
| `llm_call` | LLM API call | OpenAI, Anthropic, Bedrock |
| `tool_call` | Tool/function execution | Code execution, file operations |
| `connector_call` | MCP connector call | Database, API integrations |
| `human_task` | Human-in-the-loop task | Manual review, approval |
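
If your orchestrator doesn't tag steps with a type, you can derive one before calling the gate. A sketch, assuming hypothetical field names on the step descriptor (AxonFlow only sees the resulting `step_type` string in the gate request):

```python
# Illustrative classification; the `needs_human`, `connector`, and `model`
# fields are assumptions about your own step representation.

def classify_step(step: dict) -> str:
    if step.get("needs_human"):
        return "human_task"       # manual review / approval
    if step.get("connector"):
        return "connector_call"   # MCP connector (database, API)
    if step.get("model"):
        return "llm_call"         # any LLM API call
    return "tool_call"            # default: local tool/function execution
```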
## Workflow Sources

| Source | Description |
|---|---|
| `langgraph` | LangGraph workflow |
| `langchain` | LangChain workflow |
| `crewai` | CrewAI workflow |
| `external` | Other external orchestrator |
## Policy Configuration

Create policies with `scope: workflow` to control step execution:

### Block Specific Models

```json
{
  "name": "block-gpt4-in-workflows",
  "scope": "workflow",
  "conditions": {
    "step_type": "llm_call",
    "model": "gpt-4"
  },
  "action": "block",
  "reason": "GPT-4 not allowed in production workflows"
}
```
### Require Approval for Deployments

```json
{
  "name": "require-approval-for-deploy",
  "scope": "workflow",
  "conditions": {
    "step_type": "connector_call",
    "step_name": "deploy"
  },
  "action": "require_approval",
  "reason": "Deployment steps require human approval"
}
```
### Block PII in Step Inputs

```json
{
  "name": "block-pii-in-workflow-inputs",
  "scope": "workflow",
  "conditions": {
    "step_input.contains_pii": true
  },
  "action": "block",
  "reason": "PII detected in workflow step input"
}
```
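
Conceptually, a policy's `conditions` object matches a gate request when every condition field equals the corresponding request field. A simplified sketch of flat-field matching (illustrative only; AxonFlow's policy engine evaluates policies server-side and also supports derived keys like `step_input.contains_pii`):

```python
# Every key in the policy's conditions must equal the same field in the
# gate request for the policy's action to apply.

def matches(policy: dict, request: dict) -> bool:
    return all(request.get(k) == v for k, v in policy["conditions"].items())

gpt4_block = {
    "conditions": {"step_type": "llm_call", "model": "gpt-4"},
    "action": "block",
}
```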
## API Reference
| Method | Endpoint | Description |
|---|---|---|
| POST | /api/v1/workflows | Create workflow |
| GET | /api/v1/workflows/{id} | Get workflow status |
| POST | /api/v1/workflows/{id}/steps/{step_id}/gate | Check step gate |
| POST | /api/v1/workflows/{id}/steps/{step_id}/complete | Mark step completed |
| POST | /api/v1/workflows/{id}/complete | Complete workflow |
| POST | /api/v1/workflows/{id}/abort | Abort workflow |
| POST | /api/v1/workflows/{id}/resume | Resume workflow |
| GET | /api/v1/workflows | List workflows |
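
All workflow endpoints share the same path structure, so URL construction can be centralized. An illustrative helper assembling the paths from the table above (the base URL is the default local agent endpoint; the helper itself is not part of the SDK):

```python
# Build Workflow Control Plane endpoint URLs from workflow and step IDs.

BASE = "http://localhost:8080/api/v1"

def workflow_url(workflow_id: str, action: str = "") -> str:
    url = f"{BASE}/workflows/{workflow_id}"
    return f"{url}/{action}" if action else url

def step_url(workflow_id: str, step_id: str, action: str) -> str:
    return f"{workflow_url(workflow_id)}/steps/{step_id}/{action}"
```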
## Best Practices

### 1. Use Descriptive Step Names

```python
# Good
await adapter.check_gate("generate_code", "llm_call")
await adapter.check_gate("review_code", "tool_call")
await adapter.check_gate("deploy_to_staging", "connector_call")

# Bad
await adapter.check_gate("step1", "llm_call")
await adapter.check_gate("step2", "tool_call")
```
### 2. Always Handle Block Decisions

```python
gate = await client.step_gate(...)

if gate.is_blocked():
    # Log the reason
    logger.warning(f"Step blocked: {gate.reason}")
    # Abort the workflow
    await client.abort_workflow(workflow_id, gate.reason)
    return
```
### 3. Use a Context Manager for Cleanup

```python
async with AxonFlowLangGraphAdapter(client, "my-workflow") as adapter:
    await adapter.start_workflow()
    # If an exception occurs, the workflow is automatically aborted
    # If successful, the workflow is automatically completed
```
### 4. Include Relevant Metadata

```python
workflow = await client.create_workflow(
    CreateWorkflowRequest(
        workflow_name="code-review-pipeline",
        metadata={
            "environment": "production",
            "team": "engineering",
            "triggered_by": "github-action",
        },
    )
)
```
## Community vs. Enterprise

| Feature | Community | Enterprise |
|---|---|---|
| Step gates (allow/block) | Yes | Yes |
| Policy evaluation | Yes | Yes |
| SDK support (4 languages) | Yes | Yes |
| LangGraph adapter | Yes | Yes |
| `require_approval` action | Returns decision | Routes to Portal HITL |
| Org-level policies | No | Yes |
| Cross-workflow analytics | No | Yes |
## Troubleshooting

### Gate Returns "allow" When Expected to Block

- Check that the policy exists and is enabled
- Verify the policy scope is `workflow`
- Check that the conditions match the step request

### Workflow Stuck in "in_progress"

- Ensure you call `complete_workflow()` or `abort_workflow()`
- Check for unhandled exceptions in your code
- Use the context manager for automatic cleanup

### Connection Refused

- Ensure the AxonFlow Agent is running: `docker compose ps`
- Check that the endpoint URL matches your configuration
- Verify network connectivity
## Examples

See the complete examples in `examples/workflow-control/`:

- `http/workflow-control.sh` - HTTP/curl example
- `go/main.go` - Go SDK example
- `python/main.py` - Python SDK example
- `python/langgraph_example.py` - LangGraph adapter example
- `typescript/index.ts` - TypeScript SDK example
- `java/WorkflowControl.java` - Java SDK example