Workflow Control Plane

"LangChain runs the workflow. AxonFlow decides when it's allowed to move forward."

The Workflow Control Plane provides governance gates for external orchestrators such as LangChain, LangGraph, and CrewAI. Rather than replacing your orchestrator, you add lightweight checkpoint calls to AxonFlow before each step executes.

Overview

External orchestrators (LangChain, LangGraph, CrewAI) are great at workflow execution, but enterprises need governance controls. The Workflow Control Plane solves this by providing:

  1. Step Gates - Policy checkpoints before each workflow step
  2. Decision Types - Allow, block, or require approval
  3. Policy Integration - Reuses AxonFlow's policy engine
  4. Audit Trail - Every step decision is recorded
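Taken together, these pieces amount to a gate-before-step loop. A minimal sketch of that pattern, with a plain `check_gate` callable standing in for the SDK calls shown later on this page:

```python
# Sketch of the gate-before-step pattern. `check_gate` is a hypothetical
# stand-in for the AxonFlow step-gate call; it returns one of the
# decision strings "allow", "block", or "require_approval".
def run_with_gates(steps, check_gate, execute):
    """Run each step only if its gate allows it; stop on any other decision."""
    completed = []
    for step in steps:
        decision = check_gate(step)
        if decision != "allow":
            return completed, decision  # surface the non-allow decision
        execute(step)
        completed.append(step)
    return completed, "allow"
```

The orchestrator keeps driving execution; AxonFlow only answers the "may this step run?" question at each checkpoint.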

Quick Start

1. Start AxonFlow

docker compose up -d

2. Create a Workflow

curl -X POST http://localhost:8080/api/v1/workflows \
  -H "Content-Type: application/json" \
  -d '{
    "workflow_name": "code-review-pipeline",
    "source": "langgraph",
    "total_steps": 3
  }'

Response:

{
  "workflow_id": "wf_abc123",
  "workflow_name": "code-review-pipeline",
  "status": "in_progress"
}

3. Check Step Gate

Before executing each step, check if it's allowed:

curl -X POST http://localhost:8080/api/v1/workflows/wf_abc123/steps/step-1/gate \
  -H "Content-Type: application/json" \
  -d '{
    "step_name": "Generate Code",
    "step_type": "llm_call",
    "model": "gpt-4",
    "provider": "openai"
  }'

Response (allowed):

{
  "decision": "allow",
  "step_id": "step-1"
}

Response (blocked):

{
  "decision": "block",
  "step_id": "step-1",
  "reason": "GPT-4 not allowed in production",
  "policy_ids": ["policy_gpt4_block"]
}

4. Complete Workflow

curl -X POST http://localhost:8080/api/v1/workflows/wf_abc123/complete

SDK Integration

Python

from axonflow import AxonFlow
from axonflow.workflow import (
    CreateWorkflowRequest,
    StepGateRequest,
    StepType,
    GateDecision,
)

async with AxonFlow(endpoint="http://localhost:8080") as client:
    # Create workflow
    workflow = await client.create_workflow(
        CreateWorkflowRequest(
            workflow_name="code-review-pipeline",
            source="langgraph"
        )
    )

    # Check gate before each step
    gate = await client.step_gate(
        workflow_id=workflow.workflow_id,
        step_id="step-1",
        request=StepGateRequest(
            step_name="Generate Code",
            step_type=StepType.LLM_CALL,
            model="gpt-4"
        )
    )

    if gate.is_allowed():
        # Execute your step
        result = execute_step()
        await client.mark_step_completed(workflow.workflow_id, "step-1")
    elif gate.is_blocked():
        print(f"Blocked: {gate.reason}")

    # Complete workflow
    await client.complete_workflow(workflow.workflow_id)

LangGraph Adapter

For LangGraph workflows, use the specialized adapter:

from axonflow import AxonFlow
from axonflow.adapters import AxonFlowLangGraphAdapter

async with AxonFlow(endpoint="http://localhost:8080") as client:
    adapter = AxonFlowLangGraphAdapter(client, "my-workflow")

    # Start workflow
    await adapter.start_workflow(total_steps=3)

    # Before each LangGraph node
    if await adapter.check_gate("generate", "llm_call", model="gpt-4"):
        result = await generate_code(state)
        await adapter.step_completed("generate")

    # Complete workflow
    await adapter.complete_workflow()

Go

client := axonflow.NewClient(axonflow.AxonFlowConfig{
	Endpoint: "http://localhost:8080",
})

// Create workflow
workflow, _ := client.CreateWorkflow(axonflow.CreateWorkflowRequest{
	WorkflowName: "code-review-pipeline",
	Source:       axonflow.WorkflowSourceLangGraph,
})

// Check gate
gate, _ := client.StepGate(workflow.WorkflowID, "step-1", axonflow.StepGateRequest{
	StepName: "Generate Code",
	StepType: axonflow.StepTypeLLMCall,
	Model:    "gpt-4",
})

if gate.IsAllowed() {
	// Execute step
	client.MarkStepCompleted(workflow.WorkflowID, "step-1", nil)
}

client.CompleteWorkflow(workflow.WorkflowID)

TypeScript

import { AxonFlow } from "@axonflow/sdk";

const axonflow = new AxonFlow({ endpoint: "http://localhost:8080" });

// Create workflow
const workflow = await axonflow.createWorkflow({
  workflowName: "code-review-pipeline",
  source: "langgraph",
});

// Check gate
const gate = await axonflow.stepGate(workflow.workflowId, "step-1", {
  stepName: "Generate Code",
  stepType: "llm_call",
  model: "gpt-4",
});

if (gate.decision === "allow") {
  // Execute step
  await axonflow.markStepCompleted(workflow.workflowId, "step-1");
}

await axonflow.completeWorkflow(workflow.workflowId);

Java

AxonFlow client = AxonFlow.create(AxonFlowConfig.builder()
        .endpoint("http://localhost:8080")
        .build());

// Create workflow
CreateWorkflowResponse workflow = client.createWorkflow(
        CreateWorkflowRequest.builder()
                .workflowName("code-review-pipeline")
                .source(WorkflowSource.LANGGRAPH)
                .build()
);

// Check gate
StepGateResponse gate = client.stepGate(
        workflow.getWorkflowId(),
        "step-1",
        StepGateRequest.builder()
                .stepName("Generate Code")
                .stepType(StepType.LLM_CALL)
                .model("gpt-4")
                .build()
);

if (gate.isAllowed()) {
    // Execute step
    client.markStepCompleted(workflow.getWorkflowId(), "step-1", null);
}

client.completeWorkflow(workflow.getWorkflowId());

Gate Decisions

| Decision | Description | Action |
| --- | --- | --- |
| `allow` | Step is allowed to proceed | Execute the step |
| `block` | Step is blocked by policy | Skip or abort workflow |
| `require_approval` | Human approval required | Wait for approval (Enterprise) |
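One way to act on these decisions in calling code (a sketch; the three handler callables are hypothetical and would wrap your own step execution, abort, and approval-wait logic):

```python
# Map a gate decision to the action described in the table above.
# The decision strings come from the gate response; the handlers are
# placeholders for your own logic.
def handle_decision(decision, execute, abort, wait_for_approval):
    if decision == "allow":
        return execute()
    if decision == "block":
        return abort()
    if decision == "require_approval":
        return wait_for_approval()
    raise ValueError(f"unknown gate decision: {decision}")
```

Raising on an unknown decision keeps the dispatcher fail-closed if new decision types are added later.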

Step Types

| Type | Description | Example |
| --- | --- | --- |
| `llm_call` | LLM API call | OpenAI, Anthropic, Bedrock |
| `tool_call` | Tool/function execution | Code execution, file operations |
| `connector_call` | MCP connector call | Database, API integrations |
| `human_task` | Human-in-the-loop task | Manual review, approval |

Workflow Sources

| Source | Description |
| --- | --- |
| `langgraph` | LangGraph workflow |
| `langchain` | LangChain workflow |
| `crewai` | CrewAI workflow |
| `external` | Other external orchestrator |

Policy Configuration

Create policies with `"scope": "workflow"` to control step execution:

Block Specific Models

{
  "name": "block-gpt4-in-workflows",
  "scope": "workflow",
  "conditions": {
    "step_type": "llm_call",
    "model": "gpt-4"
  },
  "action": "block",
  "reason": "GPT-4 not allowed in production workflows"
}

Require Approval for Deployments

{
  "name": "require-approval-for-deploy",
  "scope": "workflow",
  "conditions": {
    "step_type": "connector_call",
    "step_name": "deploy"
  },
  "action": "require_approval",
  "reason": "Deployment steps require human approval"
}

Block PII in Step Inputs

{
  "name": "block-pii-in-workflow-inputs",
  "scope": "workflow",
  "conditions": {
    "step_input.contains_pii": true
  },
  "action": "block",
  "reason": "PII detected in workflow step input"
}
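To build intuition for how `conditions` relate to a step-gate request, here is a deliberately simplified matcher: every condition key must equal the corresponding request field, with dotted keys walking nested fields. This is an illustration only, not AxonFlow's actual policy engine.

```python
# Illustrative only: a simplified condition matcher. AxonFlow's real
# policy engine is more sophisticated; this just shows the shape of the
# comparison between a policy's `conditions` and a step-gate request.
def policy_matches(policy, step_request):
    for key, expected in policy["conditions"].items():
        # Dotted keys such as "step_input.contains_pii" walk nested fields
        value = step_request
        for part in key.split("."):
            value = value.get(part) if isinstance(value, dict) else None
        if value != expected:
            return False
    return True
```

Under this model, the block-gpt4 policy above matches a request like `{"step_type": "llm_call", "model": "gpt-4"}`, so its `action` ("block") becomes the gate decision.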

API Reference

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | `/api/v1/workflows` | Create workflow |
| GET | `/api/v1/workflows/{id}` | Get workflow status |
| POST | `/api/v1/workflows/{id}/steps/{step_id}/gate` | Check step gate |
| POST | `/api/v1/workflows/{id}/steps/{step_id}/complete` | Mark step completed |
| POST | `/api/v1/workflows/{id}/complete` | Complete workflow |
| POST | `/api/v1/workflows/{id}/abort` | Abort workflow |
| POST | `/api/v1/workflows/{id}/resume` | Resume workflow |
| GET | `/api/v1/workflows` | List workflows |
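If you are calling the REST API directly rather than through an SDK, the paths above follow a single pattern. A small helper (hypothetical, for illustration) that assembles them:

```python
# Build the workflow endpoint paths from the table above.
# workflow_path() alone gives the create/list endpoint; adding a
# workflow_id, optional step_id, and optional action builds the rest.
BASE = "/api/v1/workflows"

def workflow_path(workflow_id=None, step_id=None, action=None):
    parts = [BASE]
    if workflow_id:
        parts.append(workflow_id)
    if step_id:
        parts += ["steps", step_id]
    if action:
        parts.append(action)
    return "/".join(parts)
```

For example, `workflow_path("wf_abc123", "step-1", "gate")` yields the step-gate endpoint used in the Quick Start.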

Best Practices

1. Use Descriptive Step Names

# Good
await adapter.check_gate("generate_code", "llm_call")
await adapter.check_gate("review_code", "tool_call")
await adapter.check_gate("deploy_to_staging", "connector_call")

# Bad
await adapter.check_gate("step1", "llm_call")
await adapter.check_gate("step2", "tool_call")

2. Always Handle Block Decisions

gate = await client.step_gate(...)

if gate.is_blocked():
    # Log the reason
    logger.warning(f"Step blocked: {gate.reason}")
    # Abort the workflow
    await client.abort_workflow(workflow_id, gate.reason)
    return

3. Use Context Manager for Cleanup

async with AxonFlowLangGraphAdapter(client, "my-workflow") as adapter:
    await adapter.start_workflow()
    # If an exception occurs, the workflow is automatically aborted
    # If successful, the workflow is automatically completed

4. Include Relevant Metadata

workflow = await client.create_workflow(
    CreateWorkflowRequest(
        workflow_name="code-review-pipeline",
        metadata={
            "environment": "production",
            "team": "engineering",
            "triggered_by": "github-action"
        }
    )
)

Community vs Enterprise

| Feature | Community | Enterprise |
| --- | --- | --- |
| Step gates (allow/block) | Yes | Yes |
| Policy evaluation | Yes | Yes |
| SDK support (4 languages) | Yes | Yes |
| LangGraph adapter | Yes | Yes |
| `require_approval` action | Returns decision | Routes to Portal HITL |
| Org-level policies | No | Yes |
| Cross-workflow analytics | No | Yes |

Troubleshooting

Gate Returns "allow" When Expected to Block

  1. Check if the policy exists and is enabled
  2. Verify the policy scope is workflow
  3. Check if conditions match the step request

Workflow Stuck in "in_progress"

  1. Ensure you call complete_workflow() or abort_workflow()
  2. Check for unhandled exceptions in your code
  3. Use the context manager for automatic cleanup
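If you cannot use the context manager, a try/except around the run guarantees a terminal state. A sketch of that pattern (method names follow the Python SDK examples on this page, shown synchronously here; the stub client just records calls so the pattern can be exercised offline):

```python
# Guarantee a terminal workflow state even when a step raises.
def run_workflow(client, workflow_id, steps, execute):
    try:
        for step in steps:
            execute(step)
        client.complete_workflow(workflow_id)
    except Exception as exc:
        # Without this, an unhandled exception leaves the workflow "in_progress"
        client.abort_workflow(workflow_id, str(exc))
        raise

# Minimal stub client that records calls, for trying the pattern offline
class RecordingClient:
    def __init__(self):
        self.calls = []

    def complete_workflow(self, workflow_id):
        self.calls.append(("complete", workflow_id))

    def abort_workflow(self, workflow_id, reason):
        self.calls.append(("abort", workflow_id, reason))
```

Re-raising after the abort keeps the original failure visible to your orchestrator while still leaving the workflow in a terminal state on the AxonFlow side.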

Connection Refused

  1. Ensure AxonFlow Agent is running: docker compose ps
  2. Check the endpoint URL matches your configuration
  3. Verify network connectivity

Examples

See the complete examples in examples/workflow-control/:

  • http/workflow-control.sh - HTTP/curl example
  • go/main.go - Go SDK example
  • python/main.py - Python SDK example
  • python/langgraph_example.py - LangGraph adapter example
  • typescript/index.ts - TypeScript SDK example
  • java/WorkflowControl.java - Java SDK example