SDK Integration

All four AxonFlow SDKs (Python, Go, TypeScript, Java) support the full Workflow Control Plane API. Each SDK provides typed request and response objects, authentication handling, and convenience methods for the gate-check-complete lifecycle. In addition, all four SDKs include a LangGraph adapter that manages workflow registration, step gate checks, and finalization behind a simplified interface.

For orchestrators that do not run in one of these four languages, see External Orchestrators for the raw HTTP API.

Python

The Python SDK uses an async client with context manager support. All WCP methods are available on the AxonFlow client directly.

from axonflow import AxonFlow
from axonflow.workflow import (
    CreateWorkflowRequest,
    StepGateRequest,
    MarkStepCompletedRequest,
    WorkflowSource,
    StepType,
)

async with AxonFlow(
    endpoint="http://localhost:8080",
    client_id="my-workflow-app",
    client_secret="your-secret",
) as client:
    # Register the workflow
    workflow = await client.create_workflow(
        CreateWorkflowRequest(
            workflow_name="code-review-pipeline",
            source=WorkflowSource.EXTERNAL,
        )
    )

    # Check gate before each step
    gate = await client.step_gate(
        workflow_id=workflow.workflow_id,
        step_id="step-1",
        request=StepGateRequest(
            step_name="Generate Code",
            step_type=StepType.LLM_CALL,
            model="gpt-4",
            provider="openai",
        ),
    )

    if gate.is_allowed():
        result = execute_step()
        await client.mark_step_completed(
            workflow_id=workflow.workflow_id,
            step_id="step-1",
            request=MarkStepCompletedRequest(output={"code": result}),
        )
    elif gate.is_blocked():
        print(f"Blocked: {gate.reason}")
        await client.abort_workflow(workflow.workflow_id, gate.reason)

    await client.complete_workflow(workflow.workflow_id)

The step_gate response includes policies_evaluated and policies_matched lists, which you can inspect to understand exactly which policies were checked and which ones contributed to the decision.
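For example, a minimal sketch of turning those lists into an audit log line. The `GateResponse` dataclass here is a stand-in that models only the fields described above, not the real SDK response type:

```python
from dataclasses import dataclass, field

# Stand-in for the step_gate response; models only the fields
# described above (decision, policies_evaluated, policies_matched).
@dataclass
class GateResponse:
    decision: str
    policies_evaluated: list = field(default_factory=list)
    policies_matched: list = field(default_factory=list)

def summarize_gate(gate: GateResponse) -> str:
    """Render a one-line audit summary of a gate decision."""
    return (
        f"decision={gate.decision} "
        f"evaluated={len(gate.policies_evaluated)} "
        f"matched={','.join(gate.policies_matched) or 'none'}"
    )

gate = GateResponse(
    decision="allow",
    policies_evaluated=["cost-cap", "model-allowlist", "pii-filter"],
    policies_matched=["model-allowlist"],
)
print(summarize_gate(gate))
# decision=allow evaluated=3 matched=model-allowlist
```

The same inspection works on the real response object, since only attribute access on the two lists is assumed.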

Execution Boundary Semantics

By default, step gate decisions are idempotent: calling step_gate twice with the same (workflow_id, step_id) returns the cached decision without re-running the policy evaluator. This prevents accidental approval-counter drift, ensures consistent auditability, and provides a clean foundation for checkpoint-based resume.

The response includes cached (bool) and decision_source ("fresh" or "cached") so callers always know whether they received a fresh evaluation or a cached one.

To force a fresh evaluation (for example, after external state changes), pass retry_policy:

from axonflow.workflow import RetryPolicy, StepGateRequest, StepType

# Default: idempotent (cached decision returned)
gate = await client.step_gate(workflow_id, "step-1", StepGateRequest(
    step_type=StepType.TOOL_CALL,
))
# gate.cached == True, gate.decision_source == "cached"

# Force fresh evaluation
gate = await client.step_gate(workflow_id, "step-1", StepGateRequest(
    step_type=StepType.TOOL_CALL,
    retry_policy=RetryPolicy.REEVALUATE,
))
# gate.cached == False, gate.decision_source == "fresh"

The same retry_policy parameter is available in all four SDKs.

First-class retry state

The cached and decision_source fields are preserved for backward compatibility, but the recommended way to reason about retries is the retry_context object now returned on every gate response. It surfaces gate_count, prior_completion_status, last_decision, optional prior_output, and the caller-supplied idempotency_key — enough state for a resumed agent to decide "skip and reuse" versus "re-run" without guessing. See Retry Semantics & Idempotency for the full field reference, the include_prior_output query parameter, and the idempotency_key rules.
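As an illustration of the "skip and reuse" versus "re-run" decision, here is a hedged sketch. The `RetryContext` dataclass below is a stand-in modeling the fields named above, and the status/decision string values are assumptions, not the SDK's actual enums:

```python
from dataclasses import dataclass
from typing import Any, Optional

# Stand-in modeling the retry_context fields described above.
@dataclass
class RetryContext:
    gate_count: int
    prior_completion_status: Optional[str]  # assumed values, e.g. "completed"
    last_decision: Optional[str]            # assumed values, e.g. "allow"
    prior_output: Optional[Any] = None
    idempotency_key: Optional[str] = None

def should_skip(ctx: RetryContext) -> bool:
    """Skip and reuse the prior output only if this gate has been seen
    before AND the step already completed under an allow decision."""
    return (
        ctx.gate_count > 1
        and ctx.prior_completion_status == "completed"
        and ctx.last_decision == "allow"
        and ctx.prior_output is not None
    )

ctx = RetryContext(
    gate_count=2,
    prior_completion_status="completed",
    last_decision="allow",
    prior_output={"code": "def f(): ..."},
)
print(should_skip(ctx))  # True
```

A resumed agent would call this before re-executing the step: on `True`, reuse `ctx.prior_output` and move on; otherwise run the step again.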

TypeScript LangGraph Adapter

import { AxonFlow, AxonFlowLangGraphAdapter, WorkflowBlockedError } from "@axonflow/sdk";

const client = new AxonFlow({
  endpoint: "http://localhost:8080",
  clientId: "my-langgraph-app",
  clientSecret: "your-secret",
});

const adapter = new AxonFlowLangGraphAdapter(client, "my-langgraph-workflow", {
  source: "langgraph",
  autoBlock: true, // Throws WorkflowBlockedError on block
});

try {
  await adapter.startWorkflow();

  // Before each LangGraph node
  if (await adapter.checkGate("generate_code", "llm_call", {
    model: "gpt-4", provider: "openai" })) {
    const result = await generateCode(state);
    await adapter.stepCompleted("generate_code", { output: result });
  }

  // Per-tool governance within a tools node
  if (await adapter.checkToolGate("web_search", "function", {
    toolInput: { query: "latest news" } })) {
    const searchResult = await webSearch({ query: "latest news" });
    await adapter.toolCompleted("web_search", { output: searchResult });
  }

  await adapter.completeWorkflow();
} catch (e) {
  if (e instanceof WorkflowBlockedError) {
    console.log(`Blocked: ${e.reason}`);
  }
  await adapter.abortWorkflow(String(e));
}

Go LangGraph Adapter

import axonflow "github.com/getaxonflow/axonflow-sdk-go/v5"

client := axonflow.NewClient(axonflow.AxonFlowConfig{
    Endpoint:     "http://localhost:8080",
    ClientID:     "my-langgraph-app",
    ClientSecret: "your-secret",
})

adapter := axonflow.NewLangGraphAdapter(client, "my-langgraph-workflow")

ctx := context.Background()
workflowID, err := adapter.StartWorkflow(ctx, nil, "")
if err != nil {
    log.Fatal(err)
}

// Before each LangGraph node
allowed, err := adapter.CheckGate(ctx, "generate_code", axonflow.StepTypeLLMCall,
    &axonflow.CheckGateOptions{Model: "gpt-4", Provider: "openai"})
if err != nil {
    // WorkflowBlockedError or WorkflowApprovalRequiredError
    log.Fatal(err)
}
if allowed {
    result := generateCode(state)
    adapter.StepCompleted(ctx, "generate_code",
        &axonflow.StepCompletedOptions{Output: result})
}

// Per-tool governance
allowed, _ = adapter.CheckToolGate(ctx, "web_search", "function",
    &axonflow.CheckToolGateOptions{ToolInput: map[string]interface{}{"query": "latest news"}})
if allowed {
    searchResult := webSearch("latest news")
    adapter.ToolCompleted(ctx, "web_search",
        &axonflow.ToolCompletedOptions{Output: searchResult})
}

adapter.CompleteWorkflow(ctx)

Java LangGraph Adapter

import com.getaxonflow.sdk.AxonFlow;
import com.getaxonflow.sdk.adapters.*;

AxonFlow client = AxonFlow.create(AxonFlowConfig.builder()
    .endpoint("http://localhost:8080")
    .clientId("my-langgraph-app")
    .clientSecret("your-secret")
    .build());

// Use try-with-resources for automatic cleanup
try (LangGraphAdapter adapter = LangGraphAdapter.builder(client, "my-langgraph-workflow")
        .autoBlock(true)
        .build()) {

    adapter.startWorkflow();

    // Before each LangGraph node
    if (adapter.checkGate("generate_code", "llm_call",
            CheckGateOptions.builder().model("gpt-4").provider("openai").build())) {
        Object result = generateCode(state);
        adapter.stepCompleted("generate_code",
            StepCompletedOptions.builder().output(Map.of("code", result)).build());
    }

    // Per-tool governance
    if (adapter.checkToolGate("web_search", "function",
            CheckToolGateOptions.builder()
                .toolInput(Map.of("query", "latest news")).build())) {
        Object searchResult = webSearch("latest news");
        adapter.toolCompleted("web_search",
            ToolCompletedOptions.builder().output(Map.of("results", searchResult)).build());
    }

    adapter.completeWorkflow();
} catch (WorkflowBlockedError e) {
    System.out.println("Blocked: " + e.getReason());
}

Go

The Go SDK uses a synchronous client. Error handling follows Go conventions with explicit error returns.

import axonflow "github.com/getaxonflow/axonflow-sdk-go/v5"

client := axonflow.NewClient(axonflow.AxonFlowConfig{
    Endpoint:     "http://localhost:8080",
    ClientID:     "my-workflow-app",
    ClientSecret: "your-secret",
})

workflow, err := client.CreateWorkflow(axonflow.CreateWorkflowRequest{
    WorkflowName: "code-review-pipeline",
    Source:       axonflow.WorkflowSourceExternal,
})
if err != nil {
    log.Fatal(err)
}

gate, err := client.StepGate(workflow.WorkflowID, "step-1", axonflow.StepGateRequest{
    StepName: "Generate Code",
    StepType: axonflow.StepTypeLLMCall,
    Model:    "gpt-4",
    Provider: "openai",
})
if err != nil {
    log.Fatal(err)
}

if gate.IsAllowed() {
    result := executeStep()
    client.MarkStepCompleted(workflow.WorkflowID, "step-1", &axonflow.MarkStepCompletedRequest{
        Output: map[string]interface{}{"code": result},
    })
} else if gate.IsBlocked() {
    client.AbortWorkflow(workflow.WorkflowID, gate.Reason)
}

client.CompleteWorkflow(workflow.WorkflowID)

TypeScript

The TypeScript SDK provides an async client with full type definitions for all WCP request and response objects.

import { AxonFlow } from "@axonflow/sdk";

const axonflow = new AxonFlow({
  endpoint: "http://localhost:8080",
  clientId: "my-workflow-app",
  clientSecret: "your-secret",
});

const workflow = await axonflow.createWorkflow({
  workflow_name: "code-review-pipeline",
  source: "external",
});

const gate = await axonflow.stepGate(workflow.workflow_id, "step-1", {
  step_name: "Generate Code",
  step_type: "llm_call",
  model: "gpt-4",
  provider: "openai",
});

if (gate.decision === "allow") {
  const result = await executeStep();
  await axonflow.markStepCompleted(workflow.workflow_id, "step-1", {
    output: { code: result },
  });
} else if (gate.decision === "block") {
  await axonflow.abortWorkflow(workflow.workflow_id, gate.reason);
}

await axonflow.completeWorkflow(workflow.workflow_id);

Java

The Java SDK uses the builder pattern for request objects. The client is synchronous and thread-safe.

AxonFlow client = AxonFlow.create(AxonFlowConfig.builder()
    .endpoint("http://localhost:8080")
    .clientId("my-workflow-app")
    .clientSecret("your-secret")
    .build());

CreateWorkflowResponse workflow = client.createWorkflow(
    CreateWorkflowRequest.builder()
        .workflowName("code-review-pipeline")
        .source(WorkflowSource.EXTERNAL)
        .build()
);

StepGateResponse gate = client.stepGate(
    workflow.getWorkflowId(),
    "step-1",
    StepGateRequest.builder()
        .stepName("Generate Code")
        .stepType(StepType.LLM_CALL)
        .model("gpt-4")
        .provider("openai")
        .build()
);

if (gate.isAllowed()) {
    Object result = executeStep();
    client.markStepCompleted(
        workflow.getWorkflowId(),
        "step-1",
        MarkStepCompletedRequest.builder()
            .output(Map.of("code", result))
            .build()
    );
} else if (gate.isBlocked()) {
    client.abortWorkflow(workflow.getWorkflowId(), gate.getReason());
}

client.completeWorkflow(workflow.getWorkflowId());

LangGraph Adapter

All four SDKs include a LangGraph adapter that simplifies the WCP integration for LangGraph-style workflows. The adapter manages workflow registration, auto-generates step IDs from step names, and provides check_gate/step_completed methods that map directly to LangGraph node execution patterns.

Python LangGraph Adapter

The Python adapter supports async with for automatic lifecycle management. When the context manager exits normally, it calls complete_workflow. When it exits due to an exception, it calls abort_workflow.

from axonflow import AxonFlow
from axonflow.adapters import AxonFlowLangGraphAdapter, WorkflowBlockedError
from axonflow.workflow import WorkflowSource

async with AxonFlow(
    endpoint="http://localhost:8080",
    client_id="my-langgraph-app",
    client_secret="your-secret",
) as client:
    adapter = AxonFlowLangGraphAdapter(
        client=client,
        workflow_name="my-langgraph-workflow",
        source=WorkflowSource.LANGGRAPH,
        auto_block=True,  # Raises WorkflowBlockedError on block
    )

    try:
        async with adapter:
            await adapter.start_workflow(trace_id="langsmith-run-abc123")

            if await adapter.check_gate(
                step_name="generate_code",
                step_type="llm_call",
                model="gpt-4",
                provider="openai",
            ):
                result = await generate_code(state)
                await adapter.step_completed(
                    step_name="generate_code",
                    output=result,
                    tokens_in=250,
                    tokens_out=800,
                    cost_usd=0.015,
                )

    except WorkflowBlockedError as e:
        print(f"Blocked: {e.reason}")
        # Workflow is automatically aborted by the context manager

TypeScript LangGraph Adapter

import { AxonFlow, AxonFlowLangGraphAdapter } from "@axonflow/sdk";

const client = new AxonFlow({ endpoint: "http://localhost:8080" });
const adapter = new AxonFlowLangGraphAdapter(client, "my-workflow");

await adapter.startWorkflow();

if (await adapter.checkGate("generate", "llm_call", { model: "gpt-4" })) {
  const result = await generateCode(state);
  await adapter.stepCompleted("generate");
}

await adapter.completeWorkflow();

Go LangGraph Adapter

adapter := axonflow.NewLangGraphAdapter(client, "my-workflow")

ctx := context.Background()
adapter.StartWorkflow(ctx, nil, "")

allowed, _ := adapter.CheckGate(ctx, "generate", axonflow.StepTypeLLMCall,
    &axonflow.CheckGateOptions{Model: "gpt-4"})
if allowed {
    result := generateCode(state)
    adapter.StepCompleted(ctx, "generate", nil)
}

adapter.CompleteWorkflow(ctx)

Java LangGraph Adapter

LangGraphAdapter adapter = LangGraphAdapter.builder(client, "my-workflow").build();

adapter.startWorkflow();

if (adapter.checkGate("generate", "llm_call",
        CheckGateOptions.builder().model("gpt-4").build())) {
    Object result = generateCode(state);
    adapter.stepCompleted("generate");
}

adapter.completeWorkflow();

Per-Tool Governance Methods

All four LangGraph adapters include convenience methods for per-tool governance. Instead of manually constructing a ToolContext and passing it to check_gate, use check_tool_gate and tool_completed which set step_type to tool_call and populate the tool context automatically.

# Python: check_tool_gate / tool_completed
if await adapter.check_tool_gate("web_search", "function",
        tool_input={"query": "latest AI research"}):
    result = await web_search(query="latest AI research")
    await adapter.tool_completed("web_search", output=result)

// TypeScript: checkToolGate / toolCompleted
if (await adapter.checkToolGate("web_search", "function")) {
  const result = await webSearch(state);
  await adapter.toolCompleted("web_search", { output: result });
}

// Go: CheckToolGate / ToolCompleted
allowed, _ = adapter.CheckToolGate(ctx, "web_search", "function", nil)
if allowed {
    result := webSearch(state)
    adapter.ToolCompleted(ctx, "web_search",
        &axonflow.ToolCompletedOptions{Output: result})
}

// Java: checkToolGate / toolCompleted
if (adapter.checkToolGate("web_search", "function")) {
    Object result = webSearch(state);
    adapter.toolCompleted("web_search",
        ToolCompletedOptions.builder().output(Map.of("results", result)).build());
}

These methods generate step names in the format tools/{tool_name} and include the tool name, type, and input in the step gate request. Policies can then match on tool_name, tool_type, and tool_input.* fields as described in Policy Configuration.

Best Practices

Use Descriptive Step Names

Step names appear in audit logs, policy conditions, and the workflow status API. Use names that describe the action being performed, not generic labels.

# Good: describes the action
await adapter.check_gate("generate_code", "llm_call")
await adapter.check_gate("review_code", "tool_call")
await adapter.check_gate("deploy_to_staging", "connector_call")

# Bad: no semantic meaning
await adapter.check_gate("step1", "llm_call")
await adapter.check_gate("step2", "tool_call")

Always Handle Block Decisions

A blocked step should never be silently ignored. Log the reason for debugging, then either abort the workflow or skip the step depending on your use case.

gate = await client.step_gate(...)

if gate.is_blocked():
    logger.warning(f"Step blocked: {gate.reason}")
    await client.abort_workflow(workflow_id, gate.reason)
    return

Use Fail vs Abort Correctly

Call fail_workflow when an error condition occurs (a step throws an exception, an LLM call returns invalid output). Call abort_workflow when manually cancelling a workflow that has not errored. The distinction matters for audit logging and workflow analytics.

# Abort: manual cancellation, no error
await client.abort_workflow(workflow_id, "No longer needed")

# Fail: error condition
await client.fail_workflow(workflow_id, "Step 3 timed out after 60s")

Use the Context Manager for Cleanup

The Python LangGraph adapter's context manager ensures the workflow is always finalized, even when exceptions occur. This prevents workflows from getting stuck in in_progress state.

async with adapter:
    await adapter.start_workflow()
    # If an exception occurs: workflow is automatically aborted
    # If successful: workflow is automatically completed

Include Relevant Metadata

Metadata attached at workflow creation time is stored alongside the workflow record and visible in status queries. Use it to capture context that helps with debugging and filtering.

workflow = await client.create_workflow(
    CreateWorkflowRequest(
        workflow_name="code-review-pipeline",
        metadata={
            "environment": "production",
            "team": "engineering",
            "triggered_by": "github-action",
        },
        trace_id="langsmith-run-abc123",
    )
)

Report Post-Execution Metrics

When marking a step as completed, include token counts and cost if available. These metrics feed into AxonFlow's cost tracking and are visible in the workflow status and audit trail.

await client.mark_step_completed(
    workflow_id=workflow.workflow_id,
    step_id="step-1",
    request=MarkStepCompletedRequest(
        output={"code": result},
        tokens_in=250,
        tokens_out=800,
        cost_usd=0.015,
    ),
)