LangGraph + AxonFlow Integration
Prerequisites: Node.js 18+, a running AxonFlow instance (see Getting Started), and npm install @langchain/langgraph @axonflow/sdk.
The fastest way to govern a LangGraph graph is wrap_langgraph() in the Python SDK. One function call wraps your compiled graph with step gates, tool governance, and audit logging:

```python
from axonflow.adapters import wrap_langgraph

governed = wrap_langgraph(graph, client=client, workflow_name="my-agent")
result = await governed.ainvoke({"query": "Summarize recent earnings"})
```
See the full LangGraph Wrapper reference for NodeConfig options, govern_tools for per-tool gates, streaming, and migration from the manual adapter.
Alternative (v6.0.0+): for tool-only governance without the full wrapper, use GovernedTool to wrap tools directly: governed = govern_tools(tools, client), then pass the result to ToolNode(governed). See Per-Tool Governance.
What Problem AxonFlow Solves for LangGraph Users
LangGraph is LangChain's framework for building stateful, multi-step AI agents as directed graphs. Each node in the graph represents a processing step (an LLM call, a tool invocation, a routing decision), and edges define the flow between steps. State persists across executions, enabling durable workflows that can pause, resume, and recover from failures. This makes LangGraph the go-to choice for production agent architectures that are too complex for simple chains.
The governance challenge with LangGraph is that each node in the graph can make independent LLM calls and tool invocations, and the graph's routing logic determines which nodes execute based on runtime state. Without governance, there is no policy enforcement at node transitions (a node with access to sensitive data can route freely to any connected node), no PII detection as data flows through the graph state, no per-node cost tracking, and no audit trail that captures the full execution path through the graph.
AxonFlow integrates with LangGraph through the TypeScript SDK using gateway mode. The recommended pattern is a governed node factory: a wrapper function that takes your existing node logic and returns a new node function that calls getPolicyApprovedContext() before execution and auditLLMCall() after. Each node gets its own policy check with the node name passed as context, so you can enforce different policies for different parts of the graph. AxonFlow governance can also participate in graph routing decisions: if a policy engine flags a request for review, a conditional edge can route to a human review node instead of proceeding automatically.
For teams using LangGraph with MCP tools, AxonFlow provides per-tool governance within graph nodes. Each tool call can be individually gate-checked using checkToolGate() and tracked via toolCompleted() on the LangGraph adapter, giving you policy enforcement and visibility at the individual tool level within each graph node.
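The per-tool gating shape can be sketched without the SDK. In this hypothetical helper, the gate and completed callbacks stand in for checkToolGate() and toolCompleted() (their real signatures live in the LangGraph adapter reference); everything else is plain Python:

```python
import time

def govern_tool_call(gate, completed, tool_name, args, call):
    """Gate one tool call, run it, then report completion.

    `gate` and `completed` are stand-ins for the adapter's
    checkToolGate() / toolCompleted(); real signatures may differ.
    """
    decision = gate(tool_name, args)
    if not decision.get("allowed", False):
        raise PermissionError(
            f"Tool {tool_name} blocked: {decision.get('reason', 'policy')}"
        )
    start = time.monotonic()
    try:
        return call(**args)
    finally:
        # Report duration even if the tool itself raises
        completed(tool_name, (time.monotonic() - start) * 1000)
```

Blocked tools raise before the tool runs, so completion is only ever reported for tools that actually executed.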
What LangGraph Does Well
LangGraph is LangChain's framework for building stateful, multi-step agents with durable execution. When you outgrow simple chains, LangGraph is the natural evolution:
Graph-Based Workflows: Define agent logic as directed graphs with nodes and edges. Complex decision trees become visual and maintainable.
Durable Execution: State persists across executions. Agents can pause, resume, and recover from failures. Long-running workflows work reliably.
Human-in-the-Loop: Built-in patterns for human approval, intervention, and feedback at any node. Breakpoints are first-class citizens.
Streaming: Native streaming support for both intermediate steps and final outputs. Real-time visibility into agent execution.
LangChain Compatibility: Works seamlessly with LangChain components—chains, tools, memory. Migration path is clear.
LangGraph Cloud: Managed deployment with built-in observability via LangSmith. Scales without infrastructure work.
What LangGraph Doesn't Try to Solve
LangGraph focuses on stateful agent orchestration. These concerns are explicitly out of scope:
| Production Requirement | LangGraph's Position |
|---|---|
| Policy enforcement before node execution | Not provided—nodes execute based on graph logic, not policies |
| PII detection in state transitions | Not addressed—state can contain any data |
| SQL injection prevention | Not provided—must implement at node level |
| Per-user or per-workflow cost attribution | Not tracked—requires LangSmith (paid) for basic metrics |
| Audit trails for compliance | Requires LangSmith—not built into the framework |
| Cross-workflow access control | Not addressed—no permission model for graph access |
| Token budget enforcement | Not provided—nodes can consume unlimited tokens |
This isn't a criticism—it's a design choice. LangGraph handles orchestration. Governance is a separate concern.
Where Teams Hit Production Friction
Based on real enterprise deployments, here are the blockers that appear after the prototype works:
1. The Infinite Loop
A graph has a conditional edge: "if the response is unclear, retry." The LLM keeps producing unclear responses, so the graph keeps retrying. By Monday morning, 85,000 iterations have run.
LangGraph executed the graph correctly. Nothing was watching how many times it executed.
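One mitigation is an iteration budget carried in graph state and checked on every pass through the loop. A minimal, framework-agnostic sketch (the field name and limit are illustrative; AxonFlow's budget policies enforce the equivalent at the governance layer):

```python
MAX_ITERATIONS = 25  # illustrative budget

def guard_iterations(state):
    """Increment an iteration counter in state; block once the budget is spent."""
    count = state.get("iterations", 0) + 1
    if count > MAX_ITERATIONS:
        return {
            **state,
            "iterations": count,
            "response": f"[Blocked: iteration budget of {MAX_ITERATIONS} exceeded]",
            "route": "end",  # conditional edge routes here instead of retrying
        }
    return {**state, "iterations": count}
```

Because the counter lives in state, it survives checkpointing and resumption, so a resumed workflow cannot reset its own budget.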
2. The State Explosion
A workflow collects customer data across multiple nodes. By the final node, the state contains full PII—SSNs, addresses, payment info. The state is persisted for debugging.
Now PII is in your checkpoint storage. LangGraph has no mechanism to filter sensitive data from state.
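A sketch of the kind of filtering applied before data enters persisted state. The regexes here are illustrative and far narrower than a real PII detector:

```python
import re

# Illustrative patterns only; a production detector covers far more formats.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_for_checkpoint(state):
    """Return a copy of state with string fields scrubbed of obvious PII."""
    clean = {}
    for key, value in state.items():
        if isinstance(value, str):
            for label, pattern in PII_PATTERNS.items():
                value = pattern.sub(f"[REDACTED:{label}]", value)
        clean[key] = value
    return clean
```

Running a filter like this at each node boundary keeps SSNs and card numbers out of checkpoint storage even when the raw LLM output contained them.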
3. The "Show Me The Path" Request
A financial advisor agent made a recommendation. Compliance needs:
- What nodes executed and in what order?
- What data was in state at each transition?
- What external tools were called?
- Who was the requesting user?
LangGraph executed the workflow. Without LangSmith, the execution trace is gone.
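The four compliance questions map directly onto the fields a per-node audit record needs. A hypothetical record shape (field names are illustrative, not the AxonFlow schema):

```python
from dataclasses import dataclass, field

@dataclass
class NodeAuditRecord:
    """One entry per node execution; the ordered list reconstructs the path."""
    node: str           # which node executed
    sequence: int       # order within the run
    state_summary: str  # what data was in state (truncated/redacted)
    tools_called: list = field(default_factory=list)  # external tools invoked
    user_token: str = ""                              # who requested the run

def execution_path(records):
    """Reconstruct the node path in execution order."""
    return [r.node for r in sorted(records, key=lambda r: r.sequence)]
```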
4. The Security Review Block
Security review: BLOCKED
- No audit trail for graph execution paths
- PII can accumulate in workflow state
- No policy enforcement at node transitions
- Cost controls missing
- Access control for graphs not implemented
The stateful agent worked perfectly. It can't ship.
5. The Cross-Tenant State Leak
In a multi-tenant deployment, a workflow for Customer A accidentally accessed state from Customer B due to a misconfigured checkpoint. LangGraph persisted both—there's no tenant isolation at the framework level.
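A common mitigation is deriving every checkpoint key from the tenant, so state from another tenant is simply unreachable. A minimal sketch (the key scheme is illustrative; AxonFlow enforces tenant isolation at the governance layer rather than in your checkpointer):

```python
def checkpoint_key(tenant_id, workflow_id, thread_id):
    """Namespace every checkpoint under its tenant."""
    if not tenant_id:
        raise ValueError("tenant_id is required for checkpoint access")
    return f"{tenant_id}/{workflow_id}/{thread_id}"

def load_checkpoint(store, tenant_id, workflow_id, thread_id):
    """Only keys under the caller's tenant prefix are reachable."""
    return store.get(checkpoint_key(tenant_id, workflow_id, thread_id))
```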
How AxonFlow Plugs In
AxonFlow doesn't replace LangGraph. It sits underneath it—providing the governance layer that LangGraph intentionally doesn't include:
```
┌─────────────────┐
│    Your App     │
└────────┬────────┘
         │
         v
┌─────────────────┐
│    LangGraph    │  <-- Nodes, Edges, State, Checkpoints
└────────┬────────┘
         │
         v
┌─────────────────────────────────┐
│            AxonFlow             │
│  ┌───────────┐  ┌────────────┐  │
│  │  Policy   │  │   Audit    │  │
│  │  Enforce  │  │   Trail    │  │
│  └───────────┘  └────────────┘  │
│  ┌───────────┐  ┌────────────┐  │
│  │    PII    │  │    Cost    │  │
│  │ Detection │  │  Control   │  │
│  └───────────┘  └────────────┘  │
└────────────────┬────────────────┘
                 │
                 v
┌─────────────────┐
│  LLM Provider   │
└─────────────────┘
```
What this gives you:
- Every node transition logged with state summary and user context
- PII detected and blocked before entering state
- SQL injection attempts blocked at any node
- Cost tracked per workflow, per user, per node
- Compliance auditors can query the full execution path
What stays the same:
- Your LangGraph code doesn't change
- Graph definitions work as before
- No new abstractions to learn
Integration Patterns
Pattern 1: Governed Node Wrapper (TypeScript) — Recommended
This is the recommended default for most teams.
Wrap LangGraph nodes with AxonFlow governance:
```typescript
import { StateGraph, END } from "@langchain/langgraph";
import { AxonFlow } from "@axonflow/sdk";

interface WorkflowState {
  query: string;
  context?: string;
  response?: string;
  route?: string;
}

// Create governed node factory
function createGovernedNode(
  axonflow: AxonFlow,
  userToken: string,
  nodeName: string,
  nodeLogic: (state: WorkflowState) => Promise<Partial<WorkflowState>>
) {
  return async (state: WorkflowState): Promise<Partial<WorkflowState>> => {
    const startTime = Date.now();

    // Pre-check before node execution
    const approval = await axonflow.getPolicyApprovedContext({
      userToken,
      query: state.query,
      context: {
        node: nodeName,
        framework: "langgraph",
        has_context: !!state.context,
      },
    });
    if (!approval.approved) {
      return {
        response: `[Node ${nodeName} blocked: ${approval.blockReason}]`,
      };
    }

    // Execute node logic
    const result = await nodeLogic(state);

    // Audit the execution
    await axonflow.auditLLMCall({
      contextId: approval.contextId,
      responseSummary: JSON.stringify(result).slice(0, 200),
      provider: "openai",
      model: "gpt-4",
      // Placeholder counts; report the real usage from your LLM response
      tokenUsage: { promptTokens: 100, completionTokens: 50, totalTokens: 150 },
      latencyMs: Date.now() - startTime,
      metadata: { node: nodeName },
    });

    return result;
  };
}

// Build governed graph
async function buildGovernedGraph(userToken: string) {
  const axonflow = new AxonFlow({
    endpoint: process.env.AXONFLOW_ENDPOINT!,
    clientId: process.env.AXONFLOW_CLIENT_ID!,
    clientSecret: process.env.AXONFLOW_CLIENT_SECRET,
  });

  const graph = new StateGraph<WorkflowState>({
    channels: {
      query: { value: (a, b) => b ?? a },
      context: { value: (a, b) => b ?? a },
      response: { value: (a, b) => b ?? a },
      route: { value: (a, b) => b ?? a },
    },
  });

  // Add governed nodes
  graph.addNode(
    "input",
    createGovernedNode(axonflow, userToken, "input", async (state) => {
      return { context: `Processing: ${state.query}` };
    })
  );
  graph.addNode(
    "router",
    createGovernedNode(axonflow, userToken, "router", async (state) => {
      const route = state.query.toLowerCase().includes("search")
        ? "search"
        : "analyze";
      return { route };
    })
  );
  graph.addNode(
    "search",
    createGovernedNode(axonflow, userToken, "search", async (state) => {
      return { response: `Search results for: ${state.query}` };
    })
  );
  graph.addNode(
    "analyze",
    createGovernedNode(axonflow, userToken, "analyze", async (state) => {
      return { response: `Analysis of: ${state.query}` };
    })
  );

  // Define edges
  graph.setEntryPoint("input");
  graph.addEdge("input", "router");
  graph.addConditionalEdges("router", (state) => state.route || "analyze", {
    search: "search",
    analyze: "analyze",
  });
  graph.addEdge("search", END);
  graph.addEdge("analyze", END);

  return graph.compile();
}

// Usage
const workflow = await buildGovernedGraph("user-123");
const result = await workflow.invoke({
  query: "Search for AI governance best practices",
});
```
Pattern 2: Governed Graph with Go SDK — For Go services
For Go-based LangGraph implementations:
```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/getaxonflow/axonflow-sdk-go/v5"
)

// WorkflowState represents the graph state
type WorkflowState struct {
	Query    string
	Context  string
	Response string
	Route    string
}

// GovernedNode wraps a node function with AxonFlow governance
type GovernedNode struct {
	axonflow  *axonflow.AxonFlowClient
	userToken string
	nodeName  string
	logic     func(context.Context, WorkflowState) (WorkflowState, error)
}

func NewGovernedNode(
	client *axonflow.AxonFlowClient,
	userToken, nodeName string,
	logic func(context.Context, WorkflowState) (WorkflowState, error),
) *GovernedNode {
	return &GovernedNode{
		axonflow:  client,
		userToken: userToken,
		nodeName:  nodeName,
		logic:     logic,
	}
}

func (n *GovernedNode) Execute(ctx context.Context, state WorkflowState) (WorkflowState, error) {
	startTime := time.Now()

	// Pre-check
	callCtx := map[string]interface{}{
		"node":      n.nodeName,
		"framework": "langgraph",
	}
	result, err := n.axonflow.GetPolicyApprovedContext(n.userToken, state.Query, nil, callCtx)
	if err != nil {
		return state, fmt.Errorf("pre-check failed: %w", err)
	}
	if !result.Approved {
		state.Response = fmt.Sprintf("[Node %s blocked: %s]", n.nodeName, result.BlockReason)
		return state, nil
	}

	// Execute node logic
	newState, err := n.logic(ctx, state)
	if err != nil {
		return state, err
	}

	// Audit (fire and forget for performance)
	go func() {
		_, _ = n.axonflow.AuditLLMCall(
			result.ContextID,
			truncate(newState.Response, 200),
			"openai",
			"gpt-4",
			axonflow.TokenUsage{},
			time.Since(startTime).Milliseconds(),
			map[string]interface{}{"node": n.nodeName},
		)
	}()

	return newState, nil
}

// Graph represents a governed LangGraph-style workflow
type Graph struct {
	nodes map[string]*GovernedNode
	edges map[string]string
}

func NewGraph() *Graph {
	return &Graph{
		nodes: make(map[string]*GovernedNode),
		edges: make(map[string]string),
	}
}

func (g *Graph) AddNode(name string, node *GovernedNode) {
	g.nodes[name] = node
}

func (g *Graph) AddEdge(from, to string) {
	g.edges[from] = to
}

func (g *Graph) Execute(ctx context.Context, startNode string, state WorkflowState) (WorkflowState, error) {
	currentNode := startNode
	currentState := state
	for currentNode != "" {
		node, exists := g.nodes[currentNode]
		if !exists {
			return currentState, fmt.Errorf("node not found: %s", currentNode)
		}
		var err error
		currentState, err = node.Execute(ctx, currentState)
		if err != nil {
			return currentState, err
		}
		currentNode = g.edges[currentNode]
	}
	return currentState, nil
}

func truncate(s string, maxLen int) string {
	if len(s) <= maxLen {
		return s
	}
	return s[:maxLen]
}
```
Pattern 3: Human-in-the-Loop with Governance — For approval workflows
Add governance to human approval workflows:
```typescript
import { AxonFlow } from "@axonflow/sdk";

interface HITLState {
  query: string;
  proposal?: string;
  approved?: boolean;
  approver?: string;
  finalResponse?: string;
}

class GovernedHITLWorkflow {
  private axonflow: AxonFlow;

  constructor(axonflow: AxonFlow) {
    this.axonflow = axonflow;
  }

  async generateProposal(
    userToken: string,
    state: HITLState
  ): Promise<HITLState> {
    const approval = await this.axonflow.getPolicyApprovedContext({
      userToken,
      query: state.query,
      context: { node: "generate_proposal", requires_approval: true },
    });
    if (!approval.approved) {
      return { ...state, proposal: `[BLOCKED: ${approval.blockReason}]` };
    }

    // Generate proposal (your LLM logic here)
    const proposal = `Proposed action for: ${state.query}`;

    await this.axonflow.auditLLMCall({
      contextId: approval.contextId,
      responseSummary: proposal.slice(0, 200),
      provider: "openai",
      model: "gpt-4",
      tokenUsage: { promptTokens: 100, completionTokens: 50, totalTokens: 150 },
      latencyMs: 500,
      metadata: { node: "generate_proposal", awaiting_human: true },
    });
    return { ...state, proposal };
  }

  async processApproval(
    userToken: string,
    state: HITLState,
    approved: boolean,
    approver: string
  ): Promise<HITLState> {
    const policyCheck = await this.axonflow.getPolicyApprovedContext({
      userToken,
      query: `Approval decision: ${approved ? "approved" : "rejected"}`,
      context: {
        node: "human_approval",
        approver,
        decision: approved,
        original_query: state.query,
      },
    });

    await this.axonflow.auditLLMCall({
      contextId: policyCheck.contextId,
      responseSummary: `Human ${approved ? "approved" : "rejected"} proposal`,
      provider: "human",
      model: "human-approval",
      tokenUsage: { promptTokens: 0, completionTokens: 0, totalTokens: 0 },
      latencyMs: 0,
      metadata: {
        node: "human_approval",
        approver,
        decision: approved,
      },
    });
    return { ...state, approved, approver };
  }

  async executeApproved(
    userToken: string,
    state: HITLState
  ): Promise<HITLState> {
    if (!state.approved) {
      return { ...state, finalResponse: "Proposal was not approved." };
    }

    const approval = await this.axonflow.getPolicyApprovedContext({
      userToken,
      query: state.proposal || "",
      context: {
        node: "execute_approved",
        human_approved: true,
        approver: state.approver,
      },
    });
    if (!approval.approved) {
      return { ...state, finalResponse: `[BLOCKED: ${approval.blockReason}]` };
    }

    // Execute the approved action
    const response = `Executed approved action: ${state.proposal}`;

    await this.axonflow.auditLLMCall({
      contextId: approval.contextId,
      responseSummary: response.slice(0, 200),
      provider: "openai",
      model: "gpt-4",
      tokenUsage: { promptTokens: 100, completionTokens: 50, totalTokens: 150 },
      latencyMs: 800,
      metadata: {
        node: "execute_approved",
        approver: state.approver,
      },
    });
    return { ...state, finalResponse: response };
  }
}
```
State Management
LangGraph workflows carry state across nodes. This is one of LangGraph's core strengths: state persists through the entire graph execution, enabling complex multi-step reasoning where later nodes build on the outputs of earlier ones. AxonFlow integrates with this state model to provide governance at every transition without requiring you to restructure your graph.
The key insight is that governance metadata (context IDs, applied policies, accumulated costs) can be carried in the graph state alongside your application data. This means each node in the graph has visibility into what policies were applied at previous nodes, and the final state contains a complete governance record for the entire execution.
Policy State in Graph State
Add AxonFlow context to your graph state to carry governance metadata across nodes:
```typescript
interface GovernedWorkflowState {
  query: string;
  context?: string;
  response?: string;
  route?: string;

  // AxonFlow governance state
  axonflow_context_id?: string;
  axonflow_policies_applied?: string[];
  axonflow_total_cost_usd?: number;
}
```
Accumulating Policy Decisions
Track policy decisions across the entire graph execution for a complete audit trail:
```typescript
function createStatefulGovernedNode(
  axonflow: AxonFlow,
  userToken: string,
  nodeName: string,
  nodeLogic: (state: GovernedWorkflowState) => Promise<Partial<GovernedWorkflowState>>
) {
  return async (
    state: GovernedWorkflowState
  ): Promise<Partial<GovernedWorkflowState>> => {
    const approval = await axonflow.getPolicyApprovedContext({
      userToken,
      query: state.query,
      context: {
        node: nodeName,
        framework: "langgraph",
        // Pass previous policy context for continuity
        parent_context_id: state.axonflow_context_id,
      },
    });
    if (!approval.approved) {
      return { response: `[Node ${nodeName} blocked: ${approval.blockReason}]` };
    }

    const result = await nodeLogic(state);

    // Accumulate governance metadata in state
    return {
      ...result,
      axonflow_context_id: approval.contextId,
      axonflow_policies_applied: [
        ...(state.axonflow_policies_applied || []),
        ...approval.policiesApplied,
      ],
    };
  };
}
```
Conditional Routing Based on Policy
One of the most powerful patterns in the LangGraph + AxonFlow integration is using policy decisions to influence graph routing. Instead of treating governance as a passive observer, you can make AxonFlow a first-class participant in the graph's control flow. For example, if the policy engine flags a request as requiring human approval, a conditional edge can route the execution to a human review node rather than proceeding automatically:
```typescript
// Route to a "human_review" node if the policy engine flags the request
graph.addConditionalEdges("classify", (state) => {
  if (state.axonflow_policies_applied?.includes("require_approval")) {
    return "human_review";
  }
  return state.route || "process";
}, {
  human_review: "human_review",
  process: "process",
});
```
Example Implementations
| Language | SDK | Example |
|---|---|---|
| Python | axonflow | workflow-control/python |
| TypeScript | @axonflow/sdk | langgraph/typescript |
| Go | axonflow-sdk-go | langgraph/go |
| Java | axonflow-sdk-java | workflow-control/java |
