LangGraph + AxonFlow Integration

What LangGraph Does Well

LangGraph is LangChain's framework for building stateful, multi-step agents with durable execution. When you outgrow simple chains, LangGraph is the natural evolution:

Graph-Based Workflows: Define agent logic as directed graphs with nodes and edges. Complex decision trees become visual and maintainable.

Durable Execution: State persists across executions. Agents can pause, resume, and recover from failures. Long-running workflows work reliably.

Human-in-the-Loop: Built-in patterns for human approval, intervention, and feedback at any node. Breakpoints are first-class citizens.

Streaming: Native streaming support for both intermediate steps and final outputs. Real-time visibility into agent execution.

LangChain Compatibility: Works seamlessly with LangChain components—chains, tools, memory. Migration path is clear.

LangGraph Cloud: Managed deployment with built-in observability via LangSmith. Scales without infrastructure work.


What LangGraph Doesn't Try to Solve

LangGraph focuses on stateful agent orchestration. These concerns are explicitly out of scope:

| Production Requirement | LangGraph's Position |
| --- | --- |
| Policy enforcement before node execution | Not provided—nodes execute based on graph logic, not policies |
| PII detection in state transitions | Not addressed—state can contain any data |
| SQL injection prevention | Not provided—must implement at node level |
| Per-user or per-workflow cost attribution | Not tracked—requires LangSmith (paid) for basic metrics |
| Audit trails for compliance | Requires LangSmith—not built into the framework |
| Cross-workflow access control | Not addressed—no permission model for graph access |
| Token budget enforcement | Not provided—nodes can consume unlimited tokens |

This isn't a criticism—it's a design choice. LangGraph handles orchestration. Governance is a separate concern.


Where Teams Hit Production Friction

Based on real enterprise deployments, here are the blockers that appear after the prototype works:

1. The Infinite Loop

A graph has a conditional edge: "if response is unclear, retry." The LLM keeps producing unclear responses, the graph keeps retrying, and by Monday morning it has run 85,000 iterations.

LangGraph executed the graph correctly. Nothing was watching how many times it executed.
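One way to make "something is watching" concrete is a per-invocation iteration budget enforced in a node wrapper, so a runaway retry loop fails loudly instead of running all weekend. The sketch below is illustrative only—`withIterationBudget` is a hypothetical helper, not part of either SDK (LangGraph itself offers a `recursionLimit` in its run config, which is worth setting as well):

```typescript
// Hypothetical guard: caps how many node executions one workflow invocation
// may perform. The counter is shared by every node the guard wraps.
type NodeFn<S> = (state: S) => Promise<Partial<S>>;

function withIterationBudget<S>(maxIterations: number) {
  let iterations = 0;
  return (nodeName: string, node: NodeFn<S>) => {
    return (state: S): Promise<Partial<S>> => {
      iterations += 1;
      if (iterations > maxIterations) {
        // Thrown synchronously so the failure surfaces immediately,
        // rather than as yet another "unclear response" to retry.
        throw new Error(
          `Iteration budget exceeded at node "${nodeName}" ` +
            `(${iterations} > ${maxIterations})`
        );
      }
      return node(state);
    };
  };
}

// Usage: wrap each node with the same guard instance per invocation.
const guard = withIterationBudget<{ n: number }>(1000);
const retryNode = guard("retry", async (s) => ({ n: s.n + 1 }));
```

A budget in the hundreds is usually generous for legitimate workflows while still catching the weekend-long loop within seconds.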

2. The State Explosion

A workflow collects customer data across multiple nodes. By the final node, the state contains full PII—SSNs, addresses, payment info. The state is persisted for debugging.

Now PII is in your checkpoint storage. LangGraph has no mechanism to filter sensitive data from state.
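A pre-checkpoint filter is one way to keep raw PII out of persisted state. The sketch below uses two naive regexes purely for illustration—`redactStateForCheckpoint` and the patterns are assumptions, not SDK APIs, and a real deployment would rely on AxonFlow's PII detection rather than hand-rolled patterns:

```typescript
// Illustrative pre-checkpoint filter: redacts obvious PII patterns from
// string fields before state is written to checkpoint storage.
const PII_PATTERNS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"], // US Social Security numbers
  [/\b(?:\d[ -]?){13,16}\b/g, "[CARD]"], // likely payment card numbers
];

function redactStateForCheckpoint<S extends Record<string, unknown>>(
  state: S
): S {
  const redacted: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(state)) {
    if (typeof value === "string") {
      // Apply every pattern in turn to string-valued fields.
      redacted[key] = PII_PATTERNS.reduce(
        (s, [pattern, label]) => s.replace(pattern, label),
        value
      );
    } else {
      redacted[key] = value; // non-strings pass through unchanged
    }
  }
  return redacted as S;
}
```

The key design point is *where* this runs: between the final node and the checkpointer, so debugging snapshots never contain the raw values in the first place.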

3. The "Show Me The Path" Request

A financial advisor agent made a recommendation. Compliance needs:

  • What nodes executed, and in what order?
  • What data was in state at each transition?
  • What external tools were called?
  • Who was the requesting user?

LangGraph executed the workflow. Without LangSmith, the execution trace is gone.
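The four compliance questions above map onto a simple record shape. The minimal in-memory recorder below exists only to show that mapping—`ExecutionTrace` and `TransitionRecord` are hypothetical names, and in production these records would flow into AxonFlow's audit trail rather than a local array:

```typescript
// One record per node transition, answering each compliance question.
interface TransitionRecord {
  user: string; // who was the requesting user?
  node: string; // which node executed?
  order: number; // in what order?
  stateSummary: string; // what data was in state?
  toolsCalled: string[]; // what external tools were called?
  at: string; // ISO timestamp
}

class ExecutionTrace {
  private records: TransitionRecord[] = [];

  record(
    user: string,
    node: string,
    state: unknown,
    toolsCalled: string[] = []
  ): void {
    this.records.push({
      user,
      node,
      order: this.records.length + 1,
      stateSummary: JSON.stringify(state).slice(0, 200),
      toolsCalled,
      at: new Date().toISOString(),
    });
  }

  // "What nodes executed, and in what order?"
  path(): string[] {
    return this.records.map((r) => r.node);
  }

  // "Who was the requesting user?" — all transitions attributed to one user.
  forUser(user: string): TransitionRecord[] {
    return this.records.filter((r) => r.user === user);
  }
}
```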

4. The Security Review Block

Security review: BLOCKED
- No audit trail for graph execution paths
- PII can accumulate in workflow state
- No policy enforcement at node transitions
- Cost controls missing
- Access control for graphs not implemented

The stateful agent worked perfectly. It can't ship.

5. The Cross-Tenant State Leak

In a multi-tenant deployment, a workflow for Customer A accidentally accessed state from Customer B due to a misconfigured checkpoint. LangGraph persisted both—there's no tenant isolation at the framework level.
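The root cause in incidents like this is that the checkpoint key contains no tenant component, so a misconfigured workflow ID is enough to cross the boundary. A tenant-qualified key makes that class of bug structurally impossible. The store below is a sketch under that assumption—`TenantCheckpointStore` is a hypothetical class, not part of LangGraph or the AxonFlow SDK:

```typescript
// Illustrative tenant-scoped checkpoint store: every key is namespaced by
// tenant ID, so a lookup for one tenant can never return another's state.
class TenantCheckpointStore {
  private store = new Map<string, unknown>();

  private key(tenantId: string, workflowId: string): string {
    return `${tenantId}:${workflowId}`;
  }

  save(tenantId: string, workflowId: string, state: unknown): void {
    this.store.set(this.key(tenantId, workflowId), state);
  }

  load(tenantId: string, workflowId: string): unknown {
    // Lookups are always tenant-qualified; a wrong workflowId alone
    // cannot reach across the tenant boundary.
    return this.store.get(this.key(tenantId, workflowId));
  }
}
```

The same idea applies to real checkpoint backends: derive the storage key from the authenticated tenant, never from caller-supplied identifiers alone.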


How AxonFlow Plugs In

AxonFlow doesn't replace LangGraph. It sits underneath it—providing the governance layer that LangGraph intentionally doesn't include:

┌─────────────────┐
│    Your App     │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│    LangGraph    │  <-- Nodes, Edges, State, Checkpoints
└────────┬────────┘
         │
         ▼
┌─────────────────────────────────┐
│            AxonFlow             │
│  ┌───────────┐  ┌────────────┐  │
│  │  Policy   │  │   Audit    │  │
│  │  Enforce  │  │   Trail    │  │
│  └───────────┘  └────────────┘  │
│  ┌───────────┐  ┌────────────┐  │
│  │   PII     │  │    Cost    │  │
│  │ Detection │  │   Control  │  │
│  └───────────┘  └────────────┘  │
└────────────────┬────────────────┘
                 │
                 ▼
┌─────────────────┐
│  LLM Provider   │
└─────────────────┘

What this gives you:

  • Every node transition logged with state summary and user context
  • PII detected and blocked before entering state
  • SQL injection attempts blocked at any node
  • Cost tracked per workflow, per user, per node
  • Compliance auditors can query the full execution path

What stays the same:

  • Your LangGraph code doesn't change
  • Graph definitions work as before
  • No new abstractions to learn

Integration Patterns

Pattern 1: Governed Node Wrapper (TypeScript)

Wrap LangGraph nodes with AxonFlow governance:

```typescript
import { StateGraph, END } from "@langchain/langgraph";
import { AxonFlow } from "@axonflow/sdk";

interface WorkflowState {
  query: string;
  context?: string;
  response?: string;
  route?: string;
}

// Create governed node factory
function createGovernedNode(
  axonflow: AxonFlow,
  userToken: string,
  nodeName: string,
  nodeLogic: (state: WorkflowState) => Promise<Partial<WorkflowState>>
) {
  return async (state: WorkflowState): Promise<Partial<WorkflowState>> => {
    const startTime = Date.now();

    // Pre-check before node execution
    const approval = await axonflow.getPolicyApprovedContext({
      userToken,
      query: state.query,
      context: {
        node: nodeName,
        framework: "langgraph",
        has_context: !!state.context,
      },
    });

    if (!approval.approved) {
      return {
        response: `[Node ${nodeName} blocked: ${approval.blockReason}]`,
      };
    }

    // Execute node logic
    const result = await nodeLogic(state);

    // Audit the execution (token counts here are illustrative)
    await axonflow.auditLLMCall({
      contextId: approval.contextId,
      responseSummary: JSON.stringify(result).slice(0, 200),
      provider: "openai",
      model: "gpt-4",
      tokenUsage: { promptTokens: 100, completionTokens: 50, totalTokens: 150 },
      latencyMs: Date.now() - startTime,
      metadata: { node: nodeName },
    });

    return result;
  };
}

// Build governed graph
async function buildGovernedGraph(userToken: string) {
  const axonflow = new AxonFlow({
    endpoint: process.env.AXONFLOW_AGENT_URL!,
    tenant: process.env.AXONFLOW_CLIENT_ID!,
  });

  const graph = new StateGraph<WorkflowState>({
    channels: {
      query: { value: (a, b) => b ?? a },
      context: { value: (a, b) => b ?? a },
      response: { value: (a, b) => b ?? a },
      route: { value: (a, b) => b ?? a },
    },
  });

  // Add governed nodes
  graph.addNode(
    "input",
    createGovernedNode(axonflow, userToken, "input", async (state) => {
      return { context: `Processing: ${state.query}` };
    })
  );

  graph.addNode(
    "router",
    createGovernedNode(axonflow, userToken, "router", async (state) => {
      const route = state.query.toLowerCase().includes("search")
        ? "search"
        : "analyze";
      return { route };
    })
  );

  graph.addNode(
    "search",
    createGovernedNode(axonflow, userToken, "search", async (state) => {
      return { response: `Search results for: ${state.query}` };
    })
  );

  graph.addNode(
    "analyze",
    createGovernedNode(axonflow, userToken, "analyze", async (state) => {
      return { response: `Analysis of: ${state.query}` };
    })
  );

  // Define edges
  graph.setEntryPoint("input");
  graph.addEdge("input", "router");
  graph.addConditionalEdges("router", (state) => state.route || "analyze", {
    search: "search",
    analyze: "analyze",
  });
  graph.addEdge("search", END);
  graph.addEdge("analyze", END);

  return graph.compile();
}

// Usage
const workflow = await buildGovernedGraph("user-123");
const result = await workflow.invoke({
  query: "Search for AI governance best practices",
});
```

Pattern 2: Governed Graph with Go SDK

For Go-based LangGraph implementations:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/getaxonflow/axonflow-sdk-go"
)

// WorkflowState represents the graph state
type WorkflowState struct {
	Query    string
	Context  string
	Response string
	Route    string
}

// GovernedNode wraps a node function with AxonFlow governance
type GovernedNode struct {
	axonflow  *axonflow.AxonFlowClient
	userToken string
	nodeName  string
	logic     func(context.Context, WorkflowState) (WorkflowState, error)
}

func NewGovernedNode(
	client *axonflow.AxonFlowClient,
	userToken, nodeName string,
	logic func(context.Context, WorkflowState) (WorkflowState, error),
) *GovernedNode {
	return &GovernedNode{
		axonflow:  client,
		userToken: userToken,
		nodeName:  nodeName,
		logic:     logic,
	}
}

func (n *GovernedNode) Execute(ctx context.Context, state WorkflowState) (WorkflowState, error) {
	startTime := time.Now()

	// Pre-check
	callCtx := map[string]interface{}{
		"node":      n.nodeName,
		"framework": "langgraph",
	}

	result, err := n.axonflow.ExecuteQuery(n.userToken, state.Query, "chat", callCtx)
	if err != nil {
		return state, fmt.Errorf("pre-check failed: %w", err)
	}

	if result.Blocked {
		state.Response = fmt.Sprintf("[Node %s blocked: %s]", n.nodeName, result.BlockReason)
		return state, nil
	}

	// Execute node logic
	newState, err := n.logic(ctx, state)
	if err != nil {
		return state, err
	}

	// Audit (fire and forget for performance)
	go func() {
		n.axonflow.AuditLLMCall(axonflow.AuditRequest{
			ContextID:       result.ContextID,
			ResponseSummary: truncate(newState.Response, 200),
			Provider:        "openai",
			Model:           "gpt-4",
			LatencyMs:       int(time.Since(startTime).Milliseconds()),
			Metadata:        map[string]interface{}{"node": n.nodeName},
		})
	}()

	return newState, nil
}

// Graph represents a governed LangGraph-style workflow
type Graph struct {
	nodes map[string]*GovernedNode
	edges map[string]string
}

func NewGraph() *Graph {
	return &Graph{
		nodes: make(map[string]*GovernedNode),
		edges: make(map[string]string),
	}
}

func (g *Graph) AddNode(name string, node *GovernedNode) {
	g.nodes[name] = node
}

func (g *Graph) AddEdge(from, to string) {
	g.edges[from] = to
}

func (g *Graph) Execute(ctx context.Context, startNode string, state WorkflowState) (WorkflowState, error) {
	currentNode := startNode
	currentState := state

	for currentNode != "" {
		node, exists := g.nodes[currentNode]
		if !exists {
			return currentState, fmt.Errorf("node not found: %s", currentNode)
		}

		var err error
		currentState, err = node.Execute(ctx, currentState)
		if err != nil {
			return currentState, err
		}

		currentNode = g.edges[currentNode]
	}

	return currentState, nil
}

func truncate(s string, maxLen int) string {
	if len(s) <= maxLen {
		return s
	}
	return s[:maxLen]
}
```

Pattern 3: Human-in-the-Loop with Governance

Add governance to human approval workflows:

```typescript
import { AxonFlow } from "@axonflow/sdk";

interface HITLState {
  query: string;
  proposal?: string;
  approved?: boolean;
  approver?: string;
  finalResponse?: string;
}

class GovernedHITLWorkflow {
  private axonflow: AxonFlow;

  constructor(axonflow: AxonFlow) {
    this.axonflow = axonflow;
  }

  async generateProposal(
    userToken: string,
    state: HITLState
  ): Promise<HITLState> {
    const approval = await this.axonflow.getPolicyApprovedContext({
      userToken,
      query: state.query,
      context: { node: "generate_proposal", requires_approval: true },
    });

    if (!approval.approved) {
      return { ...state, proposal: `[BLOCKED: ${approval.blockReason}]` };
    }

    // Generate proposal (your LLM logic here)
    const proposal = `Proposed action for: ${state.query}`;

    await this.axonflow.auditLLMCall({
      contextId: approval.contextId,
      responseSummary: proposal.slice(0, 200),
      provider: "openai",
      model: "gpt-4",
      tokenUsage: { promptTokens: 100, completionTokens: 50, totalTokens: 150 },
      latencyMs: 500,
      metadata: { node: "generate_proposal", awaiting_human: true },
    });

    return { ...state, proposal };
  }

  async processApproval(
    userToken: string,
    state: HITLState,
    approved: boolean,
    approver: string
  ): Promise<HITLState> {
    const policyCheck = await this.axonflow.getPolicyApprovedContext({
      userToken,
      query: `Approval decision: ${approved ? "approved" : "rejected"}`,
      context: {
        node: "human_approval",
        approver,
        decision: approved,
        original_query: state.query,
      },
    });

    await this.axonflow.auditLLMCall({
      contextId: policyCheck.contextId,
      responseSummary: `Human ${approved ? "approved" : "rejected"} proposal`,
      provider: "human",
      model: "human-approval",
      tokenUsage: { promptTokens: 0, completionTokens: 0, totalTokens: 0 },
      latencyMs: 0,
      metadata: {
        node: "human_approval",
        approver,
        decision: approved,
      },
    });

    return { ...state, approved, approver };
  }

  async executeApproved(
    userToken: string,
    state: HITLState
  ): Promise<HITLState> {
    if (!state.approved) {
      return { ...state, finalResponse: "Proposal was not approved." };
    }

    const approval = await this.axonflow.getPolicyApprovedContext({
      userToken,
      query: state.proposal || "",
      context: {
        node: "execute_approved",
        human_approved: true,
        approver: state.approver,
      },
    });

    if (!approval.approved) {
      return { ...state, finalResponse: `[BLOCKED: ${approval.blockReason}]` };
    }

    // Execute the approved action
    const response = `Executed approved action: ${state.proposal}`;

    await this.axonflow.auditLLMCall({
      contextId: approval.contextId,
      responseSummary: response.slice(0, 200),
      provider: "openai",
      model: "gpt-4",
      tokenUsage: { promptTokens: 100, completionTokens: 50, totalTokens: 150 },
      latencyMs: 800,
      metadata: {
        node: "execute_approved",
        approver: state.approver,
      },
    });

    return { ...state, finalResponse: response };
  }
}
```

Example Implementations

| Language | SDK | Example |
| --- | --- | --- |
| TypeScript | @axonflow/sdk | langgraph/typescript |
| Go | axonflow-sdk-go | langgraph/go |