Per-Tool Governance

When a LangGraph tools node invokes multiple individual tools, each tool can be governed independently using ToolContext. This provides granular policy control — for example, allowing web_search but blocking code_executor within the same tools node.

How It Works

Instead of a single gate check for the entire tools node, you check each tool individually:

from axonflow.workflow import ToolContext, StepGateRequest, StepType

# Check gate with tool context
gate = await client.step_gate(
    workflow_id=workflow.workflow_id,
    step_id="step-tools-web_search",
    request=StepGateRequest(
        step_name="tools/web_search",
        step_type=StepType.TOOL_CALL,
        tool_context=ToolContext(
            tool_name="web_search",
            tool_type="function",
            tool_input={"query": "latest AI research"},
        ),
    ),
)

The policy adapter propagates tool_name, tool_type, and tool_input.* keys into the policy evaluation context, enabling tool-aware rules.
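As a rough illustration of that propagation, the evaluation context might be built like this. The tool_name, tool_type, and tool_input.* key names come from the text above; the helper and its flattening logic are assumptions for demonstration, not the adapter's actual code:

```python
# Hypothetical sketch: flatten ToolContext fields into a policy
# evaluation context. Key names follow the text above; the helper
# itself is an illustrative assumption.
def build_policy_context(tool_name: str, tool_type: str, tool_input: dict) -> dict:
    ctx = {"tool_name": tool_name, "tool_type": tool_type}
    # Each input field becomes a dotted "tool_input.<field>" key
    for key, value in tool_input.items():
        ctx[f"tool_input.{key}"] = value
    return ctx

ctx = build_policy_context("web_search", "function", {"query": "latest AI research"})
```

A policy rule can then match on tool_name ("web_search") or on individual input fields such as tool_input.query.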

The Python LangGraph adapter provides convenience methods for per-tool governance:

from axonflow import AxonFlow
from axonflow.adapters import AxonFlowLangGraphAdapter
from axonflow.workflow import WorkflowSource

async with AxonFlow(endpoint="http://localhost:8080") as client:
    adapter = AxonFlowLangGraphAdapter(
        client=client,
        workflow_name="research-agent",
        source=WorkflowSource.LANGGRAPH,
    )

    async with adapter:
        await adapter.start_workflow(trace_id="langsmith-run-abc123")

        # Standard LLM node gate
        if await adapter.check_gate("plan_research", "llm_call", model="gpt-4"):
            result = await plan_research(state)
            await adapter.step_completed("plan_research", output=result)

        # Per-tool governance within a tools node
        if await adapter.check_tool_gate("web_search", "function",
                                         tool_input={"query": "latest news"}):
            search_result = await web_search(query="latest news")
            await adapter.tool_completed("web_search", output=search_result)

        if await adapter.check_tool_gate("sql_query", "mcp",
                                         tool_input={"query": "SELECT * FROM users LIMIT 10"}):
            db_result = await sql_query("SELECT * FROM users LIMIT 10")
            await adapter.tool_completed("sql_query", output=db_result)
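Each gate decision is independent, so one tool in a node can run while another is skipped. A toy, in-memory stand-in for that decision logic (the real verdict comes from AxonFlow policy evaluation; the static allowlist here is purely an assumption for illustration):

```python
# Toy stand-in for per-tool gate decisions. Real decisions come from
# AxonFlow policy evaluation; this static allowlist is an illustrative
# assumption, not a real policy.
ALLOWED_TOOLS = {"web_search", "sql_query"}

def toy_check_tool_gate(tool_name: str) -> bool:
    return tool_name in ALLOWED_TOOLS

# web_search is executed; code_executor is skipped within the same node
executed = [t for t in ["web_search", "code_executor"] if toy_check_tool_gate(t)]
```

A blocked tool simply never executes; the surrounding node continues with whatever tools were allowed.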

Raw HTTP

curl -X POST http://localhost:8080/api/v1/workflows/$WF_ID/steps/step-tools-web_search/gate \
  -H "Content-Type: application/json" \
  -d '{
    "step_name": "tools/web_search",
    "step_type": "tool_call",
    "tool_context": {
      "tool_name": "web_search",
      "tool_type": "function",
      "tool_input": {"query": "latest news"}
    }
  }'

Phase 1 Scope

Per-tool governance is currently in Phase 1 (context enrichment). ToolContext is optional and fully backward compatible. Future Phase 2 will add dedicated tool_call_policy types with tool name/type matching, per-tool rate limits, and tool allowlists/blocklists.

MCP Tool Interceptor (MultiServerMCPClient)

When using LangGraph's MultiServerMCPClient from langchain-mcp-adapters, you can wrap every MCP tool call with AxonFlow policy enforcement using the mcp_tool_interceptor() factory method. This enforces the full mcp_check_input → handler → mcp_check_output pattern automatically.

Basic Usage

from langchain_mcp_adapters.client import MultiServerMCPClient
from axonflow import AxonFlow
from axonflow.adapters import AxonFlowLangGraphAdapter

async with AxonFlow(endpoint="http://localhost:8080") as client:
    adapter = AxonFlowLangGraphAdapter(client, "my-workflow")

    mcp_client = MultiServerMCPClient(
        {"lookup": {"url": "http://localhost:8000/mcp", "transport": "http"}},
        tool_interceptors=[adapter.mcp_tool_interceptor()],
    )
    tools = await mcp_client.get_tools()
    # All tool calls through mcp_client are now policy-enforced

The interceptor:

  1. Derives connector_type from the incoming request (defaults to "{server_name}.{tool_name}")
  2. Calls mcp_check_input(...) — raises PolicyViolationError if the input is blocked
  3. Calls handler(request) to execute the tool
  4. Calls mcp_check_output(...) — raises PolicyViolationError if the result is hard-blocked; returns redacted_data in place of the original result if redaction was applied

MCPInterceptorOptions

Use MCPInterceptorOptions to customize the interceptor's behaviour:

from axonflow.adapters import AxonFlowLangGraphAdapter, MCPInterceptorOptions

opts = MCPInterceptorOptions(
    # Custom connector type derivation — defaults to "{server_name}.{tool_name}"
    connector_type_fn=lambda req: req.server_name,
    # Operation type passed to mcp_check_input — defaults to "execute".
    # Use "query" for known read-only tool calls.
    operation="query",
)

mcp_client = MultiServerMCPClient(
    {"lookup": {"url": "http://localhost:8000/mcp", "transport": "http"}},
    tool_interceptors=[adapter.mcp_tool_interceptor(opts)],
)
Options:

  - connector_type_fn (Callable[[Any], str] | None, default None): Maps an MCP request to a connector type string. Receives the MCPToolCallRequest object. Defaults to "{request.server_name}.{request.name}".
  - operation (str, default "execute"): Operation type passed to mcp_check_input. Use "query" for read-only tools.

Redacted Output Passthrough

When the output policy applies redaction (rather than a hard block), the interceptor automatically substitutes the redacted_data returned by mcp_check_output for the original tool result. The calling LangGraph node receives the sanitised version transparently.
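The substitution amounts to roughly the following (a sketch only; the verdict dict shape and field names are assumptions based on the description above):

```python
# Sketch of the redaction passthrough: redaction substitutes the
# sanitised payload, a hard block raises. The verdict shape is an
# illustrative assumption.
class PolicyViolationError(Exception):
    """Stand-in for the error raised on a hard output block."""

def resolve_tool_result(raw_result, verdict: dict):
    # Hard block: surface the violation to the caller
    if not verdict.get("allowed", False):
        raise PolicyViolationError(verdict.get("reason", "output blocked"))
    # Redaction is not a block: return the sanitised payload instead
    redacted = verdict.get("redacted_data")
    return redacted if redacted is not None else raw_result
```

From the LangGraph node's point of view nothing changes: it receives a tool result either way, just with sensitive content removed when redaction applied.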