Governance
AxonFlow governance is the layer that helps teams move from "the demo worked" to "we can operate this safely in production." It sits across the Agent, Orchestrator, policies, and workflow APIs so you can answer questions that senior engineers, platform teams, and compliance reviewers all end up asking:
- What exactly did this agent do?
- Which policy blocked, redacted, warned, or escalated the request?
- Which workflow step needed approval?
- Which provider, model, tenant, or team consumed the budget?
- Can we export evidence for a security review or regulatory assessment?
If you are building multi-agent systems, tool-using assistants, or governed LLM workflows, this is the part of AxonFlow that turns policy decisions into something operators can inspect and act on.
What Governance Covers
AxonFlow governance spans five main areas:
| Area | What it helps you do |
|---|---|
| Audit logging | Capture request, response, workflow, and connector activity for incident review and compliance evidence |
| Policy enforcement | Apply system policies and tenant policies before and after execution |
| Code governance | Detect LLM-generated code, count unsafe patterns, and flag potential secret leakage |
| Cost management | Create budgets, check budgets, and understand where AI spend is coming from |
| Human oversight | Pause high-risk workflow steps for human review when `require_approval` is triggered |
That combination matters because real AI systems fail in more than one way. A deployment can be perfectly authenticated and still:
- over-fetch sensitive data from a connector
- leak PII in a generated answer
- burn through a team budget because an agent loop keeps retrying
- generate code with unsafe patterns
- require a human sign-off before a money movement, case escalation, or regulated decision proceeds
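Each failure mode above maps to a signal an application can act on. Here is a minimal sketch of routing those signals, using stand-in dataclasses for the response fields (`policy_info`, `budget_info`, `code_artifact`) that appear later on this page — the field shapes and the 90% budget threshold are illustrative assumptions, not the SDK's actual types or defaults:

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative stand-ins for the governed-response fields; the real SDK
# types may differ in shape and naming.
@dataclass
class PolicyInfo:
    blocked: bool = False
    policies_evaluated: list = field(default_factory=list)

@dataclass
class BudgetInfo:
    used_usd: float = 0.0
    limit_usd: float = 0.0

@dataclass
class GovernedResponse:
    data: str = ""
    policy_info: Optional[PolicyInfo] = None
    budget_info: Optional[BudgetInfo] = None
    code_artifact: Optional[str] = None

def route_governance_signals(resp: GovernedResponse) -> list:
    """Turn governance metadata into operator-facing actions."""
    actions = []
    if resp.policy_info and resp.policy_info.blocked:
        actions.append("escalate: policy blocked the request")
    if resp.budget_info and resp.budget_info.limit_usd:
        if resp.budget_info.used_usd >= 0.9 * resp.budget_info.limit_usd:
            actions.append("alert: budget above 90% of limit")
    if resp.code_artifact is not None:
        actions.append("review: response contains generated code")
    return actions

resp = GovernedResponse(
    data="...",
    budget_info=BudgetInfo(used_usd=95.0, limit_usd=100.0),
    code_artifact="def handler(): ...",
)
print(route_governance_signals(resp))
# ['alert: budget above 90% of limit', 'review: response contains generated code']
```

The point is that one response object carries policy, budget, and code-governance signals together, so a single handler can fan them out to the right operational channel.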
Where Governance Runs
Governance is not a separate subsystem you bolt on later; it is woven into the main request paths. In practice that means:
- Proxy-mode traffic can be audited and budget-checked before the response reaches your application.
- MCP operations can be governed on request, response, and exfiltration phases.
- MAP and WCP workflows can record gate decisions, approvals, and execution history.
- Evaluation and Enterprise tiers unlock extra operating surfaces like policy simulation, evidence export, and richer human oversight flows.
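The request/response/exfiltration phasing for MCP operations can be pictured as a small pipeline of phase-specific checks. The phase names come from the list above, but the evaluator below is an illustrative sketch, not AxonFlow internals, and the card-number heuristic is deliberately crude:

```python
from typing import Callable, Dict, List

# Phase names come from the MCP governance description above; the
# evaluator itself is an illustrative sketch, not AxonFlow internals.
PHASES = ("request", "response", "exfiltration")

def deny_card_numbers(payload: str) -> bool:
    """Toy check: flag payloads that look like they carry a card number."""
    digits = [c for c in payload if c.isdigit()]
    return len(digits) >= 13  # crude PAN-length heuristic

CHECKS: Dict[str, List[Callable[[str], bool]]] = {
    "request": [],
    "response": [deny_card_numbers],
    "exfiltration": [deny_card_numbers],
}

def evaluate(phase: str, payload: str) -> str:
    """Return 'block' if any check for this phase trips, else 'allow'."""
    if phase not in PHASES:
        raise ValueError(f"unknown phase: {phase}")
    for check in CHECKS[phase]:
        if check(payload):
            return "block"
    return "allow"

print(evaluate("request", "list open tickets"))       # allow
print(evaluate("response", "card 4111111111111111"))  # block
```

Running different checks per phase is what lets the same connector be permissive on inbound requests but strict on responses and data leaving the boundary.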
A Real Governance Flow
Here is a production-style Python example that uses the current public SDK path instead of an older raw endpoint:
```python
import asyncio

from axonflow import AxonFlow


async def main():
    async with AxonFlow(
        endpoint="http://localhost:8080",
        client_id="platform-team",
        client_secret="replace-me",
    ) as client:
        response = await client.proxy_llm_call(
            user_token="analyst-42",
            query="Summarize customer escalations with any card numbers redacted",
            request_type="chat",
            context={"department": "support", "ticket_batch": "march-week-4"},
        )
        if response.policy_info:
            print("Policies evaluated:", response.policy_info.policies_evaluated)
        if response.budget_info:
            print("Budget usage:", response.budget_info.used_usd, response.budget_info.limit_usd)
        print(response.data)


asyncio.run(main())
```
This is the kind of request where governance adds real value:
- the query can be evaluated against system policies and tenant policies
- any code in the response can be marked with `code_artifact`
- budget enforcement can surface `budget_info`
- audit logs can later show who made the request, what happened, and why
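The audit trail for a call like the one above can be thought of as a structured record that correlates actor, policy decisions, and spend in one place. A minimal illustration of such a record — the field set is an assumption about what a reviewer would want, and the policy names are hypothetical, not AxonFlow's export schema:

```python
import json
from datetime import datetime, timezone

# Illustrative audit record; the field set is an assumption about what a
# reviewer needs (who, what, which policies, spend), not AxonFlow's schema.
def audit_record(user_token, query, policies_evaluated, used_usd, limit_usd):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": user_token,
        "query": query,
        "policies_evaluated": policies_evaluated,
        "budget": {"used_usd": used_usd, "limit_usd": limit_usd},
    }

record = audit_record(
    user_token="analyst-42",
    query="Summarize customer escalations with any card numbers redacted",
    policies_evaluated=["pii-redaction", "tenant-support-default"],  # hypothetical names
    used_usd=0.42,
    limit_usd=100.0,
)
print(json.dumps(record, indent=2))
```

Keeping actor, policy outcome, and spend in one record is what makes the "who, what, and why" question answerable without joining three systems after the fact.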
Community, Evaluation, and Enterprise
The governance story gets stronger as teams move from local evaluation to real operating responsibility.
| Capability | Community | Evaluation | Enterprise |
|---|---|---|---|
| Request/response audit logging | ✅ | ✅ | ✅ |
| Policy decision visibility | ✅ | ✅ | ✅ |
| Code artifact detection | ✅ | ✅ | ✅ |
| Budget APIs and budget status | ✅ | ✅ | ✅ |
| Policy Simulation & Impact Report | ❌ | ✅ | ✅ |
| Evidence Export Pack | ❌ | ✅ | ✅ |
| Approval queue for governed workflow steps | ❌ | ✅ | ✅ |
| Visual approval UI and customer portal operations | ❌ | ❌ | ✅ |
| Long retention, scheduled reporting, advanced operating controls | ❌ | ❌ | ✅ |
This progression is intentional:
- Community gives engineers the core primitives to build and validate governed AI workflows.
- Evaluation adds the features most teams need once they begin rehearsing production controls and cross-functional reviews.
- Enterprise adds the operating model enterprises usually want once multiple teams, auditors, or compliance functions are involved.
What A Staff Engineer Usually Wants To Validate
When a senior or staff engineer evaluates AxonFlow governance, these are usually the make-or-break questions:
- Can I correlate policy outcomes, workflow state, and spend without stitching together three separate systems?
- Can I explain to security or compliance exactly how a blocked, redacted, or approved decision happened?
- Can I test policy changes before rollout instead of learning from production incidents?
- Can I keep humans in the loop only where it matters, without forcing manual review everywhere?
- Can I start in Community and know what I gain when I move to Evaluation or Enterprise?
The rest of this section exists to help answer those questions with concrete details, not just feature names.
Next Steps
- Audit Logging for request, workflow, and connector evidence
- Code Governance for LLM-generated code visibility and review
- Cost Management for budgets, alerts, and spend controls
- Human-in-the-Loop for approval-driven workflow gating
- Approvals And Exception Handling Patterns for practical approval queue, exception, and escalation design
- Execution Operations Playbook for how teams run these governance signals in production
- Policy Simulation & Impact Report for safe pre-deployment testing
- Evidence Export Pack for sharing governance evidence with reviewers
- Community vs Enterprise for the full edition comparison
