# OAuth vs Governance Policies
One of the most important architecture ideas in AxonFlow is that authentication is not the same thing as governance.
OAuth, OIDC, SSO, service credentials, and network controls answer the question:
Who is allowed to call this system?
AxonFlow governance answers a different question:
What is this request, tool call, response, or workflow step actually allowed to do?
In traditional software, those questions are often close enough together that teams blur them. In AI systems, they diverge fast.
## The Short Version
- OAuth / OIDC proves identity and controls access to endpoints.
- AxonFlow system policies and tenant policies inspect content and behavior at runtime.
You need both because a fully authenticated AI agent can still:
- generate an over-broad SQL query
- send sensitive data to the wrong tool
- echo PII in an answer
- exceed a budget
- trigger a workflow step that should require human approval
None of those failures are solved by authentication alone.
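The two layers can be pictured as successive checks. The sketch below is purely illustrative: `verify_token`, `evaluate_policies`, and the verdict strings are invented names, not AxonFlow's actual API.

```python
# Hypothetical sketch: an identity check admits the caller, then a
# governance check inspects what the call actually contains.

def verify_token(token: str) -> bool:
    """Identity layer: stand-in for OAuth/OIDC token validation."""
    return token == "valid-token"

def evaluate_policies(payload: dict) -> str:
    """Governance layer: inspect the runtime content, not just the caller."""
    if "ssn" in payload.get("sql", "").lower():
        return "block"  # generated query echoes sensitive fields
    if payload.get("cost_usd", 0) > 10:
        return "block"  # budget policy exceeded
    return "allow"

def handle_request(token: str, payload: dict) -> str:
    if not verify_token(token):           # admission: who may call?
        return "401 unauthorized"
    verdict = evaluate_policies(payload)  # behavior: what may this call do?
    return "403 blocked by policy" if verdict == "block" else "200 ok"
```

Note that a valid token alone never produces a `200 ok`; the content check runs on every admitted call.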
## Side-By-Side Comparison
| Question | OAuth / OIDC | AxonFlow governance |
|---|---|---|
| Can this application call the API? | ✅ | ❌ |
| Can this prompt or tool call contain risky content? | ❌ | ✅ |
| Can this response contain PII or secrets? | ❌ | ✅ |
| Can this workflow step proceed without human review? | ❌ | ✅ |
| Can this team exceed its AI budget? | ❌ | ✅ |
| Can org-wide governance rules cascade to tenants? | ❌ | ✅ |
That is the core design principle: identity controls admission, governance controls behavior.
## Why AI Makes The Difference Obvious
In a normal service integration, a developer writes the query or payload ahead of time. The risky parts can be code-reviewed before deployment.
In an AI system, the model generates content at runtime:
- prompts
- tool arguments
- SQL statements
- API payloads
- final user-facing responses
That means the dangerous part is often not "which service was called?" but "what did the model put inside the call?"
## Example: Authenticated But Still Unsafe
Imagine a support agent that is legitimately allowed to use a database connector.
The user asks:
"How many enterprise customers are affected by the outage?"
The model could generate one of two SQL statements:
- good: `SELECT COUNT(*) FROM customers WHERE tier = 'enterprise' AND affected = true`
- bad: `SELECT name, email, ssn, annual_contract_value FROM customers WHERE tier = 'enterprise' AND affected = true`
OAuth and connector credentials may be perfectly valid in both cases. The difference is that the second query is over-fetching sensitive data.
That is why AxonFlow evaluates the runtime content of the request, not just the existence of the permission.
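A minimal sketch of such a runtime check, assuming a hypothetical policy function and an invented list of sensitive columns (none of this is AxonFlow's real policy API; a production policy engine would parse SQL properly rather than use a regex):

```python
import re

# Hypothetical illustration: the column list and verdict strings are
# assumptions made up for this example.
SENSITIVE_COLUMNS = {"name", "email", "ssn", "annual_contract_value"}

def check_select_columns(sql: str) -> str:
    """Block a generated SELECT that projects sensitive columns."""
    match = re.match(r"\s*SELECT\s+(.*?)\s+FROM\s", sql,
                     re.IGNORECASE | re.DOTALL)
    if not match:
        return "block"  # fail closed on statements we cannot parse
    projected = {col.strip().lower() for col in match.group(1).split(",")}
    if "*" in projected:
        return "block"  # bare SELECT * could fetch anything
    return "block" if projected & SENSITIVE_COLUMNS else "allow"
```

With this check, the `COUNT(*)` aggregate above passes while the second query is rejected, even though both arrive over the same authenticated connector.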
## Where Governance Applies
AxonFlow can evaluate several phases of an AI interaction:
| Phase | Example concern |
|---|---|
| Request / input | prompt or tool call contains prohibited content |
| Connector request | generated SQL or API call is riskier than intended |
| Response / output | result contains PII, secrets, or sensitive records |
| Workflow step | next step should block or require approval |
| Cost / operating limits | the call should warn, block, or downgrade because of budget policy |
This phase-aware model is why the terminology matters:
- system policies are the shared platform baseline
- tenant policies add organization- or tenant-specific rules
Some endpoint paths may still use the older static or dynamic naming in places, but the governance model itself is system-plus-tenant.
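One way to picture the system-plus-tenant model is a per-phase policy chain in which the shared baseline always runs first. The registry shape, phase names, and verdicts below are assumptions for illustration only, not AxonFlow's configuration format:

```python
# Hypothetical sketch: system policies form the shared baseline for a
# phase; tenant policies append organization-specific rules.

SYSTEM_POLICIES = {
    "response": [lambda content: "block" if "ssn" in content else "allow"],
}

TENANT_POLICIES = {
    "acme": {
        "cost": [lambda content: "warn" if content == "over_budget" else "allow"],
    },
}

def evaluate(phase: str, tenant: str, content: str) -> str:
    """Run system policies first, then the tenant's, for the given phase."""
    chained = (SYSTEM_POLICIES.get(phase, [])
               + TENANT_POLICIES.get(tenant, {}).get(phase, []))
    for policy in chained:
        verdict = policy(content)
        if verdict != "allow":
            return verdict  # first non-allow verdict wins
    return "allow"
```

The key property is the cascade: a tenant can tighten the baseline for its own traffic, but the system-level rules apply to every tenant regardless.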
## OAuth Is Still Necessary
None of this means OAuth is optional. You still need identity, session, and service authentication.
The right mental model is:
- identity decides whether the caller can enter
- governance decides what the runtime behavior is allowed to be
This is especially important in enterprises where the same authenticated platform may serve:
- internal assistants
- regulated workflows
- multi-tenant connectors
- external orchestrators
The identity layer keeps callers separated. The governance layer keeps behavior within acceptable bounds.
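As a rough sketch of the two layers working together in a multi-tenant setting: the token-to-tenant mapping, the budget figures, and every name below are invented for illustration.

```python
# Hypothetical sketch: identity resolves which tenant is calling;
# governance then bounds that tenant's behavior (here, a cost limit).

TENANT_BY_TOKEN = {"tok-a": "internal-assistants", "tok-b": "regulated-workflows"}
BUDGET_REMAINING = {"internal-assistants": 5.00, "regulated-workflows": 0.10}

def admit_and_bound(token: str, estimated_cost: float) -> str:
    tenant = TENANT_BY_TOKEN.get(token)  # identity: which caller is this?
    if tenant is None:
        return "reject: unknown caller"
    if estimated_cost > BUDGET_REMAINING[tenant]:  # governance: within bounds?
        return f"block: {tenant} over budget"
    return f"allow: {tenant}"
```

Two tenants holding equally valid credentials can receive different verdicts for the same call, because the governance layer evaluates behavior per tenant rather than stopping at admission.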
## When The Difference Matters Most
Teams usually feel the gap between OAuth and governance when they start building:
- connector-using agents
- multi-step workflows
- regulated AI assistants
- customer-support and operations copilots
- company-wide AI platforms shared by several teams
At that point, "we already have auth" stops being a sufficient answer, because the risk has moved into the runtime content itself.
## Why This Is Also An Upgrade Story
This distinction also explains why teams often grow from Community into Evaluation or Enterprise:
- Community lets engineers prove the technical governance model.
- Evaluation adds policy simulation, evidence export, and approval-driven workflow testing.
- Enterprise adds the richer operating model that large organizations usually need around identity, review, and governance.
That progression makes sense because the more your AI system behaves like shared company infrastructure, the less sufficient "auth only" becomes.
## Related Documentation
- Policy Overview for how system policies and tenant policies work
- Governance Overview for the operating model around audits, budgets, and oversight
- MCP Policy Enforcement for phase-aware connector governance
- Identity Overview for identity and authentication setup
