Lyzr AI + AxonFlow Integration
What The Current Governance Surface Gives Lyzr Operators
Lyzr deployments usually care about operating discipline across many agents, not just initial setup. The current Python SDK and workflow-control APIs help with that by adding:
- explain_decision() for answering why a governed request was denied or held for review
- audit search filters by decision_id, policy_name, and override_id, which is useful when multiple departmental agents share one control plane
- richer audit and decision correlation around the governed SDK calls your agents already make
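To make those filters concrete, here is a self-contained, illustrative stub in plain Python (not the real AxonFlow SDK or schema) showing how decision_id, policy_name, and override_id filters might compose over a shared audit trail:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative stand-in for audit records; field names mirror the
# filters described above but are NOT AxonFlow's real schema.
@dataclass
class AuditRecord:
    decision_id: str
    policy_name: str
    override_id: Optional[str]
    client_id: str

def search_audit(
    records: list[AuditRecord],
    decision_id: Optional[str] = None,
    policy_name: Optional[str] = None,
    override_id: Optional[str] = None,
) -> list[AuditRecord]:
    """Apply any combination of the three filters (AND semantics)."""
    out = []
    for r in records:
        if decision_id is not None and r.decision_id != decision_id:
            continue
        if policy_name is not None and r.policy_name != policy_name:
            continue
        if override_id is not None and r.override_id != override_id:
            continue
        out.append(r)
    return out

records = [
    AuditRecord("dec-1", "pii-block", None, "lyzr-hr"),
    AuditRecord("dec-2", "pii-block", "ovr-7", "lyzr-finance"),
    AuditRecord("dec-3", "cost-cap", None, "lyzr-sales"),
]

# "Which requests did the pii-block policy touch?"
hits = search_audit(records, policy_name="pii-block")
print([r.decision_id for r in hits])  # ['dec-1', 'dec-2']
```

Because every departmental agent writes records in the same shape, one query answers a fleet-wide question instead of fifty per-agent log pulls.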
The practical benefit is better operational support for enterprise agent fleets: faster answers to policy questions and cleaner audit correlation across departments.
For Lyzr operators, this is less about one blocked request and more about fleet management. When dozens of departmental agents share one control plane, platform teams need to answer questions like which policy blocked finance but not support, which temporary override changed behavior for one business unit, and how a paused workflow should be resumed without widening access accidentally. The newer explainability and audit-filter surface helps answer those questions with one governance model instead of ad hoc per-agent debugging.
The important limitation is that these improvements do not give generic Lyzr agent runs step-gated checkpoint recovery. If a governed call is blocked, the surrounding application still owns recovery. What the newer surface adds is much better explainability and audit correlation for the teams running those fleets.
Why Enterprise Agent Fleets Need Centralized Governance
Lyzr AI is an enterprise agent framework designed for rapid deployment of AI agents across business departments. It provides Agent Studio (a no-code/low-code interface for building agents), pre-built templates for HR, Sales, Support, and Finance use cases, and a HybridFlow architecture that combines LLMs with traditional ML models. Lyzr includes built-in Responsible AI modules for bias detection and content safety, along with Safe AI modules for human oversight and escalation. The framework is model-agnostic and supports deployment on cloud or on-premises infrastructure.
The governance challenge with Lyzr is one of scale and consistency. A typical enterprise Lyzr deployment involves dozens of agents across multiple departments, each with different configurations, different data access patterns, and different compliance requirements. Lyzr provides per-agent guardrails, but it does not provide a centralized governance layer that spans all agents. When an auditor asks "show me every request that touched customer data in the last 90 days," pulling logs from 50 individual agents in different formats is not the same as querying a unified audit trail. Similarly, per-agent cost tracking does not answer the question "which department spent the most on LLM calls this month?"
AxonFlow integrates with Lyzr through gateway mode using the Python SDK. The integration wraps each Lyzr agent with a GovernedLyzrAgent class that calls get_policy_approved_context() before the agent runs and audit_llm_call() after. By passing agent type and department as context metadata, you get a single governance layer that enforces consistent policies across all Lyzr agents while still allowing department-specific rules. The unified audit trail captures every agent interaction in the same format, making compliance queries straightforward. Your existing Lyzr agent configurations and Agent Studio workflows remain unchanged.
What Lyzr Does Well
Lyzr AI is an enterprise agent framework with built-in Responsible AI capabilities. Its strengths are compelling for enterprise deployment:
Agent Studio: No-code/low-code interface for building AI agents. Business users can create agents without engineering support.
Pre-Built Templates: Ready-to-use agents for HR, Sales, Support, Finance, and more. Deploy departmental agents in hours, not weeks.
HybridFlow Architecture: Blend LLMs with traditional ML models. Combine the reasoning of LLMs with the precision of specialized models.
Model-Agnostic Design: Avoid vendor lock-in. Swap between OpenAI, Anthropic, and other providers.
Responsible AI Modules: Built-in guardrails for bias detection and content safety.
Safe AI Modules: Patterns for human oversight and escalation.
What Lyzr Doesn't Try to Solve
Lyzr focuses on agent creation and deployment. These concerns are explicitly out of scope:
| Production Requirement | Lyzr's Position |
|---|---|
| Real-time policy enforcement | Not provided—relies on pre-configured guardrails |
| Cross-system audit trails | Not addressed—logging is agent-specific |
| Per-agent cost attribution | Not tracked—requires external monitoring |
| PII blocking at inference time | Relies on Safe AI modules—not automatic |
| SQL injection prevention | Not provided—must implement in agent logic |
| Centralized policy management | Not addressed—policies are per-agent |
| Token budget enforcement | Not provided—agents can consume unlimited tokens |
This isn't a criticism—it's a design choice. Lyzr handles agent creation. Governance at scale is a separate concern.
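One row in the table above, token budget enforcement, illustrates the kind of control a centralized layer adds. A minimal, illustrative per-agent budget sketch (plain Python, not AxonFlow's actual implementation):

```python
class TokenBudgetExceeded(RuntimeError):
    pass

class TokenBudget:
    """Minimal per-agent token budget. Illustrative only; real
    enforcement would live in the governance layer, not in each agent."""
    def __init__(self, limit_tokens: int):
        self.limit = limit_tokens
        self.used = 0

    def charge(self, tokens: int) -> None:
        # Reject the call before it spends, rather than reconciling after.
        if self.used + tokens > self.limit:
            raise TokenBudgetExceeded(
                f"budget {self.limit} exceeded: {self.used} used, {tokens} requested"
            )
        self.used += tokens

budget = TokenBudget(limit_tokens=1000)
budget.charge(600)   # ok
budget.charge(300)   # ok, 900 total
try:
    budget.charge(200)  # would reach 1100 > 1000
except TokenBudgetExceeded as e:
    print(f"blocked: {e}")
```

Without a shared layer, each of the fifty agents would need its own copy of this logic, configured and audited separately.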
Where Teams Hit Production Friction
Based on real enterprise deployments, here are the blockers that appear after the prototype works:
1. The 50-Agent Sprawl
A company deploys 50 Lyzr agents across departments. Each has different configurations. Auditors ask:
- Are all agents following the same policies?
- Which agents have access to PII?
- What's the total cost across all agents?
Lyzr deployed each agent successfully. Centralized governance wasn't in scope.
2. The Department Cost Surprise
Three departments use Lyzr agents heavily. The combined bill arrives. Who spent what? Lyzr has no built-in cross-agent cost attribution.
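With a unified audit trail, "who spent what" becomes a simple group-by over call records. A self-contained sketch with a hypothetical record shape (not AxonFlow's schema):

```python
from collections import defaultdict

# Hypothetical call records; in a real deployment these would come
# from the governance layer's audit trail, one row per LLM call.
calls = [
    {"department": "hr", "cost_usd": 0.12},
    {"department": "sales", "cost_usd": 0.40},
    {"department": "hr", "cost_usd": 0.08},
    {"department": "support", "cost_usd": 0.25},
]

def cost_by_department(calls: list[dict]) -> dict[str, float]:
    """Aggregate per-call cost into per-department totals."""
    totals: dict[str, float] = defaultdict(float)
    for call in calls:
        totals[call["department"]] += call["cost_usd"]
    return dict(totals)

print({k: round(v, 2) for k, v in cost_by_department(calls).items()})
# {'hr': 0.2, 'sales': 0.4, 'support': 0.25}
```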
3. The Audit Request
Auditor: "Show me every request that touched customer data in the last 90 days."
Team: "We have logs from each agent, but they're in different formats..."
Auditor: "That's not a unified audit trail."
4. The Security Review Block
Security review: BLOCKED
- No centralized audit trail
- Policy enforcement varies by agent
- No unified PII detection
- Cost controls are per-agent
- Access control requires manual setup per agent
The Lyzr agents work perfectly. The governance story doesn't scale.
5. The "Who Changed What?" Question
An agent's behavior changed. Nobody knows when or who made the change. Lyzr provides agent versioning, but policy changes aren't tracked centrally.
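Centralized change tracking can be as simple as an append-only log keyed by policy name; the sketch below is illustrative only, not AxonFlow's actual mechanism:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PolicyChange:
    policy_name: str
    changed_by: str
    change: str
    at: datetime

class PolicyChangeLog:
    """Append-only log answering 'who changed what, and when?'."""
    def __init__(self) -> None:
        self._entries: list[PolicyChange] = []

    def record(self, policy_name: str, changed_by: str, change: str) -> None:
        self._entries.append(
            PolicyChange(policy_name, changed_by, change, datetime.now(timezone.utc))
        )

    def history(self, policy_name: str) -> list[PolicyChange]:
        return [e for e in self._entries if e.policy_name == policy_name]

log = PolicyChangeLog()
log.record("pii-block", "alice", "added SSN pattern")
log.record("cost-cap", "bob", "raised monthly cap")
log.record("pii-block", "carol", "extended to phone numbers")

for e in log.history("pii-block"):
    print(f"{e.at.isoformat()} {e.changed_by}: {e.change}")
```

When the log lives in one place, "the HR agent started behaving differently last Tuesday" turns into a single history query instead of an archaeology exercise.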
How AxonFlow Plugs In
AxonFlow doesn't replace Lyzr. It sits underneath it—providing the governance layer that unifies all Lyzr agents:
┌─────────────────┐
│ Your App │
└────────┬────────┘
│
v
┌─────────────────┐
│ Lyzr Agents │ <-- HR, Sales, Support, Finance...
└────────┬────────┘
│
v
┌─────────────────────────────────┐
│ AxonFlow │
│ ┌───────────┐ ┌────────────┐ │
│ │ Policy │ │ Audit │ │
│ │ Enforce │ │ Trail │ │
│ └───────────┘ └────────────┘ │
│ ┌───────────┐ ┌────────────┐ │
│ │ PII │ │ Cost │ │
│ │ Detection│ │ Control │ │
│ └───────────┘ └────────────┘ │
└────────────────┬────────────────┘
│
v
┌─────────────────┐
│ LLM Provider │
└─────────────────┘
What this gives you:
- Every agent action logged in a unified audit trail
- PII detected and blocked across all agents
- SQL injection attempts blocked regardless of agent
- Cost tracked per agent, per department, per user
- Full audit trail queryable across all agents at once
What stays the same:
- Your Lyzr agents don't change
- Agent Studio workflows work as before
- No changes to agent configurations
Integration Pattern
Wrap Lyzr agent calls with AxonFlow governance:
import os
import time
from axonflow import AxonFlow, TokenUsage
class GovernedLyzrAgent:
"""Lyzr agent wrapper with AxonFlow governance."""
def __init__(
self,
lyzr_agent,
agent_type: str = "lyzr",
):
self.lyzr_agent = lyzr_agent
self.agent_type = agent_type
def run(
self,
user_token: str,
query: str,
        context: dict | None = None,
) -> str:
"""Execute Lyzr agent with AxonFlow governance."""
start_time = time.time()
with AxonFlow.sync(
endpoint=os.getenv("AXONFLOW_ENDPOINT", "http://localhost:8080"),
client_id=f"lyzr-{self.agent_type}",
client_secret=os.getenv("AXONFLOW_CLIENT_SECRET"),
) as axonflow:
# 1. Pre-check
ctx = axonflow.get_policy_approved_context(
user_token=user_token,
query=query,
context={
**(context or {}),
"agent_type": self.agent_type,
"framework": "lyzr",
},
)
if not ctx.approved:
raise PermissionError(f"Blocked: {ctx.block_reason}")
try:
# 2. Execute Lyzr agent
response = self.lyzr_agent.run(query)
latency_ms = int((time.time() - start_time) * 1000)
# 3. Audit
axonflow.audit_llm_call(
context_id=ctx.context_id,
response_summary=response[:200] if len(response) > 200 else response,
provider="openai",
model="gpt-4",
                    token_usage=TokenUsage(
                        # Placeholder counts; pass real usage from your LLM response
                        prompt_tokens=100, completion_tokens=50, total_tokens=150
                    ),
latency_ms=latency_ms,
metadata={"agent_type": self.agent_type},
)
return response
except Exception as e:
latency_ms = int((time.time() - start_time) * 1000)
axonflow.audit_llm_call(
context_id=ctx.context_id,
response_summary=f"Error: {str(e)}",
provider="openai",
model="gpt-4",
token_usage=TokenUsage(
prompt_tokens=0, completion_tokens=0, total_tokens=0
),
latency_ms=latency_ms,
metadata={"error": str(e)},
)
raise
# Usage
governed_agent = GovernedLyzrAgent(
lyzr_agent=your_lyzr_agent,
agent_type="lyzr-hr",
)
response = governed_agent.run(
user_token="user-jwt-token",
query="What is our PTO policy?",
context={"department": "hr"},
)
Enterprise Features
When running with AxonFlow Enterprise, Lyzr agents gain additional governance capabilities:
Department-Level Policy Enforcement
Apply different policies to different Lyzr agent types by passing department context:
# HR agent: strict PII policies
hr_agent = GovernedLyzrAgent(
lyzr_agent=hr_lyzr_agent,
agent_type="lyzr-hr",
)
response = hr_agent.run(
user_token="hr-user",
query="What is employee 12345's salary?",
context={"department": "hr", "data_classification": "confidential"},
)
# Marketing agent: relaxed content policies
marketing_agent = GovernedLyzrAgent(
lyzr_agent=marketing_lyzr_agent,
agent_type="lyzr-marketing",
)
response = marketing_agent.run(
user_token="marketing-user",
query="Draft a campaign email for spring sale",
context={"department": "marketing", "data_classification": "public"},
)
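Conceptually, passing department context lets one policy engine branch on metadata instead of on which agent made the call. A toy illustration of that routing (not AxonFlow's policy engine):

```python
def evaluate_policy(context: dict) -> tuple[bool, str]:
    """Toy policy check: confidential data requires an allow-listed department."""
    classification = context.get("data_classification", "internal")
    department = context.get("department", "unknown")
    confidential_allowed = {"hr", "finance"}  # hypothetical allow-list

    if classification == "confidential" and department not in confidential_allowed:
        return False, f"department '{department}' may not access confidential data"
    return True, "approved"

print(evaluate_policy({"department": "hr", "data_classification": "confidential"}))
print(evaluate_policy({"department": "marketing", "data_classification": "confidential"}))
print(evaluate_policy({"department": "marketing", "data_classification": "public"}))
```

The HR and marketing agents above run unchanged; only the context metadata they send differs, and the policy decision follows from it.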
Cross-Agent Cost Tracking
Track spending across all Lyzr agents from a single dashboard:
# Query cost breakdown by agent type
# (assumes an async AxonFlow client instance bound as `client`)
usage = await client.get_usage_breakdown("agent", "monthly")
for item in usage.items:
print(f"{item.name}: ${item.cost_usd:.2f}")
# Output:
# lyzr-hr: $450.00
# lyzr-sales: $1,200.00
# lyzr-support: $800.00
Unified Audit Trail
All Lyzr agent interactions are logged with consistent schema, queryable across agents:
curl -X POST http://localhost:8080/api/v1/audit/search \
-H "Content-Type: application/json" \
-d '{
"client_id": "lyzr-hr",
"start_time": "2026-02-01T00:00:00Z",
"limit": 50
}'
Tool-Level Governance (Python SDK v6.0.0+)
If your Lyzr agents use LangChain tools, GovernedTool wraps them with input/output governance. See Per-Tool Governance for details and framework comparison.
More Examples
| Pattern | Language | Link |
|---|---|---|
| Multi-Department Factory | Python | lyzr/python |
| Decorator Pattern | Python | lyzr/decorator |
| TypeScript Service | TypeScript | lyzr/typescript |
