# Lyzr AI + AxonFlow Integration

## Overview

Lyzr AI is an enterprise agent framework with built-in Responsible AI and Safe AI modules. Its Agent Studio enables no-code/low-code agent creation with pre-built templates for HR, Sales, Support, Finance, and more.

AxonFlow adds governance, compliance, and audit trails to ensure Lyzr agents operate within enterprise policies before reaching production.

Together, they enable enterprises to deploy autonomous AI agents with full control, observability, and compliance.
## Why Use AxonFlow with Lyzr?

### Lyzr Strengths

- No-code Agent Studio for rapid development
- Pre-built agent templates (HR, Sales, Support, Finance, etc.)
- HybridFlow architecture blending LLMs and ML models
- Model-agnostic design (avoid vendor lock-in)
- Built-in Responsible AI and Safe AI modules

### AxonFlow Strengths

- Real-time policy enforcement (block/allow/modify at inference time)
- Per-agent governance (different policies for different agent types)
- Cross-system audit trails (full request/response logging)
- Cost control (budget limits, rate limiting)
- PII protection (automatic masking before model inference)

### The Perfect Combination

**Lyzr handles:** agent creation, template workflows, responsible AI checks

**AxonFlow handles:** inference governance, audit logging, policy enforcement, cost control
## Integration Architecture

AxonFlow integrates with Lyzr agents using Gateway Mode, which wraps each LLM call with a policy pre-check and audit logging:

```
[Lyzr Agent]
      |
      v
[AxonFlow Pre-Check] --> Policy Evaluation
      |
      v (if approved)
[LLM Provider]
      |
      v
[AxonFlow Audit] --> Compliance Logging
      |
      v
[Response to Lyzr]
```
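The same flow in miniature, with stand-in stubs for each stage (the `Policy` and `AuditLog` classes below are illustrative only, not the AxonFlow SDK; the real SDK calls are shown in the Quick Start):

```python
# Illustrative sketch of Gateway Mode: every request passes a policy
# pre-check, then the LLM call, then an audit record. Policy and
# AuditLog are stand-ins, not AxonFlow APIs.

class Policy:
    def __init__(self, blocked_terms):
        self.blocked_terms = blocked_terms

    def evaluate(self, query: str) -> bool:
        """Approve unless the query contains a blocked term."""
        return not any(term in query.lower() for term in self.blocked_terms)


class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, query: str, outcome: str):
        self.entries.append({"query": query, "outcome": outcome})


def gateway_call(query: str, policy: Policy, audit: AuditLog, llm) -> str:
    # 1. Pre-check: policy evaluation before any model call
    if not policy.evaluate(query):
        audit.record(query, "blocked")
        raise PermissionError("Blocked by policy")
    # 2. LLM provider call (here just a stub callable)
    response = llm(query)
    # 3. Audit: compliance logging after the call
    audit.record(query, "allowed")
    return response
```

Blocked requests never reach the LLM, but both outcomes are logged, which is the property the audit trail depends on.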
## Quick Start

### Prerequisites

- AxonFlow running locally or deployed (see Getting Started)
- A deployed Lyzr agent or Agent Studio access
- AxonFlow Python or TypeScript SDK

### Install Dependencies

```bash
pip install axonflow openai
```
## Python SDK Integration

Since Lyzr makes LLM calls internally, integrate AxonFlow using the Gateway Mode pattern: wrap your Lyzr agent's LLM interactions with a pre-check before the call and an audit after it.

### Basic Integration
```python
import asyncio
import time

from axonflow import AxonFlow, TokenUsage


class GovernedLyzrAgent:
    """Lyzr agent wrapper with AxonFlow governance."""

    def __init__(
        self,
        axonflow_url: str,
        client_id: str,
        client_secret: str,
        lyzr_agent,
        agent_type: str = "lyzr",
        license_key: str | None = None,
    ):
        self.axonflow = AxonFlow(
            agent_url=axonflow_url,
            client_id=client_id,
            client_secret=client_secret,
            license_key=license_key,
        )
        self.lyzr_agent = lyzr_agent
        self.agent_type = agent_type

    async def run(
        self,
        user_token: str,
        query: str,
        context: dict | None = None,
    ) -> str:
        """Execute the Lyzr agent with AxonFlow governance."""
        start_time = time.time()
        async with self.axonflow:
            # 1. Pre-check with AxonFlow
            ctx = await self.axonflow.get_policy_approved_context(
                user_token=user_token,
                query=query,
                context={
                    **(context or {}),
                    "agent_type": self.agent_type,
                    "framework": "lyzr",
                },
            )
            if not ctx.approved:
                raise PermissionError(f"Blocked: {ctx.block_reason}")

            try:
                # 2. Execute the Lyzr agent
                response = self.lyzr_agent.run(query)
                latency_ms = int((time.time() - start_time) * 1000)

                # 3. Audit the call (token counts here are placeholders;
                # use real usage from your LLM response if available)
                await self.axonflow.audit_llm_call(
                    context_id=ctx.context_id,
                    response_summary=response[:200],
                    provider="openai",
                    model="gpt-4",
                    token_usage=TokenUsage(
                        prompt_tokens=100,
                        completion_tokens=50,
                        total_tokens=150,
                    ),
                    latency_ms=latency_ms,
                    metadata={"agent_type": self.agent_type},
                )
                return response
            except Exception as e:
                # Audit failures too, so blocked and errored calls are traceable
                latency_ms = int((time.time() - start_time) * 1000)
                await self.axonflow.audit_llm_call(
                    context_id=ctx.context_id,
                    response_summary=f"Error: {e}",
                    provider="openai",
                    model="gpt-4",
                    token_usage=TokenUsage(prompt_tokens=0, completion_tokens=0, total_tokens=0),
                    latency_ms=latency_ms,
                    metadata={"error": str(e)},
                )
                raise


# Usage
async def main():
    governed_agent = GovernedLyzrAgent(
        axonflow_url="http://localhost:8080",
        client_id="lyzr-hr-agent",
        client_secret="your-client-secret",
        lyzr_agent=your_lyzr_agent,  # Your Lyzr agent instance
        agent_type="lyzr-hr",
    )
    response = await governed_agent.run(
        user_token="user-jwt-token",
        query="What is our PTO policy?",
        context={"department": "hr"},
    )
    print(response)


asyncio.run(main())
```
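The example above audits with placeholder token counts. If your Lyzr agent exposes the underlying LLM response, you can derive real counts instead. A minimal sketch, assuming an OpenAI-style `usage` object with `prompt_tokens`/`completion_tokens`/`total_tokens` attributes (the `extract_token_usage` helper and `TokenCounts` container are hypothetical conveniences, not part of either SDK):

```python
from dataclasses import dataclass


@dataclass
class TokenCounts:
    """Plain container mirroring the fields AxonFlow's TokenUsage expects."""
    prompt_tokens: int
    completion_tokens: int
    total_tokens: int


def extract_token_usage(usage) -> TokenCounts:
    """Map an OpenAI-style `usage` object (or None) to token counts,
    falling back to zeros when the provider returns no usage data."""
    if usage is None:
        return TokenCounts(0, 0, 0)
    prompt = getattr(usage, "prompt_tokens", 0) or 0
    completion = getattr(usage, "completion_tokens", 0) or 0
    # Some providers omit the total; reconstruct it when missing
    total = getattr(usage, "total_tokens", 0) or (prompt + completion)
    return TokenCounts(prompt, completion, total)
```

Feed the resulting counts into `TokenUsage(...)` in the audit call so cost tracking reflects actual consumption.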
### Sync Usage

For synchronous code:

```python
import time

from axonflow import AxonFlow, TokenUsage


def governed_lyzr_call(
    user_token: str,
    query: str,
    lyzr_agent,
    agent_type: str = "lyzr",
    context: dict | None = None,
) -> str:
    """Synchronous Lyzr agent call with governance."""
    start_time = time.time()
    with AxonFlow.sync(
        agent_url="http://localhost:8080",
        client_id="lyzr-agent",
        client_secret="your-client-secret",
    ) as axonflow:
        # 1. Pre-check
        ctx = axonflow.get_policy_approved_context(
            user_token=user_token,
            query=query,
            context={**(context or {}), "agent_type": agent_type, "framework": "lyzr"},
        )
        if not ctx.approved:
            raise PermissionError(f"Blocked: {ctx.block_reason}")

        # 2. Execute the Lyzr agent
        response = lyzr_agent.run(query)
        latency_ms = int((time.time() - start_time) * 1000)

        # 3. Audit (placeholder token counts)
        axonflow.audit_llm_call(
            context_id=ctx.context_id,
            response_summary=response[:200],
            provider="openai",
            model="gpt-4",
            token_usage=TokenUsage(prompt_tokens=100, completion_tokens=50, total_tokens=150),
            latency_ms=latency_ms,
        )
        return response
```
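Callers of a governed function like this should expect `PermissionError` when policy blocks a request. A minimal, illustrative handler (the `safe_query` wrapper and its fallback message are assumptions for this sketch, not SDK behavior):

```python
def safe_query(call, user_token: str, query: str) -> str:
    """Run a governed call, converting policy blocks into a friendly reply."""
    try:
        return call(user_token, query)
    except PermissionError as e:
        # Policy block: surface a user-friendly message instead of the raw error
        return f"Sorry, I can't help with that request. ({e})"
```

This keeps policy enforcement in the governed layer while letting the UI layer decide how a block is presented to the end user.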
## Integration Patterns

### Pattern 1: Decorator for Lyzr Agents

Create a simple decorator that governs all Lyzr agent calls:

```python
import time
from functools import wraps

from axonflow import AxonFlow, TokenUsage


def with_axonflow_governance(
    axonflow_url: str,
    client_id: str,
    client_secret: str,
    agent_type: str,
):
    """Decorator to add AxonFlow governance to Lyzr agent methods."""
    def decorator(func):
        @wraps(func)
        def wrapper(user_token: str, query: str, *args, **kwargs):
            start_time = time.time()
            with AxonFlow.sync(
                agent_url=axonflow_url,
                client_id=client_id,
                client_secret=client_secret,
            ) as axonflow:
                # Pre-check
                ctx = axonflow.get_policy_approved_context(
                    user_token=user_token,
                    query=query,
                    context={"agent_type": agent_type},
                )
                if not ctx.approved:
                    raise PermissionError(ctx.block_reason or "Request blocked")

                try:
                    result = func(user_token, query, *args, **kwargs)
                    latency_ms = int((time.time() - start_time) * 1000)
                    # Audit success (placeholder token counts)
                    axonflow.audit_llm_call(
                        context_id=ctx.context_id,
                        response_summary=str(result)[:200],
                        provider="openai",
                        model="gpt-4",
                        token_usage=TokenUsage(prompt_tokens=100, completion_tokens=50, total_tokens=150),
                        latency_ms=latency_ms,
                    )
                    return result
                except Exception as e:
                    latency_ms = int((time.time() - start_time) * 1000)
                    # Audit failure
                    axonflow.audit_llm_call(
                        context_id=ctx.context_id,
                        response_summary=f"Error: {e}",
                        provider="openai",
                        model="gpt-4",
                        token_usage=TokenUsage(prompt_tokens=0, completion_tokens=0, total_tokens=0),
                        latency_ms=latency_ms,
                    )
                    raise
        return wrapper
    return decorator


# Usage
@with_axonflow_governance(
    axonflow_url="http://localhost:8080",
    client_id="lyzr-support",
    client_secret="your-client-secret",
    agent_type="lyzr-support",
)
def support_agent_query(user_token: str, query: str):
    # Your Lyzr agent logic here
    return lyzr_support_agent.run(query)
```
### Pattern 2: Multi-Department Agent Factory

Create governed agents for different departments:

```python
import time
from dataclasses import dataclass

from axonflow import AxonFlow, TokenUsage


@dataclass
class DepartmentConfig:
    department: str
    data_tier: str
    model: str


class GovernedLyzrAgentFactory:
    """Factory for creating department-specific governed Lyzr agents."""

    CONFIGS = {
        "hr": DepartmentConfig("hr", "sensitive", "gpt-4"),
        "sales": DepartmentConfig("sales", "standard", "gpt-4"),
        "support": DepartmentConfig("support", "standard", "gpt-3.5-turbo"),
        "finance": DepartmentConfig("finance", "restricted", "gpt-4"),
    }

    def __init__(self, axonflow_url: str, client_secret: str):
        self.axonflow_url = axonflow_url
        self.client_secret = client_secret

    def create_governed_runner(self, department: str, lyzr_agent):
        """Create a governed runner for a department's Lyzr agent."""
        if department not in self.CONFIGS:
            raise ValueError(f"Unknown department: {department}")
        config = self.CONFIGS[department]

        def run(user_token: str, query: str) -> str:
            start_time = time.time()
            with AxonFlow.sync(
                agent_url=self.axonflow_url,
                client_id=f"lyzr-{department}",
                client_secret=self.client_secret,
            ) as axonflow:
                ctx = axonflow.get_policy_approved_context(
                    user_token=user_token,
                    query=query,
                    context={
                        "department": config.department,
                        "data_tier": config.data_tier,
                    },
                )
                if not ctx.approved:
                    raise PermissionError(ctx.block_reason)

                result = lyzr_agent.run(query)
                latency_ms = int((time.time() - start_time) * 1000)
                axonflow.audit_llm_call(
                    context_id=ctx.context_id,
                    response_summary=result[:200],
                    provider="openai",
                    model=config.model,
                    token_usage=TokenUsage(prompt_tokens=100, completion_tokens=50, total_tokens=150),
                    latency_ms=latency_ms,
                    metadata={"department": config.department},
                )
                return result

        return run


# Usage
factory = GovernedLyzrAgentFactory(
    axonflow_url="http://localhost:8080",
    client_secret="your-client-secret",
)

# Create governed runners for each department's agent
hr_runner = factory.create_governed_runner("hr", lyzr_hr_agent)
sales_runner = factory.create_governed_runner("sales", lyzr_sales_agent)
support_runner = factory.create_governed_runner("support", lyzr_support_agent)

# Use the governed runners
response = hr_runner(user_token="jwt-token", query="What is our PTO policy?")
```
## TypeScript SDK Integration

For Node.js/TypeScript backends that orchestrate Lyzr agents:

### Install Dependencies

```bash
npm install @axonflow/sdk openai
```
### TypeScript Governance Client

```typescript
import { GatewayModeClient } from '@axonflow/sdk';
import OpenAI from 'openai';

interface LyzrAgentConfig {
  department: string;
  dataTier: 'standard' | 'sensitive' | 'restricted';
  model: string;
}

interface GovernedResponse {
  response: string;
  contextId: string;
  blocked?: boolean;
  reason?: string;
}

class LyzrGovernanceService {
  private axonflow: GatewayModeClient;
  private openai: OpenAI;

  constructor(
    axonflowUrl: string,
    clientSecret: string,
    openaiKey: string,
    licenseKey?: string
  ) {
    this.axonflow = new GatewayModeClient({
      agentUrl: axonflowUrl,
      clientId: 'lyzr-ts-service',
      clientSecret,
      licenseKey,
    });
    this.openai = new OpenAI({ apiKey: openaiKey });
  }

  async governedAgentCall(
    userToken: string,
    query: string,
    config: LyzrAgentConfig,
    systemPrompt: string
  ): Promise<GovernedResponse> {
    const startTime = Date.now();

    // 1. Pre-check with AxonFlow
    const preCheck = await this.axonflow.preCheck({
      userToken,
      query,
      context: {
        department: config.department,
        data_tier: config.dataTier,
        agent_framework: 'lyzr',
      },
    });

    if (!preCheck.approved) {
      return {
        response: '',
        contextId: preCheck.contextId,
        blocked: true,
        reason: preCheck.blockReason,
      };
    }

    try {
      // 2. Make the LLM call (simulating Lyzr agent behavior)
      const completion = await this.openai.chat.completions.create({
        model: config.model,
        messages: [
          { role: 'system', content: systemPrompt },
          { role: 'user', content: query },
        ],
      });
      const response = completion.choices[0]?.message?.content || '';
      const latencyMs = Date.now() - startTime;

      // 3. Audit the call with real token usage
      await this.axonflow.auditLLMCall({
        contextId: preCheck.contextId,
        responseSummary: response.slice(0, 200),
        provider: 'openai',
        model: config.model,
        tokenUsage: {
          promptTokens: completion.usage?.prompt_tokens || 0,
          completionTokens: completion.usage?.completion_tokens || 0,
          totalTokens: completion.usage?.total_tokens || 0,
        },
        latencyMs,
        metadata: { department: config.department },
      });

      return { response, contextId: preCheck.contextId };
    } catch (error) {
      const latencyMs = Date.now() - startTime;
      await this.axonflow.auditLLMCall({
        contextId: preCheck.contextId,
        responseSummary: `Error: ${error instanceof Error ? error.message : 'Unknown'}`,
        provider: 'openai',
        model: config.model,
        tokenUsage: { promptTokens: 0, completionTokens: 0, totalTokens: 0 },
        latencyMs,
        metadata: { error: true },
      });
      throw error;
    }
  }
}

// Department-specific agent configs
const AGENT_CONFIGS: Record<string, LyzrAgentConfig> = {
  hr: { department: 'hr', dataTier: 'sensitive', model: 'gpt-4' },
  sales: { department: 'sales', dataTier: 'standard', model: 'gpt-4' },
  support: { department: 'support', dataTier: 'standard', model: 'gpt-3.5-turbo' },
  finance: { department: 'finance', dataTier: 'restricted', model: 'gpt-4' },
};

const SYSTEM_PROMPTS: Record<string, string> = {
  hr: 'You are an HR assistant. Help with HR policies, benefits, and procedures.',
  sales: 'You are a sales assistant. Help with sales processes and customer information.',
  support: 'You are a support assistant. Help resolve customer issues.',
  finance: 'You are a finance assistant. Help with financial queries and reports.',
};

// Usage
const service = new LyzrGovernanceService(
  'http://localhost:8080',
  process.env.AXONFLOW_CLIENT_SECRET!,
  process.env.OPENAI_API_KEY!
);

async function handleHRQuery(userToken: string, query: string) {
  return service.governedAgentCall(userToken, query, AGENT_CONFIGS.hr, SYSTEM_PROMPTS.hr);
}

// Express.js integration example
import express from 'express';

const app = express();
app.use(express.json());

app.post('/api/lyzr/:department', async (req, res) => {
  const { department } = req.params;
  const { user_token, query } = req.body;

  const config = AGENT_CONFIGS[department];
  if (!config) {
    return res.status(400).json({ error: `Unknown department: ${department}` });
  }

  try {
    const result = await service.governedAgentCall(
      user_token,
      query,
      config,
      SYSTEM_PROMPTS[department]
    );
    if (result.blocked) {
      return res.status(403).json({
        error: 'Request blocked by policy',
        reason: result.reason,
      });
    }
    res.json(result);
  } catch (error) {
    res.status(500).json({ error: 'Internal server error' });
  }
});

app.listen(3000, () => console.log('Lyzr governance service running on :3000'));
```
### Multi-Tenant Support

```typescript
class MultiTenantLyzrService {
  private services: Map<string, LyzrGovernanceService> = new Map();
  private axonflowUrl: string;
  private openaiKey: string;

  constructor(axonflowUrl: string, openaiKey: string) {
    this.axonflowUrl = axonflowUrl;
    this.openaiKey = openaiKey;
  }

  private getService(tenantId: string, tenantSecret: string): LyzrGovernanceService {
    // Cache one governance client per tenant credential pair
    const key = `${tenantId}:${tenantSecret}`;
    if (!this.services.has(key)) {
      this.services.set(key, new LyzrGovernanceService(
        this.axonflowUrl,
        tenantSecret,
        this.openaiKey
      ));
    }
    return this.services.get(key)!;
  }

  async handleTenantRequest(
    tenantId: string,
    tenantSecret: string,
    userToken: string,
    query: string,
    department: string
  ): Promise<GovernedResponse> {
    const service = this.getService(tenantId, tenantSecret);
    const config = AGENT_CONFIGS[department];
    return service.governedAgentCall(userToken, query, config, SYSTEM_PROMPTS[department]);
  }
}
```
## AxonFlow Policy Configuration

Create policies that match your Lyzr agent types:

```json
{
  "policies": [
    {
      "name": "lyzr-hr-policy",
      "description": "Policy for Lyzr HR agents",
      "enabled": true,
      "rules": [
        {
          "type": "content_filter",
          "config": {
            "blocked_patterns": ["salary details", "performance reviews"],
            "action": "block"
          }
        },
        {
          "type": "pii_protection",
          "config": {
            "fields": ["ssn", "salary", "address"],
            "action": "mask"
          }
        }
      ]
    },
    {
      "name": "lyzr-sales-policy",
      "description": "Policy for Lyzr Sales agents",
      "enabled": true,
      "rules": [
        {
          "type": "rate_limit",
          "config": {
            "requests_per_minute": 60,
            "action": "throttle"
          }
        }
      ]
    }
  ]
}
```
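To make the rule semantics concrete, here is a conceptual sketch of how the two HR rules above behave: a `content_filter` with `action: block` rejects matching queries, and a `pii_protection` rule with `action: mask` redacts the listed fields before inference. This is an illustration of the intended semantics, not AxonFlow's actual policy engine:

```python
def apply_content_filter(query: str, blocked_patterns: list[str]) -> bool:
    """Return True if the query should be blocked
    (case-insensitive substring match, as a simple stand-in)."""
    lowered = query.lower()
    return any(pattern.lower() in lowered for pattern in blocked_patterns)


def mask_pii_fields(record: dict, fields: list[str]) -> dict:
    """Replace listed fields with a mask before the record reaches the model."""
    return {k: ("***MASKED***" if k in fields else v) for k, v in record.items()}
```

The real engine evaluates these rules server-side at pre-check time; the sketch only shows what "block" and "mask" mean for a given query or record.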
## Best Practices

### 1. Always Use Context IDs

The context ID returned by the pre-check must be passed to the audit call so the two records can be correlated:

```python
ctx = axonflow.get_policy_approved_context(user_token=user_token, query=query)
# Store the context ID immediately
context_id = ctx.context_id
# ... make LLM call ...
axonflow.audit_llm_call(context_id=context_id, ...)
```

### 2. Handle Blocked Requests Gracefully

```python
ctx = axonflow.get_policy_approved_context(user_token=user_token, query=query)
if not ctx.approved:
    # Log the block reason
    logger.warning(f"Request blocked: {ctx.block_reason}")
    # Return a user-friendly message
    return "I'm unable to help with that request."
```

### 3. Always Audit, Even on Errors

```python
try:
    result = lyzr_agent.run(query)
    axonflow.audit_llm_call(context_id=ctx.context_id, response_summary=result[:200], ...)
except Exception as e:
    axonflow.audit_llm_call(context_id=ctx.context_id, response_summary=f"Error: {e}", ...)
    raise
```
## Troubleshooting

### Common Issues

**Issue: Pre-check returns 401 Unauthorized**

- Verify the `X-Client-Secret` header is correct
- Check the `X-License-Key` header if using enterprise features
- Ensure the `client_id` is registered in AxonFlow

**Issue: Audit calls failing**

- Verify the `context_id` is from a valid pre-check (not expired)
- Check that the AxonFlow agent is healthy (`/health` endpoint)

**Issue: All requests being blocked**

- Review the policy configuration in AxonFlow
- Check whether rate limits are exceeded
- Verify the `user_token` permissions