# SDK Integration
AxonFlow provides official SDKs for seamless integration with your applications.
## Available SDKs
| SDK | Package | Status |
|---|---|---|
| Python | axonflow | Stable |
| TypeScript | @axonflow/sdk | Stable |
| Go | github.com/getaxonflow/axonflow-go | Stable |
| Java | com.axonflow:axonflow-sdk | Stable |
## Integration Modes
AxonFlow supports two integration modes depending on your architecture:
| Mode | Description | Best For |
|---|---|---|
| Proxy Mode | Route LLM calls through AxonFlow | Quick setup, full governance |
| Gateway Mode | Pre-flight policy checks, direct LLM calls | Low latency, existing infrastructure |
Not sure which to choose? See Choosing a Mode.
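The control-flow difference between the two modes can be sketched with plain-Python stubs. None of the names below (`call_llm`, `proxy_mode`, `gateway_mode`, `audit_log`) come from the AxonFlow SDK; they only illustrate who makes which call:

```python
# Illustrative stubs only; these are not real AxonFlow SDK calls.
audit_log: list[str] = []

def call_llm(prompt: str) -> str:
    """Stand-in for a call to your LLM provider."""
    return f"answer to: {prompt}"

def proxy_mode(prompt: str) -> str:
    # One round trip: AxonFlow checks policy, calls the LLM,
    # and writes the audit record before returning the answer.
    return call_llm(prompt)  # governance happens inside AxonFlow

def gateway_mode(prompt: str) -> str:
    # Your code orchestrates three steps; the LLM call never
    # passes through AxonFlow, which keeps latency low.
    approved = prompt            # 1. pre-flight policy check (AxonFlow)
    answer = call_llm(approved)  # 2. direct LLM call from your code
    audit_log.append(answer)     # 3. report back for the audit trail
    return answer
```

Both paths return the same answer; what changes is which party performs the LLM call and the audit write.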
## Quick Example (Python)
### Async Usage (Recommended)
```python
import asyncio

from axonflow import AxonFlow

async def main():
    async with AxonFlow(
        agent_url="https://your-agent.axonflow.com",
        client_id="your-client-id",
        client_secret="your-client-secret"
    ) as client:
        # Execute a governed query
        response = await client.execute_query(
            user_token="user-jwt-token",
            query="What is AI governance?",
            request_type="chat"
        )
        print(response.data)

asyncio.run(main())
```
### Sync Usage
```python
from axonflow import AxonFlow

with AxonFlow.sync(
    agent_url="https://your-agent.axonflow.com",
    client_id="your-client-id",
    client_secret="your-client-secret"
) as client:
    response = client.execute_query(
        user_token="user-jwt-token",
        query="What is AI governance?",
        request_type="chat"
    )
    print(response.data)
```
## Gateway Mode (Lowest Latency)
Gateway Mode lets you make direct LLM calls while AxonFlow handles governance:
```python
import asyncio

from axonflow import AxonFlow

async def main():
    async with AxonFlow(
        agent_url="https://your-agent.axonflow.com",
        client_id="your-client-id",
        client_secret="your-client-secret"
    ) as client:
        # Pre-check: get policy approval before the LLM call
        context = await client.get_policy_approved_context(
            prompt="Analyze this data",
            request_type="chat"
        )

        # Make your own LLM call directly
        # (your_llm_call is a placeholder for your provider's client)
        llm_response = await your_llm_call(context.approved_prompt)

        # Audit: log the response for compliance
        await client.audit_llm_response(
            context_id=context.context_id,
            response=llm_response
        )

asyncio.run(main())
```
## Authentication
All SDKs support authentication via:
- **Client ID + Secret** - For server-to-server communication
- **User Tokens** - For per-user audit trails
See Authentication for details.
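In server code, credentials are typically read from the environment rather than hardcoded. A minimal sketch; the environment variable names below are an assumption, not an SDK convention:

```python
import os

def load_credentials(env=os.environ) -> tuple[str, str]:
    """Read AxonFlow credentials from the environment.

    AXONFLOW_CLIENT_ID / AXONFLOW_CLIENT_SECRET are assumed
    variable names; adjust them to your deployment.
    """
    client_id = env.get("AXONFLOW_CLIENT_ID", "")
    client_secret = env.get("AXONFLOW_CLIENT_SECRET", "")
    if not client_id or not client_secret:
        raise RuntimeError(
            "AXONFLOW_CLIENT_ID and AXONFLOW_CLIENT_SECRET must be set"
        )
    return client_id, client_secret
```

The returned pair would then be passed as `client_id` and `client_secret` when constructing `AxonFlow(...)`, as in the examples above.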
## Next Steps
- Choose your SDK and follow the getting started guide
- Review Integration Modes to pick the right architecture
- Explore Framework Integration for LangChain, CrewAI, etc.