# SDK Integration
AxonFlow provides official SDKs for seamless integration with your applications.
## Available SDKs
| SDK | Package | Version | Status |
|---|---|---|---|
| Python | axonflow | 6.9.0 | Stable |
| TypeScript | @axonflow/sdk | 6.2.0 | Stable |
| Go | github.com/getaxonflow/axonflow-sdk-go/v6 | 6.0.0 | Stable |
| Java | com.getaxonflow:axonflow-sdk | 6.2.0 | Stable |
| Rust | axonflow-sdk-rust | 0.1.0 | Preview |
## Requirements
| SDK | Language Version | Package Manager |
|---|---|---|
| Python | Python 3.10+ | pip |
| TypeScript | Node.js 14+ | npm / yarn |
| Go | Go 1.21+ | go modules |
| Java | Java 11+ | Maven / Gradle |
| Rust | Rust 1.78+ | Cargo |
## Integration Modes
AxonFlow supports two integration modes depending on your architecture:
| Mode | Description | Best For |
|---|---|---|
| Proxy Mode | Route LLM calls through AxonFlow | Quick setup, full governance |
| Gateway Mode | Pre-flight policy checks, direct LLM calls | Low latency, existing infrastructure |
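In code terms, the difference is where the LLM call happens. The sketch below is purely schematic — `StubGateway` and its methods are stand-ins for illustration, not the AxonFlow SDK's API:

```python
class StubGateway:
    """Stand-in for the AxonFlow control plane (illustration only)."""

    def __init__(self):
        self.audited = []

    def proxy_llm_call(self, prompt):
        # Proxy Mode: policy check, LLM call, and audit happen server-side.
        return f"(governed) {prompt}"

    def pre_check(self, prompt):
        # Gateway Mode step 1: pre-flight policy approval.
        return {"approved": prompt}

    def audit(self, ctx, answer):
        # Gateway Mode step 3: record the exchange for compliance.
        self.audited.append((ctx["approved"], answer))


def proxy_mode(gateway, prompt):
    # One hop: AxonFlow sits between you and the LLM.
    return gateway.proxy_llm_call(prompt)


def gateway_mode(gateway, llm, prompt):
    # Three steps: you own the LLM call in the middle.
    ctx = gateway.pre_check(prompt)   # 1. policy approval
    answer = llm(ctx["approved"])     # 2. direct LLM call (your code)
    gateway.audit(ctx, answer)        # 3. audit for compliance
    return answer
```

Gateway Mode trades one extra round-trip (the pre-check) for keeping the latency-critical LLM call entirely in your own infrastructure.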
## Feature Support
The four stable SDKs (Python / TypeScript / Go / Java) ship the full feature set. The Rust SDK is in preview at v0.1.0 with the baseline shown below; subsequent Rust releases will fill in the rest.
| Feature | Python | TypeScript | Go | Java | Rust (preview) |
|---|---|---|---|---|---|
| Proxy Mode | Yes | Yes | Yes | Yes | Yes |
| Gateway Mode (pre-check + audit) | Yes | Yes | Yes | Yes | Audit only |
| System Policy Management | Yes | Yes | Yes | Yes | No |
| Tenant Policy Management | Yes | Yes | Yes | Yes | No |
| Connector Management | Yes | Yes | Yes | Yes | List + query + install |
| Multi-Agent Planning (MAP) | Yes | Yes | Yes | Yes | Generate + execute + status + cancel |
| Audit Log Search | Yes | Yes | Yes | Yes | No |
| Health Check | Yes | Yes | Yes | Yes | No |
| failWorkflow() | Yes | Yes | Yes | Yes | No |
| HITL Queue API (Enterprise) | Yes | Yes | Yes | Yes | No |
| OpenAI Interceptor (invisible governance) | Yes | Yes | Yes | Yes | Yes |
Not sure which to choose? See Choosing a Mode.
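The "OpenAI Interceptor" row refers to governance applied transparently around an existing client. As a conceptual sketch only — none of these names are the SDK's actual API — the pattern is a wrapper that runs a policy check before the call and an audit hook after it:

```python
def govern(call, pre_check, audit):
    """Wrap an LLM call so every request is policy-checked and audited.

    `call`, `pre_check`, and `audit` are caller-supplied stand-ins;
    this only illustrates the interceptor pattern, not AxonFlow's API.
    """
    def governed(prompt):
        if not pre_check(prompt):
            raise PermissionError("request blocked by policy")
        result = call(prompt)
        audit(prompt, result)  # record the exchange for compliance
        return result
    return governed


# Toy usage: block prompts containing "secret", audit everything else.
audit_log = []
safe_call = govern(
    call=lambda p: f"answer to: {p}",
    pre_check=lambda p: "secret" not in p,
    audit=lambda p, r: audit_log.append((p, r)),
)
```

Because the wrapper preserves the call signature, application code keeps calling what looks like the original client — hence "invisible" governance.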
## Quick Example (Python)

### Async Usage (Recommended)
```python
import asyncio

from axonflow import AxonFlow

async def main():
    async with AxonFlow(
        endpoint="https://your-axonflow.example.com",
        client_id="your-client-id",
        client_secret="your-client-secret",
    ) as client:
        # Execute a governed query
        response = await client.proxy_llm_call(
            user_token="user-jwt-token",
            query="What is AI governance?",
            request_type="chat",
        )
        print(response.data)

asyncio.run(main())
```
### Sync Usage
```python
from axonflow import AxonFlow

with AxonFlow.sync(
    endpoint="https://your-axonflow.example.com",
    client_id="your-client-id",
    client_secret="your-client-secret",
) as client:
    response = client.proxy_llm_call(
        user_token="user-jwt-token",
        query="What is AI governance?",
        request_type="chat",
    )
    print(response.data)
```
### Gateway Mode (Lowest Latency)
Gateway Mode lets you make direct LLM calls while AxonFlow handles governance:
```python
import asyncio

from axonflow import AxonFlow, TokenUsage

async def main():
    async with AxonFlow(
        endpoint="https://your-axonflow.example.com",
        client_id="your-client-id",
        client_secret="your-client-secret",
    ) as client:
        # Pre-check: get policy approval before the LLM call
        context = await client.get_policy_approved_context(
            user_token="user-jwt-token",
            query="Analyze this data",
        )

        # Make your own LLM call directly (your_llm_call is your code)
        llm_response = await your_llm_call(context.approved_data)

        # Audit: log the response for compliance
        await client.audit_llm_call(
            context_id=context.context_id,
            response_summary=str(llm_response)[:200],
            provider="openai",
            model="gpt-4",
            token_usage=TokenUsage(
                prompt_tokens=50, completion_tokens=100, total_tokens=150
            ),
            latency_ms=500,
        )

asyncio.run(main())
```
## Authentication
All SDKs support authentication via:
- **Client ID + Secret** - for server-to-server communication
- **User Tokens** - for per-user audit trails
See Authentication for details.
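For server-side use, credentials are typically supplied via environment variables rather than hard-coded. A minimal helper sketch — the `AXONFLOW_*` variable names and the helper itself are assumptions, not an SDK convention:

```python
import os

def load_axonflow_config(env=None):
    """Collect AxonFlow connection settings from the environment.

    The AXONFLOW_* variable names here are illustrative assumptions;
    check your deployment's own conventions.
    """
    env = os.environ if env is None else env
    cfg = {
        "endpoint": env.get("AXONFLOW_ENDPOINT"),
        "client_id": env.get("AXONFLOW_CLIENT_ID"),
        "client_secret": env.get("AXONFLOW_CLIENT_SECRET"),
    }
    missing = [key for key, value in cfg.items() if not value]
    if missing:
        raise RuntimeError("missing AxonFlow settings: " + ", ".join(missing))
    return cfg
```

The resulting dict matches the constructor arguments shown above, so it can be splatted in directly: `AxonFlow(**load_axonflow_config())`.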
## Next Steps
- Choose your SDK and follow the getting started guide
- Review Integration Modes to pick the right architecture
- Explore Framework Integration for LangChain, CrewAI, etc.
- See AI Agent Runtimes for OpenClaw, Anthropic Computer Use, and Claude Agent SDK
