# SDK Integration
AxonFlow provides official SDKs for seamless integration with your applications.
## Available SDKs
| SDK | Package | Version | Status |
|---|---|---|---|
| Python | axonflow | 3.2.0 | Stable |
| TypeScript | @axonflow/sdk | 3.2.0 (source); 2.3.0 on npm | Stable |
| Go | github.com/getaxonflow/axonflow-sdk-go/v3 | v3.2.0 | Stable |
| Java | com.getaxonflow:axonflow-sdk | 3.2.0 | Stable |
## Requirements
| SDK | Language Version | Package Manager |
|---|---|---|
| Python | Python 3.8+ | pip |
| TypeScript | Node.js 18+ | npm / yarn |
| Go | Go 1.21+ | go modules |
| Java | Java 11+ | Maven / Gradle |
## Integration Modes
AxonFlow supports two integration modes depending on your architecture:
| Mode | Description | Best For |
|---|---|---|
| Proxy Mode | Route LLM calls through AxonFlow | Quick setup, full governance |
| Gateway Mode | Pre-flight policy checks, direct LLM calls | Low latency, existing infrastructure |
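The difference in call shape can be sketched with a stub (illustrative only, not the real SDK; the method names mirror the examples later on this page):

```python
# Illustrative stub showing the call shape of each mode.
class StubClient:
    def execute_query(self, user_token, query, request_type):
        # Proxy Mode: one call; AxonFlow applies policy, calls the LLM, audits.
        return {"data": f"governed answer to: {query}"}

    def get_policy_approved_context(self, prompt, request_type):
        # Gateway Mode step 1: policy pre-check only, no LLM call yet.
        return {"context_id": "ctx-1", "approved_prompt": prompt}

    def audit_llm_response(self, context_id, response):
        # Gateway Mode step 3: record the response for the audit trail.
        return {"audited": context_id}

client = StubClient()

# Proxy Mode: a single governed round trip.
proxy_result = client.execute_query("user-jwt", "What is AI governance?", "chat")

# Gateway Mode: pre-check, call your own LLM, then audit.
ctx = client.get_policy_approved_context("What is AI governance?", "chat")
llm_response = f"direct answer to: {ctx['approved_prompt']}"  # your own LLM call
audit = client.audit_llm_response(ctx["context_id"], llm_response)
```

Proxy Mode trades one extra network hop for zero integration changes to your LLM calls; Gateway Mode keeps your existing LLM path and adds two lightweight governance calls around it.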
## Feature Support
All SDKs support the core feature set:
| Feature | Python | TypeScript | Go | Java |
|---|---|---|---|---|
| Proxy Mode (executeQuery) | Yes | Yes | Yes | Yes |
| Gateway Mode (pre-check + audit) | Yes | Yes | Yes | Yes |
| Static Policy Management | Yes | Yes | Yes | Yes |
| Dynamic Policy Management | Yes | Yes | Yes | Yes |
| Connector Management | Yes | Yes | Yes | Yes |
| Multi-Agent Planning (MAP) | Yes | Yes | Yes | Yes |
| Audit Log Search | Yes | Yes | Yes | Yes |
| Health Check | Yes | Yes | Yes | Yes |
Not sure which to choose? See Choosing a Mode.
## Quick Example (Python)

### Async Usage (Recommended)
```python
import asyncio

from axonflow import AxonFlow

async def main():
    async with AxonFlow(
        endpoint="https://your-axonflow.example.com",
        client_id="your-client-id",
        client_secret="your-client-secret",
    ) as client:
        # Execute a governed query
        response = await client.execute_query(
            user_token="user-jwt-token",
            query="What is AI governance?",
            request_type="chat",
        )
        print(response.data)

asyncio.run(main())
```
### Sync Usage
```python
from axonflow import AxonFlow

with AxonFlow.sync(
    endpoint="https://your-axonflow.example.com",
    client_id="your-client-id",
    client_secret="your-client-secret",
) as client:
    response = client.execute_query(
        user_token="user-jwt-token",
        query="What is AI governance?",
        request_type="chat",
    )
    print(response.data)
```
## Gateway Mode (Lowest Latency)
Gateway Mode lets you make direct LLM calls while AxonFlow handles governance:
```python
import asyncio

from axonflow import AxonFlow

async def main():
    async with AxonFlow(
        endpoint="https://your-axonflow.example.com",
        client_id="your-client-id",
        client_secret="your-client-secret",
    ) as client:
        # Pre-check: get policy approval before the LLM call
        context = await client.get_policy_approved_context(
            prompt="Analyze this data",
            request_type="chat",
        )

        # Make your own LLM call directly
        llm_response = await your_llm_call(context.approved_prompt)

        # Audit: log the response for compliance
        await client.audit_llm_response(
            context_id=context.context_id,
            response=llm_response,
        )

asyncio.run(main())
```
## Authentication
All SDKs support authentication via:
- Client ID + Secret - For server-to-server communication
- User Tokens - For per-user audit trails
See Authentication for details.
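The two credentials play different roles, which a stub mirroring the documented constructor and `execute_query` signature can illustrate (illustrative only, not the real SDK):

```python
# Illustrative stub: shows where each credential is used.
class StubAxonFlow:
    def __init__(self, endpoint, client_id, client_secret):
        # Client ID + secret authenticate your service to AxonFlow.
        self.endpoint = endpoint
        self.client_id = client_id
        self.client_secret = client_secret

    def execute_query(self, user_token, query, request_type):
        # The user token ties this request to an end user for the audit trail.
        return {"service": self.client_id, "user": user_token, "query": query}

client = StubAxonFlow(
    endpoint="https://your-axonflow.example.com",
    client_id="your-client-id",          # server-to-server identity
    client_secret="your-client-secret",
)
entry = client.execute_query(
    user_token="user-jwt-token",         # per-user audit identity
    query="What is AI governance?",
    request_type="chat",
)
```

The client credentials are fixed at construction time and identify your application; the user token varies per request and attributes each query to an end user.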
## Next Steps
- Choose your SDK and follow the getting started guide
- Review Integration Modes to pick the right architecture
- Explore Framework Integration for LangChain, CrewAI, etc.