SDK Integration

AxonFlow provides official SDKs for seamless integration with your applications.

Available SDKs

| SDK | Package | Version | Status |
|-----|---------|---------|--------|
| Python | axonflow | 3.2.0 | Stable |
| TypeScript | @axonflow/sdk | 3.2.0 (source), npm 2.3.0 | Stable |
| Go | github.com/getaxonflow/axonflow-sdk-go/v3 | v3.2.0 | Stable |
| Java | com.getaxonflow:axonflow-sdk | 3.2.0 | Stable |

Requirements

| SDK | Language Version | Package Manager |
|-----|------------------|-----------------|
| Python | Python 3.8+ | pip |
| TypeScript | Node.js 18+ | npm / yarn |
| Go | Go 1.21+ | go modules |
| Java | Java 11+ | Maven / Gradle |

Integration Modes

AxonFlow supports two integration modes depending on your architecture:

| Mode | Description | Best For |
|------|-------------|----------|
| Proxy Mode | Route LLM calls through AxonFlow | Quick setup, full governance |
| Gateway Mode | Pre-flight policy checks, direct LLM calls | Low latency, existing infrastructure |
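The difference between the two call flows can be sketched in plain Python. The functions below are illustrative stubs, not real SDK calls (the actual client methods are shown in the examples further down):

```python
# Stand-in stubs to illustrate the two integration modes.
# None of these names are real AxonFlow SDK calls.

def axonflow_execute_query(query: str) -> str:
    # Proxy Mode: AxonFlow applies policy, calls the LLM,
    # audits the exchange, and returns the result in one call.
    return f"governed({query})"

def axonflow_pre_check(prompt: str) -> str:
    # Gateway Mode, step 1: policy approval only; no LLM call yet.
    return prompt

def my_llm_call(prompt: str) -> str:
    # Gateway Mode, step 2: your own direct LLM call (stubbed here).
    return f"llm({prompt})"

def axonflow_audit(response: str) -> None:
    # Gateway Mode, step 3: log the response for compliance.
    pass

# Proxy Mode: one call, AxonFlow sits in the data path.
proxy_result = axonflow_execute_query("What is AI governance?")

# Gateway Mode: three steps, the LLM call stays in your infrastructure.
approved = axonflow_pre_check("What is AI governance?")
gateway_result = my_llm_call(approved)
axonflow_audit(gateway_result)
```

Proxy Mode trades an extra network hop for a single-call integration; Gateway Mode keeps the LLM call on your own path at the cost of orchestrating the pre-check and audit steps yourself.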

Feature Support

All SDKs support the core feature set:

| Feature | Python | TypeScript | Go | Java |
|---------|--------|------------|----|------|
| Proxy Mode (executeQuery) | Yes | Yes | Yes | Yes |
| Gateway Mode (pre-check + audit) | Yes | Yes | Yes | Yes |
| Static Policy Management | Yes | Yes | Yes | Yes |
| Dynamic Policy Management | Yes | Yes | Yes | Yes |
| Connector Management | Yes | Yes | Yes | Yes |
| Multi-Agent Planning (MAP) | Yes | Yes | Yes | Yes |
| Audit Log Search | Yes | Yes | Yes | Yes |
| Health Check | Yes | Yes | Yes | Yes |

Not sure which to choose? See Choosing a Mode.

Quick Example (Python)

```python
import asyncio
from axonflow import AxonFlow

async def main():
    async with AxonFlow(
        endpoint="https://your-axonflow.example.com",
        client_id="your-client-id",
        client_secret="your-client-secret"
    ) as client:
        # Execute a governed query
        response = await client.execute_query(
            user_token="user-jwt-token",
            query="What is AI governance?",
            request_type="chat"
        )
        print(response.data)

asyncio.run(main())
```

Sync Usage

```python
from axonflow import AxonFlow

with AxonFlow.sync(
    endpoint="https://your-axonflow.example.com",
    client_id="your-client-id",
    client_secret="your-client-secret"
) as client:
    response = client.execute_query(
        user_token="user-jwt-token",
        query="What is AI governance?",
        request_type="chat"
    )
    print(response.data)
```

Gateway Mode (Lowest Latency)

Gateway Mode lets you make direct LLM calls while AxonFlow handles governance:

```python
import asyncio
from axonflow import AxonFlow

async def main():
    async with AxonFlow(
        endpoint="https://your-axonflow.example.com",
        client_id="your-client-id",
        client_secret="your-client-secret"
    ) as client:
        # Pre-check: get policy approval before the LLM call
        context = await client.get_policy_approved_context(
            prompt="Analyze this data",
            request_type="chat"
        )

        # Make your own LLM call directly
        llm_response = await your_llm_call(context.approved_prompt)

        # Audit: log the response for compliance
        await client.audit_llm_response(
            context_id=context.context_id,
            response=llm_response
        )

asyncio.run(main())
```

Authentication

All SDKs support authentication via:

  • Client ID + Secret - For server-to-server communication
  • User Tokens - For per-user audit trails

See Authentication for details.
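A common pattern (an assumption on our part, not something the SDK requires) is to load the server-to-server credentials from environment variables at startup, while the user token is supplied per request:

```python
import os

# Hypothetical helper: read client credentials from the environment
# instead of hard-coding them. The variable names are illustrative.
def load_axonflow_config() -> dict:
    return {
        "endpoint": os.environ.get(
            "AXONFLOW_ENDPOINT", "https://your-axonflow.example.com"
        ),
        "client_id": os.environ.get("AXONFLOW_CLIENT_ID", ""),
        "client_secret": os.environ.get("AXONFLOW_CLIENT_SECRET", ""),
    }

config = load_axonflow_config()
# Note: the user token is deliberately NOT part of the client config.
# It is passed on each execute_query call, so every request is
# attributed to the correct end user in the audit trail.
```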

Next Steps