Rust SDK — Getting Started
Current Version: 0.1.0 (preview)
The Rust SDK is in preview. It covers a baseline subset of the surface available in the established Python / TypeScript / Go / Java SDKs (Proxy Mode, Audit, basic MAP, basic MCP, OpenAI interceptor). Subsequent releases will add the full governance / workflow / cost / compliance surface.
Track upcoming work in the Rust SDK issues.
Installation
Add this to your Cargo.toml:
[dependencies]
axonflow-sdk-rust = "0.1"
serde_json = "1"
tokio = { version = "1", features = ["full"] }
Requires Rust 1.78+.
Quick Start
Use Proxy Mode when you want AxonFlow to evaluate policy and forward the call in a single SDK call.
use axonflow_sdk_rust::{AxonFlowClient, AxonFlowConfig};
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = AxonFlowConfig::new(std::env::var("AXONFLOW_ENDPOINT")?)
        .with_auth(
            std::env::var("AXONFLOW_CLIENT_ID")?,
            std::env::var("AXONFLOW_CLIENT_SECRET")?,
        );
    let client = AxonFlowClient::new(config)?;

    let mut context = HashMap::new();
    context.insert("temperature".to_string(), serde_json::json!(0.2));
    context.insert("max_tokens".to_string(), serde_json::json!(120));

    let response = client
        .proxy_llm_call(
            "user-token",
            "What is the capital of France?",
            "chat",
            context,
        )
        .await?;

    if response.blocked {
        eprintln!("Blocked: {}", response.block_reason.unwrap_or_default());
    } else if response.success {
        println!("{:?}", response.data);
    }

    Ok(())
}
The SDK speaks HTTP Basic auth. With no credentials configured, it defaults to Basic base64("community:"), so it works out of the box against a self-hosted community deployment. With with_auth(...), it sends Basic base64("client_id:client_secret").
For a self-hosted community deployment without auth, just construct the client with no credentials:
let client = AxonFlowClient::new(AxonFlowConfig::new("http://localhost:8080"))?;
For enterprise mode with a license key:
let config = AxonFlowConfig::new(std::env::var("AXONFLOW_ENDPOINT")?)
    .with_auth(client_id, client_secret)
    .with_license_key(std::env::var("AXONFLOW_LICENSE_KEY")?);
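Either way, what goes on the wire is a standard Authorization header. A dependency-free sketch of its construction (the base64 helper is hand-rolled here purely for illustration; the SDK's internals may differ):

```rust
// Minimal base64 encoder, just enough to show the header value the SDK sends.
const ALPHABET: &[u8; 64] =
    b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

fn base64(input: &[u8]) -> String {
    let mut out = String::new();
    for chunk in input.chunks(3) {
        // Pack up to 3 bytes into 24 bits, then emit four 6-bit symbols.
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = ((b[0] as u32) << 16) | ((b[1] as u32) << 8) | b[2] as u32;
        for (i, sym) in [n >> 18, (n >> 12) & 63, (n >> 6) & 63, n & 63]
            .iter()
            .enumerate()
        {
            if i <= chunk.len() {
                out.push(ALPHABET[*sym as usize] as char);
            } else {
                out.push('='); // pad short final chunks
            }
        }
    }
    out
}

fn basic_auth_header(client_id: &str, client_secret: &str) -> String {
    format!("Basic {}", base64(format!("{client_id}:{client_secret}").as_bytes()))
}

fn main() {
    // The default community credentials produce base64("community:").
    println!("{}", basic_auth_header("community", ""));
    // → Basic Y29tbXVuaXR5Og==
}
```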
Integration Modes
Proxy Mode
proxy_llm_call evaluates policy and forwards to the orchestrator's configured LLM provider in one call. See Proxy Mode for the cross-SDK overview.
Gateway Mode (audit-only)
If you call your LLM directly and only want to log calls for compliance:
use axonflow_sdk_rust::{AuditRequest, TokenUsage};
client
    .audit_llm_call(&AuditRequest {
        context_id: "request-id-from-your-llm".to_string(),
        response_summary: "Summary of the response".to_string(),
        provider: "openai".to_string(),
        model: "gpt-4".to_string(),
        token_usage: TokenUsage {
            prompt_tokens: 100,
            completion_tokens: 50,
            total_tokens: 150,
        },
        latency_ms: 250,
        metadata: None,
    })
    .await?;
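Since AxonFlow never sees the request in this mode, latency_ms and token_usage come from your own call path. A minimal sketch of gathering them around a direct provider call (the provider call itself is elided; keep total_tokens equal to prompt plus completion):

```rust
use std::time::Instant;

// Token counts come from the provider's response; keeping the total derived
// (rather than hard-coded) avoids inconsistent audit records.
fn usage_total(prompt_tokens: u32, completion_tokens: u32) -> u32 {
    prompt_tokens + completion_tokens
}

fn main() {
    let start = Instant::now();
    // ... your direct LLM provider call happens here ...
    let latency_ms = start.elapsed().as_millis() as u64;

    let prompt_tokens = 100;
    let completion_tokens = 50;
    let total_tokens = usage_total(prompt_tokens, completion_tokens);

    println!("latency_ms={latency_ms} total_tokens={total_tokens}");
}
```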
Invisible Governance via the OpenAI Interceptor
If you have an OpenAI-compatible client, wrap it. AxonFlow runs a policy pre-check before each call, blocks on violations, and audits asynchronously after the response.
use axonflow_sdk_rust::interceptors::openai::{
    ChatCompletionRequest, ChatMessage, OpenAIChatCompleter, WrappedOpenAIClient,
};

// Implement OpenAIChatCompleter for your existing client, then:
let governed = WrappedOpenAIClient::new(my_openai_client, axonflow_client, "user-123");

let response = governed
    .create_chat_completion(ChatCompletionRequest {
        model: "gpt-4".to_string(),
        messages: vec![ChatMessage {
            role: "user".to_string(),
            content: "Hello".to_string(),
        }],
        ..Default::default()
    })
    .await?;
Multi-Agent Planning (MAP)
let plan = client
    .generate_plan("Plan a 3-day trip to Paris", "travel", None)
    .await?;
println!("Plan {} with {} steps", plan.plan_id, plan.steps.len());

let execution = client.execute_plan(&plan.plan_id, None).await?;
println!("Status: {}", execution.status);

let status = client.get_plan_status(&plan.plan_id).await?;
let _ = client.cancel_plan(&plan.plan_id, Some("user_cancelled")).await?;
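Long-running plans may not be finished when execute_plan returns, so callers often poll get_plan_status until a terminal state. A synchronous sketch of that loop (the terminal status strings and polling cadence here are assumptions, not documented values; the real SDK call is async, so substitute tokio::time::sleep for thread::sleep):

```rust
use std::{thread, time::Duration};

/// Poll a status source until it reaches a terminal state or gives up.
/// Returns the terminal status, or None if the plan was still running
/// after `max_polls` attempts.
fn wait_for_terminal<F>(mut fetch_status: F, max_polls: u32, interval: Duration) -> Option<String>
where
    F: FnMut() -> String,
{
    for _ in 0..max_polls {
        let status = fetch_status();
        // Assumed terminal states; check your deployment's actual values.
        if matches!(status.as_str(), "completed" | "failed" | "cancelled") {
            return Some(status);
        }
        thread::sleep(interval);
    }
    None
}

fn main() {
    // Simulated status sequence standing in for successive get_plan_status calls.
    let mut calls = 0;
    let result = wait_for_terminal(
        || {
            calls += 1;
            if calls < 3 { "running".to_string() } else { "completed".to_string() }
        },
        10,
        Duration::from_millis(1),
    );
    println!("final status: {:?}", result); // → final status: Some("completed")
}
```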
MCP Connectors
let connectors = client.list_connectors().await?;
for conn in &connectors {
    println!("{} ({}) — installed: {}", conn.name, conn.r#type, conn.installed);
}
Configuration
use std::time::Duration;
use axonflow_sdk_rust::{AxonFlowConfig, RetryConfig, CacheConfig, Mode};
let config = AxonFlowConfig::new("http://localhost:8080")
    .with_auth("client-id", "client-secret")
    .with_mode(Mode::Production)                // Production = fail-open on 5xx; Sandbox = propagate
    .with_timeout(Duration::from_secs(30))      // for non-MAP requests
    .with_map_timeout(Duration::from_secs(120)) // for plan generation/execution
    .with_retry(RetryConfig {
        enabled: true,
        max_attempts: 3,
        initial_delay: Duration::from_secs(1),
    })
    .with_cache(CacheConfig {
        enabled: true,
        ttl: Duration::from_secs(60),
    });
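The retry settings leave the backoff curve implicit. Assuming the common exponential-doubling strategy (an assumption, not a documented guarantee of this SDK), the delay before each retry can be sketched as:

```rust
use std::time::Duration;

/// Delay before retry attempt `attempt` (attempt 1 = first retry), assuming
/// simple exponential doubling from the configured initial delay.
fn retry_delay(initial: Duration, attempt: u32) -> Duration {
    initial * 2u32.pow(attempt.saturating_sub(1))
}

fn main() {
    // With max_attempts: 3 and initial_delay: 1s, the schedule would be 1s, 2s, 4s.
    let initial = Duration::from_secs(1);
    let delays: Vec<u64> = (1..=3).map(|a| retry_delay(initial, a).as_secs()).collect();
    println!("{:?}", delays); // → [1, 2, 4]
}
```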
Error Handling
use axonflow_sdk_rust::AxonFlowError;
match client.list_connectors().await {
    Ok(connectors) => println!("{}", connectors.len()),
    Err(AxonFlowError::ApiError { status: 403, message }) => {
        eprintln!("Policy violation: {}", message);
    }
    Err(AxonFlowError::ApiError { status: 429, .. }) => {
        eprintln!("Rate limited; back off and retry");
    }
    Err(AxonFlowError::Unavailable(_)) => {
        eprintln!("Platform unavailable (Production mode would fail-open here)");
    }
    Err(e) => eprintln!("Other error: {}", e),
}
Telemetry Opt-Out
The SDK sends an anonymous heartbeat at most once per machine every 7 days for licensing compliance. To disable:
export AXONFLOW_TELEMETRY=off
Scope: AXONFLOW_TELEMETRY=off disables the heartbeat described above. On self-hosted and in-VPC deployments, that heartbeat is the only data the SDK sends to AxonFlow. On Community SaaS (try.getaxonflow.com) the hosted service also processes operational data (registrations, audit logs, policy enforcement records, workflow state, plan data, request-header metadata aggregated for usage analytics) as part of running the platform; that flow is governed by the Privacy Policy, not by this env var. See Telemetry for full details.
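The opt-out check itself is simple to model. A sketch, assuming the SDK matches the documented value "off" exactly (other spellings such as "0" or "false" are not documented to work):

```rust
/// Returns whether the heartbeat should be sent, given the raw value of
/// AXONFLOW_TELEMETRY (None = variable unset). Only "off" is documented
/// as disabling telemetry; this sketch mirrors exactly that.
fn telemetry_enabled(raw: Option<&str>) -> bool {
    raw.map(str::trim) != Some("off")
}

fn main() {
    let value = std::env::var("AXONFLOW_TELEMETRY").ok();
    println!("heartbeat enabled: {}", telemetry_enabled(value.as_deref()));
}
```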
What's not in v0.1.0 yet
This is a preview release. The following surfaces from the established SDKs are coming in subsequent releases:
- health_check, execute_query, get_policy_approved_context
- Full MAP: resume_plan, rollback_plan, update_plan, get_plan_versions
- MCP guardrails: mcp_check_input / mcp_check_output
- Gemini / Bedrock / Ollama interceptors (OpenAI and Anthropic supported)
- Governance: policy CRUD + simulation, HITL queue (decision history: list_decisions and explain_decisions ship in v0.2.0)
- Workflows + executions, cost / budgets / circuit breaker
- MASFEAT compliance, webhooks, audit search
Track progress on the Rust SDK issues.
Repository
- GitHub: getaxonflow/axonflow-sdk-rust
- crates.io: axonflow-sdk-rust (planned)
- docs.rs: docs.rs/axonflow-sdk-rust (after first crates.io publish)
- License: MIT
