Writing Tests for AxonFlow
This guide explains how to write effective tests when contributing to AxonFlow. The most important rule is to test the behavior that real users depend on. For AxonFlow, that often means request handling, policy outcomes, connector behavior, workflow execution, and audit visibility, not just pure helper functions.
Test File Conventions
Naming Patterns
| Test Type | File Pattern | Example |
|---|---|---|
| Unit tests | *_test.go | policy_test.go |
| Integration tests | *_integration_test.go | db_integration_test.go |
| Benchmarks | *_bench_test.go | agent_bench_test.go |
File Location
Tests should be co-located with the code they test:
```
platform/agent/
├── policy.go          # Implementation
├── policy_test.go     # Unit tests for policy.go
├── db_auth.go         # Implementation
└── db_auth_test.go    # Unit tests for db_auth.go
```
Writing Unit Tests
Basic Structure
```go
package agent

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestFunctionName_Scenario_ExpectedBehavior(t *testing.T) {
	input := "test input"
	expected := "expected output"

	result := FunctionUnderTest(input)

	assert.Equal(t, expected, result)
}
```
Test Naming Convention
Use descriptive names that explain what is being tested:
```go
func TestValidatePolicy_EmptyInput_ReturnsError(t *testing.T)
func TestValidatePolicy_ValidJSON_ParsesSuccessfully(t *testing.T)
func TestValidatePolicy_MissingRequiredField_ReturnsValidationError(t *testing.T)
```
Table-Driven Tests
Table-driven tests are especially useful in AxonFlow because the same code path often needs to cover multiple policy categories, tenant states, connector types, or routing decisions.
```go
func TestValidateInput(t *testing.T) {
	tests := []struct {
		name     string
		input    string
		expected bool
		errMsg   string
	}{
		{
			name:     "valid input",
			input:    "hello",
			expected: true,
		},
		{
			name:     "empty input",
			input:    "",
			expected: false,
			errMsg:   "input cannot be empty",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result, err := ValidateInput(tt.input)

			assert.Equal(t, tt.expected, result)
			if tt.errMsg != "" {
				require.Error(t, err)
				assert.Contains(t, err.Error(), tt.errMsg)
			} else {
				require.NoError(t, err)
			}
		})
	}
}
```
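The test above assumes a `ValidateInput` with the signature `func(string) (bool, error)`. For reference, here is a minimal sketch that would satisfy those table cases; `ValidateInput` here is hypothetical, not AxonFlow's real validator:

```go
package main

import (
	"errors"
	"fmt"
)

// ValidateInput is a hypothetical implementation consistent with the
// table-driven test above: non-empty input is valid, empty input
// returns false plus an error naming the problem.
func ValidateInput(input string) (bool, error) {
	if input == "" {
		return false, errors.New("input cannot be empty")
	}
	return true, nil
}

func main() {
	ok, err := ValidateInput("hello")
	fmt.Println(ok, err) // true <nil>

	ok, err = ValidateInput("")
	fmt.Println(ok, err) // false input cannot be empty
}
```

Writing the implementation so each table row maps to exactly one branch keeps the test and the code easy to audit side by side.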
Mocking External Dependencies
Database Mocking with go-sqlmock
```go
func TestGetUser_ReturnsUser(t *testing.T) {
	db, mock, err := sqlmock.New()
	require.NoError(t, err)
	defer db.Close()

	rows := sqlmock.NewRows([]string{"id", "name", "email"}).
		AddRow(1, "John Doe", "john@example.com")

	// Quote the query so regex metacharacters like "?" match literally.
	mock.ExpectQuery(regexp.QuoteMeta("SELECT id, name, email FROM users WHERE id = ?")).
		WithArgs(1).
		WillReturnRows(rows)

	repo := NewUserRepository(db)
	user, err := repo.GetUser(1)

	require.NoError(t, err)
	assert.Equal(t, "John Doe", user.Name)
	require.NoError(t, mock.ExpectationsWereMet())
}
```
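When pulling in go-sqlmock is overkill, the same seam can be covered with a hand-rolled fake behind an interface. The sketch below uses hypothetical `User`, `UserStore`, and `GreetUser` names to show the pattern; AxonFlow's real repositories will differ:

```go
package main

import (
	"errors"
	"fmt"
)

// User and UserStore are illustrative names, not AxonFlow types.
type User struct {
	ID   int
	Name string
}

// UserStore is the seam: production code depends on this interface,
// so tests can substitute an in-memory fake instead of a database.
type UserStore interface {
	GetUser(id int) (*User, error)
}

// fakeUserStore is a hand-rolled in-memory fake for unit tests.
type fakeUserStore struct {
	users map[int]*User
}

func (f *fakeUserStore) GetUser(id int) (*User, error) {
	u, ok := f.users[id]
	if !ok {
		return nil, errors.New("user not found")
	}
	return u, nil
}

// GreetUser is example business logic that only sees the interface.
func GreetUser(s UserStore, id int) (string, error) {
	u, err := s.GetUser(id)
	if err != nil {
		return "", err
	}
	return "Hello, " + u.Name, nil
}

func main() {
	store := &fakeUserStore{users: map[int]*User{1: {ID: 1, Name: "John Doe"}}}
	msg, _ := GreetUser(store, 1)
	fmt.Println(msg) // Hello, John Doe
}
```

Interface fakes are a good fit when the logic under test cares about outcomes ("found" vs. "not found") rather than exact SQL text; sqlmock is the better tool when the SQL itself is the contract.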
HTTP Mocking with httptest
Use httptest whenever the point of the test is request or response behavior. That is usually better than mocking tiny internal helpers, because it tests what an SDK or API consumer would actually observe.
```go
func TestExternalAPICall(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		assert.Equal(t, "GET", r.Method)
		assert.Equal(t, "/api/data", r.URL.Path)

		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusOK)
		_, _ = w.Write([]byte(`{"status":"success","data":{"id":1}}`))
	}))
	defer server.Close()

	client := NewAPIClient(server.URL)
	result, err := client.FetchData()

	require.NoError(t, err)
	assert.Equal(t, 1, result.Data.ID)
}
```
Writing Integration Tests
Integration tests verify component interactions with real dependencies.
Build Tag Convention
```go
//go:build integration

package agent_test

import "testing"

func TestDatabaseIntegration(t *testing.T) {
	// Runs with: go test -tags=integration
}
```
In AxonFlow, integration tests are often the first place you catch:
- request-shape drift on `/api/request`
- policy evaluation differences between Agent and Orchestrator
- connector registration or auth regressions
- issues that only appear when PostgreSQL, Redis, and real HTTP paths live together
Practical Test Commands
```shell
# Focused unit tests
go test ./platform/agent/...
go test ./platform/orchestrator/...

# Full platform unit tests
go test ./platform/...

# Integration tests (community: use docker compose up -d)
# The docker-compose.test.yml file is available in the enterprise repository.
docker compose up -d
go test ./platform/... -tags=integration

# Benchmarks when performance matters
go test ./platform/... -bench=. -benchmem
```
What to Test for AxonFlow Features
When choosing what to write, use this mental checklist:
| Area | What good tests prove |
|---|---|
| System policies | the request is allowed, blocked, or redacted exactly as expected |
| Tenant policies | runtime configuration and tenant-specific behavior change outcomes correctly |
| MCP connectors | registration, auth, request translation, and output checks behave correctly |
| Gateway mode | pre-check and audit calls preserve the right contract |
| Proxy mode | the request path through /api/request is stable |
| Workflows and MAP | plan generation, execution, replay, and failure handling work end to end |
| Licensing and tier gates | limits and feature gates match the active tier |
This matters because a lot of regressions in AxonFlow are “API still compiles, but user-visible behavior changed.”
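To make the first rows of that table concrete, here is a toy sketch of asserting on policy outcomes as user-visible decisions. `Decision`, `evaluatePolicy`, and the rules themselves are invented for illustration and bear no relation to AxonFlow's actual policy engine:

```go
package main

import (
	"fmt"
	"strings"
)

// Decision is the user-visible outcome a policy test should assert on.
type Decision string

const (
	Allow  Decision = "allow"
	Block  Decision = "block"
	Redact Decision = "redact"
)

// evaluatePolicy is a toy rule set: requests containing an SSN-like
// marker are redacted, destructive SQL is blocked, the rest allowed.
func evaluatePolicy(request string) Decision {
	switch {
	case strings.Contains(request, "ssn:"):
		return Redact
	case strings.Contains(request, "DROP TABLE"):
		return Block
	default:
		return Allow
	}
}

func main() {
	for _, req := range []string{"select name from users", "DROP TABLE users", "ssn: 123-45-6789"} {
		fmt.Printf("%q -> %s\n", req, evaluatePolicy(req))
	}
}
```

A table-driven test over (request, expected Decision) pairs then proves "allowed, blocked, or redacted exactly as expected" directly, instead of asserting on internal rule-matching state.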
Test Quality Guidelines
- Prefer assertions on externally visible behavior over private implementation detail.
- Keep fixtures realistic enough to reflect the request shapes users actually send.
- When a bug came from a specific incident or regression, add a test that proves that exact case.
- If a feature is tier-gated, test both the allowed and denied paths when practical.
- If a request can be blocked by policy, assert on the blocked behavior explicitly rather than only on the happy path.
If your change touches docs or examples, also verify that the example still matches the current SDK method names and runtime endpoints. Docs regressions are often caught too late if they are treated as “not real tests.”
Before Submitting
Run these checks locally before pushing:
```shell
go test ./platform/...             # Unit tests
go test -race ./platform/...       # Race condition detection
golangci-lint run ./platform/...   # Linting
go test -cover ./platform/...      # Coverage check
```
CI enforces coverage thresholds (currently 76% for orchestrator, agent, and connector packages). If your change drops coverage, add tests rather than requesting a threshold reduction.
Benchmarks
Performance-sensitive code should include benchmarks:
```go
func BenchmarkPolicyEvaluation(b *testing.B) {
	engine := setupTestEngine()
	req := testRequest()

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		engine.Evaluate(req)
	}
}
```
Run benchmarks with `go test -bench=. -benchmem ./platform/...`.
