# Writing Tests for AxonFlow
This guide explains how to write effective tests when contributing to AxonFlow. Following these guidelines ensures your contributions meet our quality standards and can be merged efficiently.
## Test File Conventions

### Naming Patterns
| Test Type | File Pattern | Example |
|---|---|---|
| Unit tests | *_test.go | policy_test.go |
| Integration tests | *_integration_test.go | db_integration_test.go |
| Benchmarks | *_bench_test.go | agent_bench_test.go |
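Each test type is selected with the standard `go test` tooling. A quick sketch (assuming the `integration` build tag convention described later in this guide):

```bash
# Unit tests (the default)
go test ./...

# Integration tests: only compiled when the integration build tag is set
go test -tags=integration ./...

# Benchmarks: -bench selects benchmarks by regex; -run='^$' skips regular tests
go test -bench=. -run='^$' ./...
```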
### File Location

Tests should be co-located with the code they test:

```
platform/agent/
├── policy.go          # Implementation
├── policy_test.go     # Unit tests for policy.go
├── db_auth.go         # Implementation
└── db_auth_test.go    # Unit tests for db_auth.go
```
## Writing Unit Tests

### Basic Structure

```go
package agent

import (
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

func TestFunctionName_Scenario_ExpectedBehavior(t *testing.T) {
    // Arrange
    input := "test input"
    expected := "expected output"

    // Act
    result := FunctionUnderTest(input)

    // Assert
    assert.Equal(t, expected, result)
}
```
### Test Naming Convention

Use descriptive names that explain what's being tested:

```go
// Good: Clear scenario and expected behavior
func TestValidatePolicy_EmptyInput_ReturnsError(t *testing.T)
func TestValidatePolicy_ValidJSON_ParsesSuccessfully(t *testing.T)
func TestValidatePolicy_MissingRequiredField_ReturnsValidationError(t *testing.T)

// Bad: Vague or numbered names
func TestValidatePolicy(t *testing.T)
func TestValidatePolicy2(t *testing.T)
func TestValidatePolicyError(t *testing.T)
```
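Descriptive names also make it easy to run a single scenario with `go test -run`, which matches test names against a regular expression (the package path below is illustrative):

```bash
# Run one specific test
go test -run 'TestValidatePolicy_EmptyInput_ReturnsError' ./platform/agent/

# Run all ValidatePolicy scenarios
go test -run 'TestValidatePolicy_' ./platform/agent/
```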
### Table-Driven Tests

For testing multiple scenarios, use table-driven tests:

```go
func TestValidateInput(t *testing.T) {
    tests := []struct {
        name     string
        input    string
        expected bool
        errMsg   string
    }{
        {
            name:     "valid input",
            input:    "hello",
            expected: true,
            errMsg:   "",
        },
        {
            name:     "empty input",
            input:    "",
            expected: false,
            errMsg:   "input cannot be empty",
        },
        {
            name:     "input too long",
            input:    string(make([]byte, 1001)),
            expected: false,
            errMsg:   "input exceeds maximum length",
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            result, err := ValidateInput(tt.input)

            assert.Equal(t, tt.expected, result)
            if tt.errMsg != "" {
                require.Error(t, err)
                assert.Contains(t, err.Error(), tt.errMsg)
            } else {
                require.NoError(t, err)
            }
        })
    }
}
```
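Each table entry becomes a named subtest, so individual cases can be re-run on their own; `go test` replaces spaces in subtest names with underscores:

```bash
# Run only the "empty input" case
go test -run 'TestValidateInput/empty_input' ./...

# -v prints each subtest as it runs
go test -v -run 'TestValidateInput' ./...
```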
## Mocking External Dependencies

### Database Mocking with go-sqlmock

```go
import (
    "testing"

    "github.com/DATA-DOG/go-sqlmock"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

func TestGetUser_ReturnsUser(t *testing.T) {
    // Create mock database
    db, mock, err := sqlmock.New()
    require.NoError(t, err)
    defer db.Close()

    // Set up expectations
    rows := sqlmock.NewRows([]string{"id", "name", "email"}).
        AddRow(1, "John Doe", "john.doe@example.com")
    mock.ExpectQuery("SELECT id, name, email FROM users WHERE id = ?").
        WithArgs(1).
        WillReturnRows(rows)

    // Execute test
    repo := NewUserRepository(db)
    user, err := repo.GetUser(1)

    // Assert
    require.NoError(t, err)
    assert.Equal(t, "John Doe", user.Name)

    // Verify all expectations were met
    require.NoError(t, mock.ExpectationsWereMet())
}
```
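Note that go-sqlmock treats the string passed to `ExpectQuery` as a regular expression by default, which can match more queries than intended. Two ways to make the expectation stricter, sketched against the example above:

```go
// Option 1: escape the SQL so it is matched literally (requires the "regexp" import)
mock.ExpectQuery(regexp.QuoteMeta("SELECT id, name, email FROM users WHERE id = ?")).
    WithArgs(1).
    WillReturnRows(rows)

// Option 2: construct the mock with an exact-equality query matcher
db, mock, err := sqlmock.New(sqlmock.QueryMatcherOption(sqlmock.QueryMatcherEqual))
```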
### HTTP Mocking with httptest

```go
import (
    "net/http"
    "net/http/httptest"
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

func TestExternalAPICall(t *testing.T) {
    // Create mock server
    server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Verify request
        assert.Equal(t, "GET", r.Method)
        assert.Equal(t, "/api/data", r.URL.Path)

        // Return mock response
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusOK)
        w.Write([]byte(`{"status": "success", "data": {"id": 1}}`))
    }))
    defer server.Close()

    // Use server.URL as the API endpoint
    client := NewAPIClient(server.URL)
    result, err := client.FetchData()

    require.NoError(t, err)
    assert.Equal(t, 1, result.Data.ID)
}
```
### Context and Timeout Testing

```go
func TestOperation_Timeout(t *testing.T) {
    ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
    defer cancel()

    // Simulate slow operation
    _, err := SlowOperation(ctx)

    require.Error(t, err)
    assert.ErrorIs(t, err, context.DeadlineExceeded)
}
```
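For this test to behave as expected, the operation itself has to honor the context. `SlowOperation` is not defined in this guide; a minimal illustrative sketch of a context-aware version looks like this:

```go
func SlowOperation(ctx context.Context) (string, error) {
    select {
    case <-time.After(500 * time.Millisecond): // simulated work that outlasts the 100ms deadline
        return "done", nil
    case <-ctx.Done():
        // Surface context.DeadlineExceeded (or context.Canceled) to the caller
        return "", ctx.Err()
    }
}
```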
## Writing Integration Tests

Integration tests verify component interactions with real dependencies.

### Build Tag Convention

```go
//go:build integration

package agent_test

import (
    "testing"
)

func TestDatabaseIntegration(t *testing.T) {
    // This test only runs with: go test -tags=integration
}
```

Note the blank line after the `//go:build` directive; without it the build constraint is not recognized.
### Setup and Teardown

```go
import (
    "log"
    "os"
    "testing"

    "github.com/ory/dockertest/v3" // assumed import path for the dockertest library used below
)

func TestMain(m *testing.M) {
    // Setup: Start test database
    pool, err := dockertest.NewPool("")
    if err != nil {
        log.Fatalf("Could not construct pool: %s", err)
    }

    resource, err := pool.Run("postgres", "15", []string{
        "POSTGRES_PASSWORD=test",
        "POSTGRES_DB=testdb",
    })
    if err != nil {
        log.Fatalf("Could not start resource: %s", err)
    }

    // Run tests
    code := m.Run()

    // Teardown: Clean up
    if err := pool.Purge(resource); err != nil {
        log.Fatalf("Could not purge resource: %s", err)
    }

    os.Exit(code)
}
```
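The container usually needs a moment before it accepts connections, so in practice a readiness wait sits between `pool.Run` and `m.Run()`. A sketch using dockertest's `Retry` helper (the connection string and driver name are assumptions; this also needs the `database/sql`, `fmt`, and driver imports):

```go
// Wait until the database is reachable before running the tests.
var db *sql.DB
if err := pool.Retry(func() error {
    dsn := fmt.Sprintf("postgres://postgres:test@localhost:%s/testdb?sslmode=disable",
        resource.GetPort("5432/tcp"))
    var openErr error
    db, openErr = sql.Open("postgres", dsn) // assumes a postgres driver is registered
    if openErr != nil {
        return openErr
    }
    return db.Ping()
}); err != nil {
    log.Fatalf("Could not connect to database: %s", err)
}
```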
## Writing Benchmarks

### Basic Benchmark

```go
func BenchmarkPolicyEvaluation(b *testing.B) {
    // Setup
    policy := createTestPolicy()
    request := createTestRequest()

    // Reset timer after setup
    b.ResetTimer()

    // Run benchmark
    for i := 0; i < b.N; i++ {
        _ = EvaluatePolicy(policy, request)
    }
}
```
### Benchmark with Different Inputs

```go
func BenchmarkPolicyEvaluation(b *testing.B) {
    sizes := []int{10, 100, 1000}

    for _, size := range sizes {
        b.Run(fmt.Sprintf("rules_%d", size), func(b *testing.B) {
            policy := createPolicyWithRules(size)
            request := createTestRequest()
            b.ResetTimer()

            for i := 0; i < b.N; i++ {
                _ = EvaluatePolicy(policy, request)
            }
        })
    }
}
```
### Memory Benchmarks

```go
func BenchmarkMemoryAllocation(b *testing.B) {
    b.ReportAllocs()

    for i := 0; i < b.N; i++ {
        result := ProcessData(testData)
        _ = result
    }
}
```
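Benchmarks do not run as part of a normal `go test` invocation; select them with `-bench`, and use `-benchmem` to get the same allocation statistics that `b.ReportAllocs()` enables (the package path is illustrative):

```bash
# Run all benchmarks in a package with allocation stats, skipping unit tests
go test -bench=. -benchmem -run='^$' ./platform/agent/

# Give a single benchmark more time for stabler numbers
go test -bench=BenchmarkPolicyEvaluation -benchtime=2s ./platform/agent/
```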
## CI Validation

When you submit a pull request, the CI pipeline validates your tests:

### Automated Checks

- All tests must pass: `go test ./...`
- Race condition detection: `go test -race ./...`
- Coverage requirements:
  - New code should have tests
  - Critical paths require comprehensive coverage
- Linting: `golangci-lint run`
### Pre-Submit Checklist

Before submitting, verify locally:

```bash
# Run all tests
go test ./...

# Run with race detector
go test -race ./...

# Run linter
golangci-lint run

# Check coverage
go test -cover ./...
```
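For a closer look at what the coverage number actually covers, a profile can be rendered as an annotated HTML report:

```bash
# Write a coverage profile and open it in the browser
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out
```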
## Best Practices

### 1. Test Behavior, Not Implementation

```go
// Good: Tests observable behavior
func TestCalculateTotal_AppliesDiscount(t *testing.T) {
    total := CalculateTotal(100, 0.2)
    assert.Equal(t, 80.0, total)
}

// Bad: Tests internal implementation details
func TestCalculateTotal_CallsMultiply(t *testing.T) {
    // This test is brittle and breaks on refactoring
}
```
### 2. Keep Tests Independent

Each test should be able to run in isolation:

```go
// Good: Independent test with its own setup
func TestCreateUser(t *testing.T) {
    db := setupTestDB(t)
    defer db.Close()

    user := createTestUser(db)
    assert.NotEmpty(t, user.ID)
}

// Bad: Depends on other tests running first
var globalUser User

func TestCreateUser(t *testing.T) {
    globalUser = createUser() // Other tests depend on this
}
```
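The `setupTestDB` helper used above is not defined in this guide. A minimal sketch of the pattern, with the driver and DSN source as illustrative assumptions, uses `t.Helper` and `t.Cleanup` so every test owns and releases its own resources:

```go
// setupTestDB opens a dedicated connection for one test and closes it automatically.
func setupTestDB(t *testing.T) *sql.DB {
    t.Helper()

    // Assumption: a test database URL is provided via the environment and a
    // postgres driver is registered; adjust to the project's actual setup.
    db, err := sql.Open("postgres", os.Getenv("TEST_DATABASE_URL"))
    require.NoError(t, err, "opening test database should not fail")

    // t.Cleanup runs after the test (and its subtests) finish, even on failure.
    t.Cleanup(func() { db.Close() })
    return db
}
```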
### 3. Use Meaningful Assertions

```go
// Good: Clear assertion messages
assert.Equal(t, expected, actual, "user count should match after insert")

// Good: Use require for fatal conditions
require.NoError(t, err, "database connection should not fail")
```
### 4. Test Error Paths

Don't just test happy paths:

```go
func TestValidate(t *testing.T) {
    t.Run("valid input", func(t *testing.T) {
        err := Validate(validInput)
        require.NoError(t, err)
    })

    t.Run("nil input", func(t *testing.T) {
        err := Validate(nil)
        require.Error(t, err)
        assert.Contains(t, err.Error(), "input required")
    })

    t.Run("invalid format", func(t *testing.T) {
        err := Validate(invalidInput)
        require.Error(t, err)
        assert.Contains(t, err.Error(), "invalid format")
    })
}
```
## Next Steps
- Testing Overview - Complete testing infrastructure
- Load Testing - Performance testing methodology