
Evidence Export Pack

Evidence Export is the feature teams use when governance stops being a local engineering concern and starts needing to be shared with other people.

That usually means one of these situations:

  • security wants a review pack instead of API access
  • compliance wants evidence that governance controls are running
  • a platform team needs a snapshot of governed activity before an upgrade or production rollout
  • leadership wants a concrete record of what AI workflows are actually doing

Instead of forcing reviewers to query the system directly, AxonFlow can bundle the relevant governance records into a single structured export.

What The Export Contains

Depending on the requested record types, an evidence pack can include:

  • audit_logs
  • workflow_steps
  • hitl_approvals

That makes it useful across both standard governed request traffic and workflow-oriented systems that include approval gates.

Endpoint

POST /api/v1/evidence/export

Example Request

curl -X POST "http://localhost:8080/api/v1/evidence/export" \
  -H "Content-Type: application/json" \
  -d '{
    "start_date": "2026-02-15",
    "end_date": "2026-03-01",
    "types": ["audit_logs", "workflow_steps", "hitl_approvals"],
    "limit": 1000
  }'

Request Fields

Field       Required  Description
start_date  Yes       Start of the export window, as YYYY-MM-DD or RFC3339
end_date    No        End of the export window; defaults to now
types       No        Record types to include; if omitted, all supported types are included
limit       No        Maximum records to return, capped by tier
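As a sketch of how a client might assemble and validate this payload before sending it, the helper below enforces the field rules from the table. The function and helper names are hypothetical, not part of any AxonFlow client library:

```python
from datetime import datetime

# Record types supported by the export endpoint (from the docs above).
VALID_TYPES = {"audit_logs", "workflow_steps", "hitl_approvals"}

def _parse_date(value):
    """Accept YYYY-MM-DD or an RFC3339 timestamp, per the field table."""
    try:
        return datetime.strptime(value, "%Y-%m-%d")
    except ValueError:
        pass
    try:
        return datetime.fromisoformat(value.replace("Z", "+00:00"))
    except ValueError:
        raise ValueError(f"not YYYY-MM-DD or RFC3339: {value!r}")

def build_export_request(start_date, end_date=None, types=None, limit=None):
    """Build an evidence-export payload, validating fields client-side."""
    _parse_date(start_date)            # start_date is required and must parse
    payload = {"start_date": start_date}
    if end_date is not None:           # optional; server defaults to now
        _parse_date(end_date)
        payload["end_date"] = end_date
    if types is not None:              # optional; omitting means all types
        unknown = set(types) - VALID_TYPES
        if unknown:
            raise ValueError(f"unsupported record types: {sorted(unknown)}")
        payload["types"] = list(types)
    if limit is not None:
        if limit <= 0:
            raise ValueError("limit must be positive")
        payload["limit"] = limit
    return payload

req = build_export_request("2026-02-15", "2026-03-01",
                           types=["audit_logs"], limit=1000)
```

The resulting `req` dict can be serialized with `json.dumps` and sent exactly as the curl example above does.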

Response Shape

{
  "export_id": "3a7c9e2f-1b4d-4e8a-9c6f-2d5e8b1a3c7f",
  "tenant_id": "my-org",
  "tier": "evaluation",
  "date_range": {
    "start": "2026-02-15T00:00:00Z",
    "end": "2026-03-01T00:00:00Z"
  },
  "disclaimer": "NOT FOR REGULATORY SUBMISSION - EVALUATION LICENSE",
  "record_count": 142,
  "audit_logs": [],
  "workflow_steps": [],
  "hitl_approvals": [],
  "exported_at": "2026-03-01T12:00:00Z",
  "daily_usage": {
    "used": 1,
    "limit": 3
  }
}

Response Fields

Field           Meaning
export_id       Unique identifier for the export
tenant_id       Tenant the data belongs to
tier            Tier in effect at export time
date_range      Effective export window
disclaimer      Evaluation watermark; omitted in Enterprise
record_count    Total records returned
audit_logs      Audit log rows included in the pack
workflow_steps  Workflow-step evidence included in the pack
hitl_approvals  Approval records included in the pack
exported_at     Export timestamp
daily_usage     Quota information where the tier has limits
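A quick client-side sanity check can cross-reference a pack against its own metadata before handing it to reviewers. This is a minimal sketch, assuming the response shape shown above; `check_export_pack` is a hypothetical helper, not an AxonFlow API:

```python
def check_export_pack(pack):
    """Flag internal inconsistencies and caveats in an evidence pack."""
    record_lists = ("audit_logs", "workflow_steps", "hitl_approvals")
    total = sum(len(pack.get(key, [])) for key in record_lists)
    issues = []
    # record_count should equal the number of records actually present
    if total != pack["record_count"]:
        issues.append(f"record_count {pack['record_count']} != {total} records present")
    # daily_usage appears on tiers with quotas; warn when exhausted
    usage = pack.get("daily_usage")
    if usage and usage["used"] >= usage["limit"]:
        issues.append("daily export quota exhausted")
    # evaluation packs carry a watermark and are not for regulatory submission
    if "disclaimer" in pack:
        issues.append("evaluation watermark present: " + pack["disclaimer"])
    return issues
```

An empty return value means the pack is internally consistent and unwatermarked.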

Evidence Summary

If you want to see what is available before exporting the full pack, use the summary endpoint:

GET /api/v1/evidence/summary
curl "http://localhost:8080/api/v1/evidence/summary"

That endpoint returns counts over the tier’s evidence window, which is often the fastest way to sanity-check whether the records you care about are actually present.
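That sanity check can be automated. The sketch below assumes the summary exposes a per-type count mapping (the exact response shape is not specified here); `worth_exporting` is a hypothetical helper name:

```python
def worth_exporting(summary_counts, wanted_types):
    """Return the subset of wanted record types that actually have records,
    given a hypothetical {type: count} mapping from the summary endpoint."""
    return [t for t in wanted_types if summary_counts.get(t, 0) > 0]

# Example: approvals are absent from the window, so skip requesting them.
counts = {"audit_logs": 120, "workflow_steps": 22, "hitl_approvals": 0}
present = worth_exporting(counts, ["audit_logs", "hitl_approvals"])
print(present)  # → ['audit_logs']
```

Running the cheap summary call first avoids burning a daily export against an empty window.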

Tier Behavior

Capability                                   Community  Evaluation             Enterprise
Evidence Export                              —          Yes                    Yes
Evidence Summary                             —          Yes                    Yes
Max lookback window                          —          14 days                Unlimited
Max exports per day                          —          3                      Unlimited
Max records per export                       —          5,000                  Unlimited
Watermark / disclaimer                       —          Evaluation disclaimer  Omitted
Scheduled and broader operational reporting  —          —                      Yes

Evaluation is intentionally useful for pilots and internal reviews, but not a substitute for the long-term operating surface enterprises usually need.
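The table's caps can be modeled client-side to predict how a request will be bounded. This sketch assumes the server clamps out-of-range requests rather than rejecting them, which is an assumption; the table above only states the caps:

```python
from datetime import date, timedelta

# Caps from the tier table above; None means unlimited.
TIER_LIMITS = {
    "evaluation": {"lookback_days": 14, "records_per_export": 5000},
    "enterprise": {"lookback_days": None, "records_per_export": None},
}

def clamp_request(tier, start, requested_limit, today):
    """Apply tier caps to a requested window start and record limit."""
    caps = TIER_LIMITS[tier]
    if caps["lookback_days"] is not None:
        # The window cannot start earlier than the tier's lookback horizon.
        earliest = today - timedelta(days=caps["lookback_days"])
        start = max(start, earliest)
    limit = requested_limit
    if caps["records_per_export"] is not None:
        limit = min(limit, caps["records_per_export"])
    return start, limit
```

For example, on Evaluation a request reaching back to January is pulled forward to the 14-day horizon and its limit capped at 5,000.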

Why This Matters For Production Readiness

Evidence export becomes valuable surprisingly early. Teams often think they will only need it for formal audits, but in practice it is also useful for:

  • pre-launch readiness reviews
  • internal architecture reviews
  • incident postmortems
  • comparing governance behavior before and after a policy change

It is one of the clearest examples of why Evaluation can be a strong bridge tier. You can build with Community, but once people outside the immediate engineering team need evidence, a shareable export surface becomes hard to replace with ad hoc scripts.

Typical Review Workflows

What makes this feature especially valuable is that it is not tied to one regulator or one review style. The same export mechanism can support:

  • internal platform readiness reviews
  • security investigations
  • privacy or compliance evidence gathering
  • executive or architecture reviews that need concrete artifacts

Security Review

Export a recent window of audit logs and workflow activity for a security or privacy review:

curl -X POST "http://localhost:8080/api/v1/evidence/export" \
  -H "Content-Type: application/json" \
  -d '{
    "start_date": "2026-02-22",
    "end_date": "2026-03-01",
    "types": ["audit_logs", "workflow_steps"]
  }'

Approval Review

Focus on human oversight decisions:

curl -X POST "http://localhost:8080/api/v1/evidence/export" \
  -H "Content-Type: application/json" \
  -d '{
    "start_date": "2026-02-15",
    "end_date": "2026-03-01",
    "types": ["hitl_approvals", "workflow_steps"]
  }'

Upgrade Baseline

Use GET /api/v1/evidence/summary to inspect what is available, then export a baseline pack before changing policies, infrastructure, or license tier.
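Baselining only pays off if the before and after snapshots are compared. A minimal sketch, again assuming a hypothetical {type: count} summary mapping:

```python
def summary_delta(before, after):
    """Per-type change in record counts between two summary snapshots."""
    keys = set(before) | set(after)
    return {k: after.get(k, 0) - before.get(k, 0) for k in sorted(keys)}

# Snapshot before a policy change vs. after: illustrative numbers only.
baseline = {"audit_logs": 140, "hitl_approvals": 6}
post_change = {"audit_logs": 215, "hitl_approvals": 6, "workflow_steps": 30}
print(summary_delta(baseline, post_change))
# → {'audit_logs': 75, 'hitl_approvals': 0, 'workflow_steps': 30}
```

A surprising zero (approvals flat after enabling a new approval gate) is exactly the kind of signal a baseline exists to catch.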

What Reviewers Usually Want Included

A useful evidence pack usually tells a coherent story, not just a large one. In practice, reviewers often want:

  • the relevant time window
  • audit logs tied to the affected tenant or workflow
  • workflow-step records for governed execution
  • approval records where human review mattered

That is why the types filter matters. It lets teams export a reviewable slice instead of dumping every available record.
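The same slicing can be done after the fact on a pack that was exported with all types. A hedged sketch, assuming the response shape documented above; `reviewable_slice` is a hypothetical helper:

```python
RECORD_LISTS = ("audit_logs", "workflow_steps", "hitl_approvals")

def reviewable_slice(pack, types):
    """Keep only the requested record lists from a full evidence pack,
    preserving metadata and recomputing record_count for the slice."""
    sliced = {k: v for k, v in pack.items() if k not in RECORD_LISTS}
    total = 0
    for t in types:
        sliced[t] = pack.get(t, [])
        total += len(sliced[t])
    sliced["record_count"] = total
    return sliced
```

Exporting with the `types` filter server-side is still preferable when quotas are tight, since it avoids pulling records the reviewer will never see.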

Why This Matters Commercially Too

Evidence export is one of the clearest examples of how technical adoption maps to commercial fit:

  • Community is enough to build and understand the governance model.
  • Evaluation is where teams can start sharing bounded evidence with security and compliance stakeholders.
  • Enterprise is what larger organizations usually want when evidence handling becomes a recurring operating function rather than a one-off export.