# EU AI Act Compliance
The EU AI Act — Regulation (EU) 2024/1689 — is the first comprehensive regulation specifically targeting AI systems. It applies to providers, deployers, importers, and distributors whose systems are used in the EU or whose output affects natural persons in the EU, regardless of where the provider is established.
## Enforcement Timeline
All dates are fixed by Article 113 of Regulation (EU) 2024/1689, published in the Official Journal of the European Union on 12 July 2024 (EUR-Lex canonical text). For a per-article navigation of the same regulation, the community-maintained artificialintelligenceact.eu mirror is a convenient secondary reference.
| Date | Milestone |
|---|---|
| 1 August 2024 | Regulation entered into force (20 days after OJ publication) |
| 2 February 2025 | Chapter I + Article 5 prohibited practices apply; AI literacy obligations (Article 4) begin |
| 2 August 2025 | GPAI model rules, governance, notified-body rules, and penalty provisions (Articles 99–100) apply |
| 2 August 2026 | High-risk AI obligations (Annex III systems) apply — the main compliance milestone for most teams building governed AI products |
| 2 August 2027 | Article 6(1) obligations apply for high-risk AI that is a safety component of Annex I products; Article 111(3) grace period for pre-2025 GPAI models ends |
Penalties under Article 99: up to €35,000,000 or 7% of total worldwide annual turnover (whichever is higher) for breaches of Article 5 prohibitions. Lower caps apply to other obligations (€15M / 3%) and for supplying incorrect information (€7.5M / 1%).
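The cap logic above is "whichever is higher" of a fixed amount and a turnover percentage, per tier. A minimal sketch of that arithmetic (the function name and tier labels are illustrative; note also that Article 99(6) applies the *lower* of the two amounts to SMEs, which this sketch does not model):

```python
def article_99_cap(turnover_eur: float, tier: str) -> float:
    """Maximum fine under Article 99: the higher of a fixed amount
    and a percentage of total worldwide annual turnover."""
    tiers = {
        "art5_prohibited": (35_000_000, 0.07),        # Article 5 breaches
        "other_obligations": (15_000_000, 0.03),      # most other obligations
        "incorrect_information": (7_500_000, 0.01),   # supplying incorrect information
    }
    fixed, pct = tiers[tier]
    return max(fixed, turnover_eur * pct)

# A firm with €1bn turnover: 7% = €70M, which exceeds the €35M floor.
print(article_99_cap(1_000_000_000, "art5_prohibited"))  # 70000000.0
# A firm with €100M turnover: 7% = €7M, so the €35M floor applies.
print(article_99_cap(100_000_000, "art5_prohibited"))    # 35000000.0
```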
## Key articles for AI runtime controls
| Article | Requirement | AxonFlow coverage |
|---|---|---|
| Article 5 | Prohibited practices (social scoring, exploitative manipulation, untargeted facial-image scraping, real-time remote biometric ID in public spaces with narrow exceptions, etc.) | Tenant-policy engine enforces outright blocks at the governance boundary before requests reach the model |
| Article 6 | High-risk AI classification — systems in Annex III plus product safety components under Annex I | Policy engine; policies scoped to system/use-case; tenant can tag workflows as high-risk for differential governance |
| Article 12 | Record-keeping — high-risk systems must automatically record events (logs) over their lifetime | Audit trail with per-request, per-policy-evaluation, per-tool-call records |
| Article 13 | Transparency + information to deployers — instructions for use with capabilities, limitations, accuracy, robustness, cybersecurity; input-data specs; log-interpretation guidance | Transparency headers (X-AI-*) on governed responses; audit log carries input + policy context |
| Article 14 | Human oversight — designed-in oversight; ability to understand limitations, detect anomalies, avoid automation bias, decide not to use, override, or interrupt via a stop mechanism | HITL approval queue (Evaluation tier); tenant policies can enforce human review on high-risk decisions; kill-switch patterns via governance surfaces |
| Article 15 | Accuracy, robustness, cybersecurity | Circuit breaker, retry-aware policies (Evaluation+), governed provider routing |
| Article 43 | Conformity assessment for high-risk systems | Conformity workflow APIs (Enterprise) — see below |
Primary sources: Article 6, Article 12, Article 13, Article 14 on the community-maintained mirror; authoritative text via EUR-Lex.
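To make the "block at the governance boundary" idea concrete, here is a toy sketch of an Article 5 pre-model policy check. The `Policy` shape, tag names, and function signature are all hypothetical illustrations — they are not AxonFlow's actual tenant-policy schema or API:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    """Hypothetical policy shape: a set of Article 5 practice tags
    the tenant forbids outright."""
    name: str
    blocked_practices: set

def evaluate(request_tags: set, policy: Policy) -> str:
    """Decide before the request ever reaches a model:
    'block' if any request tag matches a prohibited practice."""
    if request_tags & policy.blocked_practices:
        return "block"
    return "allow"

art5 = Policy("eu-ai-act-art5", {"social_scoring", "untargeted_face_scraping"})
print(evaluate({"social_scoring"}, art5))   # block
print(evaluate({"summarisation"}, art5))    # allow
```

The point of the pattern is that the decision happens at the boundary, so a prohibited request produces an auditable `block` record and never consumes model capacity.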
## Annex III — the 8 high-risk use-case categories
Article 6(2) points to Annex III, which enumerates eight categories of high-risk AI systems. Teams building systems in these categories see their obligations kick in on 2 August 2026:
- Biometrics — remote biometric identification, biometric categorisation by sensitive/protected attributes, emotion recognition.
- Critical infrastructure — safety components of critical digital infrastructure, road traffic, water/gas/heating/electricity supply.
- Education and vocational training — admission, assignment, outcome assessment, steering of learning, prohibited-behaviour monitoring during tests.
- Employment and worker management — recruitment, screening, performance and behaviour monitoring, task allocation, termination decisions.
- Essential services — eligibility for public benefits, credit scoring (excluding fraud detection), risk assessment and pricing in life/health insurance, emergency-services dispatch.
- Law enforcement — individual risk assessment, polygraphs, evidence reliability, profiling during investigations, crime analytics.
- Migration, asylum, border control — polygraphs, migration-risk assessment, assistance with asylum/visa/residence applications, biometric detection in these contexts.
- Administration of justice and democratic processes — assisting judicial authorities in researching/interpreting the law, AI influencing electoral outcomes or voting behaviour.
See Annex III for the full list. Under Article 6(3), a system that performs profiling of natural persons is always high-risk, regardless of the derogation rules.
## A concrete example: a healthcare AI team preparing for August 2026
Consider a European provider of a hospital-operations AI that triages patient referrals and helps schedule specialist appointments. It is not Annex I medical-device-grade (does not diagnose or prescribe) but it is Annex III category 5 (essential services — dispatch/prioritisation of emergency services and access to healthcare-adjacent benefits), so the 2 August 2026 obligations apply.
What the team has to prove:
- Article 12 (record-keeping): every triage decision logged over the system's lifetime, including input features, model version, and output.
- Article 13 (transparency to deployers): the hospital operating the system has instructions for use covering capabilities, limitations, accuracy, known failure modes, and log-interpretation guidance.
- Article 14 (human oversight): a clinician or triage coordinator can review, override, or stop the AI in the loop — not just after the fact.
- Article 15 (accuracy / robustness): the system has pre-deployment + ongoing accuracy evidence, is robust against adversarial input, and has cybersecurity controls.
- Article 43 (conformity assessment): the system has been conformity-assessed and CE-marked via the relevant procedure before market placement.
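The Article 12 item above is the most mechanical to picture. A minimal sketch of one lifetime-logged triage event — every field name here is illustrative, not a mandated schema or AxonFlow's actual record format:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TriageAuditRecord:
    """One Article 12-style event: what went in, which model decided,
    which policies were evaluated, and what came out."""
    request_id: str
    timestamp: str
    model_version: str
    input_features: dict
    policy_evaluations: list
    decision: str

record = TriageAuditRecord(
    request_id="req-0001",
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="triage-v2.3",
    input_features={"referral_urgency": "high", "specialty": "cardiology"},
    policy_evaluations=[{"policy": "high-risk-triage", "result": "require_approval"}],
    decision="queued_for_clinician_review",
)
print(asdict(record)["decision"])  # queued_for_clinician_review
```

The key property for Article 12 is completeness over the system's lifetime: every decision yields one such record, including the model version that produced it.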
How that maps to AxonFlow tiers:
| Article | Community | Evaluation | Enterprise |
|---|---|---|---|
| Art. 12 — Record-keeping | Governed LLM and MCP paths with full audit per-request + per-policy-evaluation + per-tool-call | Same | Same + configurable retention (up to AuditRetentionDays=3650 = 10 years) fit for supervisory inspections |
| Art. 13 — Transparency | Transparency headers (X-AI-*) + structured audit log the deployer can read | Same | Same + export endpoints that ship the log in the deployer's required format |
| Art. 14 — Human oversight | Policies can emit require_approval decisions — but no queue to act on them | HITL queue (24h expiry, 100 pending cap) — clinician reviews before the system proceeds | Production HITL + portal + SLA escalation |
| Art. 15 — Accuracy + robustness | Circuit breaker, retry awareness on gates | Retry-aware tenant policies | Same + production accuracy reporting |
| Art. 43 — Conformity assessment | Not provided | Not provided | /api/v1/euaiact/conformity/* workflow — draft, submit, approve/reject, durable assessment record |
| Accuracy + bias tracking (ongoing, post-market) | Not provided | Not provided | /api/v1/euaiact/accuracy/* — record, bias, history, alerts |
| Art. 72 — Post-market monitoring export | Not provided | Not provided | /api/v1/euaiact/export/* with download for regulator-facing evidence |
The healthcare team's typical path: prove Articles 12, 13, and parts of 14 on Community, realise by spring 2026 that the 2 August 2026 deadline requires structured conformity assessment + accuracy monitoring + regulator export, and move into the Enterprise EU AI Act module. That pattern is why Community is a foundation, not a destination, for Annex III systems.
## What Community covers
Community is enough to build the governance base layer an Annex III high-risk system needs:
- per-request audit logging covering inputs, policies evaluated, decision, and outputs
- policy enforcement for Article 5 prohibited-practice blocks and Article 6 high-risk-use policies
- PII detection + redaction as part of the Article 13 transparency story
- governed runtime boundaries for LLM and MCP traffic
- transparency headers (X-AI-*) on governed responses
For the prototype phase of a high-risk system, Community is usually sufficient to prove the governance story before investing in the Enterprise module.
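A deployer consuming those governed responses can pull out the transparency headers generically by prefix. A small sketch — only the `X-AI-*` prefix is documented above, so the specific header names in the example are hypothetical:

```python
def transparency_headers(headers: dict) -> dict:
    """Extract the X-AI-* transparency headers from a governed response.
    Matching is case-insensitive, since HTTP header names are."""
    return {k: v for k, v in headers.items() if k.lower().startswith("x-ai-")}

# Example header names below are assumptions for illustration.
resp_headers = {
    "Content-Type": "application/json",
    "X-AI-Policy": "tenant-default",
    "X-AI-Decision": "allow",
}
print(transparency_headers(resp_headers))
# {'X-AI-Policy': 'tenant-default', 'X-AI-Decision': 'allow'}
```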
## What Evaluation adds
The Evaluation tier unlocks the operational surfaces Article 14 makes necessary when the use case is actually high-risk:
- HITL approval queue with a 24-hour default expiry
- Policy simulation — test a new tenant policy against recent traffic before go-live
- Evidence export (with Evaluation-tier limits) — the first draft of an Article 13 / 72 export pipeline
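The queue semantics stated above (24-hour expiry, 100-pending cap) can be modelled in a few lines. This is a toy model of the behaviour, not AxonFlow's implementation; class and method names are assumptions:

```python
class ApprovalQueue:
    """Toy Article 14-style HITL queue: a bounded set of pending
    approvals that expire if no human acts within the TTL."""

    def __init__(self, max_pending: int = 100, ttl_seconds: int = 24 * 3600):
        self.max_pending = max_pending
        self.ttl = ttl_seconds
        self.pending = {}  # request_id -> enqueue time (seconds)

    def enqueue(self, request_id: str, now: float) -> bool:
        self._expire(now)
        if len(self.pending) >= self.max_pending:
            return False  # cap reached — caller should fail closed
        self.pending[request_id] = now
        return True

    def approve(self, request_id: str, now: float) -> bool:
        self._expire(now)
        return self.pending.pop(request_id, None) is not None

    def _expire(self, now: float) -> None:
        self.pending = {r: t for r, t in self.pending.items() if now - t < self.ttl}

q = ApprovalQueue()
q.enqueue("req-1", now=0)
print(q.approve("req-1", now=3600))       # True — reviewed within 24h
q.enqueue("req-2", now=0)
print(q.approve("req-2", now=25 * 3600))  # False — expired unreviewed
```

The design point is that expiry is a governance decision: an unreviewed request times out rather than silently proceeding.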
## What Enterprise adds — the EU AI Act module
The licensed EU AI Act module exposes dedicated workflow APIs for exports, conformity assessments, and accuracy/bias monitoring. Verified Enterprise API surface:
- `GET, POST /api/v1/euaiact/export`
- `GET /api/v1/euaiact/export/{id}`
- `GET /api/v1/euaiact/export/{id}/download`
- `GET, POST /api/v1/euaiact/conformity`
- `GET, PUT /api/v1/euaiact/conformity/{id}`
- `POST /api/v1/euaiact/conformity/{id}/submit`
- `POST /api/v1/euaiact/conformity/{id}/approve`
- `POST /api/v1/euaiact/conformity/{id}/reject`
- `GET /api/v1/euaiact/accuracy`
- `POST /api/v1/euaiact/accuracy/record`
- `POST /api/v1/euaiact/accuracy/bias`
- `GET /api/v1/euaiact/accuracy/history`
- `GET /api/v1/euaiact/accuracy/alerts`
- `GET, PUT /api/v1/euaiact/accuracy/alerts/{id}`
These turn the raw audit + governance data into documented assessments and regulator-facing exports — structured evidence rather than bulk log dumps.
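The submit/approve/reject endpoints imply a draft → submitted → approved/rejected lifecycle. A local sketch of that state machine — the transition rules here are assumptions inferred from the endpoint names, not the documented server behaviour:

```python
class ConformityAssessment:
    """Minimal state machine mirroring the lifecycle implied by the
    conformity endpoints: draft -> submitted -> approved/rejected."""

    TRANSITIONS = {
        "draft": {"submit": "submitted"},
        "submitted": {"approve": "approved", "reject": "rejected"},
        # approved/rejected are terminal in this sketch
    }

    def __init__(self):
        self.state = "draft"

    def apply(self, action: str) -> str:
        next_state = self.TRANSITIONS.get(self.state, {}).get(action)
        if next_state is None:
            raise ValueError(f"cannot {action!r} from state {self.state!r}")
        self.state = next_state
        return self.state

a = ConformityAssessment()
a.apply("submit")
print(a.apply("approve"))  # approved
```

Modelling the lifecycle explicitly is what makes the assessment a durable record: every state change is a discrete, auditable event rather than a flag flip.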
## Practical fit by stage
**Community is usually enough for:**
- architecture review
- early product development before Annex III high-risk obligations apply to the specific use case
- validating Article 12 record-keeping + Article 13 transparency foundations
- non-Annex-III or low-risk internal workflows
**Evaluation or Enterprise is usually needed for:**
- any Annex III high-risk system approaching 2 August 2026
- structured Article 43 conformity assessment processes
- ongoing Article 15 accuracy and bias monitoring
- regulator- or auditor-facing Article 13 / 72 exports
- formal governance programs at the scale an EU-operating firm runs
## Industry playbook
Annex III sectors cluster into a few recurring AxonFlow deployment shapes:
### Healthcare AI (Annex III.5 — essential services; healthcare-adjacent)
The concrete flow is walked through in the healthcare example above; the conformity + accuracy + export trio all matter here.
### Financial decisioning — credit scoring, insurance pricing (Annex III.5)
Community covers PII + audit; Enterprise adds the accuracy-and-bias tracking surfaces that Article 15 effectively requires once the system is deployed against EU data subjects.
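Ongoing bias tracking usually starts with a standard fairness metric. One common choice is the demographic-parity difference — the gap in positive-outcome rates between groups. This sketch is illustrative; it is not necessarily the metric the `/api/v1/euaiact/accuracy/bias` endpoint computes:

```python
def demographic_parity_diff(outcomes: dict) -> float:
    """Gap in positive-outcome rates between groups.
    `outcomes` maps group name -> list of 0/1 decisions (1 = approved)."""
    rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    return max(rates.values()) - min(rates.values())

# group_a approved 3/4 (0.75), group_b approved 1/4 (0.25) -> gap 0.5
approvals = {"group_a": [1, 1, 0, 1], "group_b": [1, 0, 0, 0]}
print(demographic_parity_diff(approvals))  # 0.5
```

In a post-market monitoring loop, this number would be recomputed on a rolling window of production decisions and alerted on when it crosses a tenant-defined threshold.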
### Hiring and HR — recruitment, screening, performance monitoring (Annex III.4)
High-risk by default under Annex III.4. Article 14 human-oversight expectations + Article 10 data-governance provisions bite hardest. HITL (Evaluation tier or higher) is not optional for material hiring decisions.
### Law-enforcement and migration AI (Annex III.6–7)
Strictest obligations. Article 14(5) requires two-person verification for biometric identification (with narrow national-law exceptions). Community covers the audit foundation; Enterprise covers the structured conformity + export record-keeping a national supervisory authority will ask for.
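The Article 14(5) two-person rule reduces to a simple invariant: no action on a biometric identification without verification by at least two distinct natural persons. A minimal sketch of that gate (function and reviewer names are illustrative):

```python
def confirm_biometric_match(reviewers: list) -> bool:
    """Article 14(5)-style gate: act on a biometric identification only
    after separate verification by at least two distinct natural persons."""
    return len(set(reviewers)) >= 2

print(confirm_biometric_match(["officer_a", "officer_b"]))  # True
print(confirm_biometric_match(["officer_a", "officer_a"]))  # False — same person twice
```

Deduplicating reviewers (`set`) is the load-bearing detail: the same person approving twice must not satisfy the rule.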
## Why this matters for engineers
The gap between "we log things" and "we can survive regulatory review" is large. The Enterprise EU AI Act module closes that gap by making conformity, accuracy, and export structured — each is an actual workflow, not a log-scrape pipeline. For an Annex III high-risk system approaching 2 August 2026, that structure is what turns audit data into evidence a national competent authority will accept.
