# Compliance Overview
AxonFlow helps teams build AI systems that are easier to govern, audit, and operate in regulated environments. The key distinction is this:
- The community runtime gives you the technical foundations needed to build compliant systems
- Evaluation and Enterprise add the regulator-specific workflow modules, APIs, retention controls, and operational features most organizations need before a production rollout in regulated environments
That distinction matters because a public docs page should help engineers build with the community product today without overstating what the licensed compliance modules do.
## What Community Already Gives You
The community build includes core building blocks that are useful across nearly every compliance framework:
- audit logging
- system policy enforcement
- tenant-policy support
- PII detection and redaction foundations
- request tracing and governed runtime boundaries for LLM and MCP access
These capabilities matter even before you adopt a regulator-specific workflow. They are the base layer senior engineers use to make AI systems reviewable and operationally sane.
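To make the PII detection and redaction foundation concrete, here is a minimal sketch of the kind of pattern-based detection the India system policies build on. The PAN and Aadhaar formats are public document formats; the function and label names are illustrative, not AxonFlow APIs.

```python
import re

# Public identifier formats (not AxonFlow internals):
# PAN: 5 uppercase letters, 4 digits, 1 uppercase letter (e.g. ABCDE1234F)
# Aadhaar: 12 digits, often grouped 4-4-4; the first digit is 2-9
PATTERNS = {
    "IN_PAN": re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b"),
    "IN_AADHAAR": re.compile(r"\b[2-9][0-9]{3}[ -]?[0-9]{4}[ -]?[0-9]{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected identifier with a [TYPE] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("PAN ABCDE1234F, Aadhaar 2345 6789 0123"))
# PAN [IN_PAN], Aadhaar [IN_AADHAAR]
```

A production detector layers validation (checksums, context windows, confidence scores) on top of raw patterns, which is what the system policies provide out of the box.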
## What Evaluation and Enterprise Add
Evaluation and Enterprise builds add the deeper workflow modules that compliance, risk, and procurement teams usually expect for production:
- regulator-specific APIs and data models
- system registries and assessment workflows
- export and reporting endpoints
- higher-retention governance use cases
- approval and operational controls tailored to regulated environments
## Framework Coverage
Each row below anchors the framework to durable, primary-source facts — publication dates, entry-into-force dates, specific circular or regulation references. For any forthcoming milestones (pending consultations, draft guidelines, finalisation), follow the regulator-specific page linked from the first column; each includes a current-status pointer to the primary source so the status never goes stale here.
| Framework | Regulatory instruments and key dates | Community Coverage | Evaluation / Enterprise Coverage |
|---|---|---|---|
| EU AI Act — Regulation (EU) 2024/1689 | Published in the OJ on 12 July 2024; entered into force 1 August 2024. Article 113 stages application in four windows: 2 February 2025 (Article 5 prohibitions + Article 4 AI literacy), 2 August 2025 (GPAI, governance, penalties under Articles 99–100), 2 August 2026 (high-risk Annex III obligations), 2 August 2027 (Article 6(1) Annex I safety-component systems; Article 111(3) grace period ends) | Audit, policy enforcement, PII and governed runtime foundations; X-AI-* transparency headers | Export, conformity assessment, and accuracy / bias workflow APIs |
| RBI FREE-AI | Framework for Responsible and Ethical Enablement of AI — committee report released 13 August 2025 with 7 Sutras, 6 Pillars, and 26 Recommendations. The report is guidance, not a binding circular; RBI may operationalise selected recommendations through Master Directions or circulars — check the RBI Master Directions register for the current status | Audit, policy enforcement, and baseline India-relevant PII protection (PAN + Aadhaar system policies, 12-type India detector) | AI system registry, validations, incidents, kill switches, board reports, audit exports |
| SEBI AI/ML | 2019 reporting circulars (2019/10, 2019/24, 2019/63) still operative. Regulation 16C — sole-liability rule — inserted by the SEBI (Intermediaries) (Amendment) Regulations, 2025 and in force since 10 February 2025. 20 June 2025 Consultation Paper on responsible AI/ML (comments closed 11 July 2025) — check SEBI's circulars page for the current status of downstream rules | Audit, policy enforcement, and baseline India-relevant PII protection | SEBI audit export + retention + readiness + dashboard workflows |
| MAS FEAT | FEAT Principles (MAS Information Paper, 12 November 2018) + Veritas Toolkit 2.0 (26 June 2023) are principles-based guidance. AI Risk Management Guidelines consultation issued 13 November 2025; comment window closed 31 January 2026. Check MAS publications for finalisation status | Audit, policy enforcement, and 5 Singapore PII system policies (NRIC, FIN, UEN, phone, postal) | Registry, FEAT assessments, and kill-switch workflows |
| DPDP Act 2023 | Assented 11 August 2023. DPDP Rules 2025 notified 13 November 2025 (Gazette G.S.R. 843(E)). Phase 1 procedural provisions commenced 14 November 2025; Phase 2 (Consent Manager registration) 13 November 2026; Phase 3 substantive compliance 13 May 2027 | Audit, policy enforcement, and baseline India PII detection aligned with DPDP purpose-limitation | Structured audit exports that scale to Data Principal rights handling once Phase 3 activates |
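The EU AI Act row above mentions X-AI-* transparency headers as a community capability. As a sketch of that pattern, the helper below annotates an HTTP response so downstream clients can tell the content was AI-generated and which policy governed it. The specific header names and the function are illustrative assumptions, not AxonFlow's documented set; check the runtime reference for the real names.

```python
# Illustrative transparency-header pattern; header names are assumptions,
# not AxonFlow's documented X-AI-* set.
def with_transparency_headers(headers: dict, *, model: str, policy_id: str) -> dict:
    """Return a copy of `headers` with AI-disclosure annotations attached."""
    annotated = dict(headers)
    annotated.update({
        "X-AI-Generated": "true",   # Article 50-style disclosure flag
        "X-AI-Model": model,        # which model produced the content
        "X-AI-Policy": policy_id,   # which governance policy applied
    })
    return annotated

resp = with_transparency_headers({}, model="example-model", policy_id="eu-default")
```

The value of the pattern is that transparency travels with every response, so audit tooling and downstream consumers never have to infer provenance.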
## Region-Specific Detection Available in the Public Runtime
The community runtime already includes regional detection patterns that are useful for regulated applications:
- India: PAN and Aadhaar system policies, plus broader global PII detection
- Singapore: NRIC, FIN, UEN, phone, and postal-code system policies
- Europe and global use cases: email, phone, bank-account, IBAN, passport, SSN, card, and related patterns where applicable
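The NRIC/FIN check-digit algorithm is publicly documented, which makes the Singapore policies a good example of validated (not just pattern-matched) detection. Below is a sketch of that public checksum for the classic S/T/F/G series (the newer M series is omitted for brevity); it illustrates the technique and is not AxonFlow's implementation.

```python
import re

# Public NRIC/FIN checksum: weight the 7 digits, add an offset for the
# T/G series, take mod 11, and look up the check letter per series.
_WEIGHTS = (2, 7, 6, 5, 4, 3, 2)
_LETTERS = {("S", "T"): "JZIHGFEDCBA", ("F", "G"): "XWUTRQPNMLK"}

def is_valid_nric(value: str) -> bool:
    if not re.fullmatch(r"[STFG]\d{7}[A-Z]", value):
        return False
    prefix, digits, check = value[0], value[1:8], value[8]
    total = sum(w * int(d) for w, d in zip(_WEIGHTS, digits))
    if prefix in ("T", "G"):  # 2000-series prefixes get a +4 offset
        total += 4
    for prefixes, letters in _LETTERS.items():
        if prefix in prefixes:
            return letters[total % 11] == check
    return False

print(is_valid_nric("S0000001I"))  # True
```

Checksum validation like this sharply reduces false positives compared with a bare nine-character regex, which is why it matters for production redaction.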
This is often enough for teams to validate their product design, prove governance value internally, and demonstrate why a licensed deployment is the right next step before production scale.
## Capability Matrix
| Capability | Community | Evaluation / Enterprise |
|---|---|---|
| Audit logging | Yes | Yes |
| System policy enforcement | Yes | Yes |
| Tenant-policy workflows | Yes | Yes |
| Regional PII detection foundations | Yes | Yes |
| Regulator-specific workflow APIs | No | Yes |
| Registry and assessment workflows | No | Yes |
| Regulator-facing exports and dashboards | No | Yes |
| Advanced regulated operations playbooks | No | Yes |
## When Community Is Enough
Community is usually enough when you need to:
- build and assess a governed AI application
- test policy behavior against real prompts, responses, and connector traffic
- prove auditability and PII protection value to platform and security teams
- validate how AxonFlow fits into your existing multi-agent stack
## When Teams Usually Need Evaluation or Enterprise
Most teams start pushing for Evaluation or Enterprise when one or more of these becomes real:
- formal compliance review before production go-live
- regulator- or board-facing reporting expectations
- human approval or governance workflows beyond basic policy blocking
- stronger retention, export, and enterprise operations needs
- wider rollout across multiple internal AI products or business units
That is the point where AxonFlow usually moves from “useful governance layer” to “required AI control plane.”
## Industry Examples
| Industry | Common Compliance Pressure | Example Docs |
|---|---|---|
| Banking | Auditability, PII handling, incident controls | Banking Example |
| Healthcare | Data protection, human review, traceability | Healthcare Example |
| Travel | Customer-data protection, governed tool access | Trip Planner Example |
| E-commerce | Customer-data governance and workflow audit | E-commerce Example |
