RBI FREE-AI Compliance
The Reserve Bank of India released the Framework for Responsible and Ethical Enablement of Artificial Intelligence — FREE-AI — on 13 August 2025. It is the report of a committee constituted pursuant to the RBI's Statement on Developmental and Regulatory Policies of 6 December 2024.
Important framing: FREE-AI is a committee report with 26 numbered recommendations, not a binding circular. Annexure IV of the report discusses "AI Specific Enhancements in RBI Master Directions" but does not itself amend any Master Direction; RBI may operationalise selected recommendations through subsequent Master Directions or circulars — check the RBI Master Directions register for current status. Regulated entities (REs) — banks, NBFCs, payment companies, cooperative banks — that align with FREE-AI ahead of formal circular notifications are building the controls RBI is already signalling it expects. Primary source: RBI FREE-AI report (August 2025).
What FREE-AI actually asks for
FREE-AI is structured around 7 Sutras (guiding principles), 6 Pillars (three for Innovation Enablement, three for Risk Mitigation), and 26 Recommendations distributed across those pillars. The recommendations most operationally material for AI runtime controls:
| Recommendation | Summary | What it means in practice |
|---|---|---|
| Rec 14 — Board-Approved AI Policy | Every RE must have a board-approved AI policy covering governance structure, accountability, risk appetite, operational safeguards, auditability, consumer protection, AI disclosures, model lifecycle, and liability framework. | Board-level AI policy with a low/medium/high risk classification. High-risk examples named in the report include credit underwriting, autonomous AI making financial decisions, and systems moving customer funds. |
| Rec 15 — Data Lifecycle Governance | Data governance frameworks must comply with applicable legislation, including the DPDP Act, across the full data life cycle. | Purpose-limited data handling aligned with DPDP Act phased rollout — substantive compliance activates 13 May 2027. |
| Rec 16 — AI System Governance Framework | Model governance across design, development, deployment, and decommissioning. Strong governance required before deploying autonomous AI systems; tasks AI can perform autonomously must be clearly defined vs instances where human oversight is required. | Human-in-the-loop on medium- and high-risk use cases; the RE remains liable for autonomous-AI outcomes. |
| Rec 17 — Product Approval Process | AI-specific risk assessment integrated into product approvals — fairness, bias, understandability, cybersecurity, compliance. The review team must be independent of the AI development and deployment teams. | Separation of duties; a specific gate before launch. |
| Rec 18 — Consumer Protection | Customers must be explicitly informed when interacting with AI and should always have the option to switch to a human. | Transparency disclosure on every customer-facing AI touchpoint. |
| Rec 20 — Red Teaming | Medium- and high-risk AI applications must have at least semi-annual red-teaming, plus trigger-based red teams on major model updates, after vulnerabilities, or on regulatory change. | Scheduled offensive exercise, plus an incident path for triggered events. |
| Rec 21 — Business Continuity Plan for AI | An AI model should be able to declare itself "unavailable" when it fails and trigger backup processes. Mandatory HITL reviews; periodic human validation (e.g., 1% sample) to detect silent model degradation. | Deterministic fallback path + sampled human review even when the model seems healthy. |
| Rec 22 — AI Incident Reporting | Tiered incident reporting framework at entity, sector, and national levels. Timely, good-faith reporting; time windows vary by severity and system-wide implications. | Structured incident pipeline that can escalate to RBI and sector peers. |
| Rec 23 — AI Inventory | Comprehensive internal AI inventory updated at least half-yearly, covering model type, use case, third-party/cloud/data dependencies, risk categorisation, and grievances. Sector-wide repository via the RBIH EmTech Repository. | Model registry with owners, dependencies, and risk rating — available for supervisory inspections. |
| Rec 24 — AI Audit Framework | Three-level risk-based audit: internal audits proportionate to risk; third-party audits for high-risk or complex use cases; biennial review of the audit framework itself. Audits cover Input Data, Model and Algorithm, Output and Behaviour. | Independent audit posture with preserved evidence; a BCP and human override check is part of the audit. |
| Rec 25 — Disclosures | AI governance disclosures in annual reports and websites, analogous to climate risk and cybersecurity disclosures today. | Board reporting + external disclosure pipeline. |
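The Rec 21 pattern in the table above (a model that declares itself unavailable on failure, a deterministic fallback path, and sampled human validation of healthy traffic) can be sketched in a few lines. This is illustrative Python, not AxonFlow's API; the function names and the in-memory review queue are assumptions:

```python
import random

# ~1% sampled human validation, per the Rec 21 example, to catch
# silent model degradation even when the model looks healthy.
SAMPLE_RATE = 0.01

def answer(query, model_call, fallback, review_queue, rng=random.random):
    try:
        response = model_call(query)
    except Exception:
        # Model "declares itself unavailable": trigger the backup process.
        return {"source": "fallback", "text": fallback(query)}
    if rng() < SAMPLE_RATE:
        review_queue.append((query, response))  # route to human validation
    return {"source": "model", "text": response}

# Usage: a failing model falls back; a healthy one may be sampled for review.
review_queue = []

def flaky_model(q):
    raise TimeoutError("model unavailable")

result = answer("What is my balance?", flaky_model,
                lambda q: "Connecting you to an agent.", review_queue)
```

The key design point is that the fallback path is deterministic and always reachable, so "unavailable" is a first-class state rather than an unhandled exception.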
A concrete example: an Indian bank's customer-facing banking copilot
Here is how a private-sector Indian bank usually lands on AxonFlow for an RBI-aligned deployment. The bank is building a customer-service and account-servicing copilot that answers account questions, drafts KYC follow-ups, helps with credit-card dispute triage, and — carefully separated from everything else — looks up loan-eligibility hints.
What RBI asks for, mapped to AxonFlow tiers:
| FREE-AI requirement | Community | Evaluation | Enterprise |
|---|---|---|---|
| Indian-PII detection on every governed prompt / response / connector result (Rec 15 DPDP alignment) | sys_pii_pan, sys_pii_aadhaar in migrations; the platform india_pii_detector module covers 12 Indian identifier types (UPI, Aadhaar, PAN, IFSC, bank account, GSTIN, voter ID, driving licence, ration card, passport, Indian phone, pincode) | Same | Same + checksum-aware validation |
| Audit trail that survives a supervisory inspection (Rec 24) | Governed LLM + MCP paths + per-request audit records | Same | Same + 10-year retention (AuditRetentionDays=3650) |
| Human review on medium/high-risk actions (Rec 16, Rec 21 BCP HITL) | Policies can emit require_approval; no queue to act on them | HITL queue (24h default, 100 pending cap) | Production HITL queue + portal |
| Model fallback when the AI fails (Rec 21 "declare itself unavailable") | Policies can route traffic to fallback via tenant rules | Same | Same + /api/v1/rbi/killswitches first-class surfaces |
| AI system registry — Rec 23 | Not provided | Not provided | /api/v1/rbi/ai-systems with per-system records, owners, risk rating, dependencies |
| Model validation records — Rec 17 + 24 | Not provided | Not provided | /api/v1/rbi/validations with independent-reviewer approval transitions |
| Incident reporting — Rec 22 | Not provided | Not provided | /api/v1/rbi/incidents + /incidents/{id}/resolve |
| Board reports — Rec 14, 25 | Not provided | Not provided | /api/v1/rbi/reports with submit transitions for board-level disclosure |
| Audit exports for inspections | Raw audit log; build the export yourself | Same | /api/v1/rbi/audit-exports with /process + /download |
| RBI-readiness dashboard | Not provided | Not provided | /api/v1/rbi/dashboard |
What Community covers today
Community is enough to prototype and validate most FREE-AI-aligned governance controls before committing to the Enterprise module:
- audit logging with policy evaluation records
- system and tenant policy enforcement
- Indian-identifier detection on every governed prompt, response, and connector result (PAN and Aadhaar as dedicated `sys_pii_*` system policies; the full 12-type India detector in the `india_pii_detector` module)
- governed runtime boundaries for LLM and MCP traffic — reviewable and monitorable
That makes Community a strong fit for engineering teams building the first iteration of a banking or fintech AI system before a compliance-driven rollout.
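For intuition on what checksum-aware Indian-identifier validation involves: PAN follows the published five-letter/four-digit/one-letter pattern, and Aadhaar numbers carry a Verhoeff check digit. The sketch below is a standalone illustration of those public formats, not the `india_pii_detector` implementation:

```python
import re

PAN_RE = re.compile(r"^[A-Z]{5}[0-9]{4}[A-Z]$")  # e.g. ABCDE1234F

# Verhoeff dihedral-group tables (Aadhaar's check-digit scheme).
_D = [[0,1,2,3,4,5,6,7,8,9],[1,2,3,4,0,6,7,8,9,5],[2,3,4,0,1,7,8,9,5,6],
      [3,4,0,1,2,8,9,5,6,7],[4,0,1,2,3,9,5,6,7,8],[5,9,8,7,6,0,4,3,2,1],
      [6,5,9,8,7,1,0,4,3,2],[7,6,5,9,8,2,1,0,4,3],[8,7,6,5,9,3,2,1,0,4],
      [9,8,7,6,5,4,3,2,1,0]]
_P = [[0,1,2,3,4,5,6,7,8,9],[1,5,7,6,2,8,3,0,9,4],[5,8,0,3,7,9,6,1,4,2],
      [8,9,1,6,0,4,3,5,2,7],[9,4,5,3,1,2,6,8,7,0],[4,2,8,6,5,7,3,9,0,1],
      [2,7,9,3,8,0,6,4,1,5],[7,0,4,6,9,1,3,2,5,8]]

def verhoeff_valid(digits: str) -> bool:
    c = 0
    for i, ch in enumerate(reversed(digits)):
        c = _D[c][_P[i % 8][int(ch)]]
    return c == 0

def aadhaar_plausible(s: str) -> bool:
    # 12 digits, first digit 2-9, valid Verhoeff check digit.
    return bool(re.fullmatch(r"[2-9][0-9]{11}", s)) and verhoeff_valid(s)

def pan_plausible(s: str) -> bool:
    return bool(PAN_RE.match(s))
```

A format-plus-checksum check like this cuts false positives sharply compared with a bare 12-digit regex, which is why checksum awareness matters for detection quality.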
What Enterprise adds
The dedicated RBI module is Enterprise-only. The verified API surface:
- GET, POST /api/v1/rbi/ai-systems
- GET /api/v1/rbi/ai-systems/summary
- GET, PUT, DELETE /api/v1/rbi/ai-systems/{id}
- GET, POST /api/v1/rbi/validations
- GET, PUT /api/v1/rbi/validations/{id}
- GET, POST /api/v1/rbi/incidents
- GET, PUT /api/v1/rbi/incidents/{id}
- POST /api/v1/rbi/incidents/{id}/resolve
- GET, POST /api/v1/rbi/killswitches
- GET /api/v1/rbi/killswitches/{id}
- POST /api/v1/rbi/killswitches/{id}/deactivate
- GET, POST /api/v1/rbi/reports
- GET, PUT /api/v1/rbi/reports/{id}
- POST /api/v1/rbi/reports/{id}/submit
- GET, POST /api/v1/rbi/audit-exports
- GET, DELETE /api/v1/rbi/audit-exports/{id}
- POST /api/v1/rbi/audit-exports/{id}/process
- GET /api/v1/rbi/audit-exports/{id}/download
- GET /api/v1/rbi/policies/templates
- GET /api/v1/rbi/policies/templates/{id}
- GET /api/v1/rbi/policies/categories
- GET /api/v1/rbi/dashboard
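A minimal sketch of what registering a system against the /api/v1/rbi/ai-systems endpoint could look like. The endpoint path is from the surface above; the JSON field names are illustrative assumptions, not the documented schema:

```python
import json

def build_ai_system_record(name, use_case, risk_rating, owner, dependencies):
    """Assemble a hypothetical registry payload (field names are assumed)."""
    if risk_rating not in ("low", "medium", "high"):
        raise ValueError("FREE-AI risk classes are low/medium/high")
    return {
        "name": name,
        "useCase": use_case,
        "riskRating": risk_rating,     # Rec 14 risk classification
        "owner": owner,                # accountable owner for the registry
        "dependencies": dependencies,  # third-party / cloud / data (Rec 23)
    }

record = build_ai_system_record(
    name="banking-copilot",
    use_case="customer account servicing",
    risk_rating="high",
    owner="retail-digital@bank.example",
    dependencies=["llm-provider", "core-banking-db"],
)
# POST this body to /api/v1/rbi/ai-systems with your HTTP client of choice.
print(json.dumps(record, indent=2))
```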
Industry playbook
Commercial banks — customer-service + credit copilots
This is the concrete flow described above. Banks with large retail operations get the most value from the /ai-systems registry and the /reports board-disclosure surface. The /killswitches surface matters because the FREE-AI Rec 21 expectation — that a model can declare itself unavailable and trigger backup processes — requires a first-class switch, not a config flag buried in an application.
NBFCs — loan-origination and collections AI
For NBFCs (Non-Banking Financial Companies), the most material FREE-AI recommendations are Rec 14 (board-approved AI policy with risk classification), Rec 16 (autonomous decision framework — NBFCs often use AI to price loans, flag collection priority, and score creditworthiness), and Rec 22 (incident reporting — smaller NBFCs don't have mature incident frameworks; AxonFlow's /incidents workflow gives them the structured surface to adopt). Community policy enforcement + audit covers Rec 15 (data lifecycle, DPDP alignment). Enterprise covers the registry, validation, and reporting pieces that NBFCs otherwise build on spreadsheets.
Payment companies — transaction-monitoring + fraud AI
Payment-system operators (PPIs, payment aggregators, fintechs under the PA-PG framework) face Rec 20 (at-least-semi-annual red teaming) and Rec 22 (tiered incident reporting) most directly, because payment-system failures cascade quickly. The /killswitches and /incidents endpoints become part of the real operational runbook, not compliance theatre.
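The Rec 20 cadence (at least semi-annual, plus trigger-based exercises) reduces to simple scheduling logic. A sketch, where the 182-day window and the trigger-event names are assumptions chosen for illustration:

```python
from datetime import date, timedelta

# Trigger events named in Rec 20: major model updates, discovered
# vulnerabilities, regulatory change. The labels here are illustrative.
TRIGGERS = {"major_model_update", "vulnerability_found", "regulatory_change"}

def next_red_team_due(last_exercise: date) -> date:
    # "At least semi-annual": due no later than ~182 days after the last one.
    return last_exercise + timedelta(days=182)

def red_team_required(last_exercise: date, today: date, events: set) -> bool:
    # Due either on the calendar or immediately on any trigger event.
    return today >= next_red_team_due(last_exercise) or bool(events & TRIGGERS)
```

For example, an exercise last run on 1 January 2025 comes due in early July 2025, but a vulnerability found in March forces an immediate triggered exercise.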
Cooperative banks — internal research and analyst copilots
Smaller cooperative banks often start with internal-only AI use cases — research assistants, analyst copilots, operational knowledge systems. Community is usually sufficient for these (Rec 15 data-lifecycle alignment, Rec 23 simple internal inventory). The move to Enterprise tends to wait until the bank expands into customer-facing AI or starts being asked about board-reportable AI inventory.
Example upgrade trigger
A bank can start in Community to validate policy enforcement around customer prompts, internal copilots, and governed database access. The moment that same bank needs structured registry, incident, validation, or board-report workflows — whether driven by an internal audit, an RBI supervisory engagement, or a Master Direction operationalising FREE-AI — the Enterprise RBI module becomes the operationally credible path.
Related Docs
- Compliance Overview
- SEBI AI/ML Compliance — covers DPDP Act 2023 phased rollout that RBI FREE-AI Rec 15 points to
- MAS FEAT Compliance — the neighbouring jurisdiction's AI governance framework
- Banking Example
- PII Detection
- Human-in-the-Loop
- Assessing AxonFlow in Regulated Environments
- Enterprise RBI Guide
