MAS FEAT Compliance
Singapore's financial-services regulator, the Monetary Authority of Singapore (MAS), has layered its AI governance expectations over three instruments:
- the FEAT Principles (Information Paper on Principles to Promote Fairness, Ethics, Accountability and Transparency in the Use of Artificial Intelligence and Data Analytics in Singapore's Financial Sector, published 12 November 2018)
- the Veritas Initiative (2019-2023), an industry consortium that produced the open-source Veritas Toolkit 2.0 and five assessment methodologies across the FEAT pillars
- the draft AI Risk Management Guidelines, issued for public consultation on 13 November 2025 with the comment window closing 31 January 2026. Check MAS publications for finalisation status and the post-finalisation transition period.
FEAT Principles and the Veritas toolkits are principles-based guidance — not directly enforceable — but MAS evaluates firms against them during supervisory engagements. Once finalised, the AI Risk Management Guidelines will move these expectations onto a firmer supervisory footing for all MAS-licensed financial institutions, including branches and subsidiaries of foreign entities.
The FEAT Principles at a glance
MAS organises FEAT into four pillars covering 14 numbered principles in total (the original paper splits Transparency into proactive and on-request disclosure). The most operationally material for AI runtime controls are summarised here.
| Pillar | Principles | What it means for a runtime system |
|---|---|---|
| Fairness | 1–4 | AIDA-driven decisions must not systematically disadvantage individuals or groups unless justified; personal attributes used as inputs must be justified; accuracy and bias are reviewed regularly. |
| Ethics | 5–6 | AIDA must align with the firm's ethical standards; AI-driven decisions are held to at least the same ethical standards as human decisions. |
| Accountability | 7–11 | Internal approval authority named for each AIDA use; firms are accountable for in-house and third-party models; board and management awareness; data-subject channels for enquiry, appeal, and review. |
| Transparency | 12–14 | Proactive disclosure that AIDA is in use; clear on-request explanations of data and decisions affecting a data subject. |
Primary source: MAS Information Paper on FEAT (November 2018).
A concrete example: a Singapore retail bank customer-service copilot
Here is how a Singapore retail bank usually lands on AxonFlow for a FEAT-aligned deployment. The bank wants to roll out an AI-powered customer-service copilot that answers questions about accounts, flags suspicious transactions, and drafts replies to customer messages.
What the bank has to prove against FEAT:
- Fairness (Principle 2): when NRIC or FIN is used as an input (to pull account context), that use is justified and not spilling into the decision surface.
- Fairness (Principle 3): the model is reviewed regularly for unintentional bias in how it flags transactions or prioritises messages.
- Ethics (Principle 6): AI-drafted replies are held to the same conduct standards as human agents (no mis-selling, no tonal slippage).
- Accountability (Principle 8): the bank is on the hook for the model even if it's a third-party foundation model — the vendor SLA does not transfer MAS liability.
- Accountability (Principle 10): a customer whose claim was flagged by the copilot has a channel to enquire and appeal.
- Transparency (Principle 12): the customer is told, up front, that the reply they are reading was drafted with AI.
How that maps to AxonFlow tiers:
| Need | Community | Evaluation | Enterprise |
|---|---|---|---|
| NRIC / FIN detection on inbound messages + connector data | 5 Singapore system policies (sys_pii_singapore_nric, sys_pii_singapore_fin, sys_pii_singapore_uen, sys_pii_singapore_phone, sys_pii_singapore_postal) — pattern-based | Same | Same policies + checksum validation on NRIC / FIN / UEN |
| Audit of every AI-drafted reply (for Principle 6 review) | Governed LLM + MCP paths with full audit trail | Same | Same |
| Human review on material messages before they go out (Principle 7) | Policies can emit require_approval decisions but there is no queue to act on them | HITL approval queue (24-hour expiry, 100 pending cap) | Production HITL queue with escalation + portal |
| Regular bias + accuracy review (Principle 3) | Raw audit logs; team builds its own review pipeline | Same + policy simulation to test changes before go-live | Simulation + production-grade reporting |
| AI system inventory — who owns which model (Principle 9; also proposed in the draft AI Risk Management Guidelines) | Not provided — track externally | Not provided | /api/v1/masfeat/registry module with per-system records, owners, dependencies |
| FEAT assessment workflow (structured per Veritas Toolkit 2.0) | Not provided | Not provided | /api/v1/masfeat/assessments with submit/approve/reject transitions |
| Kill-switch for a misbehaving AI system | Not provided at module level | Not provided | /api/v1/masfeat/killswitch/{system_id} with configure/trigger/restore/history |
| Accountable officer can disclose AI use to the customer (Principle 12) | Transparency headers on governed responses | Same | Same + portal-managed disclosures |
The useful part of the split is that the bank can stand up the copilot on Community, prove the governance boundary is real with actual Singapore data, and only move into Evaluation or Enterprise when it has a concrete regulatory or operational reason — not on vendor pressure.
What Community covers today
Community ships with the 5 Singapore-specific system policies above, applied automatically on every governed LLM and MCP request:
sys_pii_singapore_nric # Singapore National Registration Identity Card
sys_pii_singapore_fin # Foreign Identification Number
sys_pii_singapore_uen # Unique Entity Number (businesses)
sys_pii_singapore_phone # Singapore phone numbers
sys_pii_singapore_postal # Singapore postal codes
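To make the pattern-based behaviour concrete, here is a minimal sketch of what detection-and-redaction of Singapore identifiers looks like. The regexes and function name are illustrative assumptions — AxonFlow's actual sys_pii_singapore_* policy internals are not shown in this document.

```python
import re

# Illustrative patterns only. NRIC/FIN format: a prefix letter (S/T for NRIC,
# F/G/M for FIN), 7 digits, then one checksum letter.
PATTERNS = {
    "sys_pii_singapore_nric": re.compile(r"\b[ST]\d{7}[A-Z]\b"),
    "sys_pii_singapore_fin": re.compile(r"\b[FGM]\d{7}[A-Z]\b"),
    "sys_pii_singapore_postal": re.compile(r"\b\d{6}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a policy-tagged placeholder."""
    for policy, pattern in PATTERNS.items():
        text = pattern.sub(f"[{policy}]", text)
    return text

print(redact("Customer S1234567D reported a failed transfer."))
# -> Customer [sys_pii_singapore_nric] reported a failed transfer.
```

Pure pattern matching like this will flag any S+7-digits+letter string, which is why the Enterprise checksum validation described later exists: it cuts false positives on production traffic.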
Combined with policy enforcement, audit logging, and governed LLM + MCP execution, Community is enough to stand up a MAS-relevant copilot, chatbot, or research assistant and prove controls work on real Singapore data before committing to a licensed deployment.
What Evaluation adds
The Evaluation tier unlocks the operational surfaces that Principle 7 (named approval authority) and the draft Guidelines (human oversight on material decisions) effectively require when a use case is material:
- HITL approval queue — decisions with require_approval enter a queue with a 24-hour default expiry and a 100-pending cap; reviewers act via API
- Policy simulation — test a new tenant policy against recent traffic before it touches production
- Evidence export (with Evaluation-tier limits)
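The queue semantics above (24-hour expiry, 100-pending cap, fail-closed on expiry) can be modelled locally. This is a toy model for reasoning about behaviour, not the AxonFlow API — the class and method names are invented for illustration.

```python
MAX_PENDING = 100              # Evaluation-tier pending cap (from the docs above)
DEFAULT_EXPIRY_S = 24 * 3600   # 24-hour default expiry

class ApprovalQueue:
    """Toy model of HITL queue semantics; not the real implementation."""

    def __init__(self):
        self.pending = {}  # request_id -> enqueue timestamp (seconds)

    def enqueue(self, request_id: str, now: float) -> str:
        self._expire(now)
        if len(self.pending) >= MAX_PENDING:
            return "rejected:queue_full"   # caller must deny or retry later
        self.pending[request_id] = now
        return "pending"

    def decide(self, request_id: str, approve: bool, now: float) -> str:
        self._expire(now)
        if request_id not in self.pending:
            return "expired"               # expired items fail closed
        del self.pending[request_id]
        return "approved" if approve else "denied"

    def _expire(self, now: float):
        self.pending = {k: t for k, t in self.pending.items()
                        if now - t < DEFAULT_EXPIRY_S}
```

The design point worth noticing: an expired item cannot be approved after the fact, so a require_approval decision that nobody reviews in time defaults to blocked rather than allowed.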
At the Evaluation tier, the bank's AI risk team can run the copilot's pilot phase end-to-end — including the human-review step — without yet buying production operations.
What Enterprise adds
The MAS FEAT module itself is Enterprise-only. Verified API surface:
GET, POST /api/v1/masfeat/registry
GET /api/v1/masfeat/registry/summary
GET, PUT, DELETE /api/v1/masfeat/registry/{id}
GET, POST /api/v1/masfeat/assessments
GET, PUT /api/v1/masfeat/assessments/{id}
POST /api/v1/masfeat/assessments/{id}/submit
POST /api/v1/masfeat/assessments/{id}/approve
POST /api/v1/masfeat/assessments/{id}/reject
GET /api/v1/masfeat/killswitch/{system_id}
POST /api/v1/masfeat/killswitch/{system_id}/configure
POST /api/v1/masfeat/killswitch/{system_id}/trigger
POST /api/v1/masfeat/killswitch/{system_id}/restore
GET /api/v1/masfeat/killswitch/{system_id}/history
Plus checksum-aware validation paths for NRIC, FIN, and UEN — useful when the pattern-based detection on Community is catching too many false positives on production traffic.
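For context on why checksum validation helps, here is the widely circulated community reconstruction of the NRIC/FIN check-letter scheme. It is not officially published by the Singapore government, it omits the newer M-prefix FIN series, and it is not AxonFlow's code — treat it as an illustration of how a checksum pass rejects strings that merely look like an NRIC.

```python
# Community-documented NRIC/FIN checksum scheme (unofficial; S/T/F/G only).
WEIGHTS = (2, 7, 6, 5, 4, 3, 2)
ST_LETTERS = "JZIHGFEDCBA"   # check-letter table for S/T prefixes
FG_LETTERS = "XWUTRQPNMLK"   # check-letter table for F/G prefixes

def nric_checksum_ok(nric: str) -> bool:
    """Return True if a 9-char S/T/F/G identifier has a valid check letter."""
    if len(nric) != 9 or nric[0] not in "STFG" or not nric[1:8].isdigit():
        return False
    total = sum(w * int(d) for w, d in zip(WEIGHTS, nric[1:8]))
    if nric[0] in "TG":      # T- and G-series add a fixed offset of 4
        total += 4
    table = ST_LETTERS if nric[0] in "ST" else FG_LETTERS
    return table[total % 11] == nric[8]

print(nric_checksum_ok("S1234567D"))  # structurally valid example -> True
print(nric_checksum_ok("S1234567A"))  # right shape, wrong check letter -> False
```

A pattern match plus a checksum pass is a much tighter filter than the pattern alone, which is the practical difference between Community and Enterprise detection described in the table above.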
Industry playbook
Different MAS-licensed firms land on different bits of this first:
Banking — customer-service + fraud-triage copilot
Concrete flow: customer message arrives → Agent scans for NRIC / FIN / UEN → policy engine evaluates against tenant-defined fairness rules → draft reply generated via governed LLM call → if the action modifies accounts or communicates a decision, Evaluation-tier HITL queue routes the draft to a human agent → every step lands in the audit trail. At Enterprise, each deployed system appears in the MAS FEAT registry with an accountable owner, and a kill-switch is wired to the anomaly-detection pipeline.
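The flow above can be sketched as a pipeline. Every function here is a stub standing in for a governed AxonFlow call — none of these names are the product's SDK surface, and the redaction regex is a simplification.

```python
import re

# Simplified identifier pattern; real detection is policy-driven.
NRIC_RE = re.compile(r"\b[STFG]\d{7}[A-Z]\b")

def handle_message(message: str, modifies_account: bool) -> dict:
    step_log = []                                   # stands in for the audit trail
    redacted = NRIC_RE.sub("[NRIC/FIN]", message)   # 1. scan inbound message
    step_log.append("pii_scan")
    step_log.append("policy_eval")                  # 2. tenant fairness rules (stubbed)
    draft = f"Draft reply re: {redacted}"           # 3. governed LLM call (stubbed)
    step_log.append("llm_draft")
    if modifies_account:                            # 4. material actions go to a human
        step_log.append("hitl_queue")
        status = "pending_human_review"
    else:
        status = "sent"
    return {"draft": draft, "status": status, "audit": step_log}

result = handle_message("Why was S1234567D's transfer blocked?",
                        modifies_account=True)
```

The invariant the sketch encodes: the identifier never reaches the drafting step unredacted, and anything that touches an account exits via the review queue, with every step logged.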
Insurance — claims-triage and underwriting assistance
For insurers, FEAT Principles 1 and 2 bite hardest: a model that scores claim validity cannot use protected attributes as inputs unless the use is justified. AxonFlow Community handles the detection-and-redaction side on inbound claim text; the Enterprise FEAT assessment workflow (/api/v1/masfeat/assessments) is where the underwriting team records its Principle 2 justification and the Principle 3 accuracy + bias review on each model version.
Capital markets — trading analytics and compliance surveillance
Surveillance workflows are high-autonomy (Principles 7–9) and often touch customer order data. Community gives you the governance boundary; Enterprise adds the FEAT registry entry the compliance team will be asked to produce during a MAS Technology Risk Management (TRM) inspection.
Payments — transaction-monitoring AI
Same shape as banking, narrower data set. The kill-switch API matters most here — when a model starts misclassifying at scale, the on-call team needs a single API call to disable it and route to a deterministic fallback. /api/v1/masfeat/killswitch/{system_id}/trigger is that call.
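What that single call might look like from an on-call runbook: the path comes from the module's documented API surface above, but the base URL, bearer-token auth, and JSON body shape are assumptions for illustration. The sketch only constructs the request; it does not send it.

```python
import json
import urllib.request

def build_killswitch_trigger(base_url: str, system_id: str, token: str,
                             reason: str) -> urllib.request.Request:
    """Construct (but do not send) the kill-switch trigger request."""
    url = f"{base_url}/api/v1/masfeat/killswitch/{system_id}/trigger"
    body = json.dumps({"reason": reason}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )

req = build_killswitch_trigger("https://axonflow.example.com",
                               "fraud-model-v3", "TOKEN",
                               "misclassification spike")
```

Pairing this with the /restore and /history endpoints gives the on-call team a disable-investigate-reenable loop with an audit record at each step.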
Engineering perspective
Most teams that survey the MAS FEAT space end up in roughly the same place: Community is a genuine starting point (not a teaser), because the 5 Singapore system policies + governed execution + audit trail are enough to prototype a compliant copilot. The moment the use case crosses into "material" — medium or high on MAS's risk framing — you need at least the Evaluation tier for HITL and simulation, and often Enterprise for the registry, assessment, and kill-switch surfaces that become supervisory expectations once the draft Guidelines are finalised (consultation closes January 2026; check MAS publications for the final timeline).
The most common mistake is trying to build the registry + assessment + kill-switch surfaces in-house on top of Community. Community is a great foundation, but those three surfaces are where AxonFlow's Enterprise module actually earns its keep: they are what a MAS inspector asks for, and they are tedious to rebuild correctly.
