Singapore MAS AI Guidelines: FEAT Principles, Veritas Framework & Technology Risk Management 2021
The Monetary Authority of Singapore (MAS) has established one of the world's most comprehensive regulatory frameworks for responsible AI use in financial services. The FEAT (Fairness, Ethics, Accountability, and Transparency) Principles (2018), the Veritas framework for responsible AI use (launched 2019), and the Technology Risk Management Guidelines (2021) together create structured AI governance expectations that apply to all MAS-regulated financial institutions. Singapore's approach — combining principles-based guidance with practical implementation toolkits — is increasingly used as a model by other regulators globally.
MAS FEAT Principles — Fairness, Ethics, Accountability and Transparency in AI (2019)
Published: November 2018
Scope: All financial institutions regulated by MAS using AI in customer-facing decisions
Key principles: Fairness — AI decisions should not systematically disadvantage protected groups; Ethics — AI should be consistent with broad societal values; Accountability — clear accountability for AI decisions must be established; Transparency — customers should understand how AI affects their financial products
Binding effect: FEAT principles are supervisory expectations — MAS examination teams assess compliance with FEAT in technology risk examinations of MAS-regulated entities
Veritas evolution: MAS's Veritas consortium released practical implementation frameworks for FEAT compliance in 2021 and 2022
Source: MAS FEAT Principles — mas.gov.sg
Regulatory Risks and Compliance Challenges
The MAS Veritas framework — developed through a consortium of MAS and financial industry participants — provides practical implementation guidance for FEAT compliance. Veritas Phase 1 (2021) focused on fairness assessment for credit scoring and customer marketing algorithms, providing a quantitative methodology for measuring algorithmic fairness that MAS-regulated institutions can apply to their AI systems. Veritas Phase 2 (2022) extended the framework to additional use cases, including fraud detection and customer communications.
MAS's Technology Risk Management Guidelines (TRM Guidelines, 2021 revision) establish baseline AI governance requirements for all MAS-regulated financial institutions. The TRM Guidelines require institutions to: establish AI governance policies and procedures; maintain a model risk management framework covering AI; conduct pre-deployment testing of AI systems; implement ongoing monitoring of AI performance; and ensure AI systems are auditable. Compliance is supervised through MAS's technology risk examination program.
Claire's AI Compliance Solution
Claire Platform Capabilities
FEAT Compliance Assessment
Claire's fairness assessment module implements the Veritas Phase 1 and Phase 2 quantitative fairness methodology — running demographic parity, equal opportunity, and counterfactual fairness tests on AI credit and marketing models against Singapore's protected characteristics, generating documentation of FEAT compliance for MAS examination.
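To illustrate what a counterfactual fairness test involves, the sketch below swaps the protected attribute on each applicant record and checks whether the approve/deny decision changes. The `score` function, the applicant fields, and the 0.6 threshold are hypothetical stand-ins for illustration only; they are not Claire or Veritas APIs.

```python
def score(applicant: dict) -> float:
    # Toy stand-in for a trained credit model. A counterfactually fair
    # model's score should not move when only the protected attribute changes.
    return 0.3 * applicant["income"] / 1000 + 0.5 * applicant["on_time_ratio"]

def counterfactual_flip_rate(applicants, protected_attr, values, threshold=0.6):
    """Share of applicants whose approve/deny decision changes when the
    protected attribute is swapped to the other value."""
    flips = 0
    for a in applicants:
        original = score(a) >= threshold
        twin = dict(a)  # counterfactual copy with the attribute flipped
        twin[protected_attr] = values[1] if a[protected_attr] == values[0] else values[0]
        if (score(twin) >= threshold) != original:
            flips += 1
    return flips / len(applicants)

applicants = [
    {"income": 1800, "on_time_ratio": 0.9, "group": "A"},
    {"income": 1200, "on_time_ratio": 0.4, "group": "B"},
]
print(counterfactual_flip_rate(applicants, "group", ("A", "B")))  # 0.0: score ignores "group"
```

A real assessment would run this over production-scale applicant data and treat any nonzero flip rate as a finding to investigate.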
MAS Technology Risk Examination Preparation
Claire's AI governance documentation module aligns with MAS TRM Guidelines requirements — maintaining model policies, procedures, risk assessments, and monitoring records in the format that MAS technology risk examiners request during examinations of supervised institutions.
Veritas Framework Implementation
Claire provides implementation support for the MAS Veritas quantitative fairness framework — integrating fairness testing into the AI model development lifecycle and generating the standardized fairness metrics that Veritas recommends for financial AI in Singapore.
Compliance Checklist
AI Regulatory Compliance Requirements
AI model risk management framework: Governance applied to all quantitative models including AI.
Independent model validation: Annual independent validation of material AI models.
Examination-ready documentation: AI governance documentation maintained for regulatory access within 48 hours.
Third-party AI vendor oversight: Documentation of oversight activities for all AI vendors.
Fair lending monitoring: Monthly disparate impact analysis of AI credit decisions.
Consumer protection review: AI customer-facing tools reviewed for compliance with MAS Fair Dealing expectations and consumer protection requirements.
Data quality governance: Training data quality documented and reviewed for AI models.
Audit trail maintenance: Immutable audit trail of all AI decisions affecting consumers or regulatory obligations.
Board AI risk reporting: Quarterly AI risk reporting to board covering model performance and regulatory developments.
Incident response for AI failures: Written incident response plan for AI model failures with regulator notification protocols.
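One common way to satisfy an "immutable audit trail" requirement in software is to hash-chain log entries, so that any retroactive edit invalidates every subsequent hash. The sketch below is a minimal illustration of that pattern, not a mechanism MAS prescribes; the field names and record contents are assumptions.

```python
import hashlib
import json

class AuditTrail:
    """Append-only log where each entry's hash covers the previous entry's
    hash, making retroactive edits detectable on verification."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        # Recompute every hash from the genesis value; any mismatch means tampering.
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            if e["prev"] != prev or e["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.append({"model": "credit_v3", "decision": "approve", "customer": "C-1042"})
trail.append({"model": "credit_v3", "decision": "deny", "customer": "C-1043"})
print(trail.verify())  # True on an untampered chain
trail.entries[0]["record"]["decision"] = "deny"
print(trail.verify())  # False after tampering
```

Production systems would additionally persist the chain to write-once storage, but the verification logic is the same idea.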
Frequently Asked Questions
Are MAS FEAT Principles legally binding?
The MAS FEAT Principles are supervisory expectations rather than legally binding regulations. However, MAS incorporates FEAT compliance into its technology risk examinations of supervised financial institutions. Non-compliance with FEAT principles can result in supervisory findings, requirements for remediation, and enhanced supervisory scrutiny. The practical effect for MAS-supervised institutions is that FEAT compliance is effectively required.
What is the MAS Veritas framework?
MAS Veritas is an industry framework developed by MAS and financial industry participants to provide practical implementation guidance for the FEAT Principles. Veritas Phase 1 (2021) provides a quantitative methodology for measuring algorithmic fairness in credit scoring and marketing algorithms. Phase 2 (2022) extended the framework to additional use cases, including fraud detection. The Veritas toolkit is open-source and available on GitHub, allowing institutions to implement standardized fairness metrics for MAS examination purposes.
How do MAS Technology Risk Management Guidelines apply to AI?
The MAS TRM Guidelines (2021 revision) require all MAS-regulated entities to implement technology risk management frameworks that cover AI. Key AI-relevant requirements include: AI governance policies and procedures; pre-deployment testing of AI systems; ongoing performance monitoring; incident management for AI failures; and audit trails for material AI decisions. The Guidelines apply to AI systems developed internally and those provided by third-party vendors.
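Ongoing performance monitoring typically includes a drift check comparing live model scores against a baseline captured at validation time. The sketch below computes the population stability index (PSI), a metric commonly used for this; note that neither PSI itself nor the 0.25 threshold is mandated by the TRM Guidelines. Both are illustrative assumptions.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline ('expected') score distribution and a live
    ('actual') one. Readings above roughly 0.25 are often treated as
    material drift, but the threshold is an institution's own choice."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bin_shares(scores):
        counts = [0] * bins
        for s in scores:
            counts[min(int((s - lo) / width), bins - 1)] += 1
        # Floor empty bins so the log term stays defined.
        return [max(c / len(scores), 1e-4) for c in counts]

    e, a = bin_shares(expected), bin_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]       # validation-time scores
live = [min(s + 0.3, 0.99) for s in baseline]  # live scores, shifted upward
print(population_stability_index(baseline, baseline))  # 0.0: identical distributions
print(population_stability_index(baseline, live) > 0.25)  # True: drift flagged
```

A monitoring job would run a check like this on a schedule and route breaches into the incident-management process the Guidelines call for.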
How does MAS's AI governance framework compare to EU approaches?
MAS's approach is more principles-based and collaborative than the EU AI Act's prescriptive risk-classification framework. MAS uses a combination of supervisory principles (FEAT), practical implementation toolkits (Veritas), and technology risk guidelines (TRM) rather than legally binding categorical requirements. The practical outcomes are nonetheless similar — financial institutions in Singapore must document AI fairness testing, maintain audit trails, and ensure AI explainability, much as Articles 13 and 14 of the EU AI Act require for high-risk AI systems.
What AI governance does MAS require for credit scoring?
MAS FEAT Principles and the Veritas framework specifically address credit scoring AI. MAS expects credit-scoring AI to be tested for demographic fairness — ensuring that protected groups are not systematically disadvantaged. The Veritas Phase 1 toolkit provides specific fairness metrics (demographic parity ratio, equal opportunity difference) that MAS considers appropriate for credit scoring. Institutions should implement these metrics as part of their pre-deployment and ongoing monitoring programs for credit AI.
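As a concrete illustration of those two Phase 1 metrics, the sketch below computes a demographic parity ratio and an equal opportunity difference from plain Python lists. The function names, toy data, and any threshold an institution would apply are illustrative assumptions; the open-source Veritas toolkit provides the reference implementation.

```python
def demographic_parity_ratio(decisions, groups, favored="approve"):
    """Approval rate of the least-favored group divided by that of the
    most-favored group; 1.0 means identical approval rates."""
    rates = {}
    for g in set(groups):
        members = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(decisions[i] == favored for i in members) / len(members)
    return min(rates.values()) / max(rates.values())

def equal_opportunity_difference(decisions, groups, labels, favored="approve", good=1):
    """Gap in approval rates among actually creditworthy applicants
    (true-positive rates) across groups; 0.0 means parity. Assumes every
    group contains at least one creditworthy record."""
    tprs = {}
    for g in set(groups):
        members = [i for i, grp in enumerate(groups) if grp == g and labels[i] == good]
        tprs[g] = sum(decisions[i] == favored for i in members) / len(members)
    return max(tprs.values()) - min(tprs.values())

groups = ["A", "A", "B", "B"]
decisions = ["approve", "deny", "approve", "approve"]
labels = [1, 1, 1, 1]  # all applicants were in fact creditworthy
print(demographic_parity_ratio(decisions, groups))              # 0.5
print(equal_opportunity_difference(decisions, groups, labels))  # 0.5
```

Wiring metrics like these into pre-deployment testing and ongoing monitoring is what turns the FEAT fairness expectation into something examinable.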
Related: Finance AI Overview | AI Model Risk Management | Regulatory Compliance