EU AI Act Impact on FinTech: Credit Scoring as High-Risk AI, Conformity Assessments, and the February 2025 Prohibition Deadline
The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024. Its prohibition provisions became effective February 2, 2025. For FinTech firms operating in or serving EU markets, the Act is not a future consideration — it is current law. Credit scoring AI, automated customer onboarding systems, and AI-powered lending decisions are classified as high-risk AI systems under Annex III of the Act, triggering conformity assessment requirements, extensive documentation obligations, and human oversight mandates that most FinTech AI architectures were not designed to satisfy.
EU AI Act: Legislative Milestone Dates
Regulation: EU AI Act — Regulation (EU) 2024/1689 of the European Parliament and of the Council
Published: Official Journal of the European Union, July 12, 2024
Entered into force: August 1, 2024
Prohibitions effective: February 2, 2025
High-risk AI obligations effective: August 2, 2026
Official source: EUR-Lex Regulation 2024/1689 — eur-lex.europa.eu
1. Credit Scoring and Automated Onboarding as High-Risk AI Under Annex III
The EU AI Act’s high-risk classification framework is set out in Article 6 and Annex III. The classification operates on two levels: AI systems used in products covered by EU product safety legislation (Article 6(1)); and AI systems listed in Annex III as high-risk by function (Article 6(2)).
For FinTech firms, the critical Annex III classification is Point 5(b): AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with an exception for AI systems used to detect financial fraud. This entry is not limited to public authorities. As Recital 58 confirms, it reaches creditworthiness assessment generally, making AI credit scoring by private lenders, FinTechs, and Buy Now Pay Later providers subject to high-risk designation. Annex III Point 4 (AI systems in employment, workers management, and access to self-employment) is also relevant for any FinTech using AI in recruitment or workforce management.
Additionally, Annex III Point 1(b) classifies as high-risk AI systems used for biometric categorisation according to sensitive or protected attributes, which is relevant for any FinTech using AI to classify customers on the basis of biometric data. Point 2 (AI used as safety components in the management and operation of critical infrastructure, including critical digital infrastructure) may apply to payment infrastructure providers.
2. Prohibitions Effective February 2025: What FinTechs Must Have Already Stopped
The AI Act’s prohibition provisions in Article 5 became effective on February 2, 2025. These prohibitions apply regardless of whether a firm’s AI system is otherwise high-risk. For FinTech firms, the most relevant prohibitions are:
Social Scoring Prohibition (Article 5(1)(c))
The Act prohibits AI systems that evaluate or classify natural persons based on their social behaviour or personal characteristics over a period of time, producing social scores that lead to detrimental treatment of those persons. For FinTechs, this prohibition is most directly relevant to alternative data credit scoring models that use social media activity, lifestyle data, or social network characteristics as credit risk proxies. A credit model that incorporates social media sentiment, peer group financial behaviour, or platform usage patterns as features may constitute a prohibited social scoring system.
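A first-pass screen for such features can be automated. The sketch below is a naive keyword scan over a credit model's feature names; the category names and keywords are illustrative assumptions, not legal criteria, and any flagged feature still requires legal review under Article 5(1)(c).

```python
# Illustrative sketch: flag credit-model features that may fall under the
# Article 5(1)(c) social scoring prohibition. Categories and keywords are
# assumptions for illustration only; flagged items need legal assessment.

SUSPECT_CATEGORIES = {
    "social_behaviour": ["social_media", "sentiment", "follower", "post_frequency"],
    "social_network": ["peer_group", "contact_graph", "network_degree"],
    "lifestyle": ["app_usage", "shopping_pattern", "leisure"],
}

def screen_features(feature_names):
    """Return {feature: [matched categories]} for human legal review."""
    flagged = {}
    for name in feature_names:
        for category, keywords in SUSPECT_CATEGORIES.items():
            if any(kw in name.lower() for kw in keywords):
                flagged.setdefault(name, []).append(category)
    return flagged

features = ["income_verified", "social_media_sentiment_score", "peer_group_default_rate"]
print(screen_features(features))
```

A keyword scan of this kind only catches obviously named features; engineered features with neutral names need lineage review back to their source data.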
Emotion Recognition in the Workplace and Education Prohibition (Article 5(1)(f))
The Act prohibits AI systems that infer the emotions of natural persons in the workplace and in education institutions, except where the system is intended for medical or safety reasons. For FinTechs operating AI-powered hiring systems, this prohibition applies immediately to any system that analyses candidate emotional states during recruitment processes.
Real-Time Remote Biometric Identification in Public Spaces (Article 5(1)(h))
While primarily aimed at law enforcement, this prohibition may have implications for FinTechs using real-time facial recognition in physical branch or service environments.
3. Article 9 Risk Management System Requirements
For high-risk AI systems, Article 9 requires providers to establish, implement, document, and maintain a risk management system throughout the AI system’s entire lifecycle. This risk management system must:
- Identify and analyse known and reasonably foreseeable risks: For credit scoring AI, this includes model discrimination risks, data quality risks, training distribution shift risks, adversarial manipulation risks, and the risk of systematic harm to vulnerable consumer groups.
- Estimate and evaluate risks that may emerge when the high-risk AI system is used: This requires scenario analysis of how the model’s outputs could cause harm in realistic deployment conditions, including edge cases and distribution shift scenarios.
- Evaluate other possible risks based on the analysis of data gathered from the post-market monitoring system: Article 9 creates an ongoing monitoring obligation — the risk management system must be updated as post-market performance data reveals risks not anticipated at deployment.
- Adopt appropriate and targeted risk management measures: For each identified risk, the system must document the specific control measure adopted and the basis for assessing that measure as adequate.
For FinTech credit scoring AI, the Article 9 risk management system requirement creates a documentation and governance discipline that is significantly more demanding than standard model risk management frameworks. The requirement to document identified risks, the measures adopted to address them, and the ongoing monitoring that reveals new risks creates a compliance record that both national competent authorities (NCAs) and the European AI Office can inspect.
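One way to make the "live system" character of Article 9 concrete is to model the risk register as a data structure rather than a static document. The sketch below is a minimal illustration under assumed field names and statuses; nothing in it is prescribed by the Act.

```python
from dataclasses import dataclass
from datetime import date

# Minimal sketch of an Article 9-style lifecycle risk register.
# Field names, sources, and values are illustrative assumptions.

@dataclass
class RiskEntry:
    risk_id: str
    description: str     # e.g. "disparate impact on a protected group"
    source: str          # "design_review" or "post_market_monitoring"
    mitigation: str      # the adopted risk management measure
    adequacy_basis: str  # why the measure is judged adequate
    last_reviewed: date

class RiskRegister:
    def __init__(self):
        self._entries: dict = {}

    def record(self, entry: RiskEntry) -> None:
        """Add or update an entry; the register is updated, never frozen."""
        self._entries[entry.risk_id] = entry

    def from_post_market(self):
        """Risks surfaced after deployment (the Article 9 update loop)."""
        return [e for e in self._entries.values()
                if e.source == "post_market_monitoring"]

register = RiskRegister()
register.record(RiskEntry("R-001", "distribution shift vs training data",
                          "post_market_monitoring", "quarterly recalibration",
                          "backtest shows stable discrimination after recalibration",
                          date(2025, 3, 1)))
print(len(register.from_post_market()))
```

The point of the structure is the feedback loop: entries sourced from post-market monitoring are first-class records alongside design-time risks, so the register can evidence continuous maintenance to an inspecting authority.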
4. Article 13 Transparency and Article 14 Human Oversight Requirements
Article 13 of the EU AI Act requires that high-risk AI systems are designed and developed in such a way as to ensure their operation is sufficiently transparent to enable deployers to interpret the system’s output and use it appropriately. For credit scoring AI used by financial institutions as deployers, this creates a dual-layer transparency obligation:
First, the provider (the FinTech building the AI system) must design the system to be interpretable by deployers. Black-box models that cannot explain their decisional logic to the financial institution using them do not meet this requirement — the model must support meaningful human interpretation of its outputs.
Second, the deployer (the financial institution using the AI to make credit decisions) must be able to interpret the outputs and use them appropriately. This means deployers must have access to adequate documentation to understand the model’s logic, limitations, and appropriate use context — and must implement processes that use the AI output as an input to human decision-making, not as a replacement for it.
Article 14 requires that high-risk AI systems are designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period the AI system is in use. The deployer-side counterpart is Article 26(2), which requires deployers to assign human oversight to natural persons who have the necessary competence, training, and authority, and to give them the necessary support.
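In practice, oversight means the AI score is an input to a named human reviewer's decision, not the decision itself, and that overrides are recorded. The sketch below illustrates one possible deployer-side oversight gate; the threshold, roles, and record fields are assumptions, not requirements from the Act.

```python
from dataclasses import dataclass

# Hedged sketch of a deployer-side oversight record: a named natural
# person makes the final call, and divergence from the model is logged.
# The 0.6 approval threshold is a purely illustrative assumption.

@dataclass
class OversightDecision:
    applicant_id: str
    model_score: float   # AI recommendation (higher = lower credit risk)
    reviewer: str        # named natural person assigned oversight
    final_decision: str  # "approve" or "decline"
    override: bool       # True when the human diverged from the model
    rationale: str

def review(applicant_id, model_score, reviewer, human_decision, rationale=""):
    model_decision = "approve" if model_score >= 0.6 else "decline"
    return OversightDecision(applicant_id, model_score, reviewer,
                             human_decision,
                             override=(human_decision != model_decision),
                             rationale=rationale)

d = review("A-42", 0.55, "jane.doe", "approve",
           rationale="thin file; verified income offsets low score")
print(d.override)
```

Recording the rationale for every override builds the evidence that oversight is exercised in substance, not merely designated on paper.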
5. Conformity Assessment Requirements Under Article 43 and Annex VI
High-risk AI systems listed in Annex III must undergo conformity assessment before being placed on the market or put into service. For most FinTech credit scoring AI (systems not covered by specific EU sectoral legislation), Article 43 permits the provider to conduct the conformity assessment itself under the internal control procedure in Annex VI: a self-assessment against the requirements of Chapter III, Section 2 of the AI Act.
The conformity assessment requires the provider to verify that their AI system complies with:
- Article 9 (Risk management system)
- Article 10 (Data and data governance)
- Article 11 (Technical documentation)
- Article 12 (Record-keeping)
- Article 13 (Transparency and provision of information to deployers)
- Article 14 (Human oversight)
- Article 15 (Accuracy, robustness, and cybersecurity)
The conformity assessment documentation must be maintained for 10 years after the AI system is placed on the market. This is a long retention obligation that requires systematic document management — not the informal documentation practices that characterise many FinTech AI development processes.
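The article-by-article verification above lends itself to a simple tracking structure. The sketch below maps each Chapter III, Section 2 requirement to a status and surfaces what remains outstanding before market placement; the status values are illustrative assumptions.

```python
# Illustrative sketch: tracking the Annex VI self-assessment against the
# Chapter III, Section 2 requirements. Status labels are assumptions.

REQUIREMENTS = {
    "Art. 9": "Risk management system",
    "Art. 10": "Data and data governance",
    "Art. 11": "Technical documentation",
    "Art. 12": "Record-keeping",
    "Art. 13": "Transparency and provision of information to deployers",
    "Art. 14": "Human oversight",
    "Art. 15": "Accuracy, robustness, and cybersecurity",
}

def outstanding(status_by_article):
    """Requirements not yet verified as compliant before market placement."""
    return sorted(a for a in REQUIREMENTS
                  if status_by_article.get(a) != "compliant")

status = {"Art. 9": "compliant", "Art. 10": "compliant",
          "Art. 13": "gap identified"}
print(outstanding(status))
```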
6. 12-Item EU AI Act FinTech Compliance Checklist
EU AI Act High-Risk AI Compliance Checklist — FinTech Credit and Onboarding Systems
High-risk classification assessment: Conduct a documented classification assessment for each AI system you develop or deploy in EU financial services. Determine whether each system falls within Annex III categories, particularly Point 5(b) (creditworthiness and credit scoring), Point 1 (biometrics), and Point 2 (critical infrastructure). Document the assessment rationale for systems that are determined not to be high-risk.
Article 5 social scoring audit: Review all credit scoring models using alternative data against the Article 5(1)(c) social scoring prohibition, effective February 2, 2025. Identify any features derived from social behaviour, social network characteristics, or lifestyle data inference. Obtain legal assessment of whether these features constitute prohibited social scoring inputs. Because the February 2, 2025 deadline has passed, remove or substitute any prohibited features immediately.
Article 9 risk management system documentation: Implement and document a lifecycle risk management system for each high-risk AI system. The system must identify known and foreseeable risks, document risk assessment methodologies, record adopted mitigation measures, and be updated with findings from post-market monitoring. This must be a live system — not a one-time assessment document.
Article 10 data governance documentation: Document the data governance practices applied to training, validation, and testing datasets. For credit scoring AI, this must include data provenance, data quality assessment methodology, representativeness assessment of training data against the intended deployment population, and documentation of data minimisation measures applied.
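One piece of that documentation, the representativeness assessment, can be computed directly. The sketch below compares category shares in training data against the intended deployment population; the 5-point tolerance is an illustrative assumption, not a threshold from the Act.

```python
from collections import Counter

# Sketch of a representativeness check for Article 10 documentation:
# compare group shares in the training data against the intended
# deployment population. Tolerance is an assumed illustrative value.

def representativeness_gaps(training_labels, deployment_shares, tolerance=0.05):
    """Return {group: observed - expected} where the gap exceeds tolerance."""
    n = len(training_labels)
    counts = Counter(training_labels)
    gaps = {}
    for group, expected in deployment_shares.items():
        observed = counts.get(group, 0) / n
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

training = ["18-34"] * 70 + ["35-54"] * 20 + ["55+"] * 10
deployment = {"18-34": 0.40, "35-54": 0.35, "55+": 0.25}
print(representativeness_gaps(training, deployment))
```

The output here flags an over-representation of younger applicants in training data relative to the deployment population, exactly the kind of gap the Article 10 documentation should record together with the mitigation applied.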
Article 11 technical documentation preparation: Prepare and maintain the technical documentation required by Annex IV (referenced by Article 11). This documentation — a comprehensive technical description of the AI system — must be available to national competent authorities on request. Assess whether your current model documentation meets the Annex IV requirements and identify gaps.
Article 13 transparency implementation for deployers: If you are a provider (building the AI system), verify that the system’s design supports meaningful interpretation by deployers. Prepare deployer instructions that explain the model’s logic, known limitations, intended use context, and appropriate oversight procedures in terms that non-technical financial services professionals can act on.
Article 14 human oversight role designation: If you are a deployer (using a credit scoring AI in credit decisions), designate a named individual with the competency, training, and authority to exercise effective human oversight of the AI system. Document this designation, the individual’s qualifications, the oversight activities they perform, and the process by which they exercise their authority to override AI recommendations.
Conformity assessment process initiation: For high-risk AI systems, initiate the self-assessment conformity assessment process against the requirements of Chapter III Section 2. For most FinTech credit scoring systems, this is a self-assessment — there is no mandatory third-party notified body for most financial AI. Document the assessment process, findings, and any remediation actions taken before market placement.
EU AI database registration assessment: Article 49 requires providers of high-risk AI systems to register their systems in the EU database established under Article 71 and maintained by the European Commission. Assess registration obligations for your high-risk AI systems and implement a registration process. Registration requires the information set out in Annex VIII, including the system's name, its intended purpose, and the provider's contact details.
Post-market monitoring system design: Article 72 requires providers of high-risk AI systems to have a post-market monitoring system that actively collects data about system performance in production. For credit scoring AI, this must include ongoing accuracy monitoring, demographic fairness monitoring, and monitoring for distribution shift. Document the post-market monitoring system design and the feedback loop from monitoring findings to Article 9 risk management system updates.
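One widely used distribution shift signal is the population stability index (PSI) between the training-time and production score distributions. The sketch below computes PSI over pre-binned shares; the 0.2 alert threshold is a common industry convention, not an AI Act requirement.

```python
import math

# Sketch of a post-market drift signal: population stability index (PSI)
# between training-time and production score distributions. The 0.2 alert
# threshold is an industry convention assumed here, not a legal threshold.

def psi(expected_shares, actual_shares, eps=1e-6):
    """PSI over pre-binned distribution shares (same bin order)."""
    total = 0.0
    for e, a in zip(expected_shares, actual_shares):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

training_bins = [0.25, 0.25, 0.25, 0.25]
production_bins = [0.10, 0.20, 0.30, 0.40]
drift = psi(training_bins, production_bins)
print(round(drift, 4), "ALERT" if drift > 0.2 else "ok")
```

An alert from a monitor like this is exactly the kind of post-market finding that must flow back into the Article 9 risk management record rather than sit in a dashboard.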
Serious incident reporting readiness: Article 73 requires providers of high-risk AI systems to report serious incidents to the market surveillance authorities of the Member State where the incident occurred (deployers must inform the provider under Article 26(5)). Implement a process for identifying serious incidents (incidents leading directly or indirectly to death or serious harm to health, serious and irreversible disruption of critical infrastructure, infringement of fundamental-rights obligations, or serious harm to property or the environment), assessing reportability, and reporting within the Article 73 deadlines: immediately and in any event within 15 days of becoming aware, shortened to 10 days where a person has died and to 2 days for a widespread infringement or a serious incident involving critical infrastructure.
10-year documentation retention implementation: Implement document management processes for AI Act compliance records that provide for 10-year retention from the date of market placement. For FinTechs with frequent model updates, determine whether each model update constitutes a new “placing on the market” triggering a new 10-year retention period, or whether it constitutes a modification to an existing system. This determination has material implications for your documentation management costs.
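The retention arithmetic itself is simple; the hard part is the legal determination of when the clock restarts. The sketch below computes retention deadlines under the assumed reading that each substantial model update is a new placing on the market, a reading that needs legal confirmation for your circumstances.

```python
from datetime import date

# Sketch of the 10-year retention computation. The assumption that each
# substantial model update restarts the clock is illustrative and needs
# legal confirmation; the Act itself does not settle it for model updates.

def retention_until(placed_on_market: date, years: int = 10) -> date:
    """Retention deadline: 10 years from the placing-on-the-market date."""
    try:
        return placed_on_market.replace(year=placed_on_market.year + years)
    except ValueError:  # Feb 29 placed-on-market date, non-leap target year
        return placed_on_market.replace(year=placed_on_market.year + years,
                                        day=28)

# Hypothetical events: initial placement, then a major model update.
events = [date(2026, 8, 2), date(2027, 3, 15)]
print(max(retention_until(e) for e in events))
```

Under this assumed reading, the binding retention deadline is driven by the latest qualifying event, so frequent model updates keep extending the documentation tail.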
7. How Claire Supports EU AI Act Compliance for FinTech
Claire’s EU AI Act FinTech Compliance Architecture
Automated High-Risk Classification Assessment
Claire’s EU AI Act module conducts structured classification assessments for each client AI system against Annex III categories, Article 6 requirements, and the Commission’s guidance on borderline cases. Classification assessments are documented in a format aligned with national competent authority expectations and are updated when the Commission publishes new guidance or when the AI system’s intended use changes.
Article 9 Lifecycle Risk Management Documentation
Claire provides a structured risk management documentation framework that satisfies Article 9 requirements for high-risk AI systems. The framework guides risk identification, assessment, and mitigation documentation through the AI system lifecycle — from design through deployment to post-market monitoring. Documentation is maintained in a live system that is updated with post-market monitoring findings, creating the continuous risk management record the Act requires.
Annex IV Technical Documentation Generation
Claire generates Annex IV-compliant technical documentation for high-risk AI systems based on structured inputs from the client’s model development team. The documentation covers all required elements — general description, logic and intended purpose, training data documentation, accuracy metrics and testing methodology, and human oversight design — in a format ready for national competent authority review.
Social Scoring Feature Audit Tool
Claire’s Article 5(1)(c) audit tool analyses credit scoring model feature sets for potential social scoring characteristics — features derived from social behaviour, social network relationships, or lifestyle inference. Features identified as potentially prohibited are flagged with legal analysis notes for client review. This tool provides the systematic feature review that Article 5(1)(c) compliance requires for alternative data credit models.
8. The EU AI Act Timeline: What FinTechs Must Do Now
The EU AI Act is already in force. The prohibition provisions that became effective on February 2, 2025 mean that FinTechs operating AI systems with social scoring characteristics in EU markets are currently either compliant or non-compliant — there is no grace period remaining for prohibited practices.
The August 2026 deadline for high-risk AI obligations is approaching faster than most FinTech compliance timelines anticipate. Conformity assessment, technical documentation, risk management system implementation, and human oversight architecture are not six-month projects for a FinTech deploying multiple AI systems across multiple EU markets. The compliance build must start now to be ready for the compliance deadline.
Related reading:
Revolut Compliance Lessons |
FCA FinTech Enforcement 2024-25 |
UAE DIFC/ADGM Compliance |
CFPB AI Fair Lending