Artificial intelligence in the EU financial sector – balancing regulation and innovation

To harness the potential benefits of AI safely, financial institutions must ensure that they have appropriate guardrails in place to allow safe innovation. The EU's AI Act will help shape how institutions govern artificial intelligence across functions. Boards must focus on ensuring that AI risks are identified and addressed across the development, procurement and use of the technology. They must also prepare for diverging regulatory expectations across key markets.

The European Union’s Artificial Intelligence Act (Regulation (EU) 2024/1689) [1], known as the EU AI Act (EU AIA), came into force in August 2024. The regulation establishes a harmonised and comprehensive legal framework for artificial intelligence across all 27 member states, to safeguard fundamental rights, democracy and environmental sustainability while promoting innovation. Its tiered compliance obligations will become progressively applicable over a phased timeline until 2030.

AI presents transformative opportunities for the financial sector, such as enhancing operational efficiency, improving risk assessment accuracy and enabling more personalised customer services. It also introduces complex risks, including cybersecurity threats, data privacy concerns, opaque decision-making and the potential to introduce bias and inequality. Financial institutions must therefore embed robust AI risk management into their existing governance frameworks.

Under the EU AIA, financial institutions will be required to ensure that their use of AI complies with strict governance, transparency and risk management requirements, especially for high-risk applications.

This article explores the EU AIA requirements for the financial sector, the potential operational impact on banks and the current state of AI integration across the sector. It also discusses how different jurisdictions are approaching AI regulation globally.

“ECB research finds that regulation and a lack of institutional quality are particularly detrimental to the expansion of high-tech sectors relative to more mature technologies. Investing in radical technologies is highly risky and needs a different set of framework conditions.”

Christine Lagarde, President of the ECB

Timeline of the EU AI Act implementation

  • The EU AIA came into force in August 2024, with tiered compliance obligations becoming progressively applicable over a phased timeline until 2030.
  • A ban on AI systems presenting unacceptable risks applies from February 2025.
  • The regulatory provisions governing general-purpose AI models apply from August 2025.
  • The rules applicable to the majority of high-risk systems, as well as limited-risk and minimal-risk AI systems, will take effect in August 2026.
  • The regulatory framework for AI systems deemed high risk because they are a safety component of a product subject to EU harmonisation rules will become applicable in August 2027.
  • Obligations for high-risk AI systems intended for use by public authorities that were on the market before the entry into force of the EU AIA will take effect from December 2030.

It should be noted that, at the time of writing, there are ongoing discussions about potentially delaying the roll-out of the EU AIA, due to a combination of factors such as mounting pressure from industry, challenges in finalising technical standards and geopolitical considerations.

Some takeaways from the EU AI Act

The EU AIA introduces obligations across the value chain, covering the providers, deployers, distributors and importers of AI, as well as manufacturers of AI-enabled products. The most rigorous requirements fall on providers of high-risk AI systems, but the regulation also sets clear actions for deployers. It applies to organisations located within the EU and to those located elsewhere that provide AI systems, products and services to the EU. Most financial institutions will not be providers (e.g. organisations such as OpenAI that develop the underlying LLMs) or distributors (e.g. Microsoft); they will be deployers or users. However, it is still important to understand, for context, the risk-based classifications and obligations imposed on providers and distributors. We provide a brief overview below.

It is also important to note that the EU AIA introduces an approach consistent with the broader trend in EU banking regulation towards enhanced risk control and more transparent governance, including the principles in the Basel III package (CRR3/CRD6). In this sense, the EU AIA is fully aligned with the EU's ongoing regulatory efforts to promote effective risk management.

Risk-based regulation

  • Unacceptable risk: Systems that present an unacceptable level of risk. The building, buying and use of these will be prohibited in the EU.  These are AI systems considered to be a clear threat to the safety, livelihoods and rights of people, such as social scoring by governments and real-time remote biometric identification systems.
  • High risk: There are rigorous mandatory requirements for high-risk systems, with the most onerous being for providers of these systems. High-risk AI includes systems used in employment, education, surveillance and credit scoring (a simplified classification sketch follows this list). Some of the key requirements include:
    • Establishing and maintaining processes for risk management, quality management and data governance across the development lifecycle into deployment and monitoring
    • Maintaining appropriate technical documentation and record keeping to support transparency requirements
    • Pre-market conformity assessments and defining appropriate levels of human supervision in production
    • Post-market monitoring, including generating and maintaining system logs and identifying, addressing and reporting serious incidents
  • Limited risk: This category includes AI systems that interact directly with people (e.g., chatbots) and visual or audio “deepfake” content that has been manipulated by an AI system. These are systems where it is not obvious to humans that they are interacting with a machine, or that the output is synthetic rather than human-generated. Such systems are allowed but carry transparency obligations, so that end users or those affected are aware that an AI system is being used.
  • Minimal risk: These are all other systems not in the above three categories and are allowed with no formal requirements.
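
To make the tiered model concrete, here is a minimal Python sketch that maps example systems to risk tiers and indicative obligations. The tier assignments, obligation lists and all class and function names are illustrative assumptions for this article, not a legal classification under the EU AIA.

```python
# Illustrative sketch only: a simplified mapping of AI systems to EU AIA
# risk tiers. Tier assignments and obligations are assumptions for
# illustration, not legal advice.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited, e.g. social scoring by governments
    HIGH = "high"                  # e.g. credit scoring, employment screening
    LIMITED = "limited"            # transparency duties, e.g. chatbots
    MINIMAL = "minimal"            # no formal EU AIA requirements


@dataclass
class AISystem:
    name: str
    use_case: str
    tier: RiskTier


def obligations(system: AISystem) -> list[str]:
    """Return an indicative, non-exhaustive obligation list for the tier."""
    if system.tier is RiskTier.UNACCEPTABLE:
        return ["Do not build, buy or deploy in the EU"]
    if system.tier is RiskTier.HIGH:
        return [
            "Risk, quality and data governance across the lifecycle",
            "Technical documentation and record keeping",
            "Pre-market conformity assessment and human oversight",
            "Post-market monitoring and serious-incident reporting",
        ]
    if system.tier is RiskTier.LIMITED:
        return ["Disclose to users that they are interacting with an AI system"]
    return []  # minimal risk: no formal requirements


print(obligations(AISystem("CreditScorer", "retail credit scoring", RiskTier.HIGH)))
```

In practice, classifying a system requires legal analysis of its purpose and deployment context, not a static lookup; the sketch merely shows how the four tiers drive very different obligation sets.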

General-purpose AI models (GPAI)

These include models like GPT-4o. GPAI models are treated separately because they can be applied in many ways, with risks arising both from how they are used and from the underlying model itself. For these models, regulatory requirements are two-fold: at the model level (for providers such as OpenAI) and at the use-case level (for each new application developed by an organisation). There are also specific obligations for models deemed to pose a “systemic risk”, currently defined in terms of the computing power used to train them.

Compliance and operational impact of the EU AI Act

Compliance with the EU AIA will require financial institutions to build new governance capabilities, adapt second line models, align operational processes and integrate AI oversight across the organisation.

The financial services industry is expected to be a particularly intensive user of AI systems. Financial institutions are actively exploring ways to leverage AI to enhance the quality of the customer experience, optimise internal processes and meet evolving regulatory expectations.

Under the EU AIA, financial institutions are required to establish a process for assessing the potential consequences for individuals, groups of individuals and society that can result from the development, provision or use of AI systems (a minimal documentation sketch follows the list below).

  • The AI system impact assessment shall determine the potential consequences that an AI system’s deployment, intended use and foreseeable misuse may have on individuals, groups of individuals and society.
  • The AI system impact assessment shall take into account the specific technical and societal context where the AI system is deployed and applicable jurisdictions.
  • The result of the AI system impact assessment shall be documented. Where appropriate, the result of the system impact assessment can be made available to relevant interested parties as defined by the organisation.
  • The organisation shall consider the results of the AI system impact assessment in its wider risk assessment.
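
As an illustration of the documentation requirement above, the following sketch shows one way an impact assessment record could be structured in Python. The field names mirror the bullet points but are our own assumptions, not a template prescribed by the EU AIA.

```python
# Illustrative structure for documenting an AI system impact assessment.
# Field names are assumptions chosen to mirror the requirements above,
# not a prescribed EU AIA format.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ImpactAssessment:
    system_name: str
    intended_use: str
    foreseeable_misuse: list[str]
    affected_parties: list[str]      # individuals, groups of individuals, society
    deployment_context: str          # technical and societal context
    jurisdictions: list[str]         # applicable jurisdictions
    consequences: dict[str, str]     # affected party -> potential consequence
    assessed_on: date = field(default_factory=date.today)

    def summary(self) -> str:
        """Documented result, shareable with interested parties and fed
        into the organisation's wider risk assessment."""
        lines = [f"Impact assessment: {self.system_name} ({self.assessed_on})"]
        lines += [f"- {p}: {c}" for p, c in self.consequences.items()]
        return "\n".join(lines)


ia = ImpactAssessment(
    system_name="Credit scoring model",
    intended_use="Retail loan decisioning",
    foreseeable_misuse=["Use outside the approved customer segment"],
    affected_parties=["applicants", "protected groups", "society"],
    deployment_context="EU retail banking",
    jurisdictions=["EU"],
    consequences={"applicants": "potential unfair denial of credit"},
)
print(ia.summary())
```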

The EU AIA therefore requires institutions to establish robust data governance frameworks that guarantee transparency, security and full respect for users’ rights. Achieving compliance with these requirements will necessitate that financial institutions undertake a range of initiatives, including:

  1. Developing an AI inventory: institutions must be able to accurately identify all AI systems they use, whether developed internally or sourced externally, and map the operational processes supported by these systems (a minimal inventory sketch follows this list).
  2. Risk assessment and classification: each AI system must be evaluated and classified in accordance with the risk levels defined under the EU AIA, which will trigger the corresponding regulatory obligations.
  3. Ensuring compliance for high-risk systems, including:
    • Establishing a comprehensive AI risk management system.
    • Implementing a high standard of cybersecurity protections.
    • Ensuring effective human oversight of AI-driven processes and decision-making.
    • Meeting all applicable information and transparency obligations.

Responsible AI deployment must also address its impacts on employees and organisational dynamics. An essential action will be developing an organisation-wide policy for the use of AI/GenAI. Financial institutions should engage proactively in internal dialogue on how AI will affect roles, responsibilities and working conditions. Internal governance frameworks should explicitly cover the introduction of AI-based decision systems in HR and management processes, ensuring that employees’ rights, health and well-being are safeguarded. Structured engagement with employee representatives can help build trust, support workforce adaptation and mitigate operational risks as AI adoption progresses.

Foundational implementation actions for financial institutions to prioritise

To operationalise these compliance requirements and governance principles, financial institutions should prioritise the following foundational actions:

  1. Ensuring a consistent understanding of what is and is not AI, in line with the EU AIA. The EU AIA’s definition of AI is broad, and many organisations use a much narrower definition of what constitutes AI, increasing the risk of under-governing and regulatory non-compliance.
  2. Identifying and inventorying current in-house-developed and third-party AI systems, and designing a robust process to identify and inventory new AI systems.
  3. Risk assessing and categorising inventoried AI systems to determine their EU AIA risk classification and the applicable compliance requirements.
  4. Undertaking a retrospective exercise to help ensure that prohibited AI systems are not in use, and designing a go-forward process to stop the development or procurement of prohibited systems (see the gate sketch after this list).
  5. Understanding the organisation’s role in the AI value chain for different AI systems and the associated obligations for different categories of risk.
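
As a toy illustration of action 4, the sketch below gates new development or procurement requests against a list of prohibited use cases. The keyword screen and the use-case list are simplified assumptions; a real control would rely on legal and compliance review, not string matching.

```python
# Illustrative go-forward gate for new AI development or procurement.
# The keyword list is a simplified assumption drawn from the EU AIA's
# unacceptable-risk examples; it is not a legal test.
PROHIBITED_USE_CASES = {
    "social scoring",
    "real-time remote biometric identification",
}


def gate_new_system(description: str) -> bool:
    """Return True if a proposed system may proceed to full EU AIA
    classification; False if it matches a prohibited use case."""
    text = description.lower()
    return not any(banned in text for banned in PROHIBITED_USE_CASES)


assert gate_new_system("chatbot answering customer FAQs")
assert not gate_new_system("Social scoring of retail customers")
```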

“As financial institutions scale AI across core functions, the EU AI Act will compel them to industrialise their governance frameworks — ensuring that risks are properly managed, outcomes remain explainable and controllable and AI systems can be trusted at scale across the organisation.”

Gregory Marchat, Group Head of Financial Services Advisory, Forvis Mazars in the UK

Foreseen challenges of aligning with the EU AIA

Financial institutions are moving carefully as they scale AI, focusing first on phased pilots in well-defined, lower-risk use cases where governance, transparency and oversight can be tested. This allows firms to validate their frameworks and build internal capability as supervisory expectations continue to evolve.

Looking ahead, financial institutions will face several key challenges in aligning with the EU AIA. The first is cost: developing AI systems that meet business needs and comply with the regulation will require sustained investment, not only in technology but also in risk and audit capacity. Continuous monitoring, testing and dedicated audit programmes will be needed to ensure compliance as AI use expands.

Another challenge is integration. AI governance must align with existing frameworks (also from a second line perspective) — covering model risk, data governance, ICT risk and operational resilience — to avoid duplication and ensure consistency across the control environment.

Environmental impact is also emerging as a supervisory focus. Large-scale AI models can carry a significant carbon footprint. As new sustainability rules (CSRD, CSDDD)[2] take effect, financial institutions should expect growing interest in how AI fits into their environmental governance and risk transparency.

Delivering on AI governance will also require real engagement across the organisation. Risk, IT, compliance, HR and business teams all need to understand the opportunities and risks AI brings — and how these must be managed within the bank’s culture and accountability framework.

Global divergence in AI regulation

As artificial intelligence continues to evolve at pace, countries are taking markedly different approaches to how they govern it. Some are pushing ahead to unlock its economic potential with minimal oversight, while others are putting legal safeguards in place to manage the risks. This divergence reflects not only contrasting regulatory philosophies, but also deeper strategic bets on the future of AI.

For international banks, these differences are far from academic. They affect everything from how AI can be used in credit scoring and fraud detection, to how data is handled across borders. With AI now embedded in core financial operations, understanding the global regulatory landscape is becoming essential to managing compliance, operational risk and long-term competitiveness.

Here are some examples of recent approaches to AI regulation across the world.

  • Europe – robust regulatory safeguards. Europe has taken a pioneering step by putting regulatory safeguards ahead of unrestrained innovation. The EU AIA, the first comprehensive legal framework of its kind, aims to ensure AI systems are safe, transparent and accountable. It reflects a deliberate choice to embed public trust and fundamental rights into the development of AI—an approach that stands in contrast to the more hands-off strategies seen elsewhere.
  • United Kingdom – innovation sandbox. The UK government has taken a “wait and see” approach, aiming to test and understand AI systems before introducing formal regulation. It hopes to position the UK as a global hub for AI development, setting itself apart from the EU’s more prescriptive model.
  • United States – deregulation and industry pressure. The United States has moved away from stronger federal oversight, including having recently revoked some of the existing guardrails around AI development. At the same time, the US has urged Europe to reconsider its regulatory stance, arguing that stricter rules on transparency, risk management and copyright could stifle innovation. While AI companies still face some data protection obligations, the overall regulatory environment remains relatively light and fragmented.
  • ASEAN – light-touch governance. ASEAN has opted for a more flexible, light-touch approach, giving its member states room to explore the potential of AI without early legal constraints. The bloc has issued voluntary guidance to promote responsible AI use through proportionate and interoperable measures. The aim is to allow time to fully understand AI’s long-term impacts, risks and benefits before introducing binding regulation.
  • China – state-led content control. In March 2025, China finalised its measures for the labelling of AI-generated content, to come into effect on 1 September. These rules require AI-generated content to be clearly identified and prohibit discrimination based on gender, age or ethnicity. They also ban the creation of false or harmful information and the use of personal data without consent. While the regulation provides a structured framework for content traceability, it operates within a broader environment of state-led content control. Therefore, while the regulatory scope is narrower than the EU AIA, it is expected to be combined with strict limitations on what AI tools can generate or express.
  • Japan – ethical principles based. Japan’s House of Councillors is currently reviewing the “Bill on the promotion of research, development and utilisation of artificial intelligence-related technologies”. The bill promotes ethical principles in the use of AI, but places greater emphasis on national development than on strict legal controls. Unlike the EU’s binding rules, Japan’s approach relies on voluntary compliance and industry-led standards, favouring guidance over enforcement.

“AI is becoming embedded in banking value chains globally — but regulatory approaches remain highly fragmented. The EU’s AI Act sets detailed obligations, while the UK and US favour more principles-based regimes. Financial institutions operating globally will need adaptable governance frameworks that uphold consistent risk standards, without introducing uneven controls across jurisdictions.”

Eric Cloutier, Group Head of Banking Regulations / Head of Global FS RegCentre, Forvis Mazars in the UK

Ongoing regulatory developments shaping the EU AIA’s broader implementation

Alongside direct compliance preparations, financial institutions should also be aware of ongoing regulatory developments shaping the EU AIA’s broader implementation.

One key document under development is the GPAI Code of Practice[3], currently in its third draft by the European Commission and designed to support the implementation of the EU AIA. It will initially be released as a voluntary code but is expected to become mandatory and apply across all EU countries. It is the critical “how to” document that will need to be acted upon by those in scope of the EU AIA: providers, distributors and deployers.

In February 2025, the EU also published draft, non-binding guidelines on the AI system definition[4], clarifying what qualifies as an AI system and which practices are prohibited under the EU AIA. These are crucial for financial services firms assessing whether tools such as credit scoring or fraud detection fall under high-risk obligations and require compliance planning.

In parallel, the Commission ran a public consultation on its broader Apply AI Strategy[5] (ended 4 June 2025), to support coherent application of the EU AIA alongside other EU regulations. It is also encouraging early engagement through the voluntary AI Pact, which invites financial institutions to commit to preparing for key EU AIA obligations ahead of full implementation. In addition, there are ongoing policy discussions about broader regulatory simplification, aimed at streamlining rules across digital and sustainability legislation — including how the EU AIA fits into this wider context. These remain at an exploratory stage and no formal proposals or amendments have been made.

For financial institutions, these evolving initiatives require careful monitoring of developments across related frameworks, particularly where overlaps may emerge with DORA, PSD3/FIDA and CSRD. Engaging early offers banks an opportunity to help shape practical standards and ensure alignment with supervisory expectations.

Toward increased European supervision of AI

The European supervisory framework is already preparing for a comprehensive review of the use of AI, notably in the financial sector. A key feature of the EU AIA is that it assigns primary responsibility for AI oversight in finance to national competent authorities (NCAs).

To carry out this role effectively, NCAs will need to build strong technological expertise, develop specialist supervisory capabilities and establish dedicated methodologies for auditing AI systems. Their supervisory activities will not only focus on compliance with the EU AIA, but also on promoting sound governance and robust risk management practices.

The growing importance of this area is reflected in the inclusion of AI oversight in the Single Supervisory Mechanism’s (SSM) supervisory priorities, as well as in national agendas: in France, for example, the ACPR has made AI supervision a strategic priority for 2025.

NCAs are expected to pay particular attention to operational risks linked to the use of AI in banking, including risks arising from poor data quality, model drift and lack of explainability. In response, several authorities are now developing their own supervisory tools to prepare for full-scale audits of AI algorithms.

Banks should therefore anticipate on-site inspections in the coming years, aimed at verifying that AI systems are being used appropriately and that regulatory expectations—both under the EU AIA and existing financial legislation—are being met.

EU AIA: a strong call for action

As financial institutions scale their use of AI, the risks — to customers, to the business and to trust — are growing alongside the opportunities.

AI can clearly improve service, speed and efficiency. But in areas like credit decisions and investment advice, it can also reinforce biases hidden in data — leading to unfair outcomes or poor advice. Without the right oversight, AI could end up excluding certain groups from credit or steering customers towards products that are not right for them.

The EU AIA aims to tackle some of these risks — by requiring greater transparency, human oversight and accountability. For financial institutions, this goes beyond compliance. It means embedding AI governance into core risk frameworks and making sure leadership stays close to how AI is used across the organisation.

For boards and executive teams, the message is clear: those who act early to build strong, transparent AI governance — consistent across business lines and across jurisdictions — will be better placed to manage the risks and build trust. And in a fast-moving market, that will also be a source of competitive advantage.

Stay tuned: our next newsletter will focus on the considerations the financial sector will face in implementing the EU AI Act.


[1] Regulation (EU) 2024/1689 (EU AI Act), EUR-Lex
[2] Corporate Sustainability Reporting Directive (CSRD) and Corporate Sustainability Due Diligence Directive (CSDDD), respectively
[3] GPAI Code of Practice: https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice
[4] “The Commission publishes guidelines on AI system definition to facilitate the first AI Act’s rules application”, Shaping Europe’s digital future
[5] “Commission launches public consultation and call for evidence on the Apply AI Strategy”, Shaping Europe’s digital future