When deployed by financial regulators, artificial intelligence can dramatically improve the effectiveness of supervision, but it also carries systemic risks of opacity, discrimination and loss of control.
This is the conclusion of the International Monetary Fund's study "Artificial Intelligence Projects in Financial Supervisors: A Guide to Successful Implementation," prepared by Parma Bains, Gabriela Conde, Rangasari Ravikumar and Ebru Sonbul Iskender.
The authors emphasize that the rapid spread of technology "leaves regulators no room for inertia" and requires them to build new, technologically savvy supervisory systems.
Freedom from manual labor
According to the IMF, about 60% of financial regulators are already exploring ways to integrate AI into supervisory processes, and the share of countries using generative AI (GenAI) more than doubled in a single year, from 8% in 2023 to 19% in 2024. Of the 42 supervisors surveyed by the organization in 2024, 32 reported that they were already testing, using, or developing GenAI tools. In total, 164 supervisory authorities from 105 countries are implementing so-called suptech solutions: technologies designed to make supervision more risk-oriented, timely and accurate.
According to the IMF, AI is already being used in key areas of oversight, from licensing and anti-money laundering to microprudential analysis. For example, the Bank of England uses AI models to predict GDP and bank stress; the Bank of Thailand uses them to analyze bank meeting minutes and identify regulatory violations; and the ECB applies the new technologies to verify the business reputation of executives and speed up approval procedures. The Malaysian Securities Commission, the researchers note, uses AI to analyze corporate reports and monitor information disclosure.
Nevertheless, as the IMF emphasizes, the authorities of advanced, developing and emerging economies are implementing suptech unevenly, which raises concerns about the emergence of a "two-tier supervision system" in which some regulators are ready and able to adopt new technologies while others are not. In 2024, 75% of supervisors in advanced economies and 58% in developing economies used suptech tools, compared with 79% and 54%, respectively, in 2023.
The authors see the main advantage of artificial intelligence in financial supervision as its ability to complete in minutes tasks that used to take a person weeks. AI can take over routine processes that directly affect the effectiveness of supervisory authorities: verification, reconciliation and consolidation of reporting data. According to the IMF report, "artificial intelligence models can facilitate data management: its validation, consolidation, and visualization." "For example, the authorities can replace manual checks of completeness, correctness and consistency of calculations with AI algorithms," the report says.
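The kind of manual check the report says can be automated might look like the following sketch. This is a hypothetical illustration, not the IMF's tooling: the field names, the balance-sheet identity used as a consistency rule, and the function itself are all invented for the example.

```python
# Hypothetical sketch of automated validation of supervisory report rows,
# standing in for the manual completeness/correctness/consistency checks
# the IMF report says can be replaced. All field names are invented.

REQUIRED = {"bank_id", "total_assets", "total_liabilities", "equity"}

def validate_report(row: dict) -> list[str]:
    """Return a list of issues found in one reporting row."""
    issues = []
    missing = REQUIRED - row.keys()                        # completeness
    issues += [f"missing field: {f}" for f in sorted(missing)]
    if not missing:
        # consistency: balance-sheet identity assets = liabilities + equity
        if abs(row["total_assets"] - (row["total_liabilities"] + row["equity"])) > 1e-6:
            issues.append("assets != liabilities + equity")
        if row["total_assets"] < 0:                        # correctness
            issues.append("negative total_assets")
    return issues
```

A clean row returns an empty list; a row that fails the identity check, or omits a required field, returns a human-readable issue list a supervisor can triage instead of reconciling figures by hand.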
Freed from this manual labor, regulators will be able to direct resources to where human analysis really matters: "Freed from routine functions such as checking, reconciliation and summarizing data, authorities will be able to focus on the key value-adding function of exercising supervisory judgment," the authors of the report say.
Another advantage of AI is the ability to work with volumes of information beyond human perception, which lets algorithms find relationships and patterns that traditional methods cannot identify: "AI models can support anti-money laundering and counter-terrorism financing surveillance by identifying suspicious patterns in detailed data from different sources." As a result, AI can become an early warning tool for potential violations, both in AML/CFT and in market surveillance, the authors of the study note.
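One classic suspicious pattern such surveillance looks for is "structuring": splitting cash deposits into amounts just under a reporting threshold. The sketch below is purely illustrative and not the IMF's or any regulator's method; the threshold, margin and minimum hit count are invented parameters.

```python
# Illustrative sketch (not the IMF's method): flag accounts with repeated
# deposits just under a cash-reporting threshold, a simple "structuring"
# pattern of the kind AML surveillance models search for across sources.

THRESHOLD = 10_000.0   # hypothetical cash-reporting threshold
MARGIN = 0.10          # "just under" = within 10% below the threshold

def flag_structuring(transactions: list[tuple[str, float]], min_hits: int = 3) -> set[str]:
    """Return account ids with at least min_hits deposits just below THRESHOLD."""
    hits: dict[str, int] = {}
    for account, amount in transactions:
        if THRESHOLD * (1 - MARGIN) <= amount < THRESHOLD:
            hits[account] = hits.get(account, 0) + 1
    return {acct for acct, n in hits.items() if n >= min_hits}
```

A rule this simple is easy to evade, which is precisely why production systems learn patterns from merged data sources rather than hand-coding them; but the flag-and-escalate shape of the output is the same.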
In addition, AI opens up another important opportunity for regulators responsible for financial stability — forecasting systemic risks, since models can analyze the behavior of market participants and signal signs of stress in advance, the IMF economists add.
Black box and discrimination
However, as the IMF warns, artificial intelligence, designed to make supervision more accurate and effective, carries risks that can undermine confidence in the very idea of algorithmic control, since AI can distort the decision-making process. Algorithms trained on skewed data can "introduce bias by systematically and unfairly discriminating against individuals or groups of individuals," the experts emphasize.
Another fundamental problem is the opacity of the "internal logic" of machine decisions. Regulators often cannot explain why an algorithm produced a particular result, especially when complex neural networks are involved. The IMF emphasizes that even explainable-AI methods such as LIME or Shapley values do not guarantee an understanding of what is happening: "The desired properties of such methods, namely clarity, simplicity, completeness and reliability, may be called into question if the results are difficult to interpret, if significant computational resources are required, if explanations are inconsistent and unstable across cases, or if they rest on assumptions that are not always fulfilled."
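To make the idea concrete, here is a toy attribution method in the same family as LIME and Shapley values: score each feature by how much the model's output drops when that feature is reset to a baseline. Everything here is invented for illustration; real LIME and SHAP are substantially more sophisticated, and this simplicity is exactly what the quoted caveats are about.

```python
# Toy illustration of a feature-attribution explanation for a black-box
# scorer: leave-one-feature-out, resetting each feature to a baseline.
# Invented for illustration only; not how LIME or SHAP actually work.

def attribute(model, x: dict, baseline: dict) -> dict:
    """Score drop when each feature of x is reset to its baseline value."""
    full = model(x)
    attributions = {}
    for name in x:
        perturbed = dict(x, **{name: baseline[name]})
        attributions[name] = full - model(perturbed)
    return attributions
```

Even this tiny example exposes the fragility the IMF flags: the attributions depend entirely on the chosen baseline, and for models with interacting features the one-at-a-time perturbation can give inconsistent answers.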
IMF economists note that AI can turn into a "black box" whose decisions are difficult to verify and impossible to reproduce, and they recommend giving priority to simpler models, even at some cost to predictive accuracy. In their view, if explainability is unattainable, as with deep neural networks, regulators should "assess whether the use of such models is appropriate."
To this problem is added another risk: the obsolescence of algorithms. In an environment where financial fraudsters and market participants quickly adapt to new rules, models lose relevance in a matter of months. "A model trained six months ago to detect suspicious transactions may not take into account the new types of transactions that have emerged over the past three months, as fraudsters adapt their behavior to avoid detection," the report says.
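One common way to catch such obsolescence before it bites is to monitor whether the data a model sees in production still looks like its training data. The sketch below uses a population stability index (PSI) on a single feature; the 0.2 retraining cutoff is a widely used rule of thumb, not an IMF prescription, and the whole function is an assumption-laden illustration.

```python
# Hedged sketch of drift monitoring for model obsolescence: compare a
# live feature's distribution to the training window with a population
# stability index (PSI). Cutoff 0.2 is a common rule of thumb.
import math

def psi(expected: list[float], actual: list[float], bins: int = 4) -> float:
    """Population stability index between two samples of one feature."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def shares(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        # floor each share to avoid log(0) on empty bins
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def needs_retraining(expected, actual, cutoff: float = 0.2) -> bool:
    return psi(expected, actual) > cutoff
```

When incoming transactions drift away from the training distribution, the PSI rises and the monitor signals that the six-month-old model in the IMF's example is due for retraining, rather than leaving the gap to be discovered through missed detections.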
The IMF recommends that regulators establish their own AI governance frameworks: internal control systems that include ethics committees, model validation procedures, and transparency standards. "Financial authorities should define organizational structures, roles and areas of responsibility, performance indicators and accountability for the results of AI models, and people should remain involved in AI-supported decision-making," the authors of the study note.
