Artificial Intelligence in Financial Inclusion: How Should Supervisors Respond?
Insurance premiums and coverage that change dynamically according to a driver’s behavior? A financial adviser powered by algorithms?
Artificial intelligence (AI) and machine learning (ML) are used in various areas such as fraud detection and credit scoring, as the figure below shows.

Figure 1: To what extent does your organization use artificial intelligence for the following business units?
Source: Economist Intelligence Unit Survey (a global survey of senior banking executives)
While we still do not have an exact picture of the full financial inclusion potential of AI and ML in emerging economies, there is no doubt that these technologies bring both opportunities and risks in a wide array of areas, which financial supervisors need to consider carefully. But what are these risks and opportunities, and what should supervisors do?
According to a Toronto Centre Note, AI and ML create new risks and affect old ones, both positively and negatively, in three main areas of concern for supervisors (Figure 2). More familiar risks include credit, data security and money laundering risks. Newer risks include the difficulty of ensuring full transparency of AI and ML models, so that they can be readily explained to the general public, as well as their robustness, fairness and ethical use. For instance, supervisors should be concerned with the potential of these models to amplify or create unfair biases and discrimination.
Figure 2: Transmission mechanisms whereby AI and ML impact risks
Before taking hasty action, supervisors should ask themselves:
- What can be done to control and mitigate these risks?
- Are new regulations required, or can existing ones be adapted?
- What are the implications for supervisory resources – numbers, skills and expertise?
Supervisors need to understand the models and use cases in their jurisdictions and respond according to the significance of the related risks. They may find it necessary, for instance, to impose specific regulatory obligations on financial institutions to control and mitigate risks, such as ensuring that underwriting models do not produce unfair discrimination and exclusion. They may also need to adapt their supervisory approach, skills and expertise. For example, data science expertise can help supervisors evaluate critical AI and ML models.
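To give a concrete, if simplified, sense of what such an evaluation might involve, the sketch below is a hypothetical illustration (not drawn from the Toronto Centre Note): it computes approval rates by demographic group from an underwriting model's decisions and applies the common "four-fifths" rule of thumb to flag potential disparate impact. The data and threshold are invented for demonstration only.

```python
# Hypothetical illustration: screening an underwriting model's decisions for
# disparate impact across demographic groups using the "four-fifths" rule.
# The records below are invented for demonstration purposes only.

from collections import defaultdict

# Each record: (group label, model decision: True = approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Count approvals and totals per group
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in decisions:
    counts[group][1] += 1
    if approved:
        counts[group][0] += 1

# Approval rate per group, compared against the highest-rate group
rates = {g: approved / total for g, (approved, total) in counts.items()}
benchmark = max(rates.values())

for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: approval rate {rate:.0%}, ratio to benchmark {ratio:.2f} -> {flag}")
```

A real supervisory review would go far beyond this kind of screen, examining the model's inputs, design and governance, but even a simple check like this illustrates why data science skills are becoming relevant to supervision.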
But AI and ML do not only mean risks; there are many opportunities as well (Figure 3). They are already offering value in advancing financial inclusion, for example with new credit and insurance underwriting models that use alternative data to serve low-income clients. They also make it possible to tailor products and services according to the needs and characteristics of individual clients or client segments. Another area of opportunity opened by AI and ML is supervisory technology, or suptech, which can help improve the effectiveness and efficiency of financial supervision.

Figure 3: Main areas of opportunity opened by AI and ML
Supervisors can combine three types of responses:
- Apply high-level principles and guidelines for trustworthy AI, such as those issued by the OECD and by the European Union, by converting them into regulatory requirements and/or supervisory expectations.
- Apply and adjust existing regulatory requirements that govern the use of statistical models by financial institutions, to the use of AI and ML models. The Financial Stability Institute provides a useful guide for this step.
- Consider replicating and adapting standards that have been recently issued to deal with AI and ML models, including those issued by standard-setting bodies (e.g., IOSCO), national authorities (e.g., the UK Prudential Regulation Authority) and international authorities (e.g., the European Banking Authority).
It is clear that, as in so many other areas of financial innovation, supervisors will need to strike a balance between enabling the opportunities and controlling the risks. This can only be done if they succeed in applying the concept of proportionality to juggle their multiple – and sometimes conflicting – mandates. This task is especially difficult in emerging and developing economies, where supervisory capacity is lower and the legal and regulatory framework is weaker. There is a long road ahead!