From Data Trails to Impact: Building the Foundations for AI in Inclusive Finance
At last year's European Microfinance Week, Edoardo Totolo spoke on a panel on artificial intelligence and financial inclusion. FinDev Gateway caught up with him to learn more about his thoughts on what the financial inclusion sector needs to do to reap the benefits of AI-based applications.
Below is the transcript of the interview:
The foundations needed to take advantage of AI: Data trails and data exchange
A lot of the conversations we're having are primarily about AI applications, meaning how we can use AI to develop new services and tools that can be used either at the front end to serve customers or at the back end, for example, to streamline reporting requirements.
What we are not talking about enough are the foundations that make those applications possible. We need data trails for underserved or excluded customers so that they can be part of the data economy, so they can benefit from these AI tools. Data trails are incredibly important, and you see a divide between those who have many data trails that they can use, share and pass between institutions, and those who have very few and are, by definition, excluded.
A second layer which is very important, and we've published about it recently, is on data exchange. The data, if it sits in the same institution where it was first created, doesn't have much value. The value of data comes when you're able to use that data and move it, with the consent of the consumer, between institutions so that you can access services that you wouldn't be able to access otherwise. That data exchange layer is, I think, one of the most important pillars on which we will build new applications in the future.
There are some models being tested, for example, in the financial sector. Open banking, and open finance more broadly, is ultimately a system of data exchange where consumers can move their data from one institution to another. The challenge is that if you are underserved or excluded, you don't have any data to share. So there is first an element of building these data trails, and then, once you have built them, of exchanging them in a safe, secure, consumer-driven way. By consumer-centric I mean that consumers need not only to consent but to initiate the transaction, because we know consent has a lot of problematic features: people may consent to things they don't understand.
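To make the consumer-initiated model concrete, here is a minimal illustrative sketch in Python. All names and fields are hypothetical; real open-finance systems rely on standardized APIs and cryptographic consent artifacts, but the core idea is the same: the consumer, not the institution, creates the request, and it is bounded in scope and time.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
import secrets


@dataclass
class DataShareRequest:
    """A consumer-initiated request to move data between institutions.

    Purely illustrative: the consumer creates the request, naming the
    source, the target, and exactly which data may move.
    """
    consumer_id: str
    source_institution: str
    target_institution: str
    data_scope: list  # e.g. ["transaction_history", "repayment_record"]
    token: str = field(default_factory=lambda: secrets.token_hex(16))
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    ttl: timedelta = timedelta(days=1)

    def is_valid(self, now=None):
        """The grant expires automatically, so the token cannot be
        reused beyond the window the consumer granted."""
        now = now or datetime.now(timezone.utc)
        return now < self.created_at + self.ttl


# A consumer initiates (not merely consents to) the exchange:
req = DataShareRequest(
    consumer_id="c-001",
    source_institution="MFI-A",
    target_institution="Bank-B",
    data_scope=["repayment_record"],
)
print(req.is_valid())  # prints True while within the 1-day window
```

The design choice worth noting is that the request object carries its own expiry and scope, so validity can be checked without asking the consumer again, while anything outside the granted scope or window is simply refused.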
We need data trails for underserved or excluded customers so that they can be part of the data economy, so they can benefit from these AI tools.
So, these are the two foundations that I think are critical and that we'll need to talk about a lot more.
Examples of different applications of AI
There are many different applications of AI, and they're not new. We've been talking about machine learning for credit scoring using alternative data for over a decade, probably. We've been using natural language processing, for example, to analyze social media data or complaints data to develop suptech (supervisory technology) or regtech tools. Those are not new; those have been around for a while.
However, it's important to say that with generative AI in the last year and a half or so, we have exponentially accelerated the interest in, the momentum behind and the potential of these applications to really play a transformative role in financial inclusion. We are at a strange time where it's difficult to tell what is here to stay and has substance, and where instead there is promise but it's very difficult to identify viable opportunities that will have that transformative role.
For example, Sajida Foundation in Bangladesh has developed a chatbot that initiates voice calls to customers in a range of local languages, aimed primarily at clients who are illiterate or have low levels of literacy. Essentially, it calls them with notifications, for example to confirm transactions and make sure they are not fraudulent or incorrect, in a way that I think has a lot of potential. They also use the same technology for complaints handling, which is very powerful.
That is a front-end, customer-facing application. We saw another application which is quite different: MFR and Oikocredit are piloting a new tool which is more at the back end, trying to streamline reporting requirements, which we know are a heavy burden for many small microfinance institutions. They are trying to use AI to take in diverse types of documents and generate the required reports automatically rather than manually, as currently happens.
These are a couple of applications that I think are very important. Now, those are pilots at very early stages, and during the panel it was very clear that cost is a significant factor. Developing, maintaining and keeping these tools up to date is very expensive, and it doesn't make sense unless they reach scale. It's not yet clear how that financial viability component will be addressed; that's still an open question.
The risks associated with new AI processes
We see, of course, new opportunities but also a lot of new risks. There is a range of them: bias risks, privacy risks, third-party risks, exclusion risks. These risks are developing in different ways and will need to be tackled in different ways.
Now, what's important to say is that these risks are being amplified, especially risks of bias. A single credit officer may have a bias and make a few biased decisions per day, but an algorithm can make thousands, even hundreds of thousands, of biased decisions if the data it uses are biased or do not portray the realities of the customers it is trying to serve. So the scale of this risk is being amplified by AI, and the mitigating factors need to be thought through. Some of the research we have done points to a few things. Algorithms themselves, of course, play a role: we can use data science techniques to first identify and then mitigate this risk. So there are statistical and computer science solutions being developed that have potential, but that's not enough.
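One of the simplest statistical techniques for flagging the kind of bias described above is the disparate impact ("four-fifths") ratio, which compares approval rates across customer segments. A minimal Python sketch, with entirely made-up decision data, might look like this:

```python
def approval_rate(decisions):
    """Share of applicants approved (decisions are True/False)."""
    return sum(decisions) / len(decisions)


def disparate_impact(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.

    Values below roughly 0.8 are a common red flag that the
    algorithm may be treating one segment unfairly.
    """
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    lo, hi = min(ra, rb), max(ra, rb)
    return lo / hi if hi > 0 else 1.0


# Hypothetical algorithmic decisions for two customer segments:
segment_1 = [True, True, True, False, True, True, True, False]    # 75% approved
segment_2 = [True, False, False, True, False, False, True, False]  # 37.5% approved

ratio = disparate_impact(segment_1, segment_2)
print(f"disparate impact ratio: {ratio:.2f}")  # prints 0.50, below the 0.8 threshold
```

A check like this is cheap to run over every batch of algorithmic decisions, which is exactly what makes statistical monitoring viable at the scale where biased decisions multiply; real fairness audits use richer metrics, but the principle is the same.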
What's important to say is that these risks are being amplified, especially if you think about risks of bias. [While] a single credit officer may ... make a few biased decisions per day... [an] algorithm can make thousands, hundreds of thousands of biased decisions if the data they use are biased.
Beyond that, as we mentioned earlier, there are the data inputs: there is an element of making those inputs more widely available and increasing their capacity to portray the livelihoods of multiple segments of the population. That is very important. So there is an element of data inputs, an element of algorithms, and also an element of business processes.
If we think about the fintechs and financial institutions themselves, what leads to an increased capacity to detect and mitigate these risks? We saw that institutions with more diverse management and data science teams, and with structures, mechanisms and policies in place to regularly detect and mitigate this risk, have a much higher capacity to do something useful.
It's a fast-evolving space, so some of these risks, and the way we think and talk about them right now, may evolve very fast. There are definitely many applications still at very early stages that will be tested and deployed in the market at a much higher speed in the future.
First, we need to build the foundations
I think that customer-facing applications which increase the capacity to communicate with customers efficiently, in a way that is clearly understood and that serves their needs, are very powerful, and there will be new applications similar to the one Sajida Foundation is piloting right now, which we saw during this conference. There is also a range of back-end applications that will play a very important role in helping financial institutions streamline their processes and become more efficient. But before we get to the full potential of these applications, we need to build the foundations.