Why the biggest AI challenge may be explaining what it does

Imagine for a moment you’ve applied for a loan that has the potential to change your life and you’re waiting to hear back from the bank.

Perhaps it’s a mortgage to acquire a dream home for your young family. Maybe it’s a business loan that will allow you to finally start chasing your entrepreneurial dreams.

Finally, the bank calls with the news: you’ve been rejected.

Your heart sinks.

“But why?” you ask the voice on the line.

“Ah, gee, I don’t know,” the voice on the line responds. “To be honest with you, it’s super complicated.”

Would you be satisfied with that explanation?

Explainability and regulating artificial intelligence (AI)

Of course, a bank would have a better (and more polite) response and a better handle on its approval processes.

But it does raise the question: when it comes to the far more complex AI systems being used by the financial industry now and in the coming years, how much, and what kind, of information needs to be accessible and explainable to customers and regulators?

That was just one of the questions that an impressive group of AI experts from the financial services industry, government bodies and academia discussed at the recent Financial Industry Forum on Artificial Intelligence (FIFAI) workshops.

The conversations, and the resultant report, coalesced around four main principles guiding the use and regulation of AI in the financial industry:

  • E – Explainability
  • D – Data
  • G – Governance
  • E – Ethics

In this series of articles – starting with Explainability – we’ll examine each of the themes in detail to see what we can learn (and how to apply that knowledge to regulatory research and activities in the future).

Note that the content of this article and the AI report reflects views and insights from FIFAI speakers and participants. It should not be taken to represent the views of the organizations to which participants and speakers belong, including the Office of the Superintendent of Financial Institutions (OSFI) and the Global Risk Institute (GRI).

In addition, the content of the article and report should not be interpreted as guidance from OSFI or any other regulatory authorities, currently or in the future.

Explainability explained

So, what exactly is “explainability” in the context of AI?

At its most basic level, explainability enables financial institutions to deepen trust with their customers.

When customers understand the reason for a decision, they become empowered to work towards their financial goals.

That’s easier said than done, of course.

One of the most prominent and persistent challenges with using AI is explaining how AI models reach their conclusions.

Without explainability, it’s tough to examine the theory, data, methodology and other key foundational aspects underpinning an approach. Without that, it is also difficult to verify a model’s performance and confirm it’s fit for purpose.

As Alexander Wong, Professor and Canada Research Chair at the University of Waterloo, said: “One of the key things that explainability enables is ensuring that the right decision is being made for the right reason.”

Forum participants tackled five key questions while unpacking these issues, namely:

  • What levels of explainability might AI systems have?
  • What factors should determine an appropriate level of explainability for a particular application?
  • What are the approaches to achieve explainability and what are the associated risks?
  • How does explainability connect to a more general concept of transparency?
  • What is the role of explainability in building trust?

Who needs to know what, and when?

One important point was that the degree of explanation required for a model should be considered at the outset of model selection and design, and be driven by the use case and associated governance framework.

“Explainability for someone doing a rigorous independent review of an approach might be different than explainability that needs to be provided to a consumer explaining why they shouldn’t be extended credit,” David Palmer of the Federal Reserve Board said.

The FIFAI report provides several examples of that concept in action. Explainability can:

  • Be helpful to data scientists for easier debugging and better identification of ways to improve performance and robustness of AI models
  • Help business owners understand and better manage risks that stem from AI tools, and also help regulators certify compliance
  • Provide better explanations to customers so they understand why a certain decision was made
  • Explain how the customer can change their behaviour to influence future decisions

The next step is to decide what the appropriate level of explainability is.

Levels of explainability

As the FIFAI report notes: Levels of explainability reflect the degree to which we understand how a model arrives at its conclusions.

This applies to both local explanations (understanding a particular decision) and global explanations (understanding an AI model).
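
To make that distinction concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from the FIFAI report: the credit features, synthetic data and logistic regression model are illustrative assumptions. The model’s coefficients stand in for a global explanation of its overall behaviour, while the per-feature contributions for a single applicant stand in for a local explanation of one decision.

```python
# Illustrative sketch only: hypothetical features and synthetic data,
# used to contrast global and local explanations of a simple model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "credit_history_years"]

# Synthetic applicants; approval loosely driven by income and credit history.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Global explanation: coefficients describe the model's overall behaviour.
print("Global view (coefficients):")
for name, coef in zip(features, model.coef_[0]):
    print(f"  {name}: {coef:+.2f}")

# Local explanation: per-feature contributions to one applicant's decision.
applicant = np.array([[0.2, 1.5, -0.3]])  # one hypothetical applicant
print("Local view (this applicant):")
for name, contribution in zip(features, model.coef_[0] * applicant[0]):
    print(f"  {name}: {contribution:+.2f}")
print("Approval probability:", round(model.predict_proba(applicant)[0, 1], 2))
```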

Models that are completely transparent have a high level of explainability, while less transparent techniques have a low level of explainability.

That doesn’t mean it’s impossible to understand more complex systems. An explanation of outcomes can still be provided for models with a low level of explainability via post-hoc analysis techniques.
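
As a concrete illustration of post-hoc analysis, the short sketch below applies permutation importance, one common model-agnostic technique, to an opaque gradient-boosted model. Again, the model, feature names and data are hypothetical assumptions for illustration, not methods prescribed by the report.

```python
# Illustrative sketch only: a post-hoc, model-agnostic explanation of an
# opaque model using permutation importance from scikit-learn.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
features = ["income", "debt_ratio", "credit_history_years"]  # hypothetical

X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)

# A less transparent model: an ensemble of trees with no single readable formula.
opaque_model = GradientBoostingClassifier().fit(X, y)

# Post-hoc analysis: shuffle each feature and measure how much accuracy drops.
# Larger drops suggest the model leans more heavily on that feature.
result = permutation_importance(opaque_model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(features, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```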

All forum participants agreed that the appropriate level of explainability depends on numerous factors, including:

  1. What needs to be explained?
  2. Who needs the explanation? Levels of explanation could differ depending on the recipient (e.g., data scientist, business owner, regulator, customer).
  3. Is it a high-materiality use case? For example, the need for explanation is lower for an AI chatbot than for AI models that make credit decisions.
  4. How complex is the model? Extremely complex models might not be appropriate for certain use cases.

Having answers to these questions allows institutions to achieve the very important objective of building trust.

Explainability and trust

Many factors come into play when building trust.

Simple knowledge of how a model works may not be sufficient to build trust, because other aspects, such as model accuracy and the absence of bias, also matter.

And how do you quantify trust, anyway?

“Trust is really hard to gauge, but you know when you don’t have it anymore,” said Stuart Davis, Executive Vice President, Financial Crimes Risk Management, Scotiabank.

Here’s how the FIFAI report summarizes the interplay between AI and trust:

Explainability, together with disclosure at the right levels and to the right audience, is one of many factors that contribute to developing trust between a financial institution and its customers.

Inevitably, increasing trust in AI enables further use and innovation.

For a deeper dive into explainability or any of the other themes forum participants discussed, read the full FIFAI report (PDF, 5.42 MB).