How financial regulators can keep pace with AI innovation

There is a persistent myth that innovation and regulation are mutually exclusive. However, regulation isn’t about restricting companies from pursuing new, inventive approaches to doing business.

Rather, it’s about finding balance.

So, how do regulators walk the line between ensuring the soundness of the financial system and protecting consumers, while also allowing financial institutions to continue to innovate and remain competitive?

Breaking down AI regulation

Regulation was just one of the topics that a group of AI experts from the financial services industry, government bodies, and academia discussed at the recent Financial Industry Forum on Artificial Intelligence (FIFAI) workshops.

Those conversations, summarized in the FIFAI report, touched on four main principles guiding the use and regulation of AI in the financial industry:

  • E – Explainability
  • D – Data
  • G – Governance
  • E – Ethics

In this final edition in our series of articles on AI in the financial industry, we’ll take a closer look at the role of regulation in this rapidly evolving space (please be sure to check out our previous pieces on Explainability, Data, Governance and Ethics).

Over the past two months, we’ve been examining each of the themes in detail to see what we can learn and how we can apply this knowledge to regulatory research and activities.

Please note that the content of this article and the AI report reflects views and insights from FIFAI speakers and participants. It should not be taken to represent the views of the organizations to which participants and speakers belong, including FIFAI organizers, the Office of the Superintendent of Financial Institutions (OSFI) and the Global Risk Institute (GRI).

In addition, the content of the article and report should not be interpreted as guidance from OSFI or any other regulatory authorities, currently or in the future.

New technology, new risks

As the FIFAI report notes, AI has provided, and will continue to provide, benefits to financial institutions and their customers.

As these systems improve performance and efficiency, it’s expected that financial institutions will increasingly embed AI within their products, processes, and decision-making.

“However, the realization of the new risks and exacerbated risks such technology could pose have necessitated various jurisdictions to begin to formulate regulations,” the report explains.

FIFAI forum participants centred their regulation discussions around three key questions:

  • What is the state of AI regulations globally?
  • What is expected from financial institutions?
  • What are the positions of the regulators?

In recent years, policymakers and regulators have been reviewing existing laws and regulations, collecting feedback from stakeholders, and drafting new regulations to address AI-related risks.

Although approaches to AI principles and regulations vary in scope and impact, they align across jurisdictions on the following points:

  • AI models should be conceptually sound (e.g., accurate, reliable, robust, sustainable)
  • Explainability is essential in high-stakes decisions (including those with customer impact)
  • Organizations should have proper governance structures that address challenges created by AI (e.g., transparency, accountability)
  • AI should not cause any harm to individuals and society (e.g., bias, discrimination, ethical considerations, privacy concerns)

Building the formula for success

Forum participants also identified some common characteristics of successful regulations.

Again, it’s about finding the right balance. Financial institutions want to have the ability to innovate, so they’re seeking regulations that are clear on what’s allowed and yet not so prescriptive that they stifle those efforts.

The AI experts agreed that regulations should be consistent, follow a new set of best practices, cover third-party data and products, consider feedback from stakeholders, and ensure proportionality.

“In addition to the quality and content for successful regulations, forum participants recommended that regulatory bodies promote AI literacy, including data and consent, and also encourage financial institutions to do the same in order to broaden financial inclusion in Canada,” the FIFAI report suggests.

Regulatory innovation

As financial institutions continue to innovate and implement new technology, it’s important that regulators stay agile and current with advances in AI as well.

As the FIFAI report notes: “Despite the financial sector being ahead of many other sectors with regards to understanding the risks from models, it is necessary for regulatory bodies to keep abreast of the new risks emerging from adoption of AI.”

On the innovation front, regulators must continue to test and learn, as well as glean insights from AI use in other industries. It may make sense for some regulators to stand up innovation offices to stay on top of the latest developments.

Collaboration with other regulatory bodies and with the financial institutions they oversee can also help to fill in gaps.

“It is important that regulators not be perceived as a hindrance to financial institution innovation,” the report concludes. “Discussing and sharing with the private sector could be beneficial.”

For more on regulation of AI in the financial industry and any of the four EDGE themes, read the full FIFAI report (PDF, 5.42 MB).