The key to keeping financial industry AI safe and effective

Publication type: Artificial intelligence
As organizations race to integrate increasingly complex Artificial Intelligence (AI) into their business practices, experts suggest a robust governance framework is a good way to ensure models remain effective, safe, and fair.

AI governance was one of the topics a group of AI gurus from the financial services industry, government bodies, and academia discussed at the Financial Industry Forum on Artificial Intelligence (FIFAI) workshops.

The conversations, summarized in the FIFAI report that followed, touched on four main principles guiding the use and regulation of AI in the financial industry:

  • E – Explainability
  • D – Data
  • G – Governance
  • E – Ethics

In this series of articles – which has already covered Explainability and Data – we are examining each of the themes in detail to see what we can learn and how we can apply this knowledge to regulatory research and activities.

Please note that the content of this article and the AI report reflects views and insights from FIFAI speakers and participants. It should not be taken to represent the views of the organizations to which participants and speakers belong, including FIFAI organizers, the Office of the Superintendent of Financial Institutions (OSFI) and the Global Risk Institute (GRI).

In addition, the content of the article and report should not be interpreted as guidance from OSFI or any other regulatory authorities, currently or in the future.

What is governance, anyway?

First, it’s important to understand what governance is (and isn’t).

The Canadian Audit and Accountability Foundation defines governance as structures, systems, and practices an organization has in place to:

  • Assign decision-making authorities, define how decisions are made, and establish an organization’s strategic direction
  • Oversee the delivery of its services, including the implementation of its policies, plans, programs, and projects while monitoring and mitigating key risks
  • Report on its performance in achieving intended results and use performance information to drive ongoing improvements and corrective actions

More specifically, FIFAI Forum participants highlighted five characteristics desirable for good governance of AI at financial institutions:

  • It should be holistic and encompass all levels of the organization
  • Roles and responsibilities should be clear and well-articulated
  • It should include a well-defined risk appetite
  • It should reflect the risk of use cases
  • It should be flexible as a financial institution’s adoption of AI matures

Governance in practice

So, what does this mean in practice?

One example, from our article on the FIFAI theme of Data, suggests organizations build a "culture of data literacy" by bringing attention to the potential pitfalls associated with unfettered data use.

"Organization-wide awareness of the various risks that stem from inadequate use of data is essential with widespread adoption of AI, thus, organizations should consider ongoing training activities for their employees on a broad range of aspects related to data," the FIFAI report suggests.

Another good example comes from the Government of Canada’s Treasury Board Secretariat (TBS).

At the FIFAI forum, TBS Director of Data and Artificial Intelligence Benoit Deshaies explained how the Government of Canada uses an Algorithmic Impact Assessment Tool to assess automated decisions on a range of topics.

"It was developed using a collaborative approach, has a well-defined scope and application, is risk-based, and it implements the principle of proportionality, as it determines a score based on the impact to customer," the FIFAI report explains.

"Risk mitigation practices are then translated into specific requirements of governance depending on the level of impact. It is a self-assessment tool that is supported by a peer review process for automated systems that may require it."
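The proportionality principle described above, where a self-assessed impact score determines which governance requirements apply, can be sketched in a few lines. Note that the thresholds, level names, and requirements below are hypothetical illustrations, not those of the actual Algorithmic Impact Assessment Tool.

```python
# Illustrative sketch of risk-based proportionality: governance
# requirements scale with an assessed impact score. All thresholds,
# level names, and requirements are hypothetical examples; they do
# NOT reproduce the actual Algorithmic Impact Assessment Tool.

def impact_level(score: int) -> str:
    """Map a raw self-assessment score (0-100) to an impact level."""
    if score < 25:
        return "Level I"    # little to no impact
    elif score < 50:
        return "Level II"   # moderate impact
    elif score < 75:
        return "Level III"  # high impact
    return "Level IV"       # very high impact

# Higher impact levels inherit all lower-level requirements and add more.
REQUIREMENTS = {
    "Level I": ["plain-language notice"],
    "Level II": ["plain-language notice", "peer review"],
    "Level III": ["plain-language notice", "peer review",
                  "human-in-the-loop sign-off"],
    "Level IV": ["plain-language notice", "peer review",
                 "human-in-the-loop sign-off", "external audit"],
}

def requirements_for(score: int) -> list[str]:
    """Return the governance requirements proportional to the score."""
    return REQUIREMENTS[impact_level(score)]
```

The key design choice is that the mapping is cumulative and monotone: a higher score never triggers fewer obligations, which is what makes the self-assessment defensible as a proportional control.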

Governance isn’t the be-all, end-all solution to ensuring models are both effective and safe, but it is a critical component of any multi-pronged approach.

A strong foundation

That said, it may not be necessary to start from scratch.

Forum participants agreed with the Bank of England’s assessment that extending existing governance frameworks was a better approach than developing a suite of new AI-specific processes and procedures.

As the FIFAI report notes, financial institutions are at different levels of maturity in their adoption of AI and in their implementation of governance frameworks.

Developing a robust governance framework inclusive of AI might involve a significant culture change as more areas of financial institutions leverage AI techniques.

Moving forward

Forum participants discussed the "way forward" for AI governance and settled on five key areas: tools and technology governance, third-party governance, organizational aspects, skillset and education, and collaboration.

The FIFAI forum itself shows the value of the latter.

Participants agreed it would be helpful to continue collaborating with stakeholders and create a broad community of practice where institutions could share best practices.

"Continuous dialogue and collaboration between different stakeholders like academia, industry and regulators could help advance innovation through knowledge sharing," the report states.

To dig deeper into AI governance or any of the other themes forum participants discussed, read the full FIFAI report (PDF, 5.2 MB).