Financial Industry Forum on Artificial Intelligence: A Canadian Perspective on Responsible AI



Data availability and accessibility have improved dramatically, modeling techniques have taken a large step forward, and models are being applied to a growing number of business areas across regulated financial institutions in Canada. Capabilities and usage have evolved faster than regulation.

OSFI partnered with the Global Risk Institute (GRI) to create a community of Artificial Intelligence (AI) thought leaders from academia, regulators, banks, insurers, pension plans, fintechs, and research centres. This group, called the Financial Industry Forum on Artificial Intelligence (FIFAI), advanced the conversation around appropriate safeguards and risk management in the use of AI at financial institutions.

Ideas discussed to support safe AI development are grouped into Explainability, Data, Governance, and Ethics - the “EDGE” principles.

Responsible artificial intelligence principles – EDGE


Explainability

Explainability should be considered at the outset of model design and is driven by the use case and the associated governance framework. Examples were provided where an explainable model would be selected over a higher-performing opaque model, recognizing that modeling goals may be broader than performance alone. For high-impact use cases, there was discussion about whether inherently explainable models should be used rather than relying on post-hoc explanations.
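To make the distinction concrete, here is a minimal sketch of an inherently explainable, scorecard-style model. The feature names and weights are hypothetical, chosen purely for illustration: because the model is a simple weighted sum, each feature's contribution to a decision can be read directly, with no post-hoc explainer required.

```python
# Illustrative scorecard-style model. Feature names and weights are
# hypothetical; real models would be fitted to data and validated.

WEIGHTS = {
    "years_at_employer": 0.8,
    "debt_to_income": -2.5,
    "missed_payments": -1.2,
}
INTERCEPT = 1.0

def score(applicant: dict) -> tuple[float, dict]:
    """Return the score plus a per-feature contribution breakdown."""
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    return INTERCEPT + sum(contributions.values()), contributions

total, breakdown = score(
    {"years_at_employer": 5, "debt_to_income": 0.4, "missed_payments": 1}
)
# Each entry in `breakdown` is the exact effect of that feature on the score,
# so the explanation is the model itself, not an approximation of it.
```

An opaque model might score more accurately, but it cannot offer this line-by-line accounting; that trade-off is what the discussion above turns on.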


Data

Financial institutions have long worked with data, but the integration of AI, and the new data sources that come with it, has presented fresh challenges for managing and using data. With new data sources and types, increased data volumes, and an accelerating pace at which data is generated and consumed, it can be harder for financial institutions to integrate and standardize controls for managing data risk. This is especially true when data sits in silos within an organization or comes from different external sources. Improving the data used to train an algorithm has a direct impact on model performance, so it is important for financial institutions to align their business and data strategies to ensure that they are collecting, managing, and analyzing the right data to support their goals. Good data governance can help ensure that data is accurate, consistent, and complete, which is crucial for the effective functioning of AI systems.
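The accuracy, consistency, and completeness checks mentioned above can be sketched as simple record-level validation rules. Everything here is hypothetical: the field names, the rules, and the (deliberately truncated) list of province codes are illustrative stand-ins for an institution's actual data-quality standards.

```python
# Illustrative data-governance checks on incoming records.
# Field names and rules are hypothetical examples only.

REQUIRED_FIELDS = {"customer_id", "province", "income"}
KNOWN_PROVINCES = {"ON", "QC", "BC", "AB"}  # truncated list, for brevity

def validate(record: dict) -> list[str]:
    """Return a list of data-quality issues found in one record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:  # completeness
        issues.append(f"incomplete: missing {sorted(missing)}")
    income = record.get("income")
    if income is not None and income < 0:  # accuracy (sanity range)
        issues.append("inaccurate: negative income")
    province = record.get("province")
    if province is not None and province not in KNOWN_PROVINCES:  # consistency
        issues.append("inconsistent: unknown province code")
    return issues

records = [
    {"customer_id": 1, "province": "ON", "income": 75000},
    {"customer_id": 2, "province": "XX", "income": -100},
]
report = {r["customer_id"]: validate(r) for r in records}
```

In practice such rules would be centralized and applied uniformly across sources, which is precisely the standardization of controls that siloed data makes difficult.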


Governance

Governance has become an increasingly important topic and has matured to have the following properties: it should be holistic and encompass all levels of the organization; roles and responsibilities should be clear; it should include a well-defined risk appetite; and it should remain flexible as an institution's adoption of AI matures. In addition, AI model governance specifically requires a multi-disciplinary approach to be effective, and it should be part of a risk-based culture rather than a rote exercise.


Ethics

The concept of ethics is nuanced and inherently subjective. Ethical standards change over time, and their codification into laws and regulations illustrates the challenges and complexity of addressing AI ethics. There is no universal definition of fairness: what is perceived as fair depends strongly on context as well as one's perspective. Within the realm of algorithmic fairness, there are several mathematical definitions, many of which conflict with one another. Even from a legal perspective, complying with the law does not always mean that actions and outcomes are fair, or are perceived to be fair. In many use cases, such as pricing policies or risk stratification, some form of bias is the desired outcome. The data used to train AI can itself be a source of bias and unfair outcomes. A common current approach to addressing potential discriminatory bias is 'fairness through unawareness', whereby financial institutions do not use, and may not even collect, certain personal attributes in decision-making.

The societal expectation that financial institutions maintain high ethical standards continues to increase, and there are real reputational risks and consequences when harm, actual or perceived, is done to customers. Organizations should maintain transparency, both internally and externally, by disclosing how they ensure high ethical standards for their AI models.
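The claim that mathematical fairness definitions can conflict is easy to demonstrate on toy data. The sketch below uses an entirely hypothetical set of credit decisions and two widely used criteria: demographic parity (equal approval rates across groups) and equal opportunity (equal true-positive rates among qualified applicants). The data is constructed so the first criterion holds while the second fails.

```python
# Hypothetical (group, actually_qualified, approved) decisions,
# constructed to show two fairness definitions disagreeing.
decisions = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 1), ("B", 0, 0),
]

def approval_rate(group: str) -> float:
    """Share of all applicants in the group who were approved."""
    g = [d for d in decisions if d[0] == group]
    return sum(d[2] for d in g) / len(g)

def true_positive_rate(group: str) -> float:
    """Share of *qualified* applicants in the group who were approved."""
    q = [d for d in decisions if d[0] == group and d[1] == 1]
    return sum(d[2] for d in q) / len(q)

# Demographic parity holds: both groups are approved at the same overall rate.
# Equal opportunity fails: qualified applicants in group A are always approved,
# while qualified applicants in group B are approved only half the time.
```

Which criterion matters depends on the use case and on whose perspective is taken, which is exactly why fairness resists a single universal definition.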

Balance between regulation and innovation in artificial intelligence

Globally, regulators are striking a balance between regulation and innovation: setting robust regulations while ensuring financial institutions continue to transform and remain competitive. The approach to regulating AI varies across jurisdictions, with some regulators, such as the Bank of England, adopting a principles-based approach, while others, such as the Monetary Authority of Singapore, provide more granular, prescriptive guidance.

The insights and discussion from FIFAI are a testament to the need for collaboration and a multidisciplinary approach, and have shown an appetite for continued dialogue in the Canadian financial services industry.