Proposed Revisions to Guideline E-23 on Model Risk Management

Publication type: Letter
Category: Sound Business and Financial Practices
Sector: Banks, Trust and Loan Companies

To: All federally regulated financial institutions and federally regulated pension plans

In September 2017, the Office of the Superintendent of Financial Institutions (OSFI) issued Guideline E-23: Enterprise-Wide Model Risk Management for Deposit-Taking Institutions, which set out OSFI's expectations for a life cycle approach to managing the use of models by federally regulated deposit-taking institutions (DTIs).

Since then, OSFI’s supervisory work has identified opportunities to provide greater clarity for DTIs on certain elements: model risk management guidance at the enterprise-wide level, the scope of models to which the guideline applies, and the application of the proportionality principle to smaller institutions.

Federally regulated insurance companies and federally regulated pension plans also rely on models to make business decisions. Although professional standards, to some extent, cover the use of certain models, the scope of Guideline E-23 is much broader. As such, OSFI believes the holistic guidance in Guideline E-23 on model risk management should be extended to federally regulated insurance companies and federally regulated pension plans. This would provide consistent guidance and expectations around the risk management of models across all federally regulated financial institutions (FRFIs) and federally regulated pension plans (FRPPs).

Furthermore, models are increasingly leveraging significant amounts and types of data, as well as employing more complex techniques. As described in OSFI’s discussion paper Developing Financial Sector Resilience in a Digital World, some model risks are also being exacerbated by digitalization and the use of advanced analytics, including artificial intelligence and machine learning (AI/ML). These factors, in conjunction with the expansion of model use at FRFIs and FRPPs, contribute to an increase of model risk.

Consequently, OSFI plans to expand the scope of Guideline E-23 to address these emerging risks and to clarify OSFI’s expectations that all FRFIs and FRPPs appropriately assess and manage model risks at the enterprise level. OSFI will take a balanced approach that reflects proportionality considerations for FRFIs and FRPPs based on the model risk management framework under which they operate. OSFI’s risk-based approach will also recognize FRFIs’ and FRPPs’ desire to innovate and preserve agility in model development while maintaining the importance of appropriate model risk management. Please see the Appendix for some aspects that OSFI is considering in future revisions to Guideline E-23.

OSFI plans to launch a consultation on Guideline E-23 in March 2023, with final guidance planned for publication by the end of 2023 and target implementation by June 2024. At this point, OSFI is seeking input from stakeholders on the expanded scope of application and models along with any other element of the current Guideline E-23 where additional detail or greater clarity would be beneficial.

Please submit any comments to models-modeles@osfi-bsif.gc.ca by June 30, 2022.

Appendix

Below, categorized under the principles of Soundness, Accountability, and Explainability, are some new aspects of model risk that OSFI is considering in the update to Guideline E-23. These aspects would apply to all in-scope models, though the degree of compliance will depend on the model’s materiality, complexity, and use.

Soundness

Model soundness is a broad and complex topic that considers, among other things, issues pertaining to data, development, validation, monitoring, bias, and documentation.

Issue: Data

OSFI will consider

Expanded model risk challenges that suggest stronger coverage of controls and governance through data lineage, due to:

  • Increased amount and variety of data; and

  • Speed at which data is leveraged in model development.

Issue: Model development, validation and implementation

OSFI will consider

Strengthening the rigour employed by model owners, users and validators to ensure:

  • Model robustness is taken into account in development, such that subtle differences in production data do not result in model output inaccuracy;

  • Models are more reliable, through proper exception handling and understanding of model limitations;

  • Appropriate oversight of model implementation; and

  • Contingency actions are predefined in case of model failures.

Issue: Monitoring

OSFI will consider

Appropriate frequency and intensity of monitoring depending on the risk of models. Timely and effective monitoring is needed for AI/ML models to ensure model performance remains adequate and to provide early warning of model failures.

Issue: Bias

OSFI will consider

Different types of bias that could manifest in models. Unwanted bias can raise fairness concerns, one of the principal evolving topics in the AI/ML space. Model bias can ultimately lead to reputational risk.

Issue: Documentation

OSFI will consider

Appropriate level of documentation, commensurate with model risk, while being sensitive to:

  • Industry trends towards agility in model development, balanced against the potential for frequent recalibrations that do not necessarily introduce new models;

  • Opportunity to leverage platforms as part of the model lifecycle; and

  • Recognition of the potential of varied audiences/contributors to model documentation.

Accountability

With the increase in use of models, the scope of Guideline E-23 will be enhanced to include models used beyond capital calculation and risk management, following a risk-based approach.

The continued evolution of model risk and model risk management practices is leading to an increase in the number and variety of stakeholders involved over the model lifecycle, to an even greater degree for AI/ML models.

Additional governance concepts could include accountabilities for model explainability, fairness, impact on corporate culture, the use of self-learning models, and human involvement as dual control (human in the loop). The updated guidance will reflect the extent to which model governance structures and frameworks may need to be enhanced in terms of lines of accountability to cover:

  • multidisciplinary model risk management that includes control functions (legal and compliance);

  • interrelationships between models and data, ensuring data lineage is transparent and effective;

  • technology advancements and evolving model risks, including risks exacerbated by the use of AI/ML; and

  • potential opacity of models and third party dependencies and their effect on model outcomes and results.

While the use of big data and more complex modelling techniques highlights the need for a specialized skillset and understanding of models, OSFI recognizes there might be different options to fill this need, such as outsourcing, as model risk management frameworks evolve.

Explainability

Explanation of model outputs enhances the ability to mitigate the risks and unintended outcomes associated with their use, and supports model soundness and accountability.

The amount and variety of data leveraged for model development, as well as the complexity and type of modelling techniques utilized, indicate the extent to which a model is inherently explainable. In turn, the level of model explainability required is driven by:

  • the intended model use across business areas of the organization; and

  • the different types of model stakeholders (for example, senior management, model owners, customers, auditors, and regulators); the differing goals of each stakeholder indicate the scope and dimensions of explainability that need to be considered.

A higher level of model explainability may require greater detail and more robust discussion on the intuitiveness of the model drivers as well as the roles involved with the model explainability assessment. The scope and dimensions of explainability could encompass all or part of the model lifecycle as well as the characteristics of the explanations required (such as local vs. global or exact vs. approximate). Consideration will also be given to the dynamic nature of some AI/ML models, the limited capacity to explain third party products and the need for ongoing monitoring of model explainability.

OSFI will aim to outline in updated guidance the different dimensions of explainability. In addition, OSFI will articulate its expectations on the levels of explainability required commensurate with model risk.