FIFAI II: AI Risks and Opportunities: Adopting an AGILE Framework in Canadian Financial Services


    The content of this report reflects views and insights from individual FIFAI speakers and participants. This report should not be interpreted as guidance from and does not necessarily reflect the views of the Bank of Canada, the Department of Finance Canada, the Financial Consumer Agency of Canada, the Financial Transactions and Reports Analysis Centre of Canada, the Office of the Superintendent of Financial Institutions or any other regulatory authorities, currently or in the future.

    Foreword

    This phase of FIFAI is about deepening our understanding of how AI technologies are reshaping the industry now and going forward. It is as much about capturing the opportunities as it is about effective AI risk management strategies, both of which are of increasing importance for the sector.

    Sonia Baxendale, President and CEO, Global Risk Institute

    It's been three years since the Office of the Superintendent of Financial Institutions (OSFI) and the Global Risk Institute (GRI) launched the Financial Industry Forum on Artificial Intelligence (FIFAI), bringing together experts from the financial industry, academia, policymakers and regulators.

    Today, the fast pace of technological change and AI adoption highlights the need for renewed collaboration between the public and private sectors, leading to FIFAI Phase II with the overall goal to:

    • Deepen our understanding of how AI technologies are reshaping opportunities and threats for the financial system and financial consumers, and
    • Discuss best practices and effective AI risk management strategies to learn how to build resilience into individual organizations, consumer well-being protections, and the financial system.

    This FIFAI II final report outlines the AI risks and opportunities raised at forum workshops and introduces the AGILE framework for financial-industry stakeholders to navigate the evolving impacts of AI.

    The sponsors wish to express their deep gratitude to the more than 170 participants who shared perspectives throughout the FIFAI II discussions. Participants included representatives from banks, insurers, asset managers, non-financial corporations, consumer advocates, universities, research institutes, government and regulatory agencies. All were committed to balancing the risk and opportunity inherent in the continued advancement of AI within financial services and in the wider environment affecting the industry.

    Global Risk Institute

    Office of the Superintendent of Financial Institutions

    Department of Finance Canada

    Financial Transactions and Reports Analysis Centre of Canada

    Financial Consumer Agency of Canada

    Bank of Canada

    Background

    The first phase of FIFAI focused on the internal risks associated with the development, deployment, and use of AI within financial institutions. The FIFAI I report, A Canadian Perspective on Responsible AIFootnote 1, established the EDGE (Explainability, Data, Governance, and Ethics) principles as pillars for responsible AI adoption across the financial industry and encouraged harmonized regulation that reflects Canadian values while enabling innovation. Adherence to the EDGE principles involves appropriate explainability and disclosure; consumer-centric approaches that uphold ethical values and protect privacy; and strong, risk-based governance supported by sound data practices and multidisciplinary collaborationFootnote 2.

    It was the view of the participants that Canadian financial institutions have generally aligned with EDGE principles. Evident Insights, an independent firm that benchmarks AI use in bankingFootnote 3 and insuranceFootnote 4, ranked Canada's five largest banks and two Canadian insurers among the top 15 globally for "transparency of responsible AI activities" in 2025.

    Since FIFAI I, AI-related risks have expanded beyond EDGE's scope due to rapid adoption of AI within the financial industry, significant technological advances, and growing impacts of AI on the external risk environment. FIFAI II therefore convened extensive discussions on escalating cyber threats, third-party risk, financial well-being and consumer protection impacts, financial crime, and financial stability implications.

    FIFAI II reflects a shared commitment among industry, government, and regulators to achieve progress through collaboration. Between May and November 2025, GRI sponsored four workshops, each co-sponsored by a combination of OSFI, Finance Canada, the Financial Transactions and Reports Analysis Centre of Canada (FINTRAC), the Financial Consumer Agency of Canada (FCAC), and the Bank of Canada. The workshops examined AI risks, mitigants, and opportunities. Interim reports covered:

    This final report, informed by the respective workshops and participant views, outlines critical risks and affirms that continued responsible AI adoption is necessary for competitive resilience, effective management of inherent AI risks, and heightened defence against sophisticated external threats. EDGE remains the foundation for responsible adoption, while "agility" emerged as a central theme to guide a sector that must move dynamically to capture AI's benefits while responding to fast-evolving risks.

    Executive summary

    AI is a transformative force, both awe-inspiring and potentially perilous… Its true impact will hinge on disciplined, responsible innovation and robust collaboration across borders and sectors.

    Peter Routledge, Superintendent, OSFI

    AI is reshaping financial services globally, redefining operating models and competitive dynamics. Automation and human augmentation can unlock efficiency, improve decisions, and strengthen competitiveness – but only when innovation aligns with principle-based governance to maintain trust and resilience. Canada's financial sector, with strong data foundations and disciplined risk culture, is uniquely positioned to lead responsibly in AI adoption while unlocking significant productivity and growth. At the same time, AI is reshaping the risk landscape, with potential systemic implications. Most pressingly, AI is enabling fraudsters and cybercriminals to operate with unprecedented speed, scale and sophistication. Institutions increasingly need AI not only to compete but to strengthen their defences and risk management.

    FIFAI II discussions underscored a broader set of critical areas that demand attention. Institutions must navigate strategic risk in the face of competitive pressures, execution risks and governance challenges from rapid AI adoption. AI is also intensifying threats to organizational security, from automated spear phishing to synthetic IDs being used to infiltrate organizations via the hiring process. Consumer-facing applications bring their own risks, as gaps in transparency, explainability and accountability may expose consumers to bias, fraud and other harms. At the same time, talent shortages and uneven upskilling may slow responsible innovation. Growing dependence on a small number of AI providers and opaque AI supply‑chain dependencies heighten systemic fragility. More broadly, AI‑driven operational disruptions, correlated trading behaviours and potential credit risk impacts introduce new challenges for financial stability.

    With that in mind, FIFAI II introduces the AGILE framework (Awareness, Guardrails, Innovation, Learning, Ecosystem Resiliency) and suggests implementation priorities for navigating AI risks and seizing AI opportunities:

    AGILE - Navigating Evolving AI Risk & Opportunity
     
    Text version
    • Awareness: Stay ahead of AI-driven risks by understanding how technologies reshape the risk landscape through organizational enhancements such as AI oversight, board engagement, and expanded monitoring and stress testing scenarios.
    • Guardrails: Make best practice regular practice with strong controls, data-integrity standards, human oversight for high-impact decisions, transparency and appropriate consumer outcomes, and rigorous third-party oversight.
    • Innovation: Adopt an AI growth mindset that treats AI as a driver of competitiveness and enhances consumer financial well-being and protection, supported by bold investments in talent, modern infrastructure, and responsible innovation.
    • Learning: Build AI skills at every organizational level, including employees and management, through continuous training and collaborative initiatives, while also empowering consumers with AI literacy to help them protect themselves and make informed choices.
    • Ecosystem Resiliency: Fortify system-wide defences through improved third-party oversight, regulatory clarity, enhanced digital identity security, expanded real-time threat sharing, and upgraded incident-response frameworks.

    Canada's financial sector stands at a pivotal moment. Strategic choices made now will shape competitiveness, resilience, and trust for years to come. The benefits of AI are within reach, but only through deliberate action that balances innovation with robust risk management and consumer trust.

    By embracing an AGILE framework, the industry can unlock growth while safeguarding stability and consumer well-being. This is the Canadian financial sector's opportunity to lead in AI innovation and set a national standard for trust, security, and productivity.

    The Evolving AI Risk environment

    AI is increasingly giving rise to critical risks, both from growing internal adoption of AI within institutions and the broader effects of AI on the external risk environment. The urgency of managing these risks reinforces the view that continued responsible AI adoption is necessary, both for competitive resiliency and to defend against more sophisticated AI-enabled threats. While the risks outlined below represent a snapshot as of the time of the workshops, industry stakeholders expect the risk landscape to evolve rapidly and in unanticipated ways.

    1. Strategic risks

    The biggest risk is not doing enough.

    FIFAI II participant

    Pace of adoption: Moving too quickly to adopt AI without proper risk management can lead to potential operational and consumer harms. Conversely, moving too slowly may result in missed opportunities and competitive disadvantages (for example, potential disruption from technology firms and other new entrants).

    Fragmented or near-sighted approach: A rushed, incomplete, or inflexible AI strategy can result in critical lapses across data, technology, infrastructure, business models, and consumer outcomes (including consumer protection and financial well-being). Implementing AI systems in isolation risks creating longer-term issues and increases execution risk. Often this is the result of hype-driven deployments, where AI is adopted without proper long-term strategic planning or success metrics. Unrealistic expectations from senior leadership can amplify this risk, as can a focus on scoring a "quick win" rather than incorporating AI for the longer term.

    Resource constraints: Effective AI adoption requires significant capital, expertise, and infrastructure. Underinvestment in AI limits the sector's ability to detect cyber threats, fraud, financial crime, and emerging systemic risks. It also reduces opportunities to use AI to improve productivity in core operations and free capacity for higher‑value risk management, supervision, and innovation. Competition for investment and R&D budgets is intense, with smaller institutions and public sector entities particularly challenged in this area.

    Data strategy and quality: AI performance depends on an effective data strategy that prioritizes data governance and quality. In a 2024 AI report published by OSFI and FCAC, federally regulated financial institutions identified data-related risks as a "top concern" for AI deployment.Footnote 5 Data that is inconsistent, incomplete, or fragmented across platforms (including third-party platforms and offshore storage) can heighten data sovereignty concerns, privacy risks for consumers, and challenges for regulatory oversight. Complex data workflows without proper data lineage and governance further compound these issues. Sub-standard data quality undermines efficiency gains and increases the likelihood of harmful outcomes (e.g., consumer harm, diminished trust, and failures in AML reporting).

    AI regulatory uncertainty: Financial sector regulation in Canada spans 14 jurisdictions, producing a matrix of guidelines, rules, and legislation that institutions must adhere to, which can add operational overhead. Furthermore, many financial institutions in Canada also operate in foreign jurisdictions. While AI-specific guidance from Canadian financial regulators has been limited, financial institutions may be hesitant to invest in and deploy AI if there is perceived uncertainty about compliance obligations or the potential for future regulatory constraints.

    2. Security and cybersecurity threats

    AI is enabling smarter fraud detection, faster investigations, and more adaptive compliance. However, it's also introducing new risks that are evolving just as quickly as the technology itself.

    Sarah Paquet, Director and Chief Executive Officer, FINTRAC

    Social engineering and synthetic identity fraud: As Michael Barr of the Federal Reserve Board of Governors noted in April 2025: "Deepfake attacks have seen a twentyfold increase over the last three years."Footnote 6 AI can enable effective deepfakes with minimal information, often obtained through social media. The rise of these tactics has increased the importance of secure digital identification and authentication. Canada, like most countries, lacks a universally adopted secure digital identity, leaving identity verification as a key attack surface across onboarding, consumer channels, and remote work environments. In one notable case, AI-generated synthetic identities allowed foreign operatives to obtain remote employment at North American firms, gaining access to internal systems and data.Footnote 7

    Voice spoofing and Fraud-as-a-Service (FaaS): A 2024 industry survey found that 91% of financial institutions globally are reconsidering voice-verification systems due to AI voice-cloning capabilities.Footnote 8 Call centres and IT helpdesks are also vulnerable as AI can convincingly impersonate employees or customers to request new devices, reset passwords, or obtain access credentials. AI has also accelerated the use of FaaS, which enables criminals to purchase turnkey, AI‑powered tools that dramatically increase the scale, speed, and sophistication of financial fraud.

    AI-assisted cyber-attacks: With AI, cyberattacks can be more easily automated, accelerated, and tailored by threat actors. The barriers to entry are lower and the capabilities are greater. Simultaneously, the overall attack surface has expanded as financial institutions increasingly implement outward-facing AI systems. Potential vulnerabilities extend to regulators and government, as they hold significant sensitive data. Organized groups can use AI to scale cyber and fraud operations and to facilitate money laundering and sanctions evasion. AI agents could automate multi-step attacks end-to-end, further lowering the barrier and enabling scale. In 2025, AI company Anthropic reported that it had disrupted an operation by a state-sponsored group to manipulate one of its models to autonomously attack various corporate and government targets.Footnote 9

    Disinformation and misinformation: Disinformation and misinformation can spread quickly due to the ubiquity of social media and the increasingly polarized political environment. The use of AI can enable a malicious disinformation campaign to be deployed at scale, which could then be amplified by misinformation. For instance, deepfakes and automated bots can disseminate false or misleading claims about a bank's solvency, regulatory actions, or system stability, actions that can quickly undermine trust.

    3. Consumer risks

    [This work] is focused on ensuring that innovation in the financial marketplace is not only forward-thinking and efficient, but also grounded in fairness, transparency, and a strong commitment to protecting consumers.

    Shereen Benzvy Miller, Commissioner, FCAC

    Consumer confidence and well-being: Applications of AI that impact consumers, such as product recommendations, credit adjudication, underwriting, and investment advice, are becoming more pervasive. As consumer-impacting AI applications increase and become more fully integrated into the internal operations of Canadian financial institutions, the consumer-trust pillars of transparency, explainability, and accountability become increasingly important. Many consumers may not know when AI is involved or may question how these systems reach decisions. This challenge is intensifying with generative and agentic AI, which can make end-to-end decisions harder to explain. Disclosure and consent are closely linked to explainability and transparency, while accountability frameworks and complaint handling need to keep pace with AI-enabled services to reduce the possibility of adverse consumer outcomes. These challenges intensify when consumers use AI-powered self-serve tools and products where there is less opportunity for human oversight.

    Data bias and security: Unfair outcomes driven by incomplete or biased data hold the potential to erode consumer trust in AI systems. The use of alternative or inferred data to personalize products can create hidden proxies for identifiable information, exposing consumers to unintended profiling. These dynamics can amplify unwanted bias in technology‑driven decision‑making and systemically disadvantage consumers at scale. Increasingly complex data flows, often involving multiple third‑party systems, heighten privacy and security vulnerabilities to which consumers are exposed. Data governance frameworks that lag behind AI adoption compound the risks.

    Increased exposure to attempted fraud: AI is expanding the scale and sophistication of fraud and related criminal activities. Personalized frauds exploit consumer data and behavioural patterns, making them more persuasive and harder to detect. Consumers may find it increasingly difficult to distinguish legitimate communications from AI‑generated frauds, putting them at greater risk and eroding overall trust in Canada's financial sector. The true extent of the problem is unknown, as the Canadian Anti-Fraud Centre (CAFC) estimates 90 to 95% of fraud goes unreported.Footnote 10 Without proactive measures to counter these threats, some consumers may feel that their financial institutions are not sufficiently protecting them.

    Inequality of access: AI could widen the digital divide between those with access to digital technologies and those without. Even among those with access, gaps in AI literacy and confidence may limit the ability to benefit from AI-enabled services. Absent a thoughtful approach, AI could exacerbate existing inequalities faced by some groups, including low-income individuals and households, newcomers to Canada, Indigenous communities, the elderly, and Canadians living with disabilities.

    4. Knowledge and talent gaps

    As technology continues to evolve rapidly, it's important for us to welcome new graduates who are inherently digital savvy and bring fresh perspectives. We're also prioritizing ongoing learning and development for our employees, leaders, and Board members, so everyone has an understanding of the latest tools and how to use them to harness emerging opportunities.

    Laura Money, EVP, Chief Information and Technology Innovation Officer, Sun Life

    Shortages of AI talent: As AI becomes more foundational to financial services, a scarcity of top AI talent represents a potentially significant threat to a company's ability to operate safely, execute on strategy, and compete. It also imperils regulators' capacity to oversee rapid industry change and the complex systemic risks amplified by AI. While Canadian universities educate thousands of AI specialists annually, supply remains insufficient, especially for those who also have financial industry knowledge. Canadian financial institutions may struggle to compete with the substantially higher compensation packages offered by foreign technology companies. Regulatory agencies are similarly limited in their ability to match private sector compensation or provide access to cutting-edge technology. AI expertise is concentrated in larger institutions; smaller firms may have little or none.

    Shortfalls in AI learning: Moving slowly or failing to upskill workforces could impede the ability of institutions to thrive in an increasingly AI-dominated world. Meanwhile, limited AI knowledge and trust among consumers, especially vulnerable groups, could prevent them from benefitting from AI-enhanced services and expose them to a greater risk of being scammed or defrauded.

    Potential AI misuse: A Large Language Model (LLM) can generate entirely fabricated information that appears authoritative (so-called hallucinations). There have been incidents where unverified, LLM-generated figures, statements, and references have led to material errors. A lack of awareness of AI hallucinations, together with other AI knowledge gaps, represents a clear risk to both consumers and organizations.

    "Learning velocity" mismatch: LLMs that seemed revolutionary in 2023 are now considered primitive. Individuals and organizations alike may struggle to keep up with the pace of change. By the time an institution develops comprehensive AI training, the technology might have already advanced. For instance, the pace of advancements by threat actors in their social engineering techniques is quickly outpacing internal training programs. Similarly, AI-enabled money laundering techniques often evolve faster than detection capabilities and associated training can be developed and deployed.

    5. Third-party concentration and supply chain risks

    As AI adoption accelerates, third-party concentration and supply chain dependencies are becoming core sources of systemic risk. Financial institutions must look beyond individual vendor resilience and understand where shared dependencies, limited visibility, and single points of failure could amplify disruption across the system.

    Graeme Hepworth, Chief Risk Officer, RBC

    AI supply chain and third-party concentration risk: Growing AI dependencies span data, models, software components (including open source), and compute/cloud infrastructure. AI third-party service providers often depend on additional parties through complex 'nth party' or multi-tiered supply chains. Disruptions or compromises at any layer can propagate across institutions. Furthermore, AI adoption is deepening financial institutions' dependence on a small set of third-party technology providers. The July 2024 CrowdStrike outage resulted in an estimated financial loss of $5.4 billion for the Fortune 500 (excluding Microsoft),Footnote 11 illustrating the systemic impact of single points of failure. Mid-sized and smaller institutions may be more exposed because they often rely proportionately more heavily on external vendors.

    Lack of visibility and control: Risks arise from limited visibility and transparency into third-party controls and practices related to data, governance, security, and model risk. Financial institutions are accountable for the use of third-party services but often have limited ability to ensure third parties meet their expectations. A security failure by a third-party AI model can expose sensitive data, increasing vulnerability to adversarial attacks and resulting in the loss of intellectual property while eroding consumer trust. Given the size and influence of some third-party providers, even Canada's largest financial institutions may have limited leverage regarding contractual terms, operational transparency, or remediation timelines. Visibility into fourth- and fifth-party relationships is especially limited in the context of AI services, where financial institutions often lack clarity on how models were trained, what data was used, and which other entities are embedded in the supply chain. The introduction of open banking is expected to further expand the third and 'nth party' ecosystem, increasing both the complexity and scale of risk that institutions must manage.

    Sovereignty and oversight of critical providers: Many critical providers operate outside the financial regulatory perimeter, limiting visibility into the likelihood and potential impact of failures at a system level and complicating regulatory oversight. Global cloud architectures and limited domestic infrastructure mean that many AI services and associated data reside abroad, exposing institutions to foreign legal regimes and geopolitical risks.

    6. Financial stability risks

    [We must] understand the role of AI in the financial industry and mitigate the risks it represents to financial stability effectively. A better understanding can dispel unfounded fears and support policymakers in aligning oversight efforts with the most material AI-related vulnerabilities.

    Greg Reade, Associate Assistant Deputy Minister, Financial Sector Policy Branch, Finance Canada

    Operational shocks: The impact of internal system outages, reputational incidents, and data breaches can be magnified by the rapid growth of AI. As AI-enabled systems become embedded in essential processes, including those of critical infrastructure providers that support the sector (e.g., payments and telecommunications), institutions face heightened exposure to model failures, data corruption, and system misbehaviour. Anti-money laundering (AML) risks may also be elevated, as institutions often see only parts of criminal networks that span capital markets, casinos, payment service providers, and cross-border flows.

    Market volatility: Many AI-powered trading algorithms and models are trained on similar data which may intensify market volatility, particularly on a short-term basis. If AI-based trading models move in concert, this could lead to procyclical shifts in financial markets during periods of stress. Unregulated market participants using AI tools can further undermine systemic resilience. Equity markets and exchange-traded derivatives are possible areas of vulnerability.

    Labour and business disruption: The International Monetary Fund (2024) estimates that 60% of jobs in advanced economies will be affected by AI automation.Footnote 12 Citi's research (2024) predicts that 54% of finance jobs face potential AI-led displacement, the highest percentage among major industries.Footnote 13 While technological transformations historically have created new roles, the unprecedented speed of AI advancement suggests displacement could occur faster than workforce retraining, creating a critical transition period. AI-driven automation will also impact firms across manufacturing, retail, transportation, professional services, and other sectors that form the backbone of Canada's economy. This economy-wide disruption poses systemic risks to financial institutions, particularly rising credit risk associated with affected individuals and businesses. It could lead to a "K-shaped" economy in which some flourish while others face shrinking opportunities, a scenario where lenders could face increased default risk.

    Gaps in threat information sharing: Established channels exist through which threat information flows between financial institutions, other critical sectors, and government agencies. However, various factors can impede the optimal flow of threat information; as a result, threat actors can conceivably strike multiple institutions and sectors before defences have time to adapt. Anonymization requirements can delay sharing through current channels, limiting their ability to support real-time responses to AI-enabled fraud, cyberattacks, or third-party incidents. Competitive dynamics and privacy mandates further limit information sharing.

    Vulnerabilities in crisis response mechanisms: Canada's existing incident-response and crisis-coordination arrangements provide a solid foundation, but blind spots may exist in the face of unprecedented AI-enabled threats. Information, access, and systems may be fragmented across agencies, as may the mandates of the bodies tasked with incident response. At an institutional level, business continuity processes may not incorporate specific cases of failure in AI systems or models. Novel attack strategies enabled by AI could involve multi-pronged attacks at a speed and scale that current crisis response mechanisms were not designed to handle. Recent technology and cloud outages, like the CrowdStrike or AWSFootnote 14 events, have also illustrated the gap that can exist between plans on paper and practical readiness, particularly in novel or unprecedented scenarios.

    Agentic AI and emerging financial stability risks:  Agentic AI systems can act autonomously, make multi‑step decisions, and trigger financial actions at machine speed across markets and institutions.Footnote 15 As these systems gain traction, their behaviour may become increasingly difficult to predict, monitor, or constrain. Agents that make investments on behalf of retail clients, for example, may respond simultaneously to similar data sources or market cues, amplifying short‑term volatility and intensifying liquidity pressures during stress events. Corporate treasury agents could rapidly reallocate deposits in reaction to news, social‑media sentiment, or shifting rate environments. During times of stress this could potentially accelerate funding outflows and destabilize bank balance sheets. As agents are deployed in more financial use cases, the risks may evolve further in unexpected ways. Other emerging technologies are also materially reshaping risk profiles. AI is accelerating progress toward fault‑tolerant quantum systems, raising the prospect of breakthroughs that could overturn current cybersecurity assumptions.

    Seizing the AI opportunity

    AI presents a strategic opportunity to strengthen Canada's financial system. The Canadian financial sector creates 7-8% of the nation's GDPFootnote 16 and employs almost 850,000 Canadians.Footnote 17 It is a significant adopter of AI today. Globally, across a broad range of industry sectors, financial services report the third-highest use of AI at work (72%) and the second-highest level of organizational support (75%).Footnote 18 More broadly, it is projected that continued AI deployment could add $298 billion in cumulative GDP from 2025 to 2035 and generate an average of 41,500 new jobs annually.Footnote 19 Canadian financial institutions that successfully harness AI could capture much of this potential value creation.

    The ability of AI to process vast datasets and detect patterns makes it a powerful risk management tool. Financial institutions can monitor market activity, identify anomalies, and anticipate stress scenarios before they escalate, enhancing resilience and reducing systemic vulnerabilities. Regulators can also leverage AI to enhance systemic monitoring, harmonize guidance and improve supervisory capacity. Beyond risk mitigation, AI can deliver significant productivity gains. Applications across compliance, fraud detection, and operational workflows can free skilled professionals for higher-value priorities. These efficiencies translate into cost savings that can be reinvested in innovation, infrastructure, and talent development, which are essential for fueling long-term competitiveness.

    AI also offers transformative opportunities for financial well-being, consumer protection and financial crime prevention. Advanced authentication, behavioural analytics, and anomaly detection help prevent identity theft, account takeovers, and deepfake-enabled fraud and strengthen trust in digital financial ecosystems. At the same time, AI is democratizing access to financial advice through personalized guidance at scale. Conversational platforms and recommendation engines can make sophisticated insights available to underserved populations, potentially promoting quality financial inclusion and financial literacy. In combatting financial crime, AI models can analyze vast transaction networks in real time, detecting suspicious patterns and enabling proactive intervention to reduce fraud, money laundering and other emerging threats.

    If adopted responsibly and at scale, AI can reinforce Canada's global competitiveness and serve as a catalyst for a smarter, safer, and more inclusive financial system. Achieving this balance between innovation and risk management requires a shared, practical approach to AI adoption across the sector.

    The AGILE Framework

    The AGILE framework (Awareness, Guardrails, Innovation, Learning, and Ecosystem Resiliency) can help guide responsible AI adoption, innovation and resilience across Canada's financial sector. Developed from workshop insights, this framework empowers stakeholders to capture the benefits of AI while effectively managing the risks.

    Awareness of emerging threats and systemic risks

    Awareness is critical for the Canadian financial sector as AI increasingly reshapes the risk landscape and the sector expands its AI use. Stakeholders need to understand the ways in which AI can alter the risk environment and how further developments, such as agentic AI or AI-driven macroeconomic disruptions, can impact them. Organizational enhancements such as AI oversight, board engagement, and expanded monitoring and stress testing scenarios will help to manage the risks in this area.

    Adapt risk identification and governance in response to technological change

    • Proactively embed horizon scanning, risk assessment, and strategic planning into standard risk management practice.
    • Update emerging risk inventories to capture systemic exposures such as AI-driven macroeconomic impacts, market volatility and disinformation campaigns.
    • Establish triggers for revisiting governance assumptions as new technologies, like agentic AI or quantum computing, move from concept to reality.

    Address AI risks at the senior management and board level

    • Improve AI literacy, so senior management and board members understand AI capabilities, risks, and limitations.
    • Leverage a deeper understanding of AI to make ongoing, informed choices about suitable AI investments and strategy implementation.
    • Establish AI executive oversight, if not already in place.

    Prepare for new technologies

    • Track new technologies, including those that can enhance the performance of AI systems, such as agentic AI and quantum computingFootnote 20.
    • Develop and implement appropriate training and oversight frameworks for employees prior to deployment of new AI technologies.
    • Establish clear governance guidelines for agentic AI, defining where human approval is required and where autonomous agents can operate safely.

    Monitor AI-driven labour market and business disruptions

    • Prepare for AI-related macroeconomic disruptions as a forward-looking risk to credit portfolios and strategic plans.
    • Improve stress-testing practices to include scenarios with material macroeconomic impacts stemming from broader AI adoption. This could include scenarios where AI adoption creates uneven outcomes across industries and regions and shifts in default patterns across retail and commercial portfolios.
    • Identify and monitor early warning metrics and take appropriate action if there is evidence of emerging credit or strategic risks.
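    The stress-testing practice described above can be sketched as a simple scenario overlay on segment-level default rates. All names and figures below are hypothetical assumptions for illustration, not supervisory parameters:

```python
# Illustrative sketch (not a supervisory model): applying an assumed
# AI-disruption scenario to segment-level default rates in a credit portfolio.
# Segment names, exposures, and shock multipliers are hypothetical.

def expected_loss(portfolio, shocks, lgd=0.45):
    """Expected loss per segment = exposure * shocked PD * loss-given-default."""
    losses = {}
    for segment, (exposure, base_pd) in portfolio.items():
        shocked_pd = min(1.0, base_pd * shocks.get(segment, 1.0))
        losses[segment] = exposure * shocked_pd * lgd
    return losses

portfolio = {             # segment: (exposure in $M, baseline probability of default)
    "retail_unsecured": (500.0, 0.020),
    "commercial_sme":   (800.0, 0.015),
    "commercial_large": (1200.0, 0.008),
}

# Hypothetical scenario: AI adoption displaces labour unevenly, raising retail
# and SME defaults more than large-corporate defaults.
ai_disruption = {"retail_unsecured": 2.5, "commercial_sme": 1.8, "commercial_large": 1.2}

baseline = expected_loss(portfolio, {})
stressed = expected_loss(portfolio, ai_disruption)
for seg in portfolio:
    print(f"{seg}: baseline {baseline[seg]:.2f}M -> stressed {stressed[seg]:.2f}M")
```

    In a shifted-default scenario of this kind, the retail and SME segments absorb most of the added expected loss, which is exactly the sort of uneven outcome across portfolios that the bullets above ask institutions to test for.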

    Monitor AI-driven market volatility

    • Enhance real-time monitoring that tracks leading indicators (for example, changes in order flow, spreads, liquidity, transaction value and velocity) to allow for time to respond prudently.
    • Define, test, and enforce circuit breakers, kill switches, and dynamic risk limits, along with systems that require human approval for high-impact actions.
    • Develop in-house trained trading models as opposed to solely relying on external models and data.
    • Exercise higher vigilance, by validating and continuously monitoring trading and risk models for drift, feedback loops, and correlated behaviour.
    • Run stress tests that assume market liquidity evaporates.
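    As a concrete illustration of circuit breakers, kill switches, and dynamic risk limits working together, the sketch below gates automated order flow behind all three controls. The OrderGate interface, thresholds, and window size are illustrative assumptions, not a production design:

```python
# Minimal sketch of the controls described above: a kill switch, a volatility
# circuit breaker, and a per-order notional limit gating automated order flow.

from collections import deque
from statistics import pstdev

class OrderGate:
    def __init__(self, window=20, vol_limit=0.05, notional_limit=1_000_000):
        self.prices = deque(maxlen=window)
        self.vol_limit = vol_limit            # circuit breaker: max rolling return std-dev
        self.notional_limit = notional_limit  # dynamic risk limit per order
        self.killed = False                   # kill switch state

    def kill(self):
        """Hard stop: all automated orders rejected until manually reset."""
        self.killed = True

    def record_price(self, price):
        self.prices.append(price)

    def _rolling_vol(self):
        prices = list(self.prices)
        if len(prices) < 2:
            return 0.0
        returns = [b / a - 1.0 for a, b in zip(prices, prices[1:])]
        return pstdev(returns)

    def allow(self, notional):
        """Return True only if no control is tripped."""
        if self.killed:
            return False
        if self._rolling_vol() > self.vol_limit:  # circuit breaker tripped
            return False
        if notional > self.notional_limit:        # escalate to human approval instead
            return False
        return True
```

    The design point is that each control fails closed: when any one trips, automated flow stops and a human decision is required to resume.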

    Guardrails to ensure responsible AI adoption continues

    Organizations should ensure that AI-enabled systems operate safely, predictably, and fairly across their lifecycle, particularly where failures or misuse could cause consumer harm or systemic risk. Effective AI guardrails embed fundamental best practices into day-to-day operations, keep control frameworks evergreen, and establish clear accountability for outcomes produced by AI. Guardrails include appropriate human oversight for high-impact decisions where feasible, vigilance about data quality, and rigorous standards for third parties.

    Focus on the fundamentals

    • Build institutional habits in critical areas such as cyber hygiene and other elements of organizational security, including staff onboarding.
    • Promote a culture of cyber vigilance by encouraging staff to question unexpected requests, especially those involving sensitive data or financial transactions.
    • Ensure controls are properly implemented, frequently tested and regularly audited.
    • Refocus on data to ensure data is accurate, accessible, and fit for purpose. Data frameworks should include quality and integrity standards for third-party, external, or synthetic data used to develop proprietary models and algorithms. Data governance should enforce security hygiene, manage access, and set standards for handling proprietary and sensitive information.

    Implement robust and flexible control frameworks

    • Keep control frameworks 'evergreen' to ensure their scope and effectiveness develop in step with AI adoption.
    • Proactively update internal safeguards to ensure strong consumer protection and well-being, including clear disclosure, consent and transparency as new products and systems are deployed.
    • Require control frameworks to be transparent with clear documentation of AI processes to meet board requirements, management objectives, and regulatory and market conduct guidelines.

    Focus on inclusive consumer well-being practices

    • Embed 'Inclusion by Design' principles by placing consumer well-being at the center when designing and deploying products and services involving AI.
    • Respect consumers through principles-based, plain-language disclosure frameworks that meet market conduct obligations and are accessible and meaningful to consumers.
    • Use a "tiered" disclosure system that enables consumers to choose the level of detail that best suits their needs. Tiered disclosure could range from high-level summaries that provide sufficient information to help consumers make better informed decisions to comprehensive technical details.
    • Communicate clearly in plain language when consumers are dealing with AI and, where possible, provide meaningful opportunities to opt out, without limiting access to products and services.
    • Provide explanations for material AI-driven decisions that impact consumers, including how the system processes information and reaches conclusions, to the extent technically feasible.
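    The tiered disclosure idea above can be as simple as one disclosure record rendered at the level of detail a consumer selects. The tiers and field text below are hypothetical, not a regulatory format:

```python
# Illustrative sketch of a "tiered" disclosure: one record, three levels of
# detail, with the plain-language summary as the safe default. The wording
# and tier names are hypothetical examples, not regulatory language.

DISCLOSURE = {
    "summary": "An automated system helped assess this credit application.",
    "standard": (
        "An AI model scored this application using your reported income, "
        "credit history, and existing debt. A human reviews declined applications."
    ),
    "technical": (
        "Model: gradient-boosted classifier, retrained quarterly. "
        "Top factors for this decision: debt-to-income ratio, credit utilization, "
        "payment history. You may request a human re-assessment."
    ),
}

def disclose(tier="summary"):
    """Return the disclosure text for the requested tier, defaulting to the summary."""
    return DISCLOSURE.get(tier, DISCLOSURE["summary"])
```

    Defaulting unknown or unselected tiers to the plain-language summary keeps the consumer informed without requiring them to parse technical detail.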

    Remain accountable for AI outcomes

    • Take accountability for all aspects of the output and behaviour of AI systems, particularly where those outcomes entail significant risk or materially affect consumers.
    • Include, where possible and appropriate, human oversight of material decisions made by AI-assisted tools, agents and services.
    • Provide transparency about the information presented by AI models, leveraging the best currently available explainability methods.
    • Provide consumers with clear avenues for timely redress (recourse) and remediation when AI-driven outcomes have a negative and seemingly unfair effect upon them.

    Enhance due diligence requirements for engaging third parties

    • Ensure that due diligence protocols and vendor requirements for engaging third parties are regularly updated and effectively enforced.
    • Develop controls that strive to ensure that the full AI supply chain is visible and continuously monitored. Mapping fourth, fifth, and "nth party" dependencies can help surface hidden single points of failure and concentration risk.
    • Perform periodic reassessments, concentration limits, exit and substitution plans, and resilience testing that assumes correlated disruption across multiple providers.
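    Mapping nth-party dependencies, as recommended above, is essentially a graph-reachability exercise. The sketch below uses hypothetical institution and vendor names to surface a shared provider that neither bank contracts with directly:

```python
# Sketch of nth-party dependency mapping: represent vendor relationships as a
# directed graph and surface providers that multiple institutions ultimately
# rely on, a hidden concentration risk. All names are hypothetical.

from collections import defaultdict

def transitive_providers(deps, start):
    """All direct and indirect (nth-party) providers reachable from `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for provider in deps.get(node, []):
            if provider not in seen:
                seen.add(provider)
                stack.append(provider)
    return seen

deps = {  # who depends on whom (institution/vendor -> its providers)
    "BankA": ["VendorX", "VendorY"],
    "BankB": ["VendorZ"],
    "VendorX": ["CloudCo"],
    "VendorY": ["CloudCo"],
    "VendorZ": ["CloudCo"],   # every dependency path ends at one cloud provider
}

exposure = defaultdict(set)
for bank in ("BankA", "BankB"):
    for p in transitive_providers(deps, bank):
        exposure[p].add(bank)

# Providers that more than one institution ultimately depends on:
concentrated = [p for p, banks in exposure.items() if len(banks) > 1]
```

    Here neither bank has a direct contract with CloudCo, yet both depend on it, which is precisely the single point of failure that fourth- and fifth-party mapping is meant to reveal.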

    Innovation through bold AI adoption

    Responsible AI adoption can empower workforces, strengthen defence against evolving threats, enhance consumer financial well-being, and drive growth across organizations' top and bottom lines. To do so, the industry must adopt an AI growth mindset, viewing AI not as a replacement for human expertise, but as an enabler of its workforce and a catalyst for expanding opportunities and product offerings. Adequate resourcing, investment in technological infrastructure, and the purposeful use of AI will be required to strengthen operational resilience while advancing consumer financial well-being, including access and protection.

    Adopt an AI growth mindset

    • Develop strategic plans and manage workforce transitions with the primary objective of achieving economic growth.
    • Invest in AI talent, creating cross-functional teams and prioritizing longer-term strategic transformation over "AI quick wins."
    • Adopt AI in the public sector in areas such as monitoring third-party dependencies, analyzing financial crime networks, detecting market anomalies, and supporting supervisory activities.

    Take a strategic approach to innovation

    • Make strategic investments in responsible AI adoption and avoid inaction due to inertia or misplaced fears about future regulatory actions.
    • Scale AI deployments wisely by assessing the impacts across the enterprise and beyond. This includes evaluating the implications of AI system development to consumer well-being and protection, data security, cybersecurity, operational resilience, and third-party risks.
    • Maintain a dynamic strategy through regular reviews in response to rapidly evolving changes to the business and risk landscapes due to AI advancements.
    • Collaborate across the organization with continuous communication to help avoid fragmented strategies.
    • Establish clear success outcomes and metrics to gain executive support for new AI developments.

    Modernize technology infrastructure

    • Update data infrastructure by standardizing data formats and consolidating data across silos.
    • Transition to a zero-trust security architecture anchored in continuous authentication, least-privilege access, and micro-segmentation.
    • Strengthen model integrity controls such as adversarial testing, query monitoring, and rate limiting to help reduce new attack paths.
    • Create controlled test environments that can be used to trial higher-risk capabilities safely.
    • Implement strong identity and access management for increasingly autonomous systems. This includes assigning each agentic model a distinct "digital identity"Footnote 21 to enable appropriate systems for security, oversight, tracking, and auditability.
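    The "digital identity" idea in the last bullet can be sketched as a registry that issues each agent a distinct credential and an explicit least-privilege scope, logging every attempted action for auditability. The registry interface and scope names below are illustrative assumptions:

```python
# Sketch of identity and access management for agentic systems: each agent
# receives a distinct identity and an explicit least-privilege scope set, and
# every attempted action is logged whether or not it is allowed.

import uuid
import datetime

class AgentRegistry:
    def __init__(self):
        self._agents = {}    # agent_id -> frozenset of permitted scopes
        self.audit_log = []  # append-only record of every attempted action

    def register(self, scopes):
        """Issue a distinct identity for a new agent with an explicit scope set."""
        agent_id = str(uuid.uuid4())
        self._agents[agent_id] = frozenset(scopes)
        return agent_id

    def authorize(self, agent_id, action):
        """Allow the action only if it is within the agent's scopes; log either way."""
        allowed = action in self._agents.get(agent_id, frozenset())
        self.audit_log.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": agent_id,
            "action": action,
            "allowed": allowed,
        })
        return allowed

registry = AgentRegistry()
reporting_agent = registry.register({"read:market_data", "write:draft_report"})
registry.authorize(reporting_agent, "read:market_data")  # in scope, permitted and logged
registry.authorize(reporting_agent, "execute:trade")     # out of scope, denied and logged
```

    Because denials are logged alongside approvals, the audit trail captures what an agent attempted, not just what it was permitted to do.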

    Boost operational resiliency through AI innovation

    • Utilize AI to enhance identification of, response to, and recovery from cyber attacks.
    • Leverage AI to enable real-time risk assessment at critical touchpoints and during customer onboarding.
    • Automate processes such as Suspicious Transaction Report (STR) filing and use AI-driven pattern recognition to improve the quality and consistency of financial transaction monitoring.

    Enhance consumer financial well-being and consumer protection through AI innovation

    • Enhance consumer financial well-being with consumer-facing digital assistants that help people budget, save, and plan by answering questions quickly, guiding users through complex choices, and tailoring recommendations based on their goals and personal circumstances.
    • Reduce consumer fraud by integrating AI with proactive safeguards, enhanced verification for high-risk actions, and education strategies that help consumers recognize and resist deception.
    • Use AI to strengthen consumer protection against fraud by more quickly identifying suspicious patterns and transactions.
    • Utilize AI to improve authentication and verification so that fewer legitimate payments are interrupted while genuinely suspicious activity is detected and stopped.
    • Deploy targeted initiatives to use AI to better reach underserved communities, aligning with inclusion objectives embedded in the National Financial Literacy Strategy and other federal frameworks.
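    A deliberately simple sketch of the balance described above, interrupting fewer legitimate payments while still catching out-of-profile activity, is to require more than one risk signal before flagging. The thresholds and features below are illustrative; production systems use far richer models:

```python
# Illustrative transaction-screening sketch: flag a payment only when it is
# both a statistical outlier for the customer AND goes to a first-time payee,
# so routine payments pass uninterrupted. Thresholds are hypothetical.

from statistics import mean, pstdev

def is_suspicious(history, amount, new_payee, z_threshold=3.0):
    """Flag when the amount is a >3-sigma outlier AND the payee is new."""
    if len(history) < 5:
        return False  # not enough history to profile; defer to other controls
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        z = float("inf") if amount != mu else 0.0
    else:
        z = (amount - mu) / sigma
    return bool(z > z_threshold and new_payee)
```

    Requiring two independent signals (an outlier amount and an unfamiliar payee) is one simple way to reduce false positives on legitimate payments while genuinely anomalous transfers are still stopped.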

    Learning to cultivate AI fluency

    AI education and training are vital investments for the financial sector. Continuous learning programs can help institutions keep up with the pace of change, and through collaboration, the industry can pool resources and accelerate progress. Enhanced learning programs around crucial areas like AI-enhanced social engineering and fraud are urgently needed to protect consumers and the integrity of the financial system. Educational efforts must also extend to consumers so that they understand how AI systems work, how their information is used, and how to recognize and protect themselves against threats, including AI-enabled fraud. At the same time, education should highlight the benefits AI offers, including clearer insights, personalized guidance, and easier day-to-day financial management.

    Develop talent strategically

    • Establish internal development programs that identify high-potential employees with strong analytical skills and provide them with intensive AI training.
    • Collaborate with Canadian universities on curriculum development, research partnerships, and knowledge exchange.
    • Partner with Canadian Forces transition programs to access veterans with security clearances and technical training.
    • Retain talent through enhanced benefits such as sabbaticals for advanced education and access to cutting-edge computing resources.

    Pursue continuous and comprehensive AI learning

    • Develop board and executive education that clarifies AI's implications for business models, risk landscapes, strategy, and competitive dynamics.
    • Implement training to ensure managers understand enough about AI to evaluate proposals, allocate resources, and integrate capabilities into operations.
    • Provide all-staff training on how to use AI tools safely, including how to identify hallucinations, biases, and limitations. Customer-facing staff, in particular, need to understand AI decisions well enough to explain them to clients.
    • Future-proof AI learning through environments for safe experimentation with AI technologies and through alternative delivery formats, such as short "bite-sized" modules.

    Collaborate on learning opportunities

    • Establish a formal consortium that brings together financial institutions to share learning resources, best practices, training materials, case studies, and lessons learned.
    • Partner outside the financial sector with other industries facing similar AI challenges to share learning approaches and resources.
    • Enhance existing joint research centers focused on financial AI applications. These centers should employ world-class researchers, train graduate students, and produce open research benefiting the entire ecosystem.

    Improve employee training for phishing and cybersecurity

    • Make staff aware that abundant publicly available data about senior executives can be used to create convincing AI-enabled deepfake, vishing, or spear phishing attempts.
    • Evolve training programs to help staff develop a more critical eye towards highly convincing communications received outside of normal channels.
    • Help employees develop the ability to recognize subtle abnormalities and to question unverified requests purported to be from authority figures.

    Educate consumers on AI use in financial services

    • Inform clients when AI systems shape product recommendations, financial decision-making, credit adjudication, or investment advice, when appropriate and possible.
    • Ensure consumers are fully informed about how their personal data is being used, including where AI may provide benefits such as more personalized insights, simplified choices, or improved financial management.
    • Implement proactive education campaigns that raise awareness of AI-related risks and provide practical knowledge on how to recognize deepfake communications and inappropriate information requests and other AI-enabled fraud techniques.
    • Adopt plain-language, tiered disclosure frameworks that clearly explain where and how AI is applied, the limits of opting out, and the safeguards in place to protect consumers.

    Ecosystem resiliency for a stronger financial system

    Enhancing resiliency across the financial ecosystem will depend on a more coordinated, system-wide approach. Financial sector participants will need to work together to create common standards and disclosure requirements for critical third parties and to upgrade response frameworks for critical financial infrastructure and AI-related shocks or attacks. The public and private sectors will need to collaborate to ensure a clear regulatory environment, potentially by establishing sandboxes, enhancing secure digital IDs, and addressing any gaps in information sharing and incident response mechanisms.

    Establish common standards and oversight for critical third parties

    • Raise contract standards that require critical third parties to meet ISO or other external benchmarks and mandate model bias testing and auditing for consumer-facing AI.
    • Set uniform disclosure requirements for data and model transparency.
    • Require visibility into the AI supply chain dependencies of third parties.
    • Develop certification frameworks and shared inventories of approved providers, supported by audits of the governance, risk management and redundancy capacities of critical third parties.

    Enhance regulatory certainty

    • Collaborate across public and private sectors to better understand perceived fragmentation and regulatory-related concerns that may inhibit responsible AI adoption.
    • Expand collaboration and innovation through harmonized regulatory guidance and consistent messaging that encourages responsible AI adoption.
    • Provide testing environments, such as sandboxes, to accelerate innovation and learning while managing and better understanding risk.

    Implement secure digital identification and authentication

    • Implement multi-factor authentication as a low-cost, high-impact identity verification control.
    • Enforce stronger identity verification at onboarding and transaction points.
    • Evaluate the establishment of a national strategy on digital IDs that could reduce systemic fraud risk and support safer use of AI-enabled services.
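    One widely deployed building block for multi-factor authentication is the time-based one-time password (TOTP, RFC 6238) used by common authenticator apps. The sketch below shows the core derivation for illustration only; production systems should rely on vetted, audited libraries:

```python
# Minimal TOTP (RFC 6238) sketch: server and authenticator app share a secret
# once, then independently derive matching short-lived codes. Illustrative
# only; use a vetted library in production.

import hmac, hashlib, struct, time, base64

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Derive a time-based one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

shared_secret = "JBSWY3DPEHPK3PXP"   # example base32 secret, shared once at enrolment
print(totp(shared_secret))           # 6-digit code that changes every 30 seconds
```

    Because the code depends on both the shared secret and the current time window, a stolen password alone is not enough, which is why MFA is a low-cost, high-impact control.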

    Strengthen information-sharing arrangements

    • Enhance information-sharing protocols — among financial institutions, across government entities, and between institutions and government — that cover anonymized threat indicators, incidents, and near-misses to help all parties respond faster and more consistently.
    • Identify and address any gaps or barriers that prevent the voluntary exchange of threat information.
    • Increase cross-pillar collaboration to avoid fragmented responses.

    Upgrade incident response frameworks

    • Enhance clarity of roles and responsibilities across agencies and consider the full range of potential scenarios.
    • Strengthen playbooks to ensure rapid coordination and response to incidents among critical third parties or market infrastructures.
    • Integrate AI-enabled supervisory tools into these frameworks to improve real-time awareness of third-party outages, cyber incidents/attacks, or market disruptions.

    An AGILE Framework: Implementation Priorities

    AI will continue to rapidly evolve. The AGILE framework is built to evolve with it. Its strategic tenets will endure even as AI's reach and capabilities continue to advance. Immediate and longer-term priorities for navigating AI with the AGILE framework include:

    Immediate priorities

    • Awareness: Strengthen executive awareness by ensuring boards and senior leaders actively understand evolving AI risks and proactively prepare for emerging technologies like agentic AI through clear governance frameworks and adaptive controls.
    • Guardrails: Reinvigorate focus on the fundamentals by making best practice regular practice with strong governance and risk controls that work as intended and building muscle memory in areas such as cyber hygiene and third-party due diligence.
    • Innovation: Enable bold, responsible AI-driven innovation by encouraging experimentation and scaled adoption of AI in customer service, market operations, internal processes and security-focused uses, supported by appropriate safeguards, sandboxes, and outcome-based supervision that allows new products, services, and business models to emerge.
    • Learning: Establish financial industry AI literacy and upskilling initiatives through continuous learning systems, organizational AI training frameworks, and collaborative industry initiatives that accelerate talent development and consumer awareness.
    • Ecosystem resiliency: Pursue greater regulatory certainty on AI-related risks by beginning to clarify how existing rules apply to AI and aligning across agencies where possible on messaging, priorities and next steps.

    Short to medium term

    • Awareness: Expand stress testing and monitoring by incorporating AI-driven macroeconomic scenarios into enterprise risk frameworks and continuously tracking labour market and economic impacts to anticipate systemic vulnerabilities and inform strategies.
    • Guardrails: Drive evergreen governance and transparency by maintaining adaptable frameworks, enforcing strong data integrity standards, and delivering consumer-centric disclosures with explainable AI decisions and inclusion by design.
    • Innovation: Accelerate AI-driven transformation by investing in tools and talent, modernizing legacy systems with standardized data and zero-trust security, and enhancing consumer financial well-being and protection through such things as personalized guidance and proactive fraud detection.
    • Learning: Advance sector-wide AI capability by building deep talent pipelines through university partnerships, scaling AI training across the organization, and empowering consumers through transparency and accessible AI literacy so they can help protect themselves against threats, understand AI‑driven decisions, and make confident, informed choices.
    • Ecosystem Resiliency: Strengthen information-sharing frameworks and joint intelligence efforts by developing clear legal and privacy frameworks for threat information sharing, standardizing formats, and encouraging participation from institutions of all sizes.

    Conclusion

    Artificial Intelligence is reshaping the financial sector at a pace demanding both boldness and discipline. The potential is vast – for economic growth, productivity gains, strengthened defences, improved consumer well-being and protection, and enhanced systemic resilience – yet the risks of inaction or missteps are equally significant. Success requires balancing innovation with responsible adoption and safeguards for consumer trust, organizational resilience, and financial stability.

    The AGILE framework provides a strategic roadmap for this transformation. Building on EDGE, its five pillars are designed for action:

    • Awareness – anticipating emerging threats and risks
    • Guardrails – solidifying controls and consumer protections
    • Innovation – driving competitive resiliency through responsible adoption
    • Learning – building necessary human capital and consumer confidence
    • Ecosystem Resiliency – fortifying the system through collaboration and shared standards.

    This report translates this framework into concrete steps such as strengthening cyber hygiene, scaling AI literacy, and modernizing infrastructure. These actions form a framework for collective progress and reflect a critical workshop insight: that the greatest risk of AI is failing to act decisively. Institutions that hesitate risk falling behind technologically and competitively while exposing themselves to AI-amplified threats.

    Success requires coordinated effort across industry, regulators, and government. Collaboration must become the norm through information sharing, learning initiatives, and greater regulatory certainty. Investment in talent and infrastructure must be prioritized to ensure Canada's financial system leads in responsible AI adoption. By operationalizing AGILE, the sector can deliver value for consumers, strengthen systemic resilience, and reinforce Canada's global competitiveness.

    The challenges are urgent, but a clear path forward exists. With agility, foresight, and collaboration, Canada's financial ecosystem can seize AI's promise while safeguarding the trust and stability on which it depends.

    Glossary

    Adversarial attacks
    Techniques used to mislead or compromise AI systems, potentially undermining model integrity and reliability.
    Agentic AI
    AI systems capable of autonomously initiating actions or decisions.
    AI literacy
    Understanding of AI capabilities, limitations, risks, and responsible use.
    Attack surface
    All potential points where unauthorized actors could attempt to compromise systems or data.
    Authentication
    Processes that verify identity, such as passwords, biometrics, or multi‑factor authentication.
    Automated bots
    Software agents that execute tasks automatically.
    Barriers to entry
    Structural or regulatory factors that limit new competitors from entering the financial market.
    Circuit breakers
    Mechanisms that automatically pause or limit trading or system activity to prevent instability or cascading failures during abnormal or volatile conditions.
    Cyber attack
    Any attempt to disrupt, access, or compromise information systems without authorization.
    Cyber criminals
    Individuals or groups that conduct illegal digital activities, including fraud and data theft.
    Data center
    Facility that houses the specific IT infrastructure needed to train, deploy and deliver AI and other digital applications and services.
    Data sovereignty
    Requirement that data be stored and processed under the laws of the jurisdiction where it originates.
    Deepfakes
    AI‑generated synthetic media that convincingly imitates real individuals, increasing risks of impersonation and fraud.
    Digital identity
    Technologies used to verify a user's identity electronically, often through documents, biometrics, or cryptographic credentials.
    Disclosure
    Required communication to regulators, stakeholders, or customers regarding material incidents or information.
    Disinformation
    False information deliberately created and spread to mislead, manipulate, or cause harm.
    Entrants
    New organizations entering a market, including fintechs and technology firms.
    Explainability
    The extent to which AI decision‑making processes can be interpreted and justified to different stakeholders.
    Fault‑tolerant quantum systems
    Quantum systems engineered to operate reliably even when qubits (i.e., the basic unit of quantum information) encounter errors.
    Financial fraudster
    A person or group conducting fraudulent financial activities using stolen, synthetic, or manipulated identities.
    Financial stability
    The resilience of the financial system to shocks, ensuring continued functioning of markets and institutions.
    Generative AI
    AI that generates content, such as text, images, or code, based on patterns learned from training data.
    Governance
    Framework (structures, oversight, and controls) by which organizations are directed and controlled.
    Inclusion by design
    The deliberate creation of financial and AI systems that ensure equitable access and outcomes for diverse populations.
    Kill switches
    Controls that can instantly shut down an AI system, trading process, or automated operation to stop harmful or unintended actions.
    Misinformation
    False or inaccurate information shared without intent to deceive.
    Nth‑party
    Any indirect third‑party provider deeper in a supply chain (such as fourth‑ or fifth‑party) that supports a contracted vendor but is not directly engaged by the financial institution.
    Open‑source models
    AI models whose architectures or weights are publicly accessible, enabling collaboration.
    Phishing
    The fraudulent practice of sending emails or other messages purporting to be from reputable companies in order to induce individuals to reveal personal information, such as passwords and credit card numbers.
    Privacy
    Assurance that the confidentiality of, and access to, certain information about an entity is protected.
    Quantum computing
    Computing technology leveraging quantum mechanics to solve complex problems, with potential implications for cryptography.
    Robust safeguards
    Controls, policies, and technologies able to withstand or overcome adverse conditions.
    Social engineering
    Manipulation techniques used to exploit human behaviour and gain unauthorized access to systems or information.
    Social media
    Online platforms where individuals share content.
    Spear phishing
    Highly targeted phishing attacks customized to specific individuals or organizations.
    Spoofing
    Impersonation of a trusted entity to deceive users into sharing sensitive information.
    Synthetic identity
    A partially fabricated identity combining real and fictitious information.
    Synthetic identity fraud
    Fraud committed using synthetic identities to open accounts or access credit.
    Systemic vulnerability
    A weakness with potential to impact the broader financial system rather than a single institution.
    Threat actor
    An individual or group responsible for malicious cyber or fraud activity.
    Vishing
    Voice‑based phishing in which attackers impersonate trusted individuals or institutions over the phone to obtain sensitive information.
    Zero‑trust architecture
    A cybersecurity model that assumes no user or device is inherently trustworthy and requires continuous verification, least‑privilege access, and strict system segmentation.