FIFAI panel report sets "AGILE" guide for AI in finance
The Global Risk Institute and Canadian public sector partners have published a final report on artificial intelligence in financial services, setting out an AGILE framework to manage AI risks and opportunities across the sector.
The document marks the second phase of the Financial Industry Forum on Artificial Intelligence (FIFAI), a three-year initiative involving financial institutions, academics, policymakers, consumer advocates and regulators. More than 170 participants contributed to discussions on cybersecurity, financial crime, financial stability, consumer protection, and the wider effects of AI on the financial system.
Sonia Baxendale, President and CEO of the Global Risk Institute, said this phase was designed to deepen understanding of how AI is reshaping the industry and how firms should respond.
The report argues that AI has moved beyond earlier debates about internal governance and now presents broader risks, including fraud, third-party dependencies, market disruption and consumer harm.
AGILE Model
The framework introduced in the report groups recommended actions under AGILE's five headings: Awareness, Guardrails, Innovation, Learning and Ecosystem Resiliency. It is presented as a practical structure for financial firms, regulators and government bodies adapting to a fast-changing AI environment.
Under Awareness, the report calls for stronger oversight by boards and senior managers, better horizon scanning and wider use of stress testing for AI-related shocks. Under Guardrails, it urges firms to strengthen data governance, maintain human oversight for high-impact decisions and improve due diligence on third-party providers.
On Innovation, the report argues that firms should not delay adoption out of caution alone. Instead, institutions should invest in AI talent, modernise data and security infrastructure, and use AI in areas such as cyber defence, suspicious transaction reporting and consumer fraud prevention.
Learning is another central pillar. Firms, regulators and consumers all need stronger AI literacy, from understanding model errors and hallucinations to recognising AI-driven scams. The report also points to shortages of specialist talent in both industry and government, particularly where technical expertise must be combined with financial sector knowledge.
Broader Risks
Canadian financial institutions have largely aligned with the EDGE principles set out in the forum's first phase, covering explainability, data, governance and ethics. But the spread of AI tools and the faster pace of adoption have widened the agenda, pushing firms and authorities to address risks that extend beyond individual institutions.
Among the most immediate concerns, according to the forum report, is the use of AI by fraudsters and cyber criminals. AI is making social engineering attacks more convincing, enabling criminals to create deepfakes, synthetic identities and cloned voices with limited information. Call centres, IT help desks and remote hiring processes are becoming more vulnerable as impersonation becomes easier.
The document also highlights concentration risk in the AI supply chain. Financial institutions are becoming more reliant on a small number of providers for cloud infrastructure, models, software and data, while visibility into fourth-party and fifth-party relationships remains limited. That raises the risk that a failure or compromise at one provider could spread quickly across the sector.
Economic Stakes
The report frames AI as both a risk management issue and an economic opportunity. Financial services are already among the heaviest users of AI across industries, and wider deployment could lift productivity, improve fraud detection, strengthen compliance and support more tailored financial guidance.
At the same time, AI could introduce new financial stability risks. These include correlated market behaviour when trading models rely on similar data, faster operational shocks when AI systems fail in key processes, and pressure on credit portfolios when automation disrupts the wider labour market.
"AI is a transformative force-both awe-inspiring and potentially perilous. Its true impact will hinge on disciplined, responsible innovation and robust collaboration across borders and sectors," said Peter Routledge, Superintendent of OSFI.