The Increasing Role of AI in Banking Operations
Artificial Intelligence (AI) is progressively becoming a central force in many core banking activities. It powers systems used for fraud detection, early risk sensing, credit decisions, anti-money laundering surveillance, and a growing range of customer-facing services. Rising adoption across the sector points to a deepening reliance on these systems, raising a pertinent question for bank boards: what happens if these systems fail, and can the bank still keep operations running smoothly?
AI Adoption in the Banking Sector
A 2025 survey by American Banker found that AI is rapidly expanding into risk, compliance, and financial analysis: 70% of big-bank respondents reported using chatbots, and 63% had adopted biometric tools. At HSBC, AI analyzes hundreds of millions of transactions per month in its fight against financial crime. JPMorganChase has deployed generative AI tools to over 200,000 employees, while Bank of America’s virtual assistant “Erica” handles more than 58 million interactions each month.
However, some institutions may be more dependent on AI than their leaders realize. On paper, continuity plans look robust, detailing manual reviews or alternate processes, fallback queues, rerouting rules, and escalation protocols. But are banks confident that these plans could be operated at scale with the staffing and skills available today? AI-enabled systems are often used at points in the workflow where large volumes of activity are filtered, scored, flagged, or prioritized before staff see them. Earlier manual or rules-based pathways have been reduced as processes were streamlined around these capabilities.
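The gap between a plan that looks robust on paper and one that works at scale can be estimated with simple arithmetic. As an illustration, the sketch below checks whether manual review capacity could absorb the full volume an AI filter normally clears; every figure and name is hypothetical, not drawn from any actual institution:

```python
# Hypothetical capacity check: if an AI filter goes offline, can the manual
# fallback absorb the full volume it normally clears? All figures illustrative.

def fallback_capacity_gap(daily_items, ai_auto_clear_rate,
                          reviews_per_analyst_per_day, analysts_available):
    """Return (daily shortfall with the AI offline, normal manual load)."""
    # With the AI running, analysts see only what it does not auto-clear.
    normal_manual_load = round(daily_items * (1 - ai_auto_clear_rate))
    # With the AI offline, every item needs a human look.
    capacity = reviews_per_analyst_per_day * analysts_available
    return max(0, daily_items - capacity), normal_manual_load

# Example: the AI auto-clears 95% of 200,000 daily alerts, so 40 analysts
# comfortably handle the remainder -- until the filter goes down.
gap, normal_load = fallback_capacity_gap(200_000, 0.95, 250, 40)
print(normal_load)  # 10000 items/day reach analysts with the AI running
print(gap)          # 190000 items/day shortfall with the AI offline
```

The point of such a back-of-the-envelope check is that staffing sized for the AI-assisted flow can be an order of magnitude short of what the documented manual fallback actually requires.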
The Operational Impact of AI System Failures
If an AI system becomes unavailable or its performance degrades, the operational impact can be significant. Fraud detection pipelines can stall, exposing the bank to higher losses, or anti-money-laundering monitoring can miss suspicious activity. In credit activities, loan approval processes can freeze, disrupting revenue flows and delaying decisions for customers. In some cases, systems remain technically available, but concerns about output reliability can prompt risk and compliance teams to suspend or restrict their use.
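One common mitigation for these failure modes is a degradation path: when the model is unavailable or its outputs are suspect, the pipeline falls back to a simpler, conservative rules-based check rather than stalling. The sketch below illustrates the pattern; the function names, rules, and thresholds are hypothetical, not any bank's actual system:

```python
# Sketch of a degradation path for a fraud-scoring step: if the model is
# unavailable or errors out, fall back to a conservative rules-based score
# instead of halting the pipeline. Names and thresholds are illustrative.

def rules_based_score(txn):
    """Static fallback rules, used only when the model cannot be trusted."""
    score = 0.0
    if txn.get("amount", 0) > 10_000:
        score += 0.5
    if txn.get("country") not in {"US", "CA"}:
        score += 0.25
    return score

def score_transaction(txn, model=None):
    """Return (risk_score, source); degrade rather than stall."""
    if model is not None:
        try:
            return model.predict(txn), "model"
        except Exception:
            pass  # model down or erroring: fall through to the rules
    return rules_based_score(txn), "rules_fallback"

print(score_transaction({"amount": 12_000, "country": "GB"}))
# (0.75, 'rules_fallback')
```

Tagging each score with its source also gives risk and compliance teams a record of which decisions were made under degraded conditions, which matters when outputs are later reviewed.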
Regulatory Expectations and Continuity Planning
From a continuity perspective, having a clear map of critical models and data flows, including their operational dependencies and any components operated by key third-party providers, can help banks understand how long specific AI systems can remain offline before customer or regulatory impact becomes material. A crucial point for boards and executives is whether their confidence about operating without a particular AI system is supported by testing and exercises conducted under realistic conditions.
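One way to make such a map actionable is to record, for each AI-dependent process, how long it can tolerably be offline and how quickly recovery was actually demonstrated under exercise conditions. A minimal sketch, with entirely hypothetical entries and field names:

```python
# Illustrative register of AI-dependent processes: for each, the maximum
# tolerable downtime and the recovery time demonstrated in the most recent
# exercise (None = never tested). All entries are hypothetical examples.

REGISTER = [
    {"system": "fraud_scoring",      "max_tolerable_hours": 4,  "tested_recovery_hours": 6},
    {"system": "aml_monitoring",     "max_tolerable_hours": 24, "tested_recovery_hours": 12},
    {"system": "credit_decisioning", "max_tolerable_hours": 8,  "tested_recovery_hours": None},
]

def continuity_gaps(register):
    """Flag systems whose tested recovery exceeds tolerance, or was never tested."""
    gaps = []
    for entry in register:
        tested = entry["tested_recovery_hours"]
        if tested is None or tested > entry["max_tolerable_hours"]:
            gaps.append(entry["system"])
    return gaps

print(continuity_gaps(REGISTER))  # ['fraud_scoring', 'credit_decisioning']
```

A register of this shape turns the board-level question ("could we operate without this system?") into a concrete comparison between stated tolerance and evidence from testing.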
Regulators are placing growing emphasis on operational resilience and continuity. In the United States, existing supervisory frameworks already cover AI systems through expectations for model and third-party risk management: SR 11-7 sets expectations for board oversight of model risk management, while the 2023 Interagency Guidance on Third-Party Relationships emphasizes that banking organizations remain responsible for managing risks in their use of third parties, including those involving critical technology and AI service providers.
As AI becomes more closely connected to essential banking workflows, continuity thinking becomes an important part of board oversight. In practice, this means bringing AI-dependent processes into existing continuity discussions and exercises, so that their operational dependencies and fallback options are understood as clearly as those of other critical systems.
Preparing for the Future
For bank boards, the path forward rests on acknowledging that continuity planning must evolve as AI is integrated into more critical processes. Business continuity plans should explicitly account for AI system failures, because preparedness is ultimately what will keep the bank running when those systems are disrupted.



