Last November, the Financial Stability Board (FSB) issued an illuminating report on the financial stability implications of AI. It identified four primary ways that AI may put financial stability at risk, which are outlined in the next four sections. Specifically, AI can give rise to third-party dependencies, increased market correlations, amplified cyber risks, and weak AI model governance. We explore these four factors and then conclude with our own analysis of the particular vulnerabilities posed by the race to attain Artificial General Intelligence.
Third-Party Dependencies and Service Provider Concentration
Banks and financial institutions increasingly rely on only a handful of large providers for high-performance hardware and cloud services. This concentration creates single points of failure: the disruption of one provider could trigger widespread financial instability. If a critical AI or cloud service provider experiences an outage, multiple financial institutions could simultaneously lose access to essential services such as transaction processing. As a result, financial markets could experience sudden liquidity shortages, payment disruptions, and a widespread loss of confidence. Financial regulators may need to expand their oversight to include critical third-party technology providers while rapidly adapting risk frameworks to these concentrated dependencies.
Market Correlations
Widespread adoption of similar AI models may increase correlations in decision-making. If many banks and investors train models on the same data or use the same third-party AI solutions, their strategies and risk assessments can become synchronised. Accordingly, when these similar models encounter unexpected market events, they may all recommend similar decisions, such as mass sell-offs or a tightening of credit standards. Such synchronised actions are likely to intensify market instability. In a shock scenario, for example, AI-driven systems might all flag the same assets as risky or execute sell orders simultaneously, exacerbating price drops and liquidity crunches.
Meanwhile, AI is altering the speed and complexity of systemic financial crises. What once unfolded over days might now happen in minutes due to AI-driven decision-making. This acceleration occurs because AI models rapidly interpret and respond to market signals, propagating risk decisions (such as selling assets or limiting credit) through the financial network at unprecedented speed.
Conventional stress-testing frameworks, which assume relatively stable market conditions, are ill-suited to the sudden, large-scale revaluations AI systems can trigger. Moreover, supervisory monitoring approaches that rely on slower, periodic data collection will not keep pace with real-time algorithmic decision-making.
Cyber Risks
AI uptake could amplify the risk of cyberattack, as attackers exploit intensive data flows and novel interactions with AI systems. Three primary challenges arise: (i) a broader attack surface for hackers, (ii) data poisoning, and (iii) more sophisticated scams and fraud mechanisms.
Every AI system integrated into a bank, especially one connected to external data sources or cloud services, becomes a potential entry point for hackers. AI integration introduces open interfaces and complex data pipelines vulnerable to modern cyber threats. This broader attack surface increases opportunities for breaches, whether through infiltrating third-party AI service providers or abusing the APIs that feed data into models. Financial regulators will likely need to build enhanced cybersecurity standards and real-time threat monitoring into their frameworks.
Moreover, AI itself introduces novel attack vectors. Adversaries might tamper with training data to skew model outputs in their favour. Poisoned training data could systematically distort risk assessments, triggering cascading failures across interconnected financial systems. Financial regulators must now address the integrity of AI training processes and the reliability of model outputs under rapidly evolving threat conditions.
Finally, AI systems may be used to craft convincing phishing emails, fake personas, and deepfake audio and video to trick employees and customers, making social engineering attacks more effective at scale. Financial regulators are accordingly burdened with continually updating protocols to counter these dynamic, AI-driven cyber threats.
Model Risk, Data Quality, and Governance
The complexity and opacity of advanced AI models pose significant model risk challenges. Many AI algorithms operate as ‘black boxes’ with opaque decision-making logic. This gives rise to three issues: (i) incorrect financial signals, (ii) harder-to-detect manipulation in AI-driven trading, and (iii) the ‘governance gap’.
AI models, particularly large language models, are prone to ‘hallucinations’ that produce inaccurate or misleading assessments. Worse, their opacity makes such errors difficult for regulators to detect, because they cannot easily audit or verify how the models reach their conclusions. This inability to scrutinise AI-driven decisions delays the identification and correction of faulty risk assessments. Consequently, incorrect financial signals could go unchallenged, allowing errors to propagate across institutions and destabilise markets.
Further, AI-influenced trading makes market manipulation harder to detect, understand, and prevent. Traditional surveillance relies on identifying recognisable abuse patterns, but AI models can develop adaptive strategies that evade conventional detection methods. As AI models operate at speeds beyond human oversight, financial regulators must develop real-time monitoring tools capable of capturing ultra-fast market dynamics and novel forms of manipulation.
Moreover, the Australian Securities and Investments Commission’s recent review of AI use by 23 financial licensees revealed a ‘governance gap’: firms are adopting AI faster than they are updating their risk and compliance frameworks. Rapid adoption without improved risk frameworks exposes consumers to errors and biases that become apparent only after harm has occurred. Without clear oversight and risk management frameworks, AI decision-making (such as automated lending decisions or personalised financial advice) could systematically discriminate against certain consumer groups, harming consumers and potentially violating existing anti-discrimination and fairness laws.
AI systems might exploit personal data or behavioural biases, nudging consumers into unsuitable financial products or extracting higher profits at their expense. Poorly supervised AI systems might also mishandle or expose sensitive consumer data, creating privacy breaches and eroding consumer trust. These risks are compounded by the speed at which AI operates: any harm inflicted on consumers could spread rapidly before remedial measures or regulatory oversight can be implemented.
The AGI Race
Beyond these vulnerabilities and risks identified by the FSB, we suggest further risks attendant on the race to develop Artificial General Intelligence (‘AGI’) within the next few years. Tech giants worldwide are investing hundreds of billions of dollars in pursuit of AGI as their primary objective. The US-China Economic and Security Review Commission has even recommended a Manhattan Project-style program to pursue AGI.
At a minimum, AGI could magnify the risks discussed above. As a highly sophisticated and autonomous form of AI, AGI could overwhelm the ability of financial regulators to maintain economic stability and market fairness. More dramatically, however, AGI threatens widespread labour displacement at a speed and scale never before experienced. If AGI rapidly outperforms humans in many economically valuable roles, large portions of the workforce could quickly become unemployed. This could devastate consumer demand and trigger sharp recessions, potentially outpacing traditional monetary and fiscal interventions.
Alternatively, if the intense speculative frenzy surrounding AI fails to deliver on the ambitious promise of AGI, the economy risks an AI-driven bubble reminiscent of, but larger than, previous tech bubbles such as the dot-com bubble. Investors worldwide are pouring unprecedented funds into the race for AGI. If the technological breakthrough does not materialise, confidence could collapse, triggering investor panic, massive asset sell-offs, and substantial financial losses. The deflation of unrealistic valuations of AI firms could severely destabilise financial markets, erode consumer and business confidence, and lead to an economic recession.
Therefore, regardless of whether we achieve AGI, the race for it may pose substantial challenges to financial stability.
Ross P Buckley is Scientia Professor at UNSW
Bisesh Belbase serves on the Editorial Board of the UNSW Law Journal