In “Insurers retreat from AI cover as risk of multibillion-dollar claims mounts”, the Financial Times reported this week that several major insurers (AIG, Great American, and WR Berkley) are seeking to introduce explicit exclusions for risks related to artificial intelligence, particularly concerning the use of agents and language models. The reasons put forward are straightforward: potential losses related to AI could reach several hundred million dollars, or even more. But above all, the danger lies less in the severity of any individual claim than in the possibility of correlated, massive, and simultaneous losses that are impossible to mutualize. As Aon summarizes in one sentence:
“What [the industry] can’t afford is if an AI provider makes a mistake that ends up as a 1,000 or 10,000 losses.”
The issue is far less about the severity of a particular incident than about the simultaneity of thousands of identical incidents. In other words, we are dealing with the very definition of systemic risk. To understand this, it is useful to place the phenomenon within the broader framework of complex systems, accidents in high-reliability organizations, and system-wide risk mechanisms as studied for several decades in finance and insurance.
AI as an interconnected network: a breeding ground for contagion
The literature on systemic risk teaches us that it is not the absolute size of institutions that determines their vulnerability, but the structure of their interconnections. Prasanna Gai highlights that financial systems exhibit what he calls a “robust-yet-fragile” dynamic: they withstand countless shocks yet may collapse abruptly when a specific shock travels through the right channels. He reminds us that:
“The system may be robust to most shocks, but when problems strike the effects may be catastrophic.”
A local error can turn into a global catastrophe as soon as it reaches the network’s “vulnerable cluster.” The simulations he presents show that the more interconnected a network is, the faster an error can spread. A minimal change in connectivity or capitalization can shift the entire system from a stable to a critical state, a true “phase transition.” As Gai puts it:
“An initial error can spread through the entire vulnerable cluster.”
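This threshold dynamic is easy to reproduce numerically. The sketch below is a minimal Watts-style cascade on a random network, in the spirit of Gai's simulations but not his actual model or calibration: the network size, average degree, and the 25% failure threshold are all illustrative assumptions.

```python
import random

def cascade_size(n=1000, avg_degree=4.0, threshold=0.25, seed=None):
    """Fraction of nodes that end up failed after one random initial default.

    A node fails once the share of its failed neighbours reaches
    `threshold` (a crude stand-in for a capital buffer). All parameters
    are illustrative, not a calibrated model.
    """
    rng = random.Random(seed)
    nbrs = [[] for _ in range(n)]
    for _ in range(int(avg_degree * n / 2)):    # approximate Erdos-Renyi wiring
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j:
            nbrs[i].append(j)
            nbrs[j].append(i)

    failed = [False] * n
    start = rng.randrange(n)                    # a single local error...
    failed[start] = True
    frontier = [start]
    while frontier:                             # ...propagates step by step
        nxt = []
        for node in frontier:
            for nb in nbrs[node]:
                if failed[nb]:
                    continue
                if sum(failed[k] for k in nbrs[nb]) / len(nbrs[nb]) >= threshold:
                    failed[nb] = True
                    nxt.append(nb)
        frontier = nxt
    return sum(failed) / n

# Sweep connectivity: a small change in average degree flips the system
# between a stable regime and one where the whole vulnerable cluster fails.
for d in (1, 2, 4, 6, 10):
    runs = [cascade_size(avg_degree=d, seed=s) for s in range(20)]
    print(f"avg degree {d:>2}: mean cascade size {sum(runs) / len(runs):.1%}")
```

Running the sweep shows exactly the phase transition Gai describes: cascades stay local at some connectivity levels and engulf nearly the whole network at others, with the flip occurring over a very narrow parameter window.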
In this context, generative AI displays all the characteristics of a system highly conducive to contagion. When an AI provider deploys a faulty update, introduces an error in a model’s parameters, or suffers a cybersecurity vulnerability, it is not isolated users who are affected but thousands, because they rely on the same infrastructure. Each client depends not only on its own usage but on the integrity of a global model, where even the smallest modification instantly reproduces identical behaviour across all users. This is exactly the kind of rapid, synchronized contagion insurers now fear: propagation that is not slow or progressive, but immediate, simultaneous, and homogeneous.
Why insurers fear correlated losses
Insurability has historically depended on a fundamental condition: the law of large numbers. As Hufeld, Koijen, and Thimann remind us in The Economics, Regulation, and Systemic Risk of Insurance:
“The risk must obey the law of large numbers.”
Events must be independent, or sufficiently heterogeneous, for losses to offset each other statistically. Yet cyber-risks already fail to meet this condition, as Biener et al. showed in 2015:
“Cyber losses are highly interrelated and characterized by severe information asymmetries.”
Cyber insurance is already structurally fragile, facing simultaneous, large-scale, difficult-to-attribute incidents. Generative AI only reinforces this structure, creating an environment where errors are not merely frequent but potentially identical and simultaneous. The examples cited by the Financial Times (an Air Canada chatbot binding the airline to a fare, a Google module defaming a small business, a $25 million deepfake fraud against Arup, or an autonomous agent producing pricing or diagnostic errors at scale) highlight how extremely reproducible such incidents are. A single defect, update, or vulnerability can affect an entire sector all at once.
This homogeneity of losses is toxic for insurance. As actuarial literature reminds us, when losses become correlated, mutualization collapses mechanically. By design, insurance cannot absorb risks whose very structure pushes toward aggregation.
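The mechanics are worth making explicit. For a pool of n policies, each with loss standard deviation σ and pairwise correlation ρ, the standard deviation of the average loss is √(σ²/n + (1 − 1/n)ρσ²): with independent risks it shrinks toward zero as the pool grows, but with any positive correlation it plateaus at √ρ·σ. A minimal sketch, with illustrative numbers rather than real actuarial data:

```python
import math

def pooled_std(n, rho, sigma=1.0):
    """Std dev of the average loss over a pool of n policies with
    pairwise correlation rho: Var = sigma^2/n + (1 - 1/n)*rho*sigma^2."""
    return math.sqrt(sigma ** 2 / n + (1 - 1 / n) * rho * sigma ** 2)

for rho in (0.0, 0.1, 0.5):
    row = {n: round(pooled_std(n, rho), 3) for n in (10, 1_000, 100_000)}
    print(f"rho={rho}: {row}")
# rho=0.0: risk per policy vanishes as the pool grows (law of large numbers)
# rho>0  : it plateaus at sqrt(rho)*sigma, no matter how large the pool
```

A shared model, update, or vulnerability is precisely what pushes ρ away from zero, which is why no pool size can dilute the resulting risk.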
AI, cybersecurity, and “normal accidents”: when complexity makes error inevitable
Clearfield and Tilcsik, in Meltdown, show that in complex systems, failures are not anomalies: they are inevitable. Surprises emerge from interactions, and small errors amplify as they propagate through tightly coupled networks. Their argument corresponds exactly to what insurers fear: the error of an autonomous model is not an isolated human mistake, but a systemic mechanism liable to replicate everywhere.
“In wicked environments, error signals are ambiguous, interactions are complex, and systems become vulnerable to surprises.”
Generative AI models fit this description perfectly: structural opacity, non-deterministic behaviour, dependence on a handful of global providers, and the lack of separation between uses create a system in which local failures become systemic. The technological dependence concentrated among a few actors amplifies this dynamic further. When a single model is deployed in legal, medical, financial, and industrial contexts, an error propagating through that model can contaminate every domain simultaneously. AI is thus less a tool than an interconnected ecosystem, a fully-fledged complex system.
Why AI creates a new kind of systemic risk for insurance
Traditionally, insurance has been considered less vulnerable to systemic risk than banking, because insurers exhibit far lower interconnectedness. This is the consensus highlighted by Hufeld, Koijen, and Thimann:
“The insurance industry is not subject to systemic risk, in particular due to its significantly lower interconnectedness.”
But this assumption relies on a hidden premise: that the risks insured must themselves remain independent. With AI, that premise collapses. For the first time, an insured risk (cyber, errors & omissions, software-related damages) becomes structurally interconnected. A single provider, a single model, a single update, or a single vulnerability can trigger thousands of simultaneous losses. Interconnectedness is no longer a property of the insurance market; it is a property of the risk itself. Acharya, Pedersen, Philippon, and Richardson define a risk as systemic when it can generate a simultaneous “capital shortfall” across multiple institutions:
“Even if liabilities are not runnable, a firm can contribute to systemic risk through its contribution to the aggregate capital shortfall.”
In a world where thousands of losses linked to the same AI error could occur at once, this definition becomes sharply relevant. AI introduces a form of aggregated, correlated, non-diversifiable risk. It is no longer a volatile risk, but a structurally synchronized one — insurers’ worst-case scenario.
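To see why synchronization, rather than volatility, is the problem, consider a toy Monte Carlo in the spirit of Acharya et al.'s aggregate capital shortfall (their formal estimator is different; every number here is an illustrative assumption). Firm losses load on a single common factor, standing in for a shared AI provider:

```python
import random

def tail_shortfall(n_firms=50, capital=2.0, rho=0.0,
                   crisis_q=0.95, n_sims=20_000, seed=1):
    """Average aggregate capital shortfall in the worst (1 - crisis_q)
    share of scenarios. Losses follow a one-factor model in which the
    common factor stands in for a shared AI provider. Illustrative only."""
    rng = random.Random(seed)
    a, b = rho ** 0.5, (1 - rho) ** 0.5
    totals = []
    for _ in range(n_sims):
        shared = rng.gauss(0, 1)                 # one provider-wide shock
        totals.append(sum(
            max(a * shared + b * rng.gauss(0, 1) - capital, 0.0)
            for _ in range(n_firms)))
    totals.sort()
    tail = totals[int(crisis_q * n_sims):]       # the crisis scenarios
    return sum(tail) / len(tail)

for rho in (0.0, 0.5, 0.9):
    print(f"rho={rho}: expected shortfall in crisis ≈ {tail_shortfall(rho=rho):.2f}")
```

With ρ = 0, individual shortfalls almost never coincide and the crisis tail stays thin; as ρ approaches 1, one bad draw of the shared factor pushes dozens of firms below their buffers at once. The per-firm risk is identical in every case; only the synchronization changes.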
What the “big systemic events” of AI might look like
Several scenarios illustrate how AI-driven systemic risk might unfold. Imagine a faulty update to an LLM used across the financial sector: a model deployed in two thousand banks simultaneously misinterprets a regulatory rule. The consequences (non-compliance, sanctions, lawsuits, customer withdrawals, class actions) would be immediate and perfectly synchronized. An autonomous legal agent could also generate systematic hallucinations, producing false legal citations or flawed reasoning. If deployed across several hundred companies, that error would instantly become collective. Beyond these operational scenarios lies a subtler phenomenon: the breakdown of interpretability. When models produce signals that seem plausible but remain opaque, both humans and machines tend to attribute excessive meaning to them. A weak signal — a slight increase in churn score, a small rise in a risk indicator — may be interpreted as a real behavioural shift, though it may merely be a statistical artifact, dataset bias, or latent drift.
These misinterpretations can create feedback loops, turning noise into a real shock: emergency decisions, pricing changes, adjustments to dependent models. Clearfield and Tilcsik describe this as a self-fulfilling prophecy. In Gai’s terminology, an initial noise finds its way into the vulnerable cluster and triggers a cascade:
“With positive probability, a random initial default at one bank can lead to the spread of default across the entire vulnerable portion of the financial system (…) Contagion breaks out when shocks strike any bank in, or adjacent to, the giant vulnerable cluster and spread across the entire cluster.”
For insurers, such risk (intrinsic to the system’s functioning) becomes impossible to mutualize.
The Tesla case study
The book The Tesla Files offers a striking illustration of these dynamics. It reveals an organization where extremely concentrated critical functions make any incident capable of propagating system-wide. A single administrator holds global access; thousands of employees have elevated privileges; and the whistleblower describes an absence of monitoring despite massive data extractions. In such a context, an incident does not remain local; it compromises the entire organization.
“The stream of data expands to include customers, business partners, and a wide array of individuals and companies with ties to Tesla.”
Tesla aggregates not only its own data but also data from customers, partners, governments, subcontractors, and regulators — creating dependency structures extremely similar to pre-2008 financial networks. The authors show that Tesla minimizes some risk indicators (recording only crashes involving airbag deployment), sometimes disables Autopilot just before impact (obscuring responsibility), or refuses to transmit data to regulators. Four elements appear: information asymmetry, opaque accountability, centralized incident handling, and extreme software homogeneity.
Autopilot crashes in Europe are handled by a single team, and thousands of customer complaints exist regarding phantom braking or sudden acceleration. When an entire fleet relies on a single software model updated simultaneously, a defect can produce a massive correlated shock. This is exactly the scenario insurers now fear for generative AI systems.
Tesla thus serves as a concrete example: a system where software homogeneity creates such aggregation risk that a single error can become a “big systemic event.”
AI Liability: a legal blind spot in an asymmetric market
To these operational risks we must add a final dimension: legal responsibility. “AI liability” (the question of who is responsible when AI causes harm) is today one of the least explored and most explosive issues. The Financial Times rightly notes:
“Nobody knows who’s liable if things go wrong.”
In practice, AI providers’ contracts include drastic limitations of liability, exclusions of performance guarantees, and clauses transferring almost all risk to the user. Because demand is nearly inelastic (there are often no viable alternatives to the major models), providers can impose their terms unilaterally, creating a significant contractual asymmetry.
This asymmetry becomes critical in regulated sectors, where firms are required to control their models: banks, insurers, and hospitals must comply with strict requirements for operational and model-risk management, even though the models they use are opaque, external, and unaudited. The Tesla Files shows that even technologically advanced companies can lose control of their own systems, making some regulatory obligations nearly impossible to fulfill. The financial sector thus faces a profound contradiction: it is legally responsible for tools over which it has no meaningful control — not the design, not the training data, not the governance, not the behaviour. Firms cannot contest provider errors nor obtain compensation when a model fails.
This situation creates a triple gap: a regulatory gap (firms cannot meet their obligations), a contractual gap (all responsibility lies with the user), and an insurance gap (insurers are withdrawing from the field). The result is a legal systemic risk, characterized by diffuse responsibility, concentrated dependency, and profoundly inefficient allocation of risk.
References
- Acharya V., Pedersen L., Philippon T., Richardson M. (2017). Measuring Systemic Risk. Review of Financial Studies, Vol. 30, No. 1, 2–47.
- Biener C., Eling M., Wirfs J. (2015). Insurability of Cyber Risk: An Empirical Analysis. The Geneva Papers on Risk and Insurance, Vol. 40, 131–158.
- Clearfield C., Tilcsik A. (2018). Meltdown: Why Our Systems Fail and What We Can Do About It. Penguin Press.
- Gai P. (2013). Systemic Risk: The Dynamics of Modern Financial Systems. Oxford University Press.
- Gai P., Kapadia S. (2010). Contagion in Financial Networks. Proceedings of the Royal Society A, Vol. 466, 2401–2423.
- Harris L., Criddle C. (2025). Insurers retreat from AI cover as risk of multibillion-dollar claims mounts. Financial Times, 23 November 2025.
- Hufeld F., Koijen R., Thimann C. (eds.) (2016). The Economics, Regulation, and Systemic Risk of Insurance. Oxford University Press.
- Iwersen S., Verfürden M. (2025). The Tesla Files. Steerforth Press, Hanover (NH).