Recently, Lukasz Szpruch, Agni Orfanoudaki, Carsten Maple, Matthew Wicker, Yoshua Bengio, Kwok-Yan Lam and Marcin Detyniecki posted a paper entitled Insuring AI: Incentivising Safe and Secure Deployment of AI Workflows. It strikes me as a good opportunity to complement the post I published last month, Insurers and AI, a systemic risk.

The paper starts from a diagnosis that many people share: AI is spreading everywhere, and with it come risks that are hard to measure (sometimes already present but “silent” in existing insurance policies). The authors argue for dedicated insurance (AI insurance) and governance. They also emphasize (i) the lack of historical data, (ii) the dynamic nature of models, and (iii) accumulation/correlation risks, and they conclude that a “paradigm shift” in pricing is needed: moving away from past experience toward a priori performance and evaluation data.
My point here is simple (and not polemical): this diagnosis is useful, but it leaves an important blind spot. A large part of these difficulties is not really new for insurance. They have structured, for decades, the scientific literature on insurability, accumulation, catastrophe modeling, uncertainty (epistemic vs aleatory), information asymmetry—and even the question of bias and discrimination. In other words: there are genuine novelties (speed, scale, technological dependencies, updates), but the methodological toolbox developed over more than a century is not outdated. If I dared, I would say it is precisely designed to deal with poorly observed, unstable, socially sensitive risks—provided it is extended with well-written contracts.
“Silent” risks and rebuilding incentives
The Insuring AI paper reminds us that many AI-related risks are already present in insurers’ portfolios, without being clearly included or excluded, and thus neither priced nor managed (what is often called “silent coverage”). It then proposes a dedicated insurance logic, with parametric triggers, usage-based pricing, bonus-malus schemes, and so on.
It also insists on a central point: without financial responsibility, audit, certification, and compliance risk becoming box-ticking exercises; the insurer, by contrast, pays claims and therefore has an incentive to demand robust evidence. This intuition is very close to what insurance has long known: insurance is not only a price; it is a set of incentive devices (prevention, control, exclusions, deductibles, clauses, surveillance). Where I would qualify the framing is this: presenting AI as almost blank territory easily leads one to underestimate the body of work that scientific research in insurance, what one might call actuarial science, has long developed to handle (i) deep uncertainty, (ii) dependence/correlation, (iii) risk opacity, and (iv) information asymmetry.
The “paradigm shift”…
The conclusion of Insuring AI is explicit:
“underwriting must transition from relying on past claim statistics to focusing on a priori performance and assurance data”
If one interprets this as saying that actuarial science simply extrapolates past losses, then yes: that is limiting. And I can only agree; this is what I was trying to say, somewhat clumsily, in From Contemplative to Predictive Modeling, where I argued that actuarial work is too often “contemplative”, i.e., based on the idea that the future will resemble the past. I explained that as soon as the environment is no longer stationary (climate, markets, technologies), the implicit “rebus sic stantibus” assumption becomes fragile: actuarial work must shift toward truly predictive models, able to integrate ex ante information, expertise, and updating mechanisms.
My other point was that “data can’t talk without a model”: switching to a priori signals does not mean “letting metrics speak”, but rather connecting them to an economic (or insurance) model. And actuarial practice should go back and read actuarial science, which has been offering elements of an answer for a long time.
For natural catastrophes—except in a few very specific markets (California comes to mind)—the industry has long priced and managed risks using a priori models (hazard/vulnerability/financial modules), precisely because historical data are incomplete, non-stationary, and extremes dominate. And the scientific literature insists on distinguishing aleatory uncertainty (intrinsic variability) from epistemic uncertainty (what we do not know—model, parameters), and on the need to document these uncertainties.
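To fix ideas, here is a minimal sketch of that a priori chain, with a hazard module, a vulnerability module, and a financial module. Every frequency, severity, curve shape, and contract term below is invented for illustration; real cat models are of course far richer:

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Hazard module: annual event counts and intensities (invented values) ---
n_years = 10_000
n_events = rng.poisson(lam=0.8, size=n_years)   # lam=0.8 events/year, illustrative

def damage_ratio(intensity):
    """Vulnerability module: map hazard intensity to a damage ratio in [0, 1].
    The logistic shape and its parameters are purely illustrative."""
    return 1.0 / (1.0 + np.exp(-(intensity - 3.0)))

exposure = 10_000_000    # insured value, invented
deductible = 100_000
limit = 2_000_000

annual_loss = np.zeros(n_years)
for y in range(n_years):
    intensities = rng.gamma(shape=2.0, scale=1.2, size=n_events[y])  # invented severities
    ground_up = exposure * damage_ratio(intensities)
    # --- Financial module: apply deductible and limit per event ---
    annual_loss[y] = np.clip(ground_up - deductible, 0, limit).sum()

# A priori pure premium and a tail measure, without a single historical claim
print("pure premium :", annual_loss.mean())
print("VaR 99.5%    :", np.quantile(annual_loss, 0.995))
```

The point of the sketch is structural: the price comes out of a model of the physical and contractual chain, not out of a loss triangle.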
In pricing, credibility theory is exactly a response to the lack of data specific to a given risk: one combines risk-specific information and collective information through a weight. For the mathematics, I refer to the book by Hans Bühlmann and Alois Gisler, or to Bayesian statistics books (as I discussed in Bayesian Wizardry for Muggles almost 12 years ago, or more recently in a popular talk; Chapter 6 of The Theory That Would Not Die recalls the very strong links between Bayesian statistics and actuarial science).
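In its simplest (Bühlmann) form, the credibility premium for risk $i$ is a weighted average of individual experience and the collective mean, and the weight is not chosen by hand; it comes out of a variance decomposition:

$$\hat{P}_i = Z\,\bar{X}_i + (1-Z)\,m, \qquad Z=\frac{n}{n+\sigma^2/\tau^2},$$

where $\bar{X}_i$ is the mean of the $n$ observations specific to risk $i$, $m$ the collective mean, $\sigma^2$ the expected within-risk variance, and $\tau^2$ the variance of the hypothetical means across risks. With little risk-specific data, $Z$ is small and the premium leans on the collective (a priori) side; as experience accumulates, $Z\to 1$.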
My point is that moving toward a priori signals (tests, red teaming, metrics, stress tests) looks less like a “rupture” than like a modern reformulation of known practices (risk engineering, cat models, credibility, validation, prudential margins). The real issue is not “actuarial science is outdated”, but: which signals should we retain, how should we audit them, how should we link them to economic severity, and how should we embed them in the contract?
Insurability and accumulation
Insuring AI also emphasizes (rightly) correlated losses and accumulation: the same model (or the same technological dependency) deployed everywhere can generate simultaneous claims. It even adds—this is a sensitive point—that cyber accumulation has shown limits, and that:
“traditional actuarial techniques struggle to capture”
these systemic exposures. Yet accumulation is a historical theme of insurability theory. Baruch Berliner, in Limits of Insurability of Risks (1982), stresses that the requirement of limiting simultaneous losses is a “very severe” one, and that simultaneous losses can be “devastating” for the industry.
What this (more economic) literature brings to the discussion is a way of organizing the questions: What conditions make a risk insurable… and at what price? When do we tip into the non-insurable, or into something that is only insurable through limits, sub-limits, exclusions, coinsurance, reinsurance, cat bonds, etc.? How should solvency be thought about when dependence (copulas, tail correlation, scenarios) dominates diversification?
And, very concretely: if technological foundations create an “accumulation” of dependencies, the response is not to abandon actuarial science. It is perhaps to mobilize the toolbox of accumulation / capital / solvency / reinsurance, and to articulate it with auditable technical metrics.
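As a toy illustration of why tail dependence matters, one can compare the aggregate tail of a portfolio of nominally identical exposures under independence versus under a common shock, say a shared model or API failing everywhere at once. All parameters below are invented, and the common-shock model is a deliberately crude stand-in for a full copula treatment:

```python
import numpy as np

rng = np.random.default_rng(1)
n_policies, n_sims = 1_000, 100_000
p_claim, severity = 0.01, 100.0        # invented per-policy claim probability and loss

# Independent claims: diversification works, the aggregate tail stays thin
indep = rng.binomial(n_policies, p_claim, size=n_sims) * severity

# Common shock: with probability q, a shared dependency fails and a large
# fraction of policies claim simultaneously (an accumulation scenario)
q, hit_fraction = 0.01, 0.5            # invented systemic-event parameters
shock = rng.random(n_sims) < q
base = rng.binomial(n_policies, p_claim, size=n_sims)
common = np.where(shock, base + int(hit_fraction * n_policies), base) * severity

for name, losses in [("independent", indep), ("common shock", common)]:
    print(f"{name:>13}: mean={losses.mean():8.1f}  VaR99.5%={np.quantile(losses, 0.995):9.1f}")
```

The means barely differ, but the 99.5% quantile explodes under the common shock: exactly the capital and reinsurance question, not a pricing question.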
Uncertainty: aleatory vs epistemic, validation, backtesting…
The Insuring AI paper speaks of opacity, distribution shift, instability, updated RAG, changing API versions: all this describes uncertainty that is partly epistemic (what we do not know, and what changes). But catastrophe modeling (e.g. Catastrophe Modeling edited by Patricia Grossi and Howard Kunreuther) explicitly insists that the two sources of uncertainty (aleatory/epistemic) must be distinguished, and that “structural and parametric uncertainty” of the model is a reality we live with.
On the quantitative risk management side, Alex McNeil, Rüdiger Frey and Paul Embrechts also stress the importance of understanding dependence “in the extremes” (tails), and not only on average. And for validation, backtesting is presented as “an important tool” to assess whether a risk model captures losses correctly. Coming back to Insuring AI: if pricing relies on “assurance data”, then we return to very actuarial questions. Which metrics are stable, and which can be gamed (Goodhart’s law)? What calibration, what margins, what capital? What backtesting setup “in production”, on changing populations? These questions are neither purely ML nor purely actuarial: they sit at the interface, and actuarial science has an edge on the economic and prudential formalization.
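As an illustration, a standard device for backtesting a 99% loss quantile is Kupiec’s proportion-of-failures test: count exceedances in production and test whether that count is consistent with the nominal level. A minimal sketch (the counts in the example are invented):

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(exceedances: int, n_obs: int, p: float = 0.01) -> float:
    """Kupiec proportion-of-failures test: p-value for H0 that the
    model's true exceedance probability is p."""
    x, n = exceedances, n_obs
    phat = x / n
    # binomial log-likelihood at the nominal level vs the observed frequency
    def loglik(prob):
        return (n - x) * np.log(1 - prob) + x * np.log(prob)
    lr = -2 * (loglik(p) - loglik(phat))      # likelihood-ratio statistic
    return 1 - chi2.cdf(lr, df=1)

# e.g. 23 exceedances of a 99% quantile over 1,000 days (invented numbers)
print(f"p-value: {kupiec_pof(23, 1000, 0.01):.4f}")   # small => model rejected
```

The same logic transposes directly to “assurance data”: an audited error-rate claim is only credible if one also specifies how it will be counted against in production.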
Information asymmetries: the contract at the center
Finally, Insuring AI devotes a passage to information asymmetry: developers/operators know more than insurers, hence adverse selection, and full access to data/models is often impossible. Insurance economics theory treats precisely this type of situation. In Economics of Insurance (Karl Borch), one reads that the insurer
“can observe the care taken only by paying a verification cost”
and that high control costs can make certain risks non-insurable in practice. It seems to me that this is exactly the crux of the matter: everything will depend on how contracts are written, because that is what transforms a technical uncertainty into an economically insurable object.
Contractual elements
Without claiming to be exhaustive, a serious AI policy should explicitly provide:
- a definition of the claim event (measurable error? third-party harm? non-compliance with a standard? and over what time window);
- a technical scope (versions; dependencies such as model, RAG/index, prompts, tools; and a change regime, i.e., when an update “requalifies” the risk);
- the insured’s obligations (monitoring, logging, incident response, periodic red teaming, preservation of evidence);
- audit clauses;
- a mechanism to manage accumulation (limits by common dependency, sub-limits by provider/architecture, “systemic event” clauses);
- an adjustment mechanism (bonus-malus, usage-based pricing, or parametric triggers, as Limits of Insurability of Risks already discussed).
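Purely as a thought experiment, these elements could be made machine-checkable, which is what parametric triggers and change-regime clauses ultimately require. Every field name and figure below is hypothetical, my own illustration and not something the paper proposes:

```python
from dataclasses import dataclass, field

@dataclass
class AIPolicyScope:
    """Hypothetical technical scope of an AI policy (illustrative only)."""
    model_version: str                     # covered model version
    dependencies: list[str]                # RAG index, prompts, tools, APIs
    requalify_on_update: bool = True       # does an update requalify the risk?

@dataclass
class AIPolicy:
    """Sketch of the contractual elements listed above, as a data structure."""
    claim_event: str                       # measurable error, third-party harm, ...
    window_days: int                       # time window for the claim event
    scope: AIPolicyScope
    insured_obligations: list[str] = field(default_factory=lambda: [
        "monitoring", "logging", "incident response",
        "periodic red teaming", "preservation of evidence"])
    sublimit_per_provider: float = 1e6     # accumulation management (invented figure)
    parametric_trigger: str | None = None  # e.g. audited error rate above a threshold
```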
This is simply recognizing that insurance works well when incentives and proof are written into the contract, something actuarial science (together with contract law) has defended for a long time.
Bias, discrimination, fairness
Finally, to return to a topic I know well, Insuring AI seems to present bias as a novelty imported by ML. But insurance has long discussed segmentation, proxies, and the tension between mutualization and personalization. This is what I try to explain in Insurance, Biases, Discrimination and Fairness, stressing that modern techniques and massive data make discrimination harder to detect. Laurence Barry and I have also reflected on questions of extreme segmentation, noting that if one prices an insured based on ultra-granular performance signals, issues of governance, compliance, and social legitimacy appear very quickly. In short: once again, rather than discovering a new continent, we might go back and read a literature (mathematical, economic, legal, what I call “actuarial science”) that already structures the question.
Closing
Clearly, when reading Insuring AI, I see that we have the same concerns, and the paper provides vocabulary for notions I had only touched on (workflow dynamics, assurance artefacts, monitoring, accumulation). But the “paradigm shift” it announces does not imply that actuarial science is outdated: insurance already knows how to price with little historical data (cat models), combine a priori and experience (credibility), and handle accumulation (insurability). Moreover, the real “risk” is not only statistical: it is above all a problem of information asymmetry and moral hazard—so the contract and proof become central. Finally, bias and fairness issues are not anecdotal: they are already part of modern actuarial science. Fortunately, Insuring AI seems to explicitly call for broad collaboration—and I can only agree!