Tag Archives: AI

Insurers and AI, a systemic risk

In Insurers retreat from AI cover as risk of multibillion-dollar claims mounts, the Financial Times reported at the end of this week that several major insurers (AIG, Great American, and WR Berkley) are seeking to introduce explicit exclusions for risks related to artificial intelligence, particularly concerning the use of agents and language models. The reasons put forward are straightforward: potential losses related to AI could reach several hundreds of millions of dollars, or even more. But above all, the danger lies less in the severity of any individual claim than in the possibility of correlated, massive, simultaneous losses, impossible to mutualize. As Aon summarizes in one sentence:

“What [the industry] can’t afford is if an AI provider makes a mistake that ends up as a 1,000 or 10,000 losses.”

The issue is far less about the severity of a particular incident than about the simultaneity of thousands of identical incidents. In other words, we are dealing with the very definition of systemic risk. To understand this, it is useful to place the phenomenon within the broader framework of complex systems, accidents in high-reliability organizations, and system-wide risk mechanisms as studied for several decades in finance and insurance.

AI as an interconnected network: a breeding ground for contagion

The literature on systemic risk teaches us that it is not the absolute size of institutions that determines their vulnerability, but the structure of their interconnections. Prasanna Gai highlights that financial systems exhibit what he calls a “robust-yet-fragile” dynamic: they withstand countless shocks yet may collapse abruptly when a specific shock travels through the right channels. He reminds us that:

“The system may be robust to most shocks, but when problems strike the effects may be catastrophic.”

A local error can turn into a global catastrophe as soon as it reaches the network’s “vulnerable cluster.” The simulations he presents show that the more interconnected a network is, the faster an error can spread. A minimal change in connectivity or capitalization can shift the entire system from a stable to a critical state, a true “phase transition.” As Gai puts it:

“An initial error can spread through the entire vulnerable cluster.”

In this context, generative AI displays all the characteristics of a system highly conducive to contagion. When an AI provider deploys a faulty update, introduces an error in a model’s parameters, or suffers a cybersecurity vulnerability, it is not isolated users who are affected but thousands, because they rely on the same infrastructure. Each client depends not only on its own usage but on the integrity of a global model, where even the smallest modification instantly reproduces identical behaviour across all users. This is exactly the kind of rapid, synchronized contagion insurers now fear: propagation that is not slow or progressive, but immediate, simultaneous, and homogeneous.
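Gai's mechanism can be made concrete with a toy threshold-contagion simulation, in the spirit of his model but deliberately simplified (a random dependency graph, a single seeded failure, and a "buffer" parameter standing in for capitalization or robustness; none of the numbers below come from his paper):

```python
import random

def cascade_size(n=300, avg_degree=4.0, buffer=0.3, seed=0):
    """One run of a toy threshold-contagion model: node i fails once the share
    of its failed dependencies exceeds `buffer` (a crude robustness margin)."""
    rng = random.Random(seed)
    p = avg_degree / (n - 1)
    deps = [[j for j in range(n) if j != i and rng.random() < p] for i in range(n)]
    failed = {rng.randrange(n)}                     # a single idiosyncratic failure
    changed = True
    while changed:                                  # propagate until nothing new fails
        changed = False
        for i in range(n):
            if i in failed or not deps[i]:
                continue
            if sum(j in failed for j in deps[i]) / len(deps[i]) > buffer:
                failed.add(i)
                changed = True
    return len(failed) / n

for b in (0.6, 0.5, 0.4, 0.3, 0.2):
    runs = [cascade_size(buffer=b, seed=s) for s in range(20)]
    print(f"buffer {b:.2f} -> average cascade size: {sum(runs)/len(runs):.1%}")
```

In this toy version, above some buffer the seeded failure almost always stays local, and slightly below it the same shock routinely sweeps most of the network: the "robust-yet-fragile" pattern in miniature.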

Why insurers fear correlated losses

Insurability has historically depended on a fundamental condition: the law of large numbers. As Hufeld, Koijen, and Thimann remind us in The Economics, Regulation, and Systemic Risk of Insurance:

“The risk must obey the law of large numbers.”

Events must be independent, or sufficiently heterogeneous, for losses to offset each other statistically. Yet cyber-risks already fail to meet this condition, as Biener et al. showed in 2015:

“Cyber losses are highly interrelated and characterized by severe information asymmetries.”

Cyber insurance is already structurally fragile, facing simultaneous, large-scale, difficult-to-attribute incidents. Generative AI only reinforces this structure, creating an environment where errors are not merely frequent but potentially identical and simultaneous. The examples cited by the Financial Times (an Air Canada chatbot binding the airline to a fare, a Google module defaming a small business, a $25 million deepfake fraud against Arup, or an autonomous agent producing pricing or diagnostic errors at scale) highlight how extremely reproducible such incidents are. A single defect, update, or vulnerability can affect an entire sector all at once.

This homogeneity of losses is toxic for insurance. As actuarial literature reminds us, when losses become correlated, mutualization collapses mechanically. By design, insurance cannot absorb risks whose very structure pushes toward aggregation.
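The collapse can be written in one line. For a pool of n comparable risks with individual loss variance \sigma^2 and a common pairwise correlation \rho\geq 0, the variance of the average loss is \text{Var}(\bar{S}_n)=\frac{\sigma^2}{n}+\frac{n-1}{n}\rho\sigma^2, which tends to \rho\sigma^2 as n grows. With \rho=0 the pooled risk vanishes as the portfolio grows, which is the law of large numbers doing its job; with any \rho>0 a floor of undiversifiable variance remains no matter how many policies are written, and a shared model, update, or vulnerability is exactly what pushes \rho away from zero.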

AI, cybersecurity, and “normal accidents”: when complexity makes error inevitable

Clearfield and Tilcsik, in Meltdown, show that in complex systems, failures are not anomalies: they are inevitable. Surprises emerge from interactions, and small errors amplify as they propagate through tightly coupled networks. Their argument corresponds exactly to what insurers fear: the error of an autonomous model is not an isolated human mistake, but a systemic mechanism liable to replicate everywhere.

“In wicked environments, error signals are ambiguous, interactions are complex, and systems become vulnerable to surprises.”

Generative AI models fit this description perfectly: structural opacity, non-deterministic behaviour, dependence on a handful of global providers, and the lack of separation between uses create a system in which local failures become systemic. The technological dependence concentrated among a few actors amplifies this dynamic further. When a single model is deployed in legal, medical, financial, and industrial contexts, an error propagating through that model can contaminate every domain simultaneously. AI is thus less a tool than an interconnected ecosystem, a fully-fledged complex system.

Why AI creates a new kind of systemic risk for insurance

Traditionally, insurance has been considered less vulnerable to systemic risk than banking, because insurers exhibit far lower interconnectedness. This is the consensus highlighted by Hufeld, Koijen, and Thimann:

“The insurance industry is not subject to systemic risk, in particular due to its significantly lower interconnectedness.”

But this assumption relies on a hidden premise: that the risks insured must themselves remain independent. With AI, that premise collapses. For the first time, an insured risk (cyber, errors & omissions, software-related damages) becomes structurally interconnected. A single provider, a single model, a single update, or a single vulnerability can trigger thousands of simultaneous losses. Interconnectedness is no longer a property of the insurance market, it is a property of the risk itself. Acharya, Pedersen, Philippon, and Richardson define a risk as systemic when it can generate a simultaneous “capital shortfall” across multiple institutions:

“Even if liabilities are not runnable, a firm can contribute to systemic risk through its contribution to the aggregate capital shortfall.”

In a world where thousands of losses linked to the same AI error could occur at once, this definition becomes sharply relevant. AI introduces a form of aggregated, correlated, non-diversifiable risk. It is no longer a volatile risk, but a structurally synchronized one — insurers’ worst-case scenario.
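In stylized form, their measure attaches to each institution i an expected capital shortfall conditional on a system-wide stress event, something like E[\,k\,a_i-w_i\mid\text{crisis}\,], where a_i denotes assets, w_i equity and k a prudential capital ratio (the notation here is a deliberate simplification of their framework). A shared AI failure is precisely the kind of common shock that makes these conditional shortfalls large for many firms at the same time.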

What the “big systemic events” of AI might look like

Several scenarios illustrate how AI-driven systemic risk might unfold. Imagine a faulty update to an LLM used across the financial sector: a model deployed in two thousand banks simultaneously misinterprets a regulatory rule. The consequences (non-compliance, sanctions, lawsuits, customer withdrawals, class actions) would be immediate and perfectly synchronized. An autonomous legal agent could also generate systematic hallucinations, producing false legal citations or flawed reasoning. If deployed across several hundred companies, that error would instantly become collective. Beyond these operational scenarios lies a subtler phenomenon: the breakdown of interpretability. When models produce signals that seem plausible but remain opaque, both humans and machines tend to attribute excessive meaning to them. A weak signal — a slight increase in churn score, a small rise in a risk indicator — may be interpreted as a real behavioural shift, though it may merely be a statistical artifact, dataset bias, or latent drift.

These misinterpretations can create feedback loops, turning noise into a real shock: emergency decisions, pricing changes, adjustments to dependent models. Clearfield and Tilcsik describe this as a self-fulfilling prophecy. In Gai’s terminology, an initial noise finds its way into the vulnerable cluster and triggers a cascade:

“With positive probability, a random initial default at one bank can lead to the spread of default across the entire vulnerable portion of the financial system (…) Contagion breaks out when shocks strike any bank in, or adjacent to, the giant vulnerable cluster and spread across the entire cluster.”

For insurers, such risk (intrinsic to the system’s functioning) becomes impossible to mutualize.

The Tesla case study

The book The Tesla Files offers a striking illustration of these dynamics. It reveals an organization where extremely concentrated critical functions make any incident capable of propagating system-wide. A single administrator holds global access; thousands of employees have elevated privileges; and the whistleblower describes an absence of monitoring despite massive data extractions. In such a context, an incident does not remain local, it compromises the entire organization.

“The stream of data expands to include customers, business partners, and a wide array of individuals and companies with ties to Tesla.”

Tesla aggregates not only its own data but also data from customers, partners, governments, subcontractors, and regulators — creating dependency structures extremely similar to pre-2008 financial networks. The authors show that Tesla minimizes some risk indicators (recording only crashes involving airbag deployment), sometimes disables Autopilot just before impact (obscuring responsibility), or refuses to transmit data to regulators. Four elements appear: information asymmetry, opaque accountability, centralized incident handling, and extreme software homogeneity.

Autopilot crashes in Europe are handled by a single team, and thousands of customer complaints exist regarding phantom braking or sudden acceleration. When an entire fleet relies on a single software model updated simultaneously, a defect can produce a massive correlated shock. This is exactly the scenario insurers now fear for generative AI systems.

Tesla thus serves as a concrete example: a system where software homogeneity creates such aggregation risk that a single error can become a “big systemic event.”

AI Liability: a legal blind spot in an asymmetric market

To these operational risks we must add a final dimension: legal responsibility. “AI liability” (the question of who is responsible when AI causes harm) is today one of the least explored and most explosive issues. The Financial Times rightly notes:

“Nobody knows who’s liable if things go wrong.”

In practice, AI providers’ contracts include drastic limitations of liability, exclusions of performance guarantees, and clauses transferring almost all risk to the user. Because demand is nearly inelastic (there are often no viable alternatives to the major models), providers can impose their terms unilaterally, creating a significant contractual asymmetry.

This asymmetry becomes critical in regulated sectors, where firms are required to control their models: banks, insurers, and hospitals must comply with strict requirements for operational and model-risk management, even though the models they use are opaque, external, and unaudited. The Tesla Files shows that even technologically advanced companies can lose control of their own systems, making some regulatory obligations nearly impossible to fulfill. The financial sector thus faces a profound contradiction: it is legally responsible for tools over which it has no meaningful control — not the design, not the training data, not the governance, not the behaviour. Firms can neither contest provider errors nor obtain compensation when a model fails.

This situation creates a triple gap: a regulatory gap (firms cannot meet their obligations), a contractual gap (all responsibility lies with the user), and an insurance gap (insurers are withdrawing from the field). The result is a legal systemic risk, characterized by diffuse responsibility, concentrated dependency, and profoundly inefficient allocation of risk.


Personalized Insurance Premiums Cheaper Thanks to AI? Here’s Why It’s a Slippery Slope

This article was initially written in French.

Insurance is based on a principle of solidarity that is undermined by the algorithms tasked with creating our profiles. As algorithms become more precise, the bill becomes more personalized. Various “at-risk” profiles may thus find themselves excluded from insurance schemes because the cost becomes prohibitive. Personalization has an obvious legitimacy. But it must be reconciled with equitable access to insurance.

It must first be understood that insurance is marked by a fundamental paradox. On the one hand, its very principles assume a collective mechanism in which everyone contributes according to their capacity and benefits from solidarity in the event of a loss. On the other hand, technological advances, ever-larger datasets, and increasingly precise actuarial methods push toward ever greater individualization of premiums.

It is nothing less than reconciling actuarial refinement with the values of redistribution and solidarity on which the insurance profession was founded.

To this tension is added an increasingly demanding legal framework, which prohibits any form of discrimination based on sensitive data, sometimes correlated with risk factors that are nevertheless relevant.

Pricing segmentation

Insurance companies have long used classification as the pillar of their economic model: age, sex, profession, geographic area, claims history…

In 1662, the English statistician John Graunt published his Observations upon the Bills of Mortality, the first statistical analysis of London’s death registers. In 1693, the English astronomer Edmond Halley built the first quantified mortality table, allowing life expectancy to be calculated at each age. These works laid the foundations for differentiated pricing by age and sex, which long remained the two main segmentation criteria in life insurance.

At the same time, after the Great Fire of London in 1666, the first fire insurance contracts appeared: companies collected data on the type of construction materials and urban density. In the 18th–19th centuries, premiums were segmented according to the proximity of neighboring buildings and the presence of fire services, giving rise to the first “high-risk zones” and “low-risk zones.”

With the rise of the automobile in the 1910s–1920s, American insurers began systematically recording the number of claims, and the age and sex of drivers. As early as the 1920s, several pricing “classes” were distinguished: young drivers, women drivers, experienced drivers, making it possible to set variable premiums depending on the profile.

Today, actuaries have sophisticated algorithms, machine learning tools, and a flood of data: onboard telematics, connected objects, geolocation, driving or lifestyle behavior… For the insurer, refining segmentation makes it possible to charge each policyholder “their true risk level,” reducing the cross-subsidization effect from good risks to bad ones, while improving overall profitability.

But overly fine pricing reduces pooling; it can make insurance very expensive, even inaccessible for certain high-risk segments. Hence today, actuaries seek a subtle balance, aiming to capture the right information to differentiate profiles, while preserving the viability of the insured community.
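A toy calculation, with invented numbers, makes the trade-off explicit: removing cross-subsidies shaves a little off the premium of the low-risk majority, but multiplies the premium of the high-risk minority.

```python
# Invented portfolio: two risk groups sharing the same average claim cost.
n_low, n_high = 900, 100          # number of policyholders in each group
p_low, p_high = 0.05, 0.25        # annual claim probabilities
cost = 2000                       # expected cost per claim

# Fully pooled premium: everyone pays the portfolio's average expected loss.
pooled = (n_low * p_low + n_high * p_high) * cost / (n_low + n_high)

# Fully segmented premiums: each group pays exactly its own expected loss.
prem_low, prem_high = p_low * cost, p_high * cost

print(f"pooled premium:        {pooled:.0f}")    # 140 for everyone
print(f"segmented, low risk:   {prem_low:.0f}")  # 100 (a 40 discount)
print(f"segmented, high risk:  {prem_high:.0f}") # 500 (premium multiplied by ~3.6)
```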

Policyholders and the illusion of win-win personalization

In Europe, the legislative proposal FIDA (Financial Data Access Framework) would open regulated access for insurers to individuals’ financial data. Its purpose is to refine understanding of spending and repayment behaviors. In this context, the promise of ultra-personalized pricing arouses both hopes of lower premiums and fears of excessive profiling and significant exclusions.

Faced with this new influx of data, many clients perceive personalization as a win-win approach: if I manage my budget better, I will benefit from a discount; if my saving and repayment habits are judged virtuous, my health premium will decrease; if my financial profile improves, my home insurance will become lighter.

This “pay-as-you-live” or “pay-how-you-drive” logic appeals: individuals believe themselves masters of their insurance cost through their lifestyle choices.

Yet several points deserve to be highlighted.

The principle of pooling is not neutralized: those who cannot adopt the most virtuous behaviors remain dependent on the solidarity of others. Indeed, even if higher-risk individuals pay more individually, those who are less at risk nevertheless continue to bear part of the costs thanks to the principle of pooling.

The asymmetry of information is reinforced, since the insurer masters the statistics far better than the client does. The personalized offer is often based on correlations, sometimes tenuous, whose scope the client does not grasp.

Very fine personalization can force the most at-risk to over-insure, or on the contrary to give up insurance, weakening the pool.

Thus, even strengthened by access to financial data, “personalization” is not necessarily synonymous with “empowerment” for the consumer.

The legal framework: when fighting discrimination is a legal requirement

The development of big data in insurance raises important ethical and legal questions: how far can sensitive variables be exploited to predict risk?

In France and in the European Union, legislation explicitly prohibits discrimination based on protected criteria: ethnic origin, gender, sexual orientation, disability, religious beliefs, etc. The Solvency II Directive (EU) requires insurers to use “transparent” and non-discriminatory risk models.

Unlike the European Union—which bans differentiated pricing based on protected criteria (gender, origin, disability)—the Quebec model offers a more permissive framework. While the Charter of Human Rights and Freedoms of Quebec also prohibits discrimination, it provides exemptions specific to insurers: they can, when a factor is statistically relevant, base pricing on age, sex, or marital status.

This usage, authorized solely on the basis of a correlation, raises questions.

Ethics and social responsibility of insurers

Beyond mere legal compliance, insurers are increasingly judged on their ethical practices and social responsibility by consumer associations and the media, which relay incidents of algorithmic discrimination and exert reputational pressure.

In recent years, insurers have therefore had to ask themselves collectively how to guarantee equitable access to their products for vulnerable populations, without sacrificing the financial viability of their portfolios. Some innovative models propose “solidarity” formulas or capped premiums to avoid exclusion.

Insurers are required to show ever more transparency: they must clearly explain their pricing criteria and make the calculation keys accessible, to avoid any sense of arbitrariness. Finally, they must integrate data protection and privacy from the design stage of their offers (“privacy by design”), in order to preserve trust.

Insurers that manage to reconcile personalization, fairness and inclusion will become reference players for clients concerned with ethics.

Reconciling solidarity and data: a crucial challenge

The challenge, as we see, is considerable.

It is nothing less than reconciling actuarial precision with the values of redistribution and solidarity that founded the insurance profession.

It is in resolving this tension that the future of insurance will be decided: neither pure price discrimination nor simple illusory personalization, but rather a balance allowing each to contribute according to their risk and to benefit in fair measure from the pooling of life’s uncertainties.

“Mathematical Foundations of AI” day at the Sorbonne center for artificial intelligence

On Thursday 12th, I will attend the Mathematical Foundations of AI day, organized by the DATAIA Institute and SCAI (Sorbonne Center for Artificial Intelligence), in association with several scientific societies (namely, the Fondation Mathématique Jacques Hadamard (FMJH), the Fondation Sciences Mathématiques de Paris-FSMP, the MALIA group of the Société Française de Statistique and the Société Savante Francophone d’Apprentissage Machine (SSFAM)).

Slides are now online.

In this talk, we present two complementary approaches to addressing fairness in algorithmic decision-making through the lens of counterfactual reasoning and optimal transport, both in individual and group fairness. First, we introduce a novel method that links two existing counterfactual approaches: causal graph-based adaptations (Plečko and Meinshausen, 2020) and optimal transport (De Lara et al., 2024). By extending “Knothe’s rearrangement” (Bonnotte, 2013) and “triangular transport” (Zech and Marzouk, 2022) to probabilistic graphical models, we propose a new group framework, termed sequential transport, which we apply to the problem of individual fairness. Theoretical foundations are established, followed by numerical demonstrations on synthetic and real datasets. Building on this, we extend the discussion to algorithmic fairness in the presence of multiple sensitive attributes. While traditional fairness frameworks focus on eliminating bias with respect to a single sensitive variable, their effectiveness diminishes with multiple sensitive characteristics. To address this, we propose a sequential fairness framework based on multi-marginal Wasserstein barycenters, generalizing Strong Demographic Parity to handle multiple sensitive features. Our method provides a closed-form solution for the optimal, sequentially fair predictor, enabling interpretation of correlations between sensitive attributes. Furthermore, we introduce an approximate fairness framework that balances risk and unfairness, allowing for prioritization of fairness across specific attributes. Both approaches are supported by comprehensive numerical experiments on synthetic and real-world datasets, showcasing the practical efficacy of these methods in promoting fair decision-making. Together, they provide a robust framework for addressing fairness in complex, multi-attribute settings while preserving interpretability and flexibility.
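The elementary building block behind the sequential transport construction is easy to show in one dimension: the counterfactual of a value x observed in group 0 is its quantile match in group 1, x ↦ F_1^{-1}(F_0(x)), and the graph-based method applies such conditional maps variable by variable, following the topological order of the causal graph. A minimal sketch on simulated data (the distributions below are invented for illustration and are not those of the papers):

```python
import numpy as np

rng = np.random.default_rng(42)
# A single covariate (say, an income-like variable) observed in two groups.
x0 = rng.lognormal(mean=10.0, sigma=0.4, size=5000)   # group s = 0
x1 = rng.lognormal(mean=10.3, sigma=0.5, size=5000)   # group s = 1

def ot_counterfactual(x, sample_from, sample_to):
    """1D optimal transport map: push x through F_from, then through F_to^{-1}."""
    u = np.searchsorted(np.sort(sample_from), x, side="right") / len(sample_from)
    u = np.clip(u, 1e-6, 1 - 1e-6)
    return np.quantile(sample_to, u)

x = 30_000.0                                  # an individual observed in group 0
print(ot_counterfactual(x, x0, x1))           # "what x would have been" in group 1
```

In the sequential version, each variable is transported in turn, conditionally on the already-transported values of its parents in the graph.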

References are given below

  1. we will further discuss counterfactual fairness, initiated in Optimal Transport for Counterfactual Estimation: A Method for Causal Inference, and the more recent paper Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness
  2. we will discuss Wasserstein barycenters with multiple sensitive attributes, in A Sequentially Fair Mechanism for Multiple Sensitive Attributes


Talk at the 38th Annual AAAI Conference on Artificial Intelligence, in Vancouver

This week, François is in Vancouver, at the 38th Annual AAAI Conference on Artificial Intelligence, presenting our joint work on A Sequentially Fair Mechanism for Multiple Sensitive Attributes.

In the standard use case of Algorithmic Fairness, the goal is to eliminate the relationship between a sensitive variable and a corresponding score. Throughout recent years, the scientific community has developed a host of definitions and tools to solve this task, which work well in many practical applications. However, the applicability and effectiveness of these tools and definitions become less straightforward in the case of multiple sensitive attributes. To tackle this issue, we propose a sequential framework, which allows fairness to be achieved progressively across a set of sensitive features. We accomplish this by leveraging multi-marginal Wasserstein barycenters, which extend the standard notion of Strong Demographic Parity to the case with multiple sensitive characteristics. This method also provides a closed-form solution for the optimal, sequentially fair predictor, permitting a clear interpretation of inter-sensitive feature correlations. Our approach seamlessly extends to approximate fairness, enveloping a framework accommodating the trade-off between risk and unfairness. This extension permits a targeted prioritization of fairness improvements for a specific attribute within a set of sensitive attributes, allowing for a case-specific adaptation. A data-driven estimation procedure for the derived solution is developed, and comprehensive numerical experiments are conducted on both synthetic and real datasets. Our empirical findings decisively underscore the practical efficacy of our post-processing approach in fostering fair decision-making.
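As a rough sketch of the post-processing idea, the snippet below repairs a simulated score with respect to two binary sensitive attributes, one after the other, by mapping each group's scores onto the (1D) Wasserstein barycenter of the group-wise distributions, i.e. the weighted average of their quantile functions. The data, weights and single-split groups are invented, and the paper's estimator is more careful than this:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
s1 = rng.integers(0, 2, n)                       # first sensitive attribute
s2 = rng.integers(0, 2, n)                       # second sensitive attribute
# Simulated score that depends on both attributes (hence "unfair" for both).
score = rng.normal(0.5 + 0.10 * s1 - 0.07 * s2, 0.12, n).clip(0, 1)

def barycenter_repair(score, group):
    """Map each group's scores onto the barycenter of the group-wise distributions
    (in 1D, the weighted average of the group quantile functions)."""
    repaired = np.empty_like(score)
    groups, counts = np.unique(group, return_counts=True)
    weights = counts / len(group)
    for g in groups:
        idx = group == g
        u = score[idx].argsort().argsort() / (idx.sum() - 1)   # empirical ranks in [0, 1]
        repaired[idx] = sum(w * np.quantile(score[group == h], u)
                            for h, w in zip(groups, weights))
    return repaired

fair_1  = barycenter_repair(score, s1)           # step 1: fairness with respect to s1
fair_12 = barycenter_repair(fair_1, s2)          # step 2: then with respect to s2
for name, sc in [("raw", score), ("after s1", fair_1), ("after s1, s2", fair_12)]:
    gap1 = abs(sc[s1 == 1].mean() - sc[s1 == 0].mean())
    gap2 = abs(sc[s2 == 1].mean() - sc[s2 == 0].mean())
    print(f"{name:13s} mean gap on s1 = {gap1:.3f}, on s2 = {gap2:.3f}")
```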

 

The Bullshit Society

(this article was initially written in French, La société du bullshit)

In 1986, the philosopher Harry Frankfurt defined the concept of “bullshit” by giving it a very precise meaning, sometimes translated into French as “baratin”, for “smooth talk” or “flannel”. A bullshitter is not a liar (who is aware of deceiving); he is just totally indifferent to the truth, which makes him almost even more dangerous (“bullshit is a greater enemy of the truth than lies are”). Bullshitters have an opinion about anything and everything, so they talk a lot about things they know almost nothing about. And the fascination of recent months with conversational agents (like GPT-3 or ChatGPT [1]), these algorithms born to chatter, can only give us pause.

ChatGPT or text completion

A Large Language Model (LLM) is a natural language processing model that uses a large amount of textual data to learn to predict the next words and sentences in a given text. These models are trained on large datasets, text corpora from various sources: books, newspaper articles, web pages. GPT-3 [2] is one such model, containing 175 billion parameters (120 times more than GPT-2, launched a year earlier) distributed in a 96-layer neural network, with a contextual memory of 2048 tokens (or “lexemes”) [3], and vectorizing in a 12,288-dimensional space (I refer to Wolfram (2023) for more technical details on how ChatGPT works). This gibberish [4] means that when some words are typed, the algorithm first transposes them into a (vector) space that gives them context; this is what is called word embedding. The best known of these algorithms is probably Word2vec [5], by Mikolov (2013), which is based on the idea (stated in the 1950s) that words appearing in similar contexts have related meanings. We can then introduce a geometry into this latent space, which allows us to interpret the fact that the vector connecting man and king is parallel to the vector connecting woman and queen, which we will sometimes write as king – man + woman = queen. Another famous example is Paris – France + Spain = Madrid, in other words Paris is to France what Madrid is to Spain. This lexical embedding does not rest on any understanding of geopolitics, which would know that Madrid is indeed the capital of Spain, but simply on co-occurrences of words in the sentences used to train the algorithm. The quality of the texts used therefore matters, and this logic explains why the algorithm reproduces the biases observed in those texts, and can seem racist, or sexist. Here again, a well-known example is surgeon – man + woman = nurse.
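For readers who want to try the analogy arithmetic, the pre-trained Google News vectors distributed through gensim give a minimal illustration (the download is large, around 1.6 GB, and the exact nearest neighbours depend on that particular corpus):

```python
import gensim.downloader as api

# Pre-trained word2vec embeddings (Google News corpus, 300 dimensions).
wv = api.load("word2vec-google-news-300")

# "king - man + woman ≈ queen": plain vector arithmetic in the embedding space.
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# "Paris - France + Spain ≈ Madrid"
print(wv.most_similar(positive=["Paris", "Spain"], negative=["France"], topn=3))
```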

Second, this model is a generative text completion tool: when it sees a succession of words, it tries to predict words that might follow, listing the most likely continuations and then picking one. On GPT-3, the “temperature” parameter governs this choice: the model is not asked to always select the most probable sequence, but is allowed a little randomness (which could be associated with a form of poetry or creativity). It is important to remember, as Li (2023) explains, that from a philosophical as well as a mathematical point of view, it is fundamentally impossible for models trained to guess the next word to learn the meaning of language, and their performance is merely the result of memorizing statistics and correlations that do not reflect any causal model of the process generating the sequence. To illustrate, let us imagine an algorithm trained on a single text, the Bible. If I ask it to complete the sentence “In the beginning was…”, the logical continuation for the algorithm will be “the word”, and “Let there be…” will be followed by “light” (while it would probably be “rock” if the training corpus were my music playlist). Here, the amount of text used to train the algorithm is astronomical. For example, on GPT-3, I can ask it to continue the beginning of a sentence:
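To make the role of the temperature parameter concrete, here is a toy sketch of sampling a next token from a handful of scores (the vocabulary and the logits are invented; a real model works with tens of thousands of tokens):

```python
import numpy as np

rng = np.random.default_rng(1)
vocab  = ["light", "rock", "darkness", "music", "bread"]
logits = np.array([4.0, 2.5, 1.5, 1.0, 0.2])     # invented scores for the next token

def next_token_probs(logits, temperature):
    """Softmax with temperature: small T is almost greedy, large T flattens the distribution."""
    p = np.exp(logits / temperature)
    return p / p.sum()

for T in (0.2, 1.0, 2.0):
    p = next_token_probs(logits, T)
    draws = rng.choice(vocab, size=8, p=p)
    print(f"T={T}: probabilities {np.round(p, 2)}, sample {list(draws)}")
```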

The answer depends, among other things, on the number of words requested. The sentences are coherent and grammatically correct, showing that GPT-3 has “formal linguistic competence”, as cognitive scientists would say. But it has no “functional linguistic competence”, i.e. it has no reasoning ability. In other words, GPT-3 behaves exactly like the operator locked in John Searle’s “Chinese room”, the thought experiment analyzed by Cole (2004). However, it can perform calculations, for example calculate an integral

or, when asking for more details

Indeed, \int_{0}^{\pi/2}\cos(x)dx=\big[\sin(x)\big]_0^{\pi/2}=\sin(\pi/2)-\sin(0)=1-0=1. One can ask it to explain a little more. And the results are then astonishing, with the formal calculations correct, but the numerical evaluation wrong

But sometimes, it is… weird (or like grading a student who did not quite understand the course)

I can’t help displaying others, just to illustrate how wrong it can go

or again, correct answer, but we hear about “the area of a triangle”

or (with a multiplication gone wild)

We also have integration by parts

It is clear that the creative power of the algorithm is impressive

illustration by Sir John Tenniel, 1869 (Alice in Wonderland)

I can try to add some information, such as the fact that I know mathematics (as a colleague said, “if you ask it to solve problems at your level, ask it to put itself at your level”)

Here again, we can get crazy things

One gets the impression that GPT-3 has made Lewis Carroll’s phrase [6] its own: “But then,” says Alice, “if the world makes absolutely no sense, what’s to stop us inventing one?” We can also ask it for more… “personal” information, for example

then, by asking the question again,

We can ask the question again indefinitely, and have fun with the answers… but we have to admit that this is typically a bullshitter’s answer, because it is difficult to imagine that an algorithm has a favorite city. One could almost say that it is creative, even if it merely searched, in the whole corpus used to train it, for a logical answer, or rather a plausible sequence of terms to follow up on my request. One could almost stop here, and think back to all those books written about technology and the robotization of society, which claimed that robots would do the manual work, and humans the creative work. It’s 2023, and while GPT-3 is inventing favorite cities (it might even answer me in alexandrines), I still have to empty the dishwasher and vacuum.

In the introduction, we mentioned that GPT-3 has no awareness of what the truth might be; it just makes conversation with us. We can tell it that it is wrong (even if it isn’t), and it will obediently agree

GPT-3 is thus an authentic producer of bullshit, as Frankfurt (1986) asserted. Worse, it can make statements that leave no room for doubt. For example

But would we prefer to read, at the end of the sentence, “but I have doubts”, which would almost complete the process of making it human [7]? This ambivalence is reinforced by a surprising feature of GPT-3 (and of ChatGPT as well): the words reveal themselves one after the other [8].

“When I write, do I really know what I want to write? Doesn’t the text reveal itself to itself as it formulates itself?” claims Jacques Derrida in a lecture at Cornell, in Binet’s (2015) novel. This is what we see when we ask GPT-3 a question, reinforcing the belief that behind the algorithm there is almost a human [9].

The Turkish automaton and the parrots

We all know the Mechanical Turk, or chess-playing automaton, that supposed life-size automaton wearing a turban which was able to play chess at the end of the 18th century, and whose legendary story is told in Standage (2003).

illustration, drawing by W. de Kempelen and print by J. G. Pintz, 1783

This story is almost the opposite of ChatGPT’s: here a man pretended to be a machine, whereas with conversational algorithms, it is algorithms that have almost convinced themselves that they think like people, that they can write original works. But if we open the machine, we won’t find anyone. Yet we would like to believe it, because we are all very good at personifying anything that seems to act like a human. As Mahowald & Ivanova (2022) wrote, we have a “persistent tendency to associate fluent expression with fluent thought, it is natural – but potentially misleading – to think that if an AI model can express itself fluently, that means it thinks and feels just like humans do.” It will be remembered that Joseph Weizenbaum, who developed ELIZA, one of the first conversational algorithms, in the 1960s, wrote a book to warn people against anthropomorphizing computer programs, reminding them that his tools had no understanding of the world, but just relied on a few repetitive tricks based on keywords identified by the researchers.
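Those “repetitive tricks” are easy to caricature in a few lines of code; the toy below is a crude pastiche of the keyword-and-reflection idea, not Weizenbaum’s actual DOCTOR script:

```python
import re

# A handful of keyword rules with canned reflections: there is no understanding anywhere.
RULES = [
    (r"\bI am (.*)", "Why do you say you are {0}?"),
    (r"\bI feel (.*)", "How long have you felt {0}?"),
    (r"\bmy (mother|father)\b", "Tell me more about your {0}."),
    (r"\byes\b", "You seem quite sure."),
]

def eliza(sentence):
    for pattern, template in RULES:
        match = re.search(pattern, sentence, flags=re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."          # default reply when no keyword matches

print(eliza("I am worried about these chatbots"))
print(eliza("My mother used to say the same thing"))
```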

ChatGPT has no understanding and no intention; it just answers us, because it has been programmed to do so. “We never talk to say nothing” says Viktorovitch (2021). But ChatGPT talks to us, while it literally has nothing to say to us: it is a simple random generator of words. Or as Bender et al. (2021) presented it, a “stochastic parrot”. The paper caused quite a stir at the time, as, nine days before the paper was accepted for the conference and published, Timnit Gebru, a co-author of the paper, was fired from her position as co-director of the Google ethics team after refusing to remove her name from the paper (Margaret Mitchell, also an author, was fired two months later). This context unfortunately overshadowed the significance of the paper, which warned about the dangers of large language models (the full title is “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”). And to continue the avian metaphor, we can remember the anecdote told by the physicist Richard Feynman in 1966, recalling a childhood conversation with his father: “See that bird? It’s a brown-throated thrush, but in Germany it’s called a halsenflugel, and in Chinese they call it a chung ling and even if you know all those names for it, you still know nothing about the bird – you only know something about people; what they call that bird. Now that thrush sings, and teaches its young to fly, and flies so many miles away during the summer across the country, and nobody knows how it finds its way,” and so forth. There is a difference between the name of the thing and what goes on. In other words, there is a profound difference between saying and naming things, and actually understanding them. And the exponential explosion of the complexity and size of these algorithms will not change anything [10]. Even if GPT-4 contains a thousand times more parameters, the algorithm will not be “a thousand times smarter”, it will just be a better mirror of our intelligence, and of our stupidity, maybe only in an even more convincing way.

Bullshit, churn and risk

As Klein (2023) wrote, with GPT, “the cost of producing ‘bullshit’ has gone to zero”. These algorithms are bullshitters, and this is not a flaw that subsequent versions will correct: it is a characteristic of these models. In 2020, as quoted by Marcus & Davis (2020), the researcher Douglas Summers-Stay stated “GPT is odd because it doesn’t ‘care’ about getting the right answer to a question you put to it. It’s more like an improv actor who is totally dedicated to their craft, never breaks character, and has never left home but only read about the world in books. Like such an actor, when it doesn’t know something, it will just fake it. You wouldn’t trust an improv actor playing a doctor to give you medical advice.” And it is important to keep this characteristic in mind when imagining the possible uses of these discussion generators. OpenAI’s competitors are responding, with Google having launched Google Bard a few weeks ago, and this multiplication of offerings suggests that there is massive demand for this kind of tool.

The “reasoning” powers of the model are at best superficial, but more likely non-existent. As Mielczareck (2021) argues, “if bullshitters are ever more numerous, it is also because they respond to the injunctions of their time.” Again, it is important to keep in mind Frankfurt’s (2009) assertion, “bullshit is a greater enemy of the truth than lies are“, bullshitters are far more dangerous than liars. The “bullshitter” is the one who is caught with his hand in the jam jar, who looks at you with a smile and tells you that it’s not him, while continuing to stuff himself. And who could tweet “whatever” or “who cares”.

As we have seen, blindly trusting GPT-3 and blindly following it in its hallucinations [11] can only be done by putting one’s critical sense on hold. So what is ChatGPT for? It is important to keep in mind that the algorithm is developed by OpenAI, a company [12] whose official purpose is “to ensure that artificial general intelligence benefits all of humanity”, but which also intends to take advantage of this media hype to make money. Etymologically, the word “bullshit” was used in thieves’ slang to refer to the empty wallet that replaces the full one that has just been stolen, so that the victim does not realize it. At the beginning of the 20th century, the word was used to describe the bluff of a businessman, a courtier or a sycophant.

The irony is that ChatGPT is probably close to this concept: the seductive vision of “artificial intelligence” is, for many, the promise that a sufficiently powerful technology will be able to replace human workers or, more importantly, to make their work precarious and undermine them. For there is a political message in this technology. The constant injunction to learn to live with these tools, which are supposedly here to stay, is frightening. As we have seen, it is important to distinguish what would be a temporary flaw from a profound characteristic, because these kinds of tools will never be more than a “stochastic parrot”. And in order to seriously quantify the risks, one should imagine the possible uses of this tool. For example, if one wishes to use a large language model to answer health questions, the accuracy and reliability of the answers are probably essential. Note the alert issued by researchers at cp<r>, who had spotted that some recent cyberattacks had been made possible by GPT-3.

In June 2021, the GitHub platform launched GitHub Copilot, to generate code. For example, “write a python code that, from a sentence, will return the same sentence by putting all the words in reverse order”, or other more complicated (and more useful) requests. To train such a code generation model, the algorithm learned from millions of pieces of code found online on dedicated sites. And very often, the algorithm copies code found elsewhere, looking more like a search engine than a code generator. This is actually more efficient, because code invented from scratch has little chance of actually working when run. Unfortunately, several pieces of code returned by the algorithm were protected by copyright, and several lawsuits are pending. Wikipedia constantly cites its sources, as Charpentier (2018) mentioned, but not GPT-3, which simply plagiarizes its training data. Should we admire this, and assert, as Pablo Picasso [13] (or Banksy) would have said, that “bad artists imitate; great artists steal”? Gary Marcus had spoken of “pastiche” to describe ChatGPT. In its original sense, a pastiche is a forgery, and one can indeed question what this algorithm is really doing.
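For the record, the prompt quoted above asks for something like the following (the kind of snippet such a request typically yields, not Copilot’s actual output):

```python
def reverse_words(sentence: str) -> str:
    """Return the sentence with its words in reverse order."""
    return " ".join(reversed(sentence.split()))

print(reverse_words("the quick brown fox"))   # "fox brown quick the"
```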

As mentioned earlier, one advance between GPT-3 and ChatGPT may be an intermediate layer for receiving human instructions [14] to improve the algorithm, achieved by RLHF, Reinforcement Learning from Human Feedback. Perrigo (2023) recounts how OpenAI had gone through the company Sama, which employed staff in Kenya to analyze samples of writing identified as potentially “sensitive” (talking about murder, suicide, torture, incest, etc.) in order to flag this content to the algorithm, so that it would avoid producing similar, hateful or pornographic texts. The psychological impact on these workers (paid less than $2 per hour) was such that the contract between Sama and OpenAI was terminated. The irony is that, in the end, language-generating algorithms are not a Turkish automaton, they are not human, and to make them look “harmless”, their creators had to dehumanize real humans.

References

Anderson, C. (2008). The end of theory: The data deluge makes the scientific method obsolete. Wired magazine, 16(7), 16-07.

Bender, E., Gebru, T., McMillan-Major, A. & Shmitchell, S. (2021) On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, p. 610–623.

Binet, L. (2015). La septième fonction du langage. Grasset.

Charpentier, A. (2018) Fake news, Wikipedia et Blockchain : Vérité et Consensus. Risques.

Chomsky, N. (2002). On nature and language. Cambridge University Press.

Cole, D. (2004), “The Chinese Room Argument”, The Stanford Encyclopedia of Philosophy

Frankfurt, H. G. (2009). On bullshit. Princeton University Press.

Grice, H. P. (1975). Logic and conversation. In Speech acts (pp. 41-58). Brill.

Klein, E. (2023). A Skeptical Take on the A.I. Revolution: The A.I. expert Gary Marcus asks: What if ChatGPT isn’t as intelligent as it seems?. New York Times, January 6.

Li, K. (2023). Do Large Language Models learn world models or just surface statistics?. The Gradient,

Mahowald, K. & Ivanova, A. (2022) Google’s powerful AI spotlights a human cognitive glitch: Mistaking fluent speech for fluent thought. The Conversation, June 24.

Marantz (2023). “It’s Not Possible for Me to Feel or Be Creepy”: An Interview with ChatGPT. The New Yorker.

Marcus, G., & Davis, E. (2020). GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about. Technology Review.

Marcus, G. (2022) How come GPT can seem so brilliant one minute and so breathtakingly dumb the next?  garymarcus.substack.com

McCulloch, B. (2023) The risks of Large Language Models (such as ChatGPT). Vux.

Mielczareck, É. (2021). Anti Bullshit: Post-vérité, nudge, storytelling: quand les mots n’ont plus de sens (et comment y remédier). Editions Eyrolles

Mikolov, T. (2013) Efficient Estimation of Word Representations in Vector Space, Arxiv.

Perrigo, B. (2023). Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. Time, January 18, 2023.

Standage, T. (2003). The Mechanical Turk : The True Story of the Chess Playing Machine That Fooled the World. Penguin Book.

Viktorovitch, V. (2021). Le pouvoir rhétorique. Seuil.

Weizenbaum, J. (1966). ELIZA — a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45.

Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment To Calculation, translated as Puissance de l’Ordinateur et Raison de l’Homme – du Jugement au Calcul, 1981, Ed. d’informatique.

Wolfram, S. (2023). What Is ChatGPT Doing … and Why Does It Work?

1. ChatGPT (Generative Pre-trained Transformer) is based on GPT-3 (Generative Pre-trained Transformer 3) from OpenAI, released in 2020.
2. More precisely text-davinci-003, with a “temperature” of 1 (in order to increase the randomness and to have new answers by asking the same question)
3. We won’t come back to this aspect, but this contextual memory allows the algorithm to attach a specific word to a pronoun, provided that it has not been mentioned too far in the text.
4. GPT-3 is a marvel of engineering, due to its impressive scale, almost proving right Anderson (2008), who evoked the end of theory, the victory of engineers over fundamental researchers.
5. We can mention BERT or Transformer, developed a few years later.
6. Unfortunately this alleged quote does not seem to appear anywhere in Lewis Carroll’s work. After all, as GPT-3 would say, if the quotation absolutely does not exist, what’s stopping us from inventing it?
7. Can we expect to read anything other than “I have doubts”, the algorithm having no awareness of what the truth might be?
8. It is also surprising that, given how slowly the words are revealed, there are not more errors, especially when a calculation is made. If answer 2 is given for the integral calculation, and then justified, it agrees that the final answer does indeed correspond to the first answer given.
9. However, when asked, the algorithm always answers that it is not human. This avoids reliving one of the first scenes of the television dystopia Westworld, where a man asking an android “Are you real?” is answered “If you can’t tell, does it matter?”

10. Even if for many computer science researchers, “there’s no data like more data”, in other words “nothing beats more data” when the model is not good enough (thus embracing Anderson’s (2008) assertion).
11. Hallucinations are a psychic phenomenon in which a waking subject experiences perceptions or sensations without any external object giving rise to them. This disconnection from reality is reminiscent of the behavior of this algorithm.
12. Many researchers have pointed out the irony of the name for a company that is not very open and transparent, much more eager to offer a platform to “democratize” the tool than to allow trusted third parties (academics) to better understand how the tool works.
13. In the Bristol museum there is a stone engraved with the Pablo Picasso quote “Bad artists imitate, great artists steal”, where Banksy has crossed out Picasso’s name to put his own.
14. This previous model is InstructGPT, “a more truthful and less toxic GPT-3.”

Artificial intelligence and property and casualty insurance

Artificial intelligence raises many challenges, technological as well as ethical, even philosophical. The problems most often mentioned are data bias and the transparency of algorithms (or models). This problem is well known in predictive justice (lawyers would speak of actuarial justice): models must be interpretable and understandable, and must take into account the known biases of the data. For apart from randomized experiments, “neutral” or “raw” data do not exist. If I want to build a fraud score, I use data constructed from fraud investigations, with the majority of observations resulting from a prior suspicion of fraud. It is therefore necessary to understand how the data were collected, something every statistician historically learned to do. The interpretability of models raises complex questions, because it is associated with a narrative process, and that narrative often offers a causal explanation for a mechanism (an explanation that is often implicit in the very idea of a predictive process). Yet the vast majority of actuarial and machine learning models merely study correlations. Gender, for instance, is correlated with claims experience in motor insurance, and actuaries had become used to justifying this relationship by resorting to stereotypes, in the same way that driving a sports car lent a driver the habits of all sports car drivers. This generalization by stereotype, at the heart of the actuary’s work, is now being challenged by the idea of personalized insurance, as can also be observed in the field of health. This personalization has become unavoidable in high dimensions: risk classes no longer make sense. With several hundred explanatory variables, no two individuals look alike any more, and it becomes impossible to tell a story about an “average” policyholder.

This personalization, which probably flatters the policyholder’s ego by making them unique, breaks the very idea of pooling and segmentation in insurance. Historically, a form of solidarity existed between individuals who resembled one another, but it disappears as soon as each policyholder has to be seen as a unique being. While economic theory justified this “segmentation spiral”, practice and mathematical theory are beginning to identify its limits. Segmenting too finely very often means that the model no longer captures an underlying signal and starts modelling noise: it overfits, and loses its ability to generalize, so a pricing model that is good one year may no longer be good the following year. Insurers today dream when they see the performance of the GAFA’s models, which manage to predict the next purchases on a website (probably, in fact, by creating the desire). While dialogue is needed between these two worlds that everything opposes, the conservative tortoise fascinated by its data history on one side and the agile hare obsessed with the future on the other, advanced machine learning models often work poorly in certain aspects of property and casualty insurance.
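The over-segmentation point can be illustrated with a toy simulation (an invented portfolio, Poisson claim counts, and a deliberately extreme comparison between estimating one frequency per tiny cell and a single pooled frequency): the ultra-segmented estimate fits last year’s noise and predicts the following year worse than the crude pooled one.

```python
import numpy as np

rng = np.random.default_rng(123)
n_cells, exposure = 200, 20                    # 200 tiny risk cells, 20 policies each
true_freq = rng.uniform(0.05, 0.15, n_cells)   # true (unknown) claim frequencies

# Observed claim counts for two consecutive years.
year1 = rng.poisson(true_freq * exposure)
year2 = rng.poisson(true_freq * exposure)

fine   = year1 / exposure                          # one estimate per tiny cell ("full segmentation")
coarse = np.full(n_cells, year1.mean() / exposure) # a single pooled frequency

def mse(pred, counts):
    """Mean squared error against the observed frequencies of a given year."""
    return ((pred - counts / exposure) ** 2).mean()

print("in-sample  (year 1): fine", round(mse(fine, year1), 5), "| coarse", round(mse(coarse, year1), 5))
print("out-sample (year 2): fine", round(mse(fine, year2), 5), "| coarse", round(mse(coarse, year2), 5))
```

In this toy setting the fully segmented model is perfect in-sample and worse than the pooled one out-of-sample; credibility-type compromises between the two are precisely how actuaries strike the balance.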

Deep learning algorithms (neural networks with very many layers, forming black boxes with thousands of parameters) have become essential for analysing images (quickly predicting the cost of a motor damage claim from a dozen photos of the damaged vehicle) or for analysing and generating text (offering fast and reliable assistance to a policyholder). But they are difficult to use for reserving the cost of a bodily injury claim in a robust way, and they raise interesting questions about insurance pricing. These algorithms were developed in a classification context, to recognize handwritten characters or objects in an image, and there are simple reasons why a dog differs from a cat in a photo. But explaining why Mr Dupont, and not Mrs Dupond, had a car accident last year is more complex. And that is not their purpose: these models aim to estimate the probability that Mr Dupont will have an accident during the year. A very sophisticated model may predict that this probability is 17.23% (instead of the 15.62% obtained by a model now considered too simple), and the premium charged should then increase by 10%. But does it really matter if, in the end, during the eight years Mr Dupont spends with the company, he has no claim at all, or one very serious one? Property and casualty insurance very often involves a large amount of randomness, with claims experience being largely unpredictable. One danger is that these algorithms will produce much wider ranges of prices, and that putting these models in competition will push market premiums down. If claims do not fall in parallel, the market will lose money, and some insurers may then disappear. Conversely, if these new techniques make it possible to reduce claims, everyone will win, but that requires more prevention, which in turn relies on identifying clear causal mechanisms, that is, identifying which levers could reduce the probability of having an accident, and its severity.