Paradoxes of segmentation and discrimination in insurance

This article was originally written in French, and published here

“The decision cannot be racist, since it was made without any information about the person’s ethnic origin.” We’ve all heard this kind of statement at one time or another. Whether it’s about racism, ageism, or sexism. Whether it’s about human decisions, models, or algorithms. However, Kranzberg’s Law [1] reminds us that technology is neither good nor bad, but it is not neutral either. Neutrality may only come at a certain price. And it may be time to revisit the major principles surrounding segmentation and fairness in insurance, to better understand what we’re talking about when we raise the issue of discrimination.

Figure: Krater representing Theseus and Procrustes, and Theseus killing the Crommyon boar (source: The Miriam and Ira D. Wallach Division of Art, 1862–1864).

Actuarial fairness

Before defining actuarial fairness, it is important to recall what insurance is. Insurance is generally defined as the financial contribution of the many to the misfortune of the few. Insurance is therefore, in essence, a risk-sharing mechanism, which only makes sense collectively, through risk pooling. But at the more concrete, legal level of contract law, insurance is above all a transfer of risk, from an insured to an insurance company: in exchange for a financial contribution (the premium), the insured buys the promise of future compensation should a misfortune occur. And these two perspectives are clearly the source of some confusion.

Actuarial fairness is linked to the second dimension, that is, risk transfer. It dates back to the 17th century, when mathematicians introduced the expected value in the context of the fair division of games of chance. In fact, it goes back to Aristotle and the concept of fairness in exchange, as Antonio José Heras Martínez, David Teira, and Pierre-Charles Pradier point out. This is why annuities in life insurance, since Christiaan Huygens and Jan de Witt, have been calculated as the expected value of future payouts, where the probabilities of paying the capital are derived from mortality tables. And it was not until the 1960s that Kenneth Arrow explicitly introduced the term “actuarial fairness” in the field of health insurance, to state that the contribution of policyholders must be equal to the expected value of their losses. There is no moral connotation here, but simply an economic principle, in this perspective of risk transfer between the insured and the insurance company: charging more would be unfair to the insured, charging less would be unfair to the insurance company.

Overall, the premiums collected must therefore equal, on average, the losses paid by the insurer: this is a simple equilibrium principle.
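In symbols (a minimal sketch, with notation introduced here for illustration rather than taken from the original text), writing \(S_i\) for the random loss of insured \(i\) over the year and \(\pi_i\) for the premium paid:

```latex
% Notation introduced for illustration only: S_i is the (random) loss of
% insured i over the year, and \pi_i the premium he or she pays.
\[
\underbrace{\pi_i \;=\; \mathbb{E}[S_i]}_{\text{actuarial fairness, per contract}}
\qquad\Longrightarrow\qquad
\underbrace{\sum_{i=1}^{n} \pi_i \;=\; \mathbb{E}\!\left[\sum_{i=1}^{n} S_i\right]}_{\text{collective balance, on average}}
\]
```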

But we can also return to the first dimension and consider the problem of fairness in risk sharing. Once insurers know that they must collect a certain amount in order to compensate everyone on average, how do we decide, a priori, on a fair division of the cake among the insured? How much should each contribute? Where should we place the cursor between an identical premium for all, fully based on solidarity, and a complete individualization of contributions, taking each individual risk into account?

Direct or indirect discrimination?

In both cases, we adopt an economic view of the problem. Gary Becker, Nobel Prize winner in economics and a pioneer of the economic analysis of social behavior, examined these issues in The Economics of Discrimination, in a context more general than insurance (mainly the labor market). For Becker, one form of discrimination corresponds to a preference, or a taste (often a distaste): agents may experience disutility in interacting with members of a given group (for example, ethnic minorities, women, etc.). Very often it is not rational, and it is economically inefficient: an employer who refuses to hire a competent person because they are Black, elderly, or homosexual will find themselves penalized compared to competitors who do not discriminate, simply by adverse selection.

But there is another form of discrimination, studied in more detail by Kenneth Arrow and Edmund Phelps, called statistical discrimination, which rests on a logic of imperfect information and rationality. This is calculated behavior, not prejudice. Here, agents use the average characteristics of a group (gender, ethnic origin, age, etc.) as a proxy to predict traits that interest them: productivity for an employer, risk for an actuary. This approach has been used to legitimize racial profiling by the police, also called “ethnic profiling.” Mathias Risse and Richard Zeckhauser point out that in a utilitarian framework, based on a cost-benefit approach, it is relevant: a decision analysis, where the expected cost of Type I errors (wrongly flagging someone) is weighed against the expected cost of Type II errors (wrongly failing to flag someone), can be seen as rational, at least in the short term. Yet statistical discrimination is often inefficient [2] in the long term: it reproduces and reinforces inequalities, thus creating a self-fulfilling prophecy, inefficient for society as a whole. Kenneth Arrow and Edmund Phelps justified this instrumental, statistical discrimination, which we could call actuarial: it usually rests on a pecuniary objective, supposedly legitimate and supposedly independent of the discrimination in question. It is justified in a context of uncertainty for Arrow, and in a model of bounded rationality for Phelps, but, as a brake on social mobility, it is often inefficient in the long term.
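To make explicit the cost-benefit weighing mentioned above (a stylized formulation, with notation not taken from the original), write \(c_{\mathrm{I}}\) for the cost of wrongly flagging someone and \(c_{\mathrm{II}}\) for the cost of wrongly failing to flag someone. The “rational” profiler then flags a group whenever:

```latex
% Stylized decision rule, for illustration only: flag when the expected cost
% of not flagging exceeds the expected cost of flagging,
\[
c_{\mathrm{II}} \cdot \Pr(\text{risk} \mid \text{group})
\;>\;
c_{\mathrm{I}} \cdot \Pr(\text{no risk} \mid \text{group}),
\]
% a rule that depends only on group-level probabilities, never on the individual.
```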

This idea of statistical discrimination can be found in the “red-lining” practiced in the United States in the 1930s, when mortgage loans were refused in certain geographic areas: instead of assessing each building, an entire neighborhood was excluded. This approach was based on the belief that establishing a certain type of distinction would instrumentally promote the pursuit of an objective independent of the discrimination itself. We can also think of airline pilots who can no longer fly beyond a certain age, the initial goal being to have pilots with good eyesight. This practice of course constitutes a form of discrimination, because the objective of distinguishing good pilots from bad ones, translated into the distinction between those who see well and those who do not, in practice relies on a proxy, a correlated variable, namely the pilot’s age. However, as we know, there are many unfounded correlations, so-called “spurious correlations.” Moreover, Simpson’s paradox reminds us that correlations are also subject to all kinds of biases (selection bias, omitted-variable bias, etc.).
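As a reminder of how misleading aggregated comparisons can be, here is a small numerical illustration of Simpson’s paradox (the figures and group labels are invented for the example): within each area, group A has the lower claim rate, yet once the areas are pooled, group A appears riskier.

```python
import pandas as pd

# Invented figures, purely for illustration: claim counts by driver group and area.
data = pd.DataFrame({
    "area":    ["urban", "urban", "rural", "rural"],
    "group":   ["A", "B", "A", "B"],
    "insured": [800, 200, 200, 800],
    "claims":  [160, 50, 10, 48],
})

# Claim rate within each (area, group) cell: group A is lower in both areas.
data["rate"] = data["claims"] / data["insured"]
print(data)

# Aggregated over areas, the ordering flips: group A now looks riskier,
# simply because it is over-represented in the high-risk urban area.
agg = data.groupby("group")[["insured", "claims"]].sum()
agg["rate"] = agg["claims"] / agg["insured"]
print(agg)
```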

Biases are everywhere

Bias is generally characterized as a systematic deviation from objectivity that leads to flawed results, decisions, or perceptions. Bias can exist in data, algorithms, and human judgment, and it often influences outcomes unintentionally. Survivorship bias [3], which we discussed in 2018, shows that it can lead to readings of the data, and to decisions, that are the opposite of what should be done. In insurance, selection bias works in a similar way and is perhaps the best-known form of bias. It occurs when a sample is not representative of the target population, usually because of a selection process that favors certain groups of individuals over others.

But in the context of big data and the increasing algorithmization of decisions, these biases can take a more subtle, and even more formidable, form. A recruitment algorithm, for example, can learn to avoid CVs from candidates with female first names, simply because past data (historically biased) show a lower rate of women in certain positions. The algorithm does not consciously discriminate: it replicates, amplifies, and legitimizes a historical bias, while claiming mathematical neutrality. This selection bias in the data automatically translates into a learning bias in the models. These biases are all the more difficult to identify because they are embedded in technical layers (weights, latent variables, covariance matrices) that are often inaccessible to non-specialists. From this perspective, indirect discrimination becomes invisible, systemic, and self-reinforcing. This is clearly demonstrated by analyses of predictive justice or social scoring systems (studied by Bernard Harcourt), where factors highly correlated with social or ethnic origin are used as predictive variables for recidivism, credit risk, or administrative compliance. Once again, an instrumental logic leads technology to reintroduce what the law excluded in principle: discrimination based on protected personal characteristics.
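A minimal simulation can make this mechanism concrete (all variable names and figures below are invented for the illustration, not taken from any real system): a model trained without the protected attribute, on a correlated proxy only, still ends up scoring the two groups very differently.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

# Protected attribute (never shown to the model) and a proxy correlated
# with it, e.g. a coarse geographic indicator.
protected = rng.binomial(1, 0.5, size=n)
proxy = rng.binomial(1, np.where(protected == 1, 0.8, 0.2))

# Historical outcomes are biased against the protected group.
y = rng.binomial(1, 0.10 + 0.10 * protected)

# The model is trained on the proxy only, "without any information about
# the person's origin".
model = LogisticRegression().fit(proxy.reshape(-1, 1), y)
scores = model.predict_proba(proxy.reshape(-1, 1))[:, 1]

# Yet the average predicted risk still differs sharply between the two groups.
print("mean score, protected = 0:", scores[protected == 0].mean().round(3))
print("mean score, protected = 1:", scores[protected == 1].mean().round(3))
```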

Category and stereotypes

In 1983, the United States Supreme Court [4] emphasized that “even a true generalization about a class cannot justify class-based treatment,” thus highlighting the deep tension between statistical knowledge and individual justice, between statistical normality and ethical norms, between what is and what should be. For to categorize is necessarily to simplify: it is to attribute to an individual the supposedly average characteristics of a group to which they are attached, sometimes arbitrarily, and often independently of their will. This operation is at the heart of the construction of stereotypes, which, although sometimes based on empirical regularities, produce a form of epistemic injustice: the individual is reduced to a variable, denied in their singularity. This logic is exacerbated in contexts of mass data, where a person is no longer judged as such, but as a member of a predefined category, often correlated with sensitive factors (origin, gender, place of residence, etc.).

This mechanism is particularly visible in insurance and actuarial science, where risk analysis is traditionally based on segmenting policyholders into homogeneous groups. The role of actuaries is often to propose differentiated pricing based on criteria such as age, gender, or postal code, categories which, while effective from an actuarial point of view, raise ethical and legal problems when they are used to justify differences in treatment that have no direct link to the individual’s actual behavior. And very often, other considerations come into play. If it can be justified that a young driver should pay three times more than another driver for their insurance, solely by virtue of their age (and the high accident rate in this age group, due simply to lack of experience), should such an additional premium actually be applied? Within the European Union, the Court of Justice has already ruled that the use of gender as a pricing factor is illegal (Test-Achats, 2011), in the name of the principle of equality. This shows that the law, contrary to actuarial logic, does not tolerate statistical generalizations, however well-founded, becoming criteria for differential treatment, unless necessity and proportionality are demonstrated.

Equity, or equality?

This principle of rigid equality continues to raise questions, and ends up echoing the legend of Procrustes (Προκρούστης). In Greek mythology, Procrustes was a brigand who lived on the road between Athens and Eleusis. He invited travelers to spend the night at his home, promising them a bed perfectly suited to their size. But this bed had a sinister peculiarity: if the guest was too tall, Procrustes would cut off his legs; if he was too short, he would violently stretch him to fit. In other words, the traveler had to adapt to the bed, and not the other way around. This equality reduced to a rigid uniformity, where everyone is treated the same way, with disregard for individual differences, is perhaps a bit extreme. Now, in matters of justice, the principle of equality does not mean that everyone should be treated identically, but that they should be treated fairly, that is, according to their situations, their needs, and sometimes their vulnerabilities. Like Frederick Schauer, one would almost be tempted to say that equality does not consist in treating those who are similar in the same way, but on the contrary, this principle perhaps requires that we treat those who are not similar in the same way. In France, equal access to public facilities generally requires that people be treated in the same way, even if their qualifications to use these facilities vary; and equal citizenship, which underpins democracy, gives everyone the opportunity to express themselves, one person, one vote.

If equality corresponds to a form of uniformity, equity would instead be based on a strong individualization (some authors speak of “particularization”), to take into account the specific characteristics of each person. Instead of prohibiting airline pilots from flying beyond a certain age, an individualizing approach would consist of testing the sight, hearing and reflexes of each pilot, rather than relying on age as an indicator of a decline in faculties.

What are the objectives for actuaries?

Ethical, and especially legal, questions become important as soon as we start questioning segmentation and the construction of predictive models. Yet it is not uncommon to hear actuaries say that their sole objective is the accuracy or precision of predictions, like the computer scientists who build algorithms to identify dogs and cats in images. Except that in the photo, there really is a dog, and we can tell whether the algorithm has misidentified the animal. The pricing actuary’s goal is not to guess who will have a claim and who will not; it is to estimate “as precisely as possible” the probability that an insured will have an accident within the year. However, it is difficult to know whether an estimated individual probability of 8.62142% is correct or not. Alfred Sauvy said that “in all statistics, the inaccuracy of the number is compensated by the precision of the decimals.” Today, we can hear actuaries seriously wondering whether the probability of an accident is 8.62142%, as a logistic regression says, or 8.24126%, as a deep neural network says, as we explained with Laurence Barry and Ewen Gallic.

The classic definition of probability is based on the law of large numbers: an experiment is repeated many times, and the law of large numbers then identifies probability with frequency. But how, in this case, can we associate a probability with an event that will occur only once? If the estimated probability of having an accident is 8.62% and the insured person has an accident, have we made a mistake? And shouldn’t we ask ourselves the same question if the insured person has not had an accident?

In fact, the answer was given almost 100 years ago by Richard von Mises: “when we speak of the ‘probability of death’, the exact meaning of this expression can be defined in the following way only. We must not think of an individual, but of a certain class as a whole. A probability of death is attached to the class of men or to another class that can be defined in a similar way. We can say nothing about the probability of death of an individual.” In fact, the only property that makes sense for such a prediction is the one used in meteorology to understand what “a 10% chance of rain” means. This is the calibration property of models: a model is well calibrated if, among all the people for whom an 8.62% chance of having an accident was predicted, 8.62% indeed had an accident. This property is in fact a local version of actuarial fairness. And unfortunately, it is a property for which we have no theoretical guarantee with most machine learning algorithms.
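In practice, this property can be checked empirically, for instance along the following lines (a minimal sketch, with simulated data and arbitrary bin choices): group the insured by predicted probability and compare, within each group, the average prediction with the observed accident frequency.

```python
import numpy as np

def calibration_table(y_true, y_pred, n_bins=10):
    """Group insured by predicted probability (quantile bins) and compare,
    in each group, the average prediction with the observed frequency."""
    edges = np.quantile(y_pred, np.linspace(0, 1, n_bins + 1))
    idx = np.digitize(y_pred, edges[1:-1])  # bin index in 0 .. n_bins-1
    rows = []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            rows.append((y_pred[mask].mean(),   # average predicted probability
                         y_true[mask].mean(),   # observed accident frequency
                         int(mask.sum())))      # number of insured in the bin
    return rows

# Simulated portfolio (figures invented): for a well-calibrated model,
# the first two columns should be close to each other in every bin.
rng = np.random.default_rng(1)
p = rng.uniform(0.02, 0.20, size=100_000)   # predicted accident probabilities
y = rng.binomial(1, p)                      # realized accidents (0/1)
for pred, obs, size in calibration_table(y, p):
    print(f"predicted {pred:.3f} | observed {obs:.3f} | n = {size}")
```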

Legal vagueness and inconsistencies

Beyond the difficulty of choosing the right model, or the right prediction, it is difficult to understand what the legislation actually requires. For example, most US states have adopted some form of basic prohibition of unfair discrimination between individuals of the same class who are exposed to essentially the same risk, unless the limitation or differential is based on sound actuarial principles (“where the refusal, limitation, or rate differential is based on sound actuarial principles or is related to actual or reasonably anticipated experience,” as Section 4224 of the New York Insurance Law puts it). And no one really knows what those sound actuarial principles are.

In Europe, Council Directive 2004/113/EC of 13 December 2004 implemented the principle of equal treatment between men and women in access to and the provision of goods and services. Its Article 5-1 provides that “the use of sex as an actuarial factor in the calculation of premiums and benefits shall not result in differences in the premiums and benefits of individuals.” Article 5-2, however, allowed an exception to the prohibition if the use of sex was “based on relevant and accurate actuarial and statistical data,” much like the legislation found in several North American states. In the landmark 2011 Test-Achats case, mentioned earlier, the Court of Justice of the European Union declared Article 5-2 invalid, meaning that gender can no longer be taken into account, even if it is relevant from an actuarial point of view. As the European Commission subsequently clarified [5], it remains possible for insurers to offer gender-specific insurance products to cover risks specific to the insured’s sex, such as prostate cancer or breast cancer. However, and to complicate matters further, this option is prohibited in matters of pregnancy and maternity, given the specific solidarity mechanism created by Article 5-3.

In fact, one of the most notable paradoxes of contemporary law lies in the tension between the requirement of non-discrimination and the prohibitions on processing sensitive data. In Europe, the General Data Protection Regulation (GDPR) prohibits in principle the processing of data revealing racial origin, political opinions, religious beliefs, or sexual orientation (Article 9). This prohibition legitimately aims to protect individuals against the misuse of such information. However, in the context of algorithmic decisions, the rule can become counterproductive: without access to these sensitive variables, it becomes impossible to test whether an algorithm is making discriminatory decisions based, directly or indirectly, on these same categories. In other words, the impossibility of collecting certain data makes discrimination invisible and prevents effective algorithmic audits. This legal paradox creates a technical zone of lawlessness, where algorithmic opacity is reinforced by the data protection constraints themselves.
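A minimal illustration of the kind of audit that becomes impossible without the sensitive variable (all names and figures below are invented): the classic “disparate impact” check simply compares the rate of favourable decisions across protected groups, and therefore requires the protected attribute to be observed.

```python
import numpy as np

def disparity_ratio(decisions, protected):
    """Ratio of favourable-decision rates between the two groups
    (an '80% rule' style check used in disparate-impact audits)."""
    rate_0 = decisions[protected == 0].mean()
    rate_1 = decisions[protected == 1].mean()
    return min(rate_0, rate_1) / max(rate_0, rate_1)

# Invented example: algorithmic decisions (1 = offer at the standard rate).
rng = np.random.default_rng(2)
protected = rng.binomial(1, 0.3, size=10_000)
decisions = rng.binomial(1, np.where(protected == 1, 0.55, 0.75))

# The audit is only possible because the `protected` column was collected;
# without it, the same disparity would simply be unobservable.
print("disparity ratio:", round(disparity_ratio(decisions, protected), 2))
```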

This legal inconsistency is all the more worrying because it pits two equally fundamental imperatives against each other: the protection of privacy and the fight against discrimination. One prevents knowledge, the other requires it. Case law still struggles to articulate these two logics. These inconsistencies ultimately create forms of technological irresponsibility, where discrimination is not committed intentionally, but appears as the mechanical by-product of a legally blind system. This normative vagueness weakens the law’s ability to fully play its role as a bulwark against systemic injustice, in a context of increasingly complex predictive models.

The Shadok sailor claimed that “if there is no solution, it is because there is no problem.” We could also say that if there is no solution, it is perhaps above all because the problem is poorly formulated. Actuaries today must navigate blindly between technical injunctions, where they are asked to segment ever more finely while seeking a meaning that can only exist through mutualization, and legal injunctions, where they are asked to respect privacy while fighting against discrimination.

Aristotle. (1990). Nicomachean Ethics – Book V (Tricot, Trans.). Vrin. (written c. 350 BC)
Arrow, K. (1971). The Theory of Discrimination (No. 403). Princeton University, Department of Economics, Industrial Relations Section.
Becker, G. S. (1957). The Economics of Discrimination. University of Chicago Press.
Berman, EP (2022). Thinking like an economist: How efficiency replaced equality in US public policy. Princeton University Press
Charpentier, A. (2018). Can predictive models be fair and just? Risks, 113, 91-96.
Charpentier, A., Barry, L. & Gallic, E. (2020). What future for predictive probabilities in insurance? Annales des Mines, 2020 (1), 74-77
Charpentier, A. (2024). Insurance, biases, discrimination and fairness. Springer.
Frezal, S., & Barry, L. (2020). Fairness in uncertainty: Some limits and misinterpretations of actuarial fairness. Journal of Business Ethics, 167, 127-136.
Kranzberg, M. (1986). Technology and history: “Kranzberg’s laws”. Technology and Culture, 27(3), 544-560.
Harcourt, B. (2007). Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age. University of Chicago Press.
Hellman, D. (1998). Two Types of Discrimination: The Familiar and the Forgotten. California Law Review. 86:315–361.
Hellman, D. (2008). When is Discrimination Wrong? Harvard University Press.
Heras Martínez, A. J., Teira, D., & Pradier, P.-C. (2016). What was fair in actuarial fairness? halshs-01400213.
Phelps, E. S. (1972). The statistical theory of racism and sexism. The American Economic Review, 62(4), 659-661.
Risse, M. and Zeckhauser, R. (2004). Racial Profiling. Philosophy & Public Affairs. 32:131–170.
Schauer, F. (2006). Profiles, probabilities, and stereotypes. Harvard University Press.
von Mises, R. (1928). Wahrscheinlichkeit Statistik und Wahrheit. Springer-Verlag.
Winston, K. (1974). On Treating Like Cases Alike. California Law Review. 62:1–39.

  1. Kranzberg’s Law was first articulated by technology historian Melvin Kranzberg in 1986, in an article titled “Technology and History: ‘Kranzberg’s Laws’” in the journal Technology and Culture. Six laws are proposed, but the first is the best known, stating that the effects of technology depend on the social, political, economic, and cultural context in which it is used: “Technology is neither good nor bad; nor is it neutral.”
  2. The original French text uses the anglicism “inefficient”, which probably renders the English term “efficient” better than the usual translation “efficace”, in the sense used by Elizabeth Popp Berman in her book “How efficiency replaced equality”.
  3. Abraham Wald showed that the analysis of damaged areas of returning bombers was flawed, as it ignored those that had been shot down, and thus did not take into account the vulnerable parts of the aircraft. He recommended reinforcing the undamaged areas, such as the engines and tail, as these were probably the most critical to the aircraft’s survival.
  4. Arizona Governing Comm. v. Norris, 463 U.S. 1073 (1983): “The use of sex-segregated actuarial tables to calculate retirement benefits violates Title VII whether or not the tables reflect an accurate prediction of the longevity of women as a class, for under the statute, ‘[e]ven a true generalization about [a] class’ cannot justify class-based treatment.”
  5. Guidelines on the application of Council Directive 2004/113/EC in the insurance sector, in the light of the judgment of the Court of Justice of the European Union in Case C-236/09 (Test-Achats).

OpenEdition suggests that you cite this post as follows:
Arthur Charpentier (April 10, 2025). Paradoxes of segmentation and discrimination in insurance. Freakonometrics. Retrieved April 19, 2025 from https://doi.org/10.58079/13q1q

