Tomorrow, I will give a talk for the “pricing seminar” of a major insurance company. The slides are available online, and the talk will be on “Actuarial Pricing Discrimination and Fairness”. As always, my talk will explore why actuarial pricing inherently relies on discrimination between groups, and how this raises deep conceptual, legal, and ethical challenges. The objective is to understand both the mathematical foundations of “fairness” and the regulatory tensions that emerge.

The starting point is that actuaries generalize from groups to individuals, transforming uncertainty into pricing decisions. This group-based reasoning already contains the seeds of discrimination, in the neutral sense of “distinguishing categories.”

When it comes to pricing, actuaries rely on the concept of so-called “actuarial fairness”, which traditionally equates the premium to expected losses, reflecting the idea of mutualization: many contribute to cover the losses of the few. This definition already embeds choices about how groups are formed and what counts as “expected.”
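
In symbols, as a minimal formulation: writing Y_i for the losses of policyholder i, the pure premium is \pi_i=\mathbb{E}(Y_i), or \pi_i=\mathbb{E}(Y_i\mid\boldsymbol{X}_i) once rating factors are used, and mutualization corresponds to \sum_i \pi_i=\mathbb{E}\big(\sum_i Y_i\big): the premiums collected by the pool match its expected aggregate losses.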

Because insurers use different models and portfolios, there is no single “correct” premium. Actuarial fairness is thus internally coherent but not universal, reinforcing that fairness is model- and context-dependent. Moreover, actuarial fairness is defined in a one-to-one sense, between a policyholder and an insurance company.

Insurance is also a system of risk sharing within and across groups, meaning fairness is about how cross-subsidies are distributed. The choice of conditioning variables, the \boldsymbol{X} that enters the formula \mathbb{E}(Y\mid\boldsymbol{X}), determines who subsidizes whom.
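
To make this concrete, here is a minimal simulation sketch in Python (all numbers are made up): with a flat premium equal to \mathbb{E}(Y), the low-risk group subsidizes the high-risk group, while pricing on \mathbb{E}(Y\mid X) removes that transfer.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy portfolio: a binary rating factor x (0 = low-risk zone, 1 = high-risk zone).
n = 100_000
x = rng.binomial(1, 0.5, size=n)
# Claim frequency differs across zones; severity is a flat 1,000 per claim.
y = rng.poisson(np.where(x == 1, 0.15, 0.05)) * 1_000.0

flat_premium = y.mean()                                # everyone pays E[Y]
risk_premium = {g: y[x == g].mean() for g in (0, 1)}   # each pays E[Y | X]

for g in (0, 1):
    subsidy = flat_premium - risk_premium[g]
    print(f"zone {g}: E[Y|X] = {risk_premium[g]:6.1f}, "
          f"flat = {flat_premium:6.1f}, cross-subsidy = {subsidy:+6.1f}")
```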

Legal frameworks define specific “sensitive” attributes, and real-world examples show how societies deliberately restrict or allow distinctions based on these characteristics. Insurance pricing sits at the intersection of empirical correlations and normative constraints.

Many jurisdictions explicitly prohibit using certain attributes (such as “race” or ethnic origin, sex or gender), while sometimes authorizing others, subject to actuarial justification requirements that are often only vaguely specified. These asymmetries illustrate how fairness is partly a legal construct, not purely statistical.

Even when sensitive variables are excluded, models can reconstruct them through correlated features. Postal codes, credit scores, or medical codes become powerful proxies, raising the risk of “unlawful proxy discrimination.”
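
As an illustration of how easily this can happen, here is a small Python sketch (synthetic data, assuming scikit-learn is available; the “postal code” feature is purely hypothetical): a model that is never given the sensitive attribute can still recover it from a correlated feature.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: the sensitive attribute s is never given to the model,
# but a "postal code" feature is strongly correlated with it.
n = 20_000
s = rng.binomial(1, 0.5, size=n)            # sensitive attribute
postal = s + rng.normal(0, 0.8, size=n)     # proxy, correlated with s
income = rng.normal(50, 10, size=n)         # unrelated feature
X = np.column_stack([postal, income])

X_tr, X_te, s_tr, s_te = train_test_split(X, s, random_state=0)

# How well can s be reconstructed from the "non-sensitive" features?
clf = LogisticRegression().fit(X_tr, s_tr)
auc = roc_auc_score(s_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC for recovering the sensitive attribute: {auc:.2f}")  # well above 0.5
```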

Economic theories show that when key risk variables are unobserved, decision-makers may rationally use group membership as a proxy for missing information. This highlights that proxy discrimination may arise from optimization rather than malice.
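
One way to formalize this intuition (a standard statistical-discrimination argument, not specific to the talk): if the true risk driver Z is unobserved, the law of iterated expectations gives \mathbb{E}(Y\mid S)=\mathbb{E}\big(\mathbb{E}(Y\mid Z)\mid S\big), so the group label S becomes informative about Y exactly to the extent that it is informative about the missing Z, and using it is then “rational” in a purely predictive sense.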

If actuarial fairness means predicting expected losses, then fairness aligns with accuracy under squared or Bregman losses. Generalized linear models appear as a special case where the estimation of \mathbb{E}(Y\mid\boldsymbol{X}) is tied to likelihood-based convex losses.
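
For instance, here is a minimal sketch of such a fit, assuming statsmodels is available and using simulated frequency data: a Poisson GLM with log link, where maximizing the likelihood amounts to minimizing the associated deviance, a Bregman-type loss, when estimating \mathbb{E}(Y\mid\boldsymbol{X}).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic frequency data: claim counts driven by one covariate.
n = 10_000
age = rng.uniform(18, 80, size=n)
lam = np.exp(-3.0 + 0.02 * age)          # true log-linear claim frequency
y = rng.poisson(lam)

# Poisson GLM with log link: the MLE minimizes the Poisson deviance,
# a Bregman divergence, so the fit targets E[Y | X].
X = sm.add_constant(age)
model = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(model.params)   # close to (-3.0, 0.02), up to sampling noise
```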

Beyond accuracy, calibration ensures that predicted probabilities correspond to observed frequencies within groups of similar scores. The examples show that different models may have similar accuracy but dramatically different calibration properties.
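
A quick way to see this numerically (a sketch with simulated data, assuming scikit-learn): take a calibrated score and a monotonically distorted version of it. The two rank risks identically, so discrimination metrics such as the AUC are unchanged, but only one of them is calibrated.

```python
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(2)

# True probabilities and outcomes drawn from them.
p_true = rng.uniform(0.05, 0.95, size=50_000)
y = rng.binomial(1, p_true)

# Two "models": the calibrated score, and a monotone distortion of it
# (same ranking of risks, hence the same AUC, but miscalibrated).
p_calibrated = p_true
p_distorted = p_true**2 / (p_true**2 + (1 - p_true)**2)

for name, p in [("calibrated", p_calibrated), ("distorted", p_distorted)]:
    frac_pos, mean_pred = calibration_curve(y, p, n_bins=10)
    gap = np.abs(frac_pos - mean_pred).mean()
    print(f"{name}: mean |observed - predicted| per bin = {gap:.3f}")
```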

Actuarial pricing generalizes from group-level patterns to individual decisions, which can create “injustice by generalization.” What is statistically true on average for a group may be unfair when applied to a specific individual (we will get back to that issue later on).

Insurance fundamentally requires discrimination between risky and less risky individuals, yet many of the variables correlated with risk coincide with protected characteristics. This tension makes fairness uniquely challenging in insurance.

Different fairness notions—demographic parity, equalized odds, calibration—capture incompatible ethical priorities. Choosing a fairness metric is ultimately choosing a normative stance.

The COMPAS example shows that fairness criteria can contradict each other: a model can satisfy calibration across groups while violating error-rate balance. This illustrates why there is no “one size fits all” definition.
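
The mechanism behind this impossibility is easy to reproduce on synthetic data (a sketch, with made-up distributions): draw outcomes from a perfectly calibrated score whose distribution differs across two groups, apply a common decision threshold, and the positive rates and error rates diverge.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two groups with different score distributions, scored by one calibrated model.
n = 200_000
g = rng.binomial(1, 0.5, size=n)
p = np.where(g == 1, rng.beta(3, 2, size=n), rng.beta(2, 3, size=n))
y = rng.binomial(1, p)        # outcomes drawn from the scores: calibrated by construction

yhat = (p > 0.5).astype(int)  # the same threshold for both groups

for grp in (0, 1):
    m = g == grp
    fpr = yhat[m & (y == 0)].mean()        # false positive rate
    fnr = 1 - yhat[m & (y == 1)].mean()    # false negative rate
    print(f"group {grp}: positive rate={yhat[m].mean():.2f}, "
          f"FPR={fpr:.2f}, FNR={fnr:.2f}")
# Calibration holds within both groups, yet positive rates and error rates
# differ: demographic parity and equalized odds are both violated.
```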

Group-level parity can be engineered by adjusting decision thresholds or blending models across groups. But doing so often breaks other fairness properties, highlighting trade-offs inherent to fairness interventions.
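
For instance, in the same kind of toy setup as above, demographic parity can be enforced with group-specific thresholds, but only at the cost of treating identical scores differently across groups (a sketch, with made-up distributions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two groups with different score distributions, as before.
n = 200_000
g = rng.binomial(1, 0.5, size=n)
p = np.where(g == 1, rng.beta(3, 2, size=n), rng.beta(2, 3, size=n))

# Enforce demographic parity: pick one threshold per group so that both
# groups have the same acceptance (positive) rate.
target_rate = 0.5
thr = {grp: np.quantile(p[g == grp], 1 - target_rate) for grp in (0, 1)}
yhat = np.where(g == 1, p > thr[1], p > thr[0]).astype(int)

for grp in (0, 1):
    print(f"group {grp}: threshold={thr[grp]:.2f}, "
          f"positive rate={yhat[g == grp].mean():.2f}")
# Positive rates now match, but identical scores are treated differently
# across groups, which typically breaks calibration of the decision and
# error-rate balance.
```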

Philosophical debates on affirmative action show that sometimes equal treatment requires explicit use of sensitive attributes. Fairness interventions may thus require using the very variable they aim to neutralize.

Fairness assessed on the insurer’s portfolio may not translate to fairness in the market, due to selection effects. This raises the question: fairness relative to which population?

Removing a sensitive variable does not eliminate group differences if the data itself is imbalanced or if proxies remain. The “is–ought” distinction shows that statistical regularities do not automatically justify moral ones.

I also want to add, here, that discrimination can be understood from two very different viewpoints. The first is the regulatory perspective, which focuses on group fairness: are certain protected groups treated differently on average? This is the perspective most laws are written around. But policyholders often think in terms of individual fairness. Their question is: “Would I have been treated the same if I had belonged to another group, say, another gender or another race?” This is what we call counterfactual fairness.

This sounds simple, but it immediately runs into a conceptual problem: what does “other things being equal” actually mean? In causal terms, changing a sensitive attribute rarely leaves everything else unchanged. For example, suppose we take a man who is 6 foot 3. If we imagine the same person as a woman, should we still imagine them being 6 foot 3? Probably not, because height distributions differ across genders. So the counterfactual is not just “flip the gender label”: it requires adjusting many correlated traits. And once we do that, the idea of an identical person except for the sensitive attribute becomes much more complex. Individual fairness is philosophically appealing but extremely difficult to operationalize in practice, especially in insurance where most characteristics are intertwined.
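
The height example can be made concrete with a quantile-preserving counterfactual (a sketch; the Gaussian height distributions below are illustrative assumptions, not real anthropometric data): instead of keeping the height fixed, we map it to the same quantile of the other group’s distribution.

```python
from scipy.stats import norm

# Hypothetical height distributions, in cm (illustrative numbers only).
men = norm(loc=178, scale=7)
women = norm(loc=165, scale=6.5)

height_man = 190.5  # 6 ft 3 in, in cm

# "Flip the label" counterfactual: keep the height unchanged.
naive_cf = height_man

# Quantile-preserving counterfactual: map the man's height to the same
# quantile of the women's distribution.
q = men.cdf(height_man)
adjusted_cf = women.ppf(q)

print(f"quantile of 190.5 cm among men: {q:.3f}")
print(f"naive counterfactual: {naive_cf:.1f} cm, "
      f"quantile-preserving counterfactual: {adjusted_cf:.1f} cm")
```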

Decompositions help disentangle structural effects (differences in behavior or exposures) from pure discrimination (differences in treatment). Visualization underscores how both components contribute to observed disparities.
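
A classical tool here is the Oaxaca-Blinder decomposition. Below is a minimal Python sketch on simulated data where the “structural” and “pure discrimination” components are known by construction, so the decomposition can be checked against the truth.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: y depends on a legitimate covariate x plus a group-specific shift
# (the "pure discrimination" part); x itself differs across groups
# (the "structural" part).
n = 50_000
g = rng.binomial(1, 0.5, size=n)
x = rng.normal(loc=np.where(g == 1, 2.0, 1.0), scale=1.0)  # structural gap in x
y = 3.0 * x + 1.5 * g + rng.normal(0, 1, size=n)           # +1.5 = treatment gap

def ols(x, y):
    """Simple OLS of y on (1, x); returns (intercept, slope)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b0 = ols(x[g == 0], y[g == 0])   # reference-group coefficients
gap = y[g == 1].mean() - y[g == 0].mean()
explained = b0[1] * (x[g == 1].mean() - x[g == 0].mean())  # structural part
unexplained = gap - explained                              # "treatment" part

print(f"raw gap={gap:.2f}, structural={explained:.2f}, "
      f"unexplained={unexplained:.2f}")
# Expect roughly: raw gap ~ 4.5, structural ~ 3.0, unexplained ~ 1.5
```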

Even without collecting sensitive data, insurers may infer group membership using statistical tools like BIFSG (Bayesian Improved First Name Surname Geocoding). Simpson’s paradox warns that aggregate fairness can hide subgroup unfairness.
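
Simpson’s paradox is easy to exhibit with a small made-up contingency table: in the sketch below, group B has the lower claim rate within every risk class, yet the higher rate in the aggregate, simply because it is concentrated in the high-risk class.

```python
# Made-up numbers: (number of policies, number of claims) per (group, class).
data = {
    ("A", "low"):  (900, 90),    # 10% claim rate
    ("A", "high"): (100, 30),    # 30%
    ("B", "low"):  (100, 8),     #  8%
    ("B", "high"): (900, 225),   # 25%
}

# Aggregate rates: B looks worse overall...
for grp in ("A", "B"):
    n_tot = sum(n for (g, _), (n, _) in data.items() if g == grp)
    c_tot = sum(c for (g, _), (_, c) in data.items() if g == grp)
    print(f"group {grp}: aggregate claim rate = {c_tot / n_tot:.1%}")

# ...yet within each risk class, B has the *lower* claim rate.
for (grp, cls), (n, c) in data.items():
    print(f"group {grp}, {cls}-risk: {c / n:.1%}")
```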

Ignoring relevant predictors can create spurious differences between groups. The mortality example shows how population structure, not intrinsic risk, can drive misleading conclusions.

Spatial or socio-economic correlations can create apparent “ethnicity penalties.” Causal models are necessary to distinguish legitimate risk factors from discriminatory proxies.

Fairness with respect to one protected attribute can worsen unfairness with respect to another. This illustrates the “robbing Peter to pay Paul” problem in multidimensional fairness.

Actuarial work appears objective because it relies on numbers, yet subjective choices permeate data selection, modeling, and interpretation. Explainability is essential to uncover these hidden assumptions.

This slide reminds us that actuarial work only appears purely objective. As Brian Glenn argues, insurance decisions are always shaped by narratives, choices, and assumptions, long before we ever see the numbers. That’s why interpretability matters: without understanding how a model makes decisions, we cannot detect biases, justify pricing differences, or ensure fairness. Transparency is not optional; it is essential for trust.

To conclude, first, let me recall that dealing with discrimination in insurance is inherently difficult because actuarial pricing is built on group-based reasoning. As soon as we group people, some form of differential treatment appears, so fairness is not something we can bolt on afterward; it is part of the very structure of pricing.

Second, if we don’t confront these issues explicitly (both mathematically and ethically), there is no hope of building models that we can genuinely call fair. Simply refusing to use sensitive attributes is not a solution; in fact, it often makes things worse because the model will reconstruct them indirectly through proxies.

Finally, regulators still face major unanswered questions, and will need to offer clearer guidance: What definitions of fairness matter? What kinds of discrimination are acceptable in pricing, and which are not? And at what level (portfolio or market) should fairness be assessed? So fairness in insurance is not just a technical problem. It is a societal choice, and one that requires transparency, careful modeling, and thoughtful regulation.