Tag Archives: biases

Insurance, Biases, Discrimination and Fairness

Insurance, Biases, Discrimination and Fairness was published a few weeks ago. I still plan to spend some time this summer on the R package, including data and some functions…

This book offers an introduction to the technical foundations of discrimination and equity issues in insurance models, catering to undergraduates, postgraduates, and practitioners. It is a self-contained resource, accessible to those with a basic understanding of probability and statistics. Designed as both a reference guide and a means to develop fairer models, the book acknowledges the complexity and ambiguity surrounding the question of discrimination in insurance. In insurance, proposing differentiated premiums that accurately reflect policyholders’ true risk—termed “actuarial fairness” or “legitimate discrimination”—is economically and ethically motivated. However, such segmentation can appear discriminatory from a legal perspective. By intertwining real-life examples with academic models, the book incorporates diverse perspectives from philosophy, social sciences, economics, mathematics, and computer science. Although discrimination has long been a subject of inquiry in economics and philosophy, it has gained renewed prominence in the context of “big data,” with an abundance of proxy variables capturing sensitive attributes, and “artificial intelligence” or specifically “machine learning” techniques, which often involve less interpretable black box algorithms.

The book distinguishes between models and data to enhance our comprehension of why a model may appear unfair. It reminds us that while a model may not be inherently good or bad, it is never neutral and often represents a formalization of a world seen through potentially biased data. Furthermore, the book equips actuaries with technical tools to quantify and mitigate potential discrimination, featuring dedicated chapters that delve into these methods.

Fairness and discrimination, PhD Course, #9 Mitigation, Pre-processing and In-processing

Finally, after defining (and quantifying) “group fairness” and “individual fairness”, we can now start to discuss the idea of mitigating possible discrimination. Here, we will see how, based on the data that were initially collected and a (pricing) model, it is possible to remove the discrimination from our pricing model.

Biases everywhere

As mentioned previously, insurance pricing is based on the use of different datasets, at least one for “claims” and one for “underwriting”. And obviously, there might be biases in those data, conscious or not, intended or not.

Somehow, the idea of tackling the problem from the end, as proposed, may not be the right one, and it might be better to tackle it from the beginning, through the biases in the data. The outcome of the models would be less discriminatory if we could get rid of sexist or racist biases in underwriting, or even in the assessment of claims costs. Unfortunately, I cannot discuss that here, since I do not have data that could be used to assess selection biases related to sensitive attributes.

On mitigation…

From a philosophical perspective, asking for mitigation might lead to some paradoxes. I mention here two statements, by two judges in the U.S., that offer very opposite perspectives on the same problem,

versus

I will not talk much about those philosophical aspects (discussed a bit more in the textbook); here, we will rather discuss how we can achieve fairness, if required.

Interestingly, we have a nice property on the price to pay to achieve fairness (price in terms of risk)

More precisely, we have the following result,

Note that we not only have a lower bound: we can actually reach that bound (we will discuss that point next week).

Pre-processing

The first approach is related to the idea of “distorting” inputs, to get legitimate explanatory variables that are uncorrelated with sensitive ones.
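To fix ideas, here is a minimal sketch in R (with hypothetical variable names x and s, not the code of the package): each legitimate variable is replaced by the residual of its linear regression on the sensitive attribute, which is uncorrelated with the latter by construction of least squares residuals.

# sketch of the "distorting inputs" idea: replace a legitimate variable x
# by the residual of its linear regression on the sensitive attribute s,
# so that the transformed variable is uncorrelated with s (by construction)
orthogonalize <- function(x, s) residuals(lm(x ~ s))

# simulated example: x is correlated with s before, uncorrelated after
set.seed(123)
s <- rbinom(500, size = 1, prob = .4)
x <- rnorm(500) + 2 * s
x_tilde <- orthogonalize(x, s)
cor(x, s)        # clearly non-zero
cor(x_tilde, s)  # numerically zero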

While that makes sense in the context of linear models, it does not work well in the general case.

But one should not be (too) surprised: as mentioned in a previous post on independence (and correlation), there are no statistical guarantees that it is preserved under nonlinear transformations of the variables

or

That is what we observe in our datasets.
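As a small illustration (on simulated data, not on our datasets): below, the variable x has the same mean in the two groups, so it is uncorrelated with the sensitive attribute s, but since the variances differ, the nonlinear transform x^2 is strongly correlated with s.

# x and s are uncorrelated, but x^2 and s are not
set.seed(42)
s <- rbinom(1e5, size = 1, prob = .5)
x <- rnorm(1e5, mean = 0, sd = 1 + 2 * s)  # standard deviation 1 or 3
cor(x, s)    # essentially 0, since both groups have mean 0
cor(x^2, s)  # clearly positive, since the variances differ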

In-processing

An alternative is to use a penalized approach, where fairness is added as a constraint in the optimization procedure. For example, Zafar et al. (2017) considered the following approach, with a constraint based on the covariance between the outcome and the sensitive attribute. We can adapt it to non-linear models.
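Here is a minimal sketch in R of that idea, in a penalized (rather than constrained) version; names such as fair_logistic, y, X, s and lambda are mine, for illustration. A logistic regression is fitted by adding, to the negative log-likelihood, the squared empirical covariance between the sensitive attribute and the score, with weight lambda (a large lambda corresponding to a small covariance bound c).

# penalized logistic regression, in the spirit of Zafar et al. (2017)
fair_logistic <- function(y, X, s, lambda = 0) {
  obj <- function(beta) {
    eta <- as.vector(X %*% beta)
    nll <- -sum(y * eta - log(1 + exp(eta)))  # negative log-likelihood
    pen <- (mean((s - mean(s)) * eta))^2      # squared covariance between s and the score
    nll + lambda * pen
  }
  optim(rep(0, ncol(X)), obj, method = "BFGS")$par
}

# simulated example
set.seed(1)
n <- 1000
s <- rbinom(n, size = 1, prob = .5)
x <- rnorm(n) + s                        # legitimate variable, correlated with s
X <- cbind(1, x)
y <- rbinom(n, size = 1, prob = plogis(-1 + x + s))
fair_logistic(y, X, s, lambda = 0)       # unconstrained estimate
fair_logistic(y, X, s, lambda = 1000)    # strong fairness penalty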

We can look at the evolution of \widehat{\boldsymbol{\beta}} as a function of c.
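With the sketch above, that evolution can be traced by refitting the model along a grid of penalty weights (reusing fair_logistic() from the previous block, lambda again playing the role of the constraint level c),

# coefficient path as the fairness penalty increases
lambdas <- 10^seq(-2, 3, length.out = 21)
path <- sapply(lambdas, function(l) fair_logistic(y, X, s, lambda = l))
matplot(log10(lambdas), t(path), type = "l", lty = 1,
        xlab = "log10(lambda)", ylab = "coefficients")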

We can also visualize the evolution of predictions,

(including the prediction for the unaware model, blind to the sensitive attribute)

On this slide, we can see that we have a tradeoff between accuracy and fairness.

It is also possible to visualize the distance between the distributions of the scores in the two groups. We can see that c\to0 actually gives strong fairness here, since the Wasserstein distance tends to 0.
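Since the scores are univariate, the Wasserstein distance has a simple expression in terms of quantiles, and can be approximated as follows (the function name and the score vectors m0 and m1 are illustrative; the transport package also provides a wasserstein1d() function),

# approximate 1-d Wasserstein distance between the scores of the two groups,
# using the quantile representation of the distance
wasserstein1d_approx <- function(m0, m1, p = seq(.005, .995, by = .005)) {
  mean(abs(quantile(m0, p) - quantile(m1, p)))
}

# simulated example, with scores in [0,1] in each group
score0 <- rbeta(1000, 2, 5)
score1 <- rbeta(1000, 3, 4)
wasserstein1d_approx(score0, score1)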

An alternative that we can find in the literature (that I include in the in-processing section) is based on adversarial learning,

Formally, it is

which is related to minimax theorems
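To fix ideas, a generic way of writing such an adversarial objective (a sketch, not necessarily the exact expression on the slide) is

\min_{m}\max_{a}\;\mathbb{E}\big[\ell(m(\boldsymbol{x}),y)\big]-\lambda\,\mathbb{E}\big[\ell(a(m(\boldsymbol{x})),s)\big]

where m is the predictive model, a is an adversary trying to recover the sensitive attribute s from the prediction, and \lambda tunes the tradeoff: the inner maximization makes the adversary as strong as possible, while the outer minimization looks for a model that remains accurate but leaves that adversary unable to recover s.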

Standard references about adversarial learning and fairness are the following

Next week, we will discuss post-processing approaches.

Insurance, biases, discrimination and fairness, v2

In the summer of 2022, my report Insurance, biases, discrimination and fairness (v1) was officially uploaded on the website of the Institut Louis Bachelier. I spent another year adding illustrations and examples, and I sent the manuscript to the publisher at the beginning of the summer of 2023. Because of delays, the book is not out yet, but the publisher allowed me to upload v2 of the document, Insurance, biases, discrimination and fairness. Note that it will serve as the lecture notes of the doctoral course I will give this winter at ENSAE, in Paris, France.

The R functions (and package) will be uploaded on https://github.com/freakonometrics/InsurFair soon.

Insurance, biases, discrimination and fairness, at IVADO

This Friday, I will give a talk at IVADO's working group, invited by Dany Plourde. The slides are now online. As an introduction, this little anecdote

The talk will be fairly standard, with a quick introduction on insurance, before coming back to three key notions: discrimination, biases, and fairness measures

Since this is the first time I give a talk in Québec, I will go over some market statistics

and a comparison of states in the United States, and provinces in Canada, to characterize possible discrimination

Regarding biases, I will come back to two of my favorites, Simpson's paradox and ecological inference

as well as Goodhart's law and feedback bias

We will finish with a few definitions of fairness