Tag Archives: insurance

Insurance, Biases, Discrimination and Fairness

Insurance, Biases, Discrimination and Fairness was published a few weeks ago. I still plan to spend some time this summer on the R package, including data and some functions…

This book offers an introduction to the technical foundations of discrimination and equity issues in insurance models, catering to undergraduates, postgraduates, and practitioners. It is a self-contained resource, accessible to those with a basic understanding of probability and statistics. Designed as both a reference guide and a means to develop fairer models, the book acknowledges the complexity and ambiguity surrounding the question of discrimination in insurance. In insurance, proposing differentiated premiums that accurately reflect policyholders’ true risk—termed “actuarial fairness” or “legitimate discrimination”—is economically and ethically motivated. However, such segmentation can appear discriminatory from a legal perspective. By intertwining real-life examples with academic models, the book incorporates diverse perspectives from philosophy, social sciences, economics, mathematics, and computer science. Although discrimination has long been a subject of inquiry in economics and philosophy, it has gained renewed prominence in the context of “big data,” with an abundance of proxy variables capturing sensitive attributes, and “artificial intelligence” or specifically “machine learning” techniques, which often involve less interpretable black box algorithms.

The book distinguishes between models and data to enhance our comprehension of why a model may appear unfair. It reminds us that while a model may not be inherently good or bad, it is never neutral and often represents a formalization of a world seen through potentially biased data. Furthermore, the book equips actuaries with technical tools to quantify and mitigate potential discrimination, featuring dedicated chapters that delve into these methods.

Talk in Stockholm, Sweden, at the Insurance Data Science Conference

This week, I will attend the Insurance Data Science conference in Sweden. It has been a while… I was a keynote speaker at the one in London, ten years ago (I gave a talk I still get feedback about – Getting into Bayesian Wizardry… (with the eyes of a muggle actuary) – at that time, the conference was “R in Insurance”), and then we organized the one in Paris, back in 2017. Then we had the online events, but it was… different.

This time, I will get back to our recent paper A Sequentially Fair Mechanism for Multiple Sensitive Attributes, with François Hu and Philipp Ratz, and the equipy package, written with Agathe Fernandes-Machado and Suzie Grondin. The slides are available online.

Trip in (Northern) Europe

The next two weeks, I will be in (Northern) Europe, with a first stop in Brussels (to visit colleagues), then in Leuven (I will give a talk on Monday at KU Leuven), then in København (I will give a talk on Friday at Københavns Universitet), and finally in Stockholm (at Stockholm University, for the Insurance Data Science conference).

In the Fall, I will be in Europe again, in Lisbon (European Actuarial Journal conference), in France (Cerisy Colloques) and in Warsaw, Poland. In Poland, I will give a two-day course on Insurance, Biases, Discrimination and Fairness.

More to come soon…

TD General Insurance Pricing Seminar

Tomorrow, I will give a talk at TD General Insurance Pricing Seminar, on fairness and ethics in insurance. Slides are now online.

After a very general (and long) introduction, to motivate our recent work on discrimination, I will try to explain how to quantify possible discrimination (with respect to a binary sensitive attribute), using the Wasserstein distance and optimal transport,

and the use of Wasserstein barycenters to mitigate discrimination.
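To fix ideas, here is a minimal R sketch (not the code from the talk), assuming we have two vectors of predicted scores, yp_F and yp_M, one per group: the 1-Wasserstein distance is approximated from empirical quantiles, and the (univariate) barycenter is obtained by averaging the two quantile functions,

u = seq(.01, .99, by = .01)               # probability grid
q_F = quantile(yp_F, u)                   # empirical quantile functions, per group
q_M = quantile(yp_M, u)
W1 = mean(abs(q_F - q_M))                 # approximate 1-Wasserstein distance
w = length(yp_F) / (length(yp_F) + length(yp_M))
q_B = w * q_F + (1 - w) * q_M             # quantile function of the Wasserstein barycenter

Mapping each prediction to the barycenter quantile at its within-group rank then yields scores with the same distribution in both groups.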

I will also mention our workshop in May, at Laval University.

Fairness and discrimination, PhD Course, #7 Sensitive attributes and proxies

In our previous post, we discussed “group fairness”. I might have gone a bit fast, so I decided to add some material about sensitive attributes and proxies.

Sensitive attributes?

Almost everywhere, we can find a list of variables that are considered sensitive by law, since using them will lead to discrimination. As mentioned earlier, sensitive variables might change over time, and across regions…

Another issue with black boxes is that it might be hard to assess whether they rely on a sensitive attribute. In order to extract information from pictures, to classify or detect objects, an algorithm might use information that could be considered sensitive. First, recall the popular wolf-husky classifier, which actually detects snow in the background (since wolves were photographed with snow in the training sample).

This can also be the case for health issues, where classifiers can be influenced by the color of the skin (or possibly some other unexpected information).

Racism

The first sensitive attribute is probably race, which has been discussed in insurance for decades.

One should keep in mind that race is a social construct, and most of the time, it is based on self-identification.

This leads to popular maps in the U.S.

Racism is usually related to “colourism” (discrimination based on skin tone)

Is it relevant in the context of insurance, and risk?

It has been observed that African Americans, in the U.S., were usually charged higher insurance premiums.

Keep in mind that discrimination has nothing to do with intention, as mentioned previously. An insurance pricing scheme can be racist without any intention to be so. An important issue, in order to quantify that problem, is actually to be able to observe that variable.

Sexism

Sexism is another popular example of discrimination, related to sex, or gender.

Actuaries have been using gender-based life tables for more than 300 years. And indeed, it seems that women live longer than men.

Ageism

Age is another possible sensitive attribute, but it is more complicated. First, it is not a “club” and second, it is (somehow) clearly related to risk.

In datasets, there can also be selection bias related to age. For instance, during the COVID pandemic, triage was based on the age of patients. Treatments and tests can be related to the age of patients, so this bias will probably have an impact on observed risks.

Genetics

Another important sensitive variable is related to “genetic information”.

Such information is usually classified as sensitive everywhere.

To conclude, I wanted to mention that several important variables considered as sensitive do not have much to do with genetics, but more with a social construction.

Finally, let us discuss proxies that can be related to those sensitive variables.

Names and language

The first one was discussed in the introduction: names contain information about race and ethnic origin,

Text and discussion can also reveal sensitive information.

Pictures

Pictures can also provide information. That was discussed 150 years ago, when researchers tried to identify criminals using solely pictures.

Some insurers have been trying, at some point, to detect diseases from facial pictures. And it is possible to reveal information from pictures, possibly the age, and the gender.

One can also use satellite pictures, or pictures from Google Street View, to infer for instance the wealth of a neighborhood. And possibly sensitive information, such as the presence of an access ramp for disabled people.

Credit Scoring

Credit scoring is also a variable used by insurers that can be related to variables considered as sensitive.

Clearly, a bad credit score will have a big impact not only on mortgages and loans,

but also on insurance rates! As we explained here, it costs a lot to be poor.

Networks

Finally, insurers can use information related to friends, or family, to assess the risk. And network data capture a lot of sensitive information.

We will talk a little bit about networks, to explain why using your friends' risks to assess your own risk might not be a great idea…

It is an extension of the friendship paradox.

Proxies

Finally, we will conclude by showing that removing a sensitive attribute from a training dataset will not mitigate discrimination.

Discrimination by proxy (a real case study)

Yesterday, with Laurence Barry, we posted a blog post “Who benefits from data sharing?”, explaining why data sharing, in insurance, could end mutualization. Actually, it can also be bad in the context of discrimination. Consider here the same dataset, with claim occurrence, in a real insurance portfolio,

library(InsurFair)
library(randomForest)

Consider a version of this dataset without the gender variable, and use variable importance to get a list of variables we can use in a predictive model

subfrenchmotor = frenchmotor[,-which(names(frenchmotor)=="sensitive")]
RF = randomForest(y~. ,data=subfrenchmotor)
vi = varImpPlot(RF , sort = TRUE)

We sort variables based on variable importance (the first one is the “most important” one), and add splines for three continuous variables

dfvi = data.frame(nom = names(subfrenchmotor)[-15], g = as.numeric(vi))
dfvi = dfvi[rev(order(dfvi$g)),]
nom = dfvi$nom
nom[1] = "bs(LicAge)"
nom[3] = "bs(DrivAge)"
nom[7] = "bs(BonusMalus)"

Then, the idea is simple: at stage k, we keep the k most important variables, and run a logistic regression on those k variables. Again, I should stress that the gender of the driver is not among those k variables. Then, we compute the average predicted claim frequency, for men and for women.

n = nrow(subfrenchmotor)
library(splines)
idx_F = which(frenchmotor$sensitive == "Female")
idx_M = which(frenchmotor$sensitive == "Male")
metric_gender = function(k = 3){
  if(k == 0){
    reg = glm(y ~ 1, family = binomial, data = subfrenchmotor)
  }
  if(k > 0){
    vr = paste(nom[1:k], collapse = " + ")
    fm = paste("y ~ ", vr, sep = "")
    reg = glm(fm, family = binomial, data = subfrenchmotor)
  }
  yp = predict(reg, type = "response")
  yp_F = yp[idx_F]
  yp_M = yp[idx_M]
  sortie = c(mean(yp_F), mean(yp_M), quantile(yp_F, c(.1, .9)), quantile(yp_M, c(.1, .9)))
  names(sortie)[1:2] = c("mean_F", "mean_M")
  sortie
}

Let us now compute it for all variables

N = 0:15
M = Vectorize(metric_gender)(N)

and plot it

plot(N, M[1,]*100, xlab = "Number of predictive variables (without gender)",
     ylab = "Average predicted claims frequency (%)", type = "b", pch = 19,
     col = COLORS[2], ylim = c(8.12, 9))
lines(N, M[2,]*100, type = "b", pch = 15, col = COLORS[3])

Interestingly, we can clearly see that with 15 explanatory variables, even if our model is gender-blind (since gender is not in the training dataset), our model reproduces the difference we can observe in the dataset: the annual claim frequency is almost 9% for men and 8.2% for women.

Actually, it is not possible to predict the gender from our 15 variables (below is the ROC curve of the logistic regression used to predict the gender)

library(ROCR)
metric_gender_2 = function(k = 3){
  if(k == 0){
    reg = glm((sensitive == "Female") ~ 1, family = binomial, data = frenchmotor)
  }
  if(k > 0){
    vr = paste(nom[1:k], collapse = " + ")
    fm_genre = paste('(sensitive == "Female") ~ ', vr, sep = "")
    reg = glm(fm_genre, family = binomial, data = frenchmotor)
  }
  pred = prediction(predict(reg, type = "response"), (frenchmotor$sensitive == "Female"))
  performance(pred, "tpr", "fpr")
}
plot(metric_gender_2(15))

but still, when using 15 variables, we obtain discrimination in our portfolio, since the average predictions for men and women are significantly different (even if our models are, per se, gender-blind).

Fairness and discrimination, PhD Course, #3 Machine Learning, losses and distances

For the third course, we will get back a little bit on machine learning (slides are still online on the github repository). The starting point will be loss functions and risk.

Loss functions and risk

A general definition for a loss is that it is positive, and null when the two arguments coincide, i.e. \ell(y,y)=0. As we will discuss further, it is neither a distance, nor a dissimilarity measure

Then, define the empirical risk (and the associated empirical risk minimization principle, as coined in Vapnik (1991))

Given a loss \ell and some probabilistic space, define the optimal decision, also called Bayes decision rule

And instead of the risk of a model, define the excess of risk.

A classical loss for a classifier is \ell_{0/1},

In that case, the Bayes decision rule is m^\star(\boldsymbol{x}) = \boldsymbol{1}(\mu(\boldsymbol{x})>1/2) =\begin{cases}1 \text{ if }\mu(\boldsymbol{x})>1/2\\0 \text{ if }\mu(\boldsymbol{x})\leq1/2\end{cases} where (of course) one needs to know \mu; otherwise, we can consider some plug-in estimator based on \widehat\mu. For continuous variables y, consider the quadratic loss \ell_2,

In that case, Bayes decision rule (the optimal model) is the conditional expectation

Observe that we can also define the quantile loss (or the expectile loss)

Observe that this loss is not symmetric.
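To make those definitions concrete, here is a tiny R sketch (the function names are mine, purely illustrative) of the 0/1 loss, the quadratic loss and the quantile (pinball) loss; the asymmetry of the latter is clearly visible,

loss_01 = function(y, yhat) as.numeric(y != yhat)             # 0/1 loss, for classifiers
loss_l2 = function(y, yhat) (y - yhat)^2                      # quadratic loss
loss_quantile = function(y, yhat, tau = .9)                   # pinball loss, at level tau
  ifelse(y >= yhat, tau * (y - yhat), (1 - tau) * (yhat - y))
loss_quantile(1, 0)    # 0.9 : under-prediction is heavily penalized
loss_quantile(0, 1)    # 0.1 : over-prediction much less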

From loss functions to distances

Let us discuss a bit more the fact that losses are not distances. As mentioned, a loss is neither necessarily symmetric nor separable,

But furthermore, it has no reason to satisfy the triangle inequality. Actually, if d is a distance, it is very likely that d^2 is not (since squaring is not a subadditive transformation)

Another related concept could be the concept of similarity, or dissimilarity.

Another one is the concept of divergence, that we will use much more. For instance, Bregman divergence is

which satisfies desirable properties.
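As a quick illustration (a sketch, with my own notations), a generic Bregman divergence D_\phi(y,y')=\phi(y)-\phi(y')-\phi'(y')(y-y') can be coded in a couple of lines; with \phi(t)=t^2 we recover the squared Euclidean distance, and with \phi(t)=t\log t a Kullback-Leibler type term,

# generic Bregman divergence, for a convex phi with derivative dphi
bregman = function(y, yp, phi, dphi) phi(y) - phi(yp) - dphi(yp) * (y - yp)
bregman(2, 5, phi = function(t) t^2, dphi = function(t) 2 * t)             # (2-5)^2 = 9
bregman(2, 5, phi = function(t) t * log(t), dphi = function(t) log(t) + 1) # KL-type term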

Interestingly, it is possible to define “projections” even if we have neither an orthogonal projection (since there is no notion of orthogonality without an inner product), nor a distance. But still

One can use a nice algorithm to estimate that quantity, if the convex set can be expressed simply

When considering “distances” between distributions, instead of y‘s, among other interesting properties in statistics, we can mention the one of unbiased gradients,

and Müller (1997) defined integral probability metrics

Standard “distances” between distributions

The first one will be Hellinger distance

that can lead to simple expressions for standard parametric distributions, such as Beta distributions,

or (multivariate) Gaussian ones
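To illustrate (a minimal sketch, with arbitrary parameters), the squared Hellinger distance H^2=1-\int\sqrt{fg} can be approximated by numerical integration, and the value can then be checked against those closed-form expressions,

# squared Hellinger distance between two densities, by numerical integration
hellinger2 = function(f, g, lower = -Inf, upper = Inf)
  1 - integrate(function(x) sqrt(f(x) * g(x)), lower, upper)$value
hellinger2(function(x) dnorm(x, 0, 1), function(x) dnorm(x, 1, 2))                       # two Gaussians
hellinger2(function(x) dbeta(x, 2, 3), function(x) dbeta(x, 3, 2), lower = 0, upper = 1) # two Betas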

We can also mention Pearson divergence

More interesting (and popular in probability theory), total variation

There are several ways to express that distance.

If instead of general sets \mathcal{A} we consider half-lines (-\infty,t], we obtain the Kolmogorov distance (or Kolmogorov-Smirnov distance)
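Both can be approximated numerically; a small sketch, with two arbitrary Gaussian distributions,

# total variation and Kolmogorov distances between two Gaussians (toy example)
f  = function(x) dnorm(x, 0, 1); g  = function(x) dnorm(x, 1, 2)
Fx = function(x) pnorm(x, 0, 1); Gx = function(x) pnorm(x, 1, 2)
TV = integrate(function(x) abs(f(x) - g(x)), -Inf, Inf)$value / 2
tt = seq(-10, 10, by = .01)
KS = max(abs(Fx(tt) - Gx(tt)))      # Kolmogorov distance, on a grid of half-lines
c(TV = TV, KS = KS)                 # the Kolmogorov distance is always below total variation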

Another important one in statistics is Kullback–Leibler divergence

For instance, with Gaussian vectors

Observe that this divergence is actually a dissimilarity measure

If we want a symmetric version, we can consider Jeffreys divergence

or Jensen–Shannon divergence
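A small numerical sketch (with the same kind of arbitrary Gaussian densities) shows that the Kullback–Leibler divergence is not symmetric, while the Jeffreys and Jensen–Shannon divergences are, by construction,

# KL, Jeffreys and Jensen-Shannon divergences, by numerical integration (toy Gaussians)
f = function(x) dnorm(x, 0, 1); g = function(x) dnorm(x, 1, 2)
kl = function(p, q)                     # integration range [-20, 20] is wide enough here
  integrate(function(x) p(x) * log(p(x) / q(x)), -20, 20)$value
kl(f, g); kl(g, f)                      # not symmetric
jeffreys = kl(f, g) + kl(g, f)          # Jeffreys divergence (symmetrized KL)
m = function(x) (f(x) + g(x)) / 2       # mixture, used in Jensen-Shannon
js = (kl(f, m) + kl(g, m)) / 2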

Finally, we will mention f-divergence

and Rényi divergence

We will discuss those "distances" a little bit more (yes, I usually use that term, abusively), and next week, we will present the most interesting one, the Wasserstein distance.

Fairness and discrimination, PhD Course, #2 Insurance and risk classes

For the second course, we will get back a little bit on insurance pricing in the context of heterogeneous portfolios, and risk classification (slides are still online on the github repository). The starting point will be the pure premium.

See our online textbook, with Michel Denuit, Non Life Insurance Mathematics, for additional motivation. If we have some risk related variables \boldsymbol{x}=(x_1,\cdots,x_k), the pure premium will be the conditional expectation,

Here also, we have some law of large numbers, for the conditional expected value,

This relationship, which defines the conditional expected value as the limiting value of a conditional frequency, cannot be used to properly define \mathbb{P}[Y|\boldsymbol{X}=\boldsymbol{x}] and \mathbb{E}[Y|\boldsymbol{X}=\boldsymbol{x}]. One can consider a limit,

\mathbb{P}\big(Y\in \mathcal{A}\big\vert X = x\big)=\lim_{\epsilon\to0}\frac{\mathbb{P}(\{Y\in \mathcal{A}\}\cap\{|X -x|\leq \epsilon\})}{\mathbb{P}(\{|X -x|\leq \epsilon\})}

or

\mathbb{P}\big(Y\in \mathcal{A}\big\vert X = x\big)=\lim_{\epsilon\to0}\mathbb{P}\big(Y\in \mathcal{A}\big\vert |X -x|\leq \epsilon\big)

as in the law of the unconscious statistician, or as Proschan and Presnell (1998) wrote it

statisticians make liberal use of conditioning arguments to shorten what would otherwise be long proofs

We can now compute conditional frequencies, given some risk characteristics, for some quantity of interest y, such as the age at death, in life insurance contracts.
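As a toy illustration of that limiting definition (simulated data, not an actual insurance portfolio), the conditional probability can be approximated by an empirical frequency computed on an \epsilon-neighborhood of x,

# toy illustration of the epsilon-neighborhood definition of a conditional probability
set.seed(1)
n = 1e5
X = runif(n)
Y = rbinom(n, size = 1, prob = X^2)   # true P(Y=1 | X=x) is x^2
x0 = .7; epsilon = .01
mean(Y[abs(X - x0) <= epsilon])       # empirical conditional frequency, close to .49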

Demographic risk and heterogeneity

First, we will see some gender-based life tables, starting with the one obtained by Nicolaas Struyck (see e.g. Alberts et al. (2014))

More recently, in France, some wealth based life tables were obtained, with various quantiles

And finally, we will see some life tables obtained 50 years ago in the US, with racial distinction

Mean and variance decomposition

About pure premiums, an important property is the law of total expectation, and a desirable property, that we will name the “balance property”

We will also mention the variance, and variance decompositions, depending on whether we take heterogeneity into account or not. With homogeneous pricing, we have

If we use the “true” underlying risk factor, \Theta, we have the standard variance decomposition, also called law of total variance

i.e.

And finally, if we do not observe \Theta, but we have a collection of covariates, \boldsymbol{X}=(X_1,\cdots,X_k),
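A quick simulation (a sketch, with an arbitrary Gamma–Poisson mixture, where \Theta plays the role of the unobserved risk factor) illustrates the decomposition \text{Var}[Y]=\mathbb{E}[\text{Var}[Y|\Theta]]+\text{Var}[\mathbb{E}[Y|\Theta]],

# law of total variance on a simulated heterogeneous portfolio (Gamma-Poisson mixture)
set.seed(1)
n = 1e6
Theta = rgamma(n, shape = 2, rate = 2)  # unobserved heterogeneity, E[Theta] = 1
Y = rpois(n, lambda = Theta)            # claim counts, given Theta
var(Y)                                  # total variance
mean(Theta) + var(Theta)                # E[Var(Y|Theta)] + Var(E[Y|Theta]), same value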

Some historical perspectives

In the textbook, Insurance: Biases, Discrimination and Fairness, I have several paragraphs with a historical perspective, starting with insurance as clubs, without segmentation. Then segmentation started, with risk classes and groups. For example, according to Issues And Needed Improvements In State Regulation Of The Insurance Business, by Harry Havens, in 1979,

The price which a person pays for automobile insurance depends on age, sex, marital status, place of residence and other factors. This risk classification system produces widely differing prices for the same coverage for different people. Questions have been raised about the fairness of this system, and especially about its reliability as a predictor of risk for a particular individual. While we have not tried to judge the propriety of these groupings, and the resulting price differences, we believe that the questions about them warrant careful consideration by the State insurance departments. In most States the authority to examine classification plans is based on the requirement that insurance rates are neither inadequate, excessive, nor unfairly discriminatory. The only criterion for approving classifications in most States is that the classifications be statistically justified — that is, that they reasonably reflect loss experience. Relative rates with respect to age, sex, and marital status are based on the analysis of national data. A youthful male driver, for example, is charged twice as much as an older driver all over the country (…) It has also been claimed that insurance companies engage in redlining – the arbitrary denial of insurance to everyone living in a particular neighborhood. Community groups and others have complained that State regulators have not been diligent in preventing redlining and other forms of improper discrimination that make insurance unavailable in certain areas. In addition to outright refusals to insure, geographic discrimination can include such practices as: selective placement of agents to reduce business in some areas, terminating agents and not renewing their book of business, pricing insurance at unaffordable levels, and instructing agents to avoid certain areas. We reviewed what the State insurance departments were doing in response to these problems. To determine if redlining exists, it is necessary to collect data on a geographic basis. Such data should include current insurance policies, new policies being written, cancellations, and non-renewals. It is also important to examine data on losses by neighborhoods within existing rating territories because marked discrepancies within territories would cast doubt on the validity of territorial boundaries. Yet, not even a fifth of the States collect anything other than loss data, and that data is gathered on a territory-wide basis.

According to The Role of Risk Classification in Property and Casualty Insurance: A Study of the Risk Assessment Process : Final Report, by Barbara Casey, Jacques Pezier and Carl Spetzler, in 1976,

On the other hand, the opinion that distinctions based on sex, or any other group variable, necessarily violate individual rights reflects ignorance of the basic rules of logical inference in that it would arbitrarily forbid the use of relevant information. It would be equally fallacious to reject a classification system based on socially acceptable variables because the results appear discriminatory. For example, a classification system may be built on use of car, mileage, merit rating, and other variables, excluding sex. However, when verifying the average rates according to sex one may discover significant differences between males and females. Refusing to allow such differences would be attempting to distort reality by choosing to be selectively blind. The use of rating territories is a case in point. Geographical divisions, however designed, are often correlated with socio-demographic factors such as income level and race because of natural aggregation or forced segregation according to these factors. Again we conclude that insurance companies should be free to delineate territories and assess territorial differences as well as they can. At the same time, insurance companies should recognize that it is in their best interest to be objective and use clearly relevant factors to define territories lest they be accused of invidious discrimination by the public. (…) One possible standard does exist for exception to the counsel that particular rating variables should not be proscribed. What we have called ‘equal treatment’ standard of fairness may precipitate a societal decision that the process of differentiating among individuals on the basis of certain variables is discriminatory and intolerable. This type of decision should be made on a specific, statutory basis. Once taken, it must be adhered to in private and public transactions alike and enforced by the insurance regulator. This is, in effect, a standard for conduct that by design transcends and preempts economic considerations. Because it is not applied without economic cost, however, insurance regulators and the industry should participate in and inform legislative deliberations that would ban the use of particular rating variables as discriminatory.

And then, more recently, we started to talk about personalization, as in Barry and Charpentier (2020). And next week, we will start talking about predictive modeling, and machine learning.

Melting contestation: insurance fairness and machine learning

Nice review of our paper with Laurence Barry, on montrealethics.ai,

Machine learning tends to replace the actuary in the selection of features and the building of pricing models. However, avoiding subjective judgments thanks to automation does not necessarily mean that biases are removed. Nor does the absence of bias warrant fairness. This paper critically analyzes discrimination and insurance fairness with machine learning.
