Tag Archives: discrimination

Measuring and mitigating biases in motor insurance pricing

Our paper, with Mulah Moriah and Franck Vermet, Measuring and mitigating biases in motor insurance pricing, has recently been published in the European Actuarial Journal.

The non-life insurance sector operates within a highly competitive and tightly regulated framework, confronting a pivotal juncture in the formulation of pricing strategies. Insurers are compelled to harness a range of statistical methodologies and available data to construct optimal pricing structures that align with the overarching corporate strategy while accommodating the dynamics of market competition. Given the fundamental societal role played by insurance, premium rates are subject to rigorous scrutiny by regulatory authorities. Consequently, the act of pricing transcends mere statistical calculations and carries the weight of strategic and societal factors. These multifaceted concerns may drive insurers to establish equitable premiums with respect to various variables. For instance, regulations mandate the provision of equitable premiums with respect to factors such as policyholder gender. Similarly, mutualist groups, in accordance with their respective corporate strategies, can implement age-based premium fairness. In certain insurance domains, the presence of serious illnesses or disabilities is emerging as a new dimension for evaluating fairness. Regardless of the motivating factor prompting an insurer to adopt fairer pricing strategies for a specific variable, the insurer must possess the capability to define, measure, and ultimately mitigate any fairness biases inherent in its pricing practices while upholding standards of consistency and performance. This study seeks to provide a comprehensive set of tools for these endeavors and assess their effectiveness through practical application in the context of automobile insurance. Results show that fairness biases can be found in historical data and models, and that fairer outcomes can be obtained with fairness-aware approaches.

Insurance, Biases, Discrimination and Fairness

Insurance, Biases, Discrimination and Fairness was published a few weeks ago. I still plan to spend some time this summer on the R package, including data and some functions…

This book offers an introduction to the technical foundations of discrimination and equity issues in insurance models, catering to undergraduates, postgraduates, and practitioners. It is a self-contained resource, accessible to those with a basic understanding of probability and statistics. Designed as both a reference guide and a means to develop fairer models, the book acknowledges the complexity and ambiguity surrounding the question of discrimination in insurance. In insurance, proposing differentiated premiums that accurately reflect policyholders’ true risk—termed “actuarial fairness” or “legitimate discrimination”—is economically and ethically motivated. However, such segmentation can appear discriminatory from a legal perspective. By intertwining real-life examples with academic models, the book incorporates diverse perspectives from philosophy, social sciences, economics, mathematics, and computer science. Although discrimination has long been a subject of inquiry in economics and philosophy, it has gained renewed prominence in the context of “big data,” with an abundance of proxy variables capturing sensitive attributes, and “artificial intelligence” or specifically “machine learning” techniques, which often involve less interpretable black box algorithms.

The book distinguishes between models and data to enhance our comprehension of why a model may appear unfair. It reminds us that while a model may not be inherently good or bad, it is never neutral and often represents a formalization of a world seen through potentially biased data. Furthermore, the book equips actuaries with technical tools to quantify and mitigate potential discrimination, featuring dedicated chapters that delve into these methods.

Fresh from the oven…

14 litres of India ink, 30 brushes, 62 soft-lead pencils, 1 hard-lead pencil, 27 erasers, 38 kilos of paper, 16 typewriter ribbons, 2 typewriters, and 67 litres of beer were needed to complete this adventure…

(Goscinny and Uderzo (1965), Astérix et Cléopâtre)

Almost better than hot, freshly baked bagels…

the textbook Insurance, Biases, Discrimination and Fairness is now out, and just arrived today! Even though I’ve spent so much time re-reading it, getting nauseous, checking references and quotes, reworking graphics, re-running code, etc., it is still an immense feeling of pride to open one’s book for the very first time.

Astérix et Cléopâtre is the last Astérix of the famous Collection Pilote, as Michel Bera reminded me (professor emeritus at CNAM, attached to the Chaire de modélisation statistique du risque, and a living memory of French-language comics, the “B” of the famous “BDM”, Trésors de la bande dessinée): “When the Pilote collection switched to editions with only the Astérix titles in the menhir, I think the sentence disappeared”… That was the version my grandparents had, and that I (re)devoured every year when I was a kid.

“Scope and limits of artificial intelligence” at the SCOR foundation monthly webinar

This morning, I will give a talk on “scope and limits of artificial intelligence” at the SCOR foundation monthly webinar. As discussed previously, we currently have ongoing research on discrimination and fairness funded by the foundation (newsletter #1 is online).

Insurance (and further motivations)

Since we will talk about fairness, I will start with a couple of motivations. The first one is about COMPAS,

Interestingly, we have the data to analyse that one. In the original analysis, conditional on not re-offending, the proportions of people wrongly classified as high risk in the two protected groups are significantly different, so the algorithm is racist.

The answer was that, actually, conditional on being classified as high risk, the probabilities of re-offending in the two protected groups are not significantly different, so the algorithm is not racist.

So clearly, we can start to see that it will not be so easy, since using the same data and the same models, two different conclusions can be obtained.

We will also discuss legal aspects.

This idea of a “determining actuarial factor” has been removed in Europe, but we can still find it in Québec.

I can also mention some recent projects, in Colorado, where insurers are asked to predict race and ethnicity (that specific topic is on our agenda for the summer).

And finally, I should stress that discrimination has not much to do with the intention of the statistician. This is the idea of indirect discrimination.

I should also mention “redlining“. About 100 years ago, in the US, we started to see maps, created by the HOLC (based on City Survey Files, 1935-1940). Those maps contained “red” areas and “green” areas. Bankers were supposed to avoid the red areas, because they were considered too risky.

As a sidenote, we see nowadays some blue-lining related to climate risks,

“Blue-lining,” from the consumer’s perspective, is when banks or mortgage lenders draw lines of risk around certain streets or neighborhoods, often without clear disclosure.

Finally, I just want to recall that algorithms simply tend to reproduce what can be observed in data. If there is a difference between men and women, they will reproduce it.

A bit more on insurance

I should also stress an important problem (that could be related to a paper we wrote, in French, a few years ago). Classically, when modeling categorical variables, such as a binary variable y\in\{0,1\}, practitioners just care about getting the right category. On the left, we have pictures of cats and dogs to train a model, then we try it on a new picture, that is either a cat or a dog. Somehow, there is a ground truth and it is possible to see if we are right or wrong. Same if we want to detect a disease on medical pictures. Now, let us move to the right. In the middle, we have a model that predicts if it will rain, or not. But here, maybe, what we care about is actually the probability of rain. On the right, we have the actuarial problem of modeling claims frequencies. We do not want to predict who will claim a loss, but we want a good estimator of the probability of claiming a loss. The challenge, clearly, is that we cannot observe that one. We cannot observe the latent risk factor. We only observe whether people had an accident or not. But some people with a very small probability can still claim a loss. And very bad drivers can actually be very lucky, and have no accident during a given year.

Again, in insurance, we care more about the score, the estimation of the probability, than about the class \widehat{y}. So we can slightly modify standard fairness definitions, to be based not on predicted classes \widehat{y}, but on the score m(\boldsymbol{x},s). As we will discuss, there are usually three general definitions of so-called “group fairness”.
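For the record, the three criteria can be written as conditional independence statements: independence (or demographic parity), \widehat{Y}\perp S; separation (or equalized odds), \widehat{Y}\perp S\mid Y; and sufficiency (or calibration), Y\perp S\mid \widehat{Y}. In the score-based versions used here, \widehat{Y} is simply replaced by the score m(\boldsymbol{X},S).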

Quantifying unfairness with optimal transport

Let us start with demographic parity. A weak version is that, on average, scores in the two groups should be identical (or close). An alternative is the strong version, asking for equality in distribution: for any set \mathcal{I}\subset[0,1], the probability that the score is in \mathcal{I} (e.g. between 40% and 60%) should be the same in the two groups.
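In formulas, the weak version asks \mathbb{E}[m(\boldsymbol{X},S)\vert S=A]=\mathbb{E}[m(\boldsymbol{X},S)\vert S=B], while the strong version asks \mathbb{P}[m(\boldsymbol{X},S)\in\mathcal{I}\vert S=A]=\mathbb{P}[m(\boldsymbol{X},S)\in\mathcal{I}\vert S=B], for all \mathcal{I}\subset[0,1].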

Mathematically, we need a distance between the distributions of scores in the two groups. A popular distance is the Wasserstein distance, which is related to optimal transport.

The empirical version is perhaps easier to understand, where the mapping is based on a matching of individuals.
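To give an idea, here is a minimal R sketch (not a package implementation), where the empirical 1-Wasserstein distance between the two score distributions is obtained by matching empirical quantiles, scores_A and scores_B being hypothetical vectors of predicted scores,

wasserstein1 = function(scores_A, scores_B, n = 1e3){
  u = (1:n - .5) / n            # probability levels
  qA = quantile(scores_A, u)    # empirical quantiles, group A
  qB = quantile(scores_B, u)    # empirical quantiles, group B
  mean(abs(qA - qB))            # approximates the integral of |F_A^{-1} - F_B^{-1}|
}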

As a cultural sidenote, a couple of slides to explain why it has to do with “optimal transport”, going back to Monge (1781)‘s problem. It’s all about transporting sand, grain by grain, from the hole to the pile. Below, we have a (purely) random transport, which is not efficient at all…

and then the optimal version (for a strictly convex cost function): the leftmost grain in the hole goes to the leftmost part of the stack, etc.

Mitigation

For mitigation (once we have observed that there was discrimination, as discussed previously), heuristically, we want to be somewhere in between the two distributions of the two subgroups,

Being “in between” can be interpreted locally: for someone in group A, the fair prediction should be a weighted average (the weights being related to the proportions of the two groups) of the prediction as someone in group A, and of some sort of counterfactual in the other group, namely the prediction that person would have obtained had she been in group B, based on the same probability level,

For the other group, it is the opposite.
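As a minimal R sketch of that construction (with hypothetical inputs score and group, the latter taking values "A" and "B"), each prediction is replaced by a weighted average of the matched quantiles in the two groups, the weights being the group proportions,

repair = function(score, group){
  pA = mean(group == "A")                              # weight of group A
  FA = ecdf(score[group == "A"])                       # empirical cdf, group A
  FB = ecdf(score[group == "B"])                       # empirical cdf, group B
  u = ifelse(group == "A", FA(score), FB(score))       # probability level of each individual
  qA = as.numeric(quantile(score[group == "A"], u))    # own or counterfactual prediction in A
  qB = as.numeric(quantile(score[group == "B"], u))    # own or counterfactual prediction in B
  pA * qA + (1 - pA) * qB                              # barycentric, "in between" score
}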

Beyond demographic parity

If we get back to our COMPAS examples, demographic parity, in the standard classification-based definition, would be translated as

If we get back to the original motivation we gave, it had nothing to do with demographic parity: the first slide had to do with separation, or equalized odds, while the second one had to do with sufficiency, or calibration.

More generally, if we consider a weak version of the independence criteria, we have equality of moments within each protected subgroup,

Let us discuss calibration a bit more. Calibration is deeply related to the interpretation of the “probabilities” returned by models as “real probabilities”. In machine learning, it is hard to define properly what those “probabilities” are.

Calibration is related to the following idea, discussed above: if we consider all cases where the predicted probability was 40% (or, say, close to 40%), then the proportion of 1’s should be close to 40%.
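As a small sketch, with hypothetical vectors y (observed 0/1) and yp (predicted probabilities), it suffices to bin the predictions and compare averages,

bins = cut(yp, breaks = seq(0, 1, by = .1))     # bins of predicted probabilities
cbind(predicted = tapply(yp, bins, mean),       # average prediction per bin
      observed  = tapply(y,  bins, mean))       # observed proportion of 1's per bin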

To conclude that digression, I can mention the following example highlighting why we should be concerned about the probabilities returned by machine learning algorithms. Consider some pictures, generated by some algorithm, and more precisely, a flow of pictures, from a woman to a man.

Below, we can see the probabilities given by some online application that returns the probability of being a woman, given a picture. Can’t we agree that it is surprising that those probabilities (of being a woman) do not decrease continuously, from the picture in the top left corner to the one in the bottom right corner?

Finally, I can also mention “individual fairness”, or “counterfactual fairness”. Here also, optimal transport can be used, to quantify counterfactual unfairness. But I won’t be too long here.

Finally, an opening for next year’s agenda, with interpretability. Interpretability is a very important issue in actuarial science, which is not as objective as people might think, despite the popular motto

let the data speak for itself

In insurance, interpretation is very important, probably more important than model assumptions

Interpretation becomes a key concept when dealing with multiple sensitive attributes.

To conclude, just a final reminder that dealing with mitigation is a complex philosophical problem…

Tomorrow, we will discuss further at our workshop, in Québec City.

Workshop on fairness and discrimination in insurance (registration is open)

Almost two years ago, on May 13th 2022, we organized a Workshop on fairness and discrimination in insurance, JEDA’22, at Laval University (in Québec city), with Marie-Pier Côté.

It was a great success, with many attendees in person, at one of the first events post-pandemic. The second workshop (JEDA’24) will be organized in less than a month, on May 16th.

Registration is open! We will have in the room Fei Huang (UNSW Sydney), David Schraub (Chicago Actuarial Association), Marie-Ève Lainez (Autorité des marchés financiers), Laurence Barry (Chaire PARI), Agathe Fernandes Machado (UQÀM), Mallika Bender (Casualty Actuarial Society), Christopher Cooney (TD Insurance) and Olivier Côté (Université Laval).

Online Seminar Finance & Modeling, Centre d’Économie de la Sorbonne

In a week, I will give a talk at the Modélisation Financière seminar (“Online Seminar Finance & Modeling” according to the invitation) on Using optimal transport to mitigate unfair predictions. Slides are now online.

The insurance industry is heavily reliant on predictions of risks based on characteristics of potential customers. Although the use of said models is common, researchers have long pointed out that such practices perpetuate discrimination based on sensitive features such as gender or race. Given that such discrimination can often be attributed to historical data biases, an elimination or at least mitigation is desirable. With the shift from more traditional models to machine-learning based predictions, calls for greater mitigation have grown anew, as simply excluding sensitive variables in the pricing process can be shown to be ineffective. In this talk, we first investigate why predictions are a necessity within the industry and why correcting biases is not as straightforward as simply identifying a sensitive variable. We then propose to ease the biases through the use of Wasserstein barycenters instead of simple scaling. To demonstrate the effects and effectiveness of the approach we employ it on real data and discuss its implications. The talk will be based on recent work with François Hu and Philipp Ratz (2310.20508, 2309.06627, 2306.12912 and 2306.10155).

Fairness and discrimination, PhD Course, #7 Sensitive attributes and proxies

In our previous post, we discussed “group fairness“. I might have gone a bit fast, so I decided to add some material about sensitive attributes, and proxies.

Sensitive attributes?

Almost everywhere, we can find a list of variables that are considered, by law, as sensitive, since using them will lead to discrimination. As mentioned earlier, sensitive variables might change with time, and across regions…

Another issue with black boxes is that it might be hard to assess whether they rely on sensitive attributes. In order to classify or detect objects in pictures, algorithms might extract information that could be considered sensitive. First, recall the popular wolf-husky classifier, which actually detects snow in the background (since wolves appeared with snow in the training sample).

This can also be the case for health issues, where classifiers can be influenced by the color of the skin (or possibly some other unexpected information).

Racism

The first sensitive attribute is probably race, which has been discussed in insurance for decades.

One should keep in mind that race is social information and, most of the time, it is based on self-identification.

This leads to popular maps in the U.S.

Racism is usually related to “colourism” (discrimination based on skin tone)

Is it relevant in the context of insurance, and risk?

It has been observed that African Americans, in the U.S., were usually charged higher insurance premiums.

Keep in mind that discrimination has nothing to do with intention, as mentioned previously. An insurance pricing scheme can be racist, without any intention to be so. An important issue in quantifying that problem is actually being able to observe that variable.

Sexism

Sexism is another popular example of discrimination, related to sex, or gender.

Actuaries have been using life tables that are gender related for more than 300 years. And indeed, it seems that women live longer than men.

Ageism

Age is another possible sensitive attribute, but it is more complicated. First, it is not a “club” and second, it is (somehow) clearly related to risk.

In datasets, there can also be selection bias related to age. For instance, during the COVID pandemic, triage was based on the age of patients. Treatments and tests can be related to the age of patients. So this bias will probably have an impact on observed risks.

Genetics

Another important sensitive variable is related to “genetic information”.

Such information is usually classified as sensitive everywhere.

To conclude, I wanted to mention that several important variables considered as sensitive have not much to do with genetics, but more with a social construction.

Finally, let us discuss proxies that can be related to those sensitive variables.

Names and language

The first one was discussed in the introduction: names contain information about race and ethnic origin,

Text and discussion can also reveal sensitive information.

Pictures

Pictures can also provide information. That was discussed 150 years ago, when researchers tried to identify criminals using solely pictures.

Some insurers have, at some point, tried to detect diseases from facial pictures. And it is possible to infer information from pictures, possibly the age, and the gender.

One can also use satellite pictures, or pictures from Google Street View, to infer, for instance, the wealth of a neighborhood. And possibly sensitive information, such as the presence of an access ramp for disabled people.

Credit Scoring

Credit scoring is also a variable used by insurers that can be related to variables considered as sensitive.

Clearly, a bad credit score will have a big impact not only on mortgages and loans,

but also on insurance rates! As we explained here, it costs a lot to be poor.

Networks

Finally, insurers can use information related to friends, or family, to assess the risk. And network data capture a lot of sensitive information.

We will talk a little bit about networks, to explain why using your friends’ risks to assess your own risk might not be a great idea…

It is an extension of the friendship paradox.
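A toy simulation (assuming the igraph package) illustrates the paradox, namely that on average, your friends have more friends than you do,

library(igraph)
set.seed(1)
G = sample_gnp(1000, p = .01)            # random graph on 1,000 nodes
deg = degree(G)
mean(deg)                                # average number of friends
nb = unlist(adjacent_vertices(G, V(G)))  # everyone's friends, pooled
mean(deg[nb])                            # average number of friends of friends (larger)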

Proxies

Finally, we will conclude by showing that removing a sensitive attribute from a training dataset will not mitigate discrimination.

Discrimination by proxy (a real case study)

Yesterday, with Laurence Barry, we posted a blog post “Who benefits from data sharing?”, explaining why data sharing, in insurance, could end mutualization. Actually, it can also be bad in the context of discrimination. Consider here the same dataset, with claim occurrence, in a real insurance portfolio,

library(InsurFair)
library(randomForest)

Consider a version of this dataset without the gender, and use variable importance to get a list of variables we can use in a predictive model

subfrenchmotor = frenchmotor[, -which(names(frenchmotor) == "sensitive")]
RF = randomForest(y ~ ., data = subfrenchmotor)
vi = varImpPlot(RF, sort = TRUE)

We sort variables based on variable importance (the first one being the “most important” one), and add splines for three continuous variables

dfvi = data.frame(nom = names(subfrenchmotor)[-15], g = as.numeric(vi))
dfvi = dfvi[rev(order(dfvi$g)),]   # sort by decreasing importance
nom = dfvi$nom
nom[1] = "bs(LicAge)"       # spline for licence age
nom[3] = "bs(DrivAge)"      # spline for driver age
nom[7] = "bs(BonusMalus)"   # spline for bonus-malus

Then, the idea is simple: at stage k, we keep the k most important variables, and run a logistic regression on those k variables. Again, I should stress that the gender of the driver is not among those k variables. Then, we compute the average predicted claim frequency, for men and for women.

n = nrow(subfrenchmotor)
library(splines)
idx_F = which(frenchmotor$sensitive == "Female")
idx_M = which(frenchmotor$sensitive == "Male")
metric_gender = function(k = 3){
  if(k == 0){
    reg = glm(y ~ 1, family = binomial, data = subfrenchmotor)
  } else {
    vr = paste(nom[1:k], collapse = " + ")   # k most important variables
    fm = paste("y ~ ", vr, sep = "")
    reg = glm(fm, family = binomial, data = subfrenchmotor)
  }
  yp = predict(reg, type = "response")
  yp_F = yp[idx_F]                           # predictions for women
  yp_M = yp[idx_M]                           # predictions for men
  sortie = c(mean(yp_F), mean(yp_M),
             quantile(yp_F, c(.1, .9)), quantile(yp_M, c(.1, .9)))
  names(sortie)[1:2] = c("mean_F", "mean_M")
  sortie
}

Let us now compute it for all variables

N = 0:15
M = Vectorize(metric_gender)(N)

and plot it

COLORS = c("grey", "darkred", "darkblue")  # color palette (hypothetical; not defined in the original post)
plot(N, M[1,]*100, xlab = "Number of predictive variables (without gender)",
     ylab = "Average predicted claims frequency (%)",
     type = "b", pch = 19, col = COLORS[2], ylim = c(8.12, 9))
lines(N, M[2,]*100, type = "b", pch = 15, col = COLORS[3])

Interestingly, we can clearly see that with 15 explanatory variables, even if our model is gender-blind (since gender is not in the training dataset), our model reproduces the difference we can observe in the dataset: the annual claim frequency is almost 9% for men and 8.2% for women.

Actually, it is not possible to predict the gender from our 15 variables (below is the ROC curve of the logistic regression that predicts the gender)

library(ROCR)   # for prediction() and performance()
metric_gender_2 = function(k = 3){
  if(k == 0){
    reg = glm((sensitive == "Female") ~ 1, family = binomial, data = frenchmotor)
  } else {
    vr = paste(nom[1:k], collapse = " + ")
    fm_genre = paste('(sensitive == "Female") ~ ', vr, sep = "")
    reg = glm(fm_genre, family = binomial, data = frenchmotor)
  }
  pred = prediction(predict(reg, type = "response"), (frenchmotor$sensitive == "Female"))
  performance(pred, "tpr", "fpr")   # ROC curve
}
plot(metric_gender_2(15))

but still, when using 15 variables, we obtain discrimination in our portfolio, since the average predictions for men and women are significantly different (even if our models are, per se, gender-blind).

Fairness and discrimination, PhD Course, #3 Machine Learning, losses and distances

For the third course, we will get back a little bit on machine learning (slides are still online on the github repository). The starting point will be loss functions and risk.

Loss functions and risk

A general definition of a loss is that it is positive, and null when we consider \ell(y,y). As we will discuss further, it is neither a distance, nor a dissimilarity measure.

Then, define the empirical risk (and the associated empirical risk minimization principle, as coined in Vapnik (1991))
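For the record, given a sample \{(y_i,\boldsymbol{x}_i)\}_{i=1,\cdots,n}, the empirical risk of a model m is \widehat{\mathcal{R}}_n(m)=\frac{1}{n}\sum_{i=1}^n\ell(y_i,m(\boldsymbol{x}_i)), and the empirical risk minimization principle suggests to use \widehat{m}\in\text{argmin}_{m\in\mathcal{M}}\big\{\widehat{\mathcal{R}}_n(m)\big\}, for some class \mathcal{M} of models.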

Given a loss \ell and some probabilistic space, define the optimal decision, also called Bayes decision rule

And instead of the risk of a model, define the excess of risk.

A classical loss for a classifier is \ell_{0/1},

In that case, Bayes decision rule is m^\star(\boldsymbol{x}) = \boldsymbol{1}(\mu(\boldsymbol{x})>1/2) =\begin{cases}1 \text{ if }\mu(\boldsymbol{x})>1/2\\0 \text{ if }\mu(\boldsymbol{x})\leq1/2\\\end{cases} where (of course) one needs to know \mu; otherwise, we can consider some plug-in estimator based on \widehat\mu. For continuous variables y, consider the quadratic loss \ell_2,

In that case, Bayes decision rule (the optimal model) is the conditional expectation

Observe that we can also define the quantile loss (or the expectile loss)
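For the record, the quantile loss at level \tau\in(0,1) is \ell_\tau(y,\widehat{y})=(\tau-\boldsymbol{1}(y<\widehat{y}))(y-\widehat{y}), which is proportional to the absolute loss \vert y-\widehat{y}\vert when \tau=1/2, and whose minimizer is the conditional quantile of level \tau, instead of the conditional expectation.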

Observe that this loss is not symmetric.

From loss functions to distances

Let us discuss a bit more the fact that losses are not distances. As mentioned, a loss is neither necessarily symmetric nor separable,

But furthermore, it has no reason to satisfy the triangle inequality. Actually, if d is a distance, it is very likely that d^2 is not (since squaring is not a subadditive transformation): on the real line, with d(x,y)=\vert x-y\vert, we have d(0,2)^2=4, which exceeds d(0,1)^2+d(1,2)^2=2.

Another related concept could be the concept of similarity, or dissimilarity.

Another one is the concept of divergence, that we will use much more. For instance, the Bregman divergence associated with a strictly convex function \phi is B_\phi(y,y')=\phi(y)-\phi(y')-\langle\nabla\phi(y'),y-y'\rangle,

which satisfies desirable properties (it is positive, and null when y=y').

Interestingly, it is possible to define “projections” even if we have neither an orthogonal projection (since there is no inner product, hence no concept of orthogonality), nor a distance. But still

One can use a nice algorithm to estimate that quantity, if the convex set can be expressed simply

When considering “distances” between distributions, instead of between y‘s, among other interesting properties in statistics, we can mention the one of unbiased gradients,

and Müller (1997) defined integral probability metrics

Standard “distances” between distributions

The first one will be the Hellinger distance, H(P,Q)^2=\frac{1}{2}\int\big(\sqrt{f_P}-\sqrt{f_Q}\big)^2d\mu,

which can lead to simple expressions for standard parametric distributions, such as Beta distributions,

or (multivariate) Gaussian ones

We can also mention the Pearson divergence, \chi^2(P\|Q)=\int\Big(\frac{f_P}{f_Q}-1\Big)^2f_Q\,d\mu

More interesting (and popular in probability theory), the total variation distance, d_{TV}(P,Q)=\sup_{\mathcal{A}}\vert P(\mathcal{A})-Q(\mathcal{A})\vert.

There are several ways to express that distance.

If instead of general sets \mathcal{A} we consider half-lines (-\infty,t], we obtain the Kolmogorov distance (or Kolmogorov-Smirnov distance), \sup_t\vert F_P(t)-F_Q(t)\vert.

Another important one in statistics is the Kullback–Leibler divergence, KL(P\|Q)=\int\log\Big(\frac{f_P}{f_Q}\Big)f_P\,d\mu

For instance, with Gaussian vectors

Observe that this divergence is not symmetric: it is actually a dissimilarity measure, not a distance.

If we want a symmetric version, we can consider the Jeffreys divergence, KL(P\|Q)+KL(Q\|P),

or the Jensen–Shannon divergence, \frac{1}{2}KL(P\|M)+\frac{1}{2}KL(Q\|M), where M=\frac{1}{2}(P+Q) is the mixture of the two distributions.

Finally, we will mention f-divergences, D_f(P\|Q)=\int f\Big(\frac{f_P}{f_Q}\Big)f_Q\,d\mu for some convex function f with f(1)=0,

and the Rényi divergence, D_\alpha(P\|Q)=\frac{1}{\alpha-1}\log\int f_P^{\alpha}f_Q^{1-\alpha}d\mu.

We will discuss a little bit more those “distances” (yes, I usually use that term, abusively), and next week, we will present the most interesting one, the Wasserstein distance.

Fairness and discrimination, PhD Course, #2 Insurance and risk classes

For the second course, we will get back a little bit to insurance pricing in a context of heterogeneous portfolios, and risk classification (slides are still online on the github repository). The starting point will be the pure premium.

See our online textbook, with Michel Denuit, Non Life Insurance Mathematics, for additional motivation. If we have some risk related variables \boldsymbol{x}=(x_1,\cdots,x_k), the pure premium will be the conditional expectation,

Here also, we have some law of large numbers for the conditional expected value,

This relationship, which defines the conditional expected value as the limiting value of a conditional frequency, cannot be used to properly define \mathbb{P}[Y\vert\boldsymbol{X}=\boldsymbol{x}] and \mathbb{E}[Y\vert\boldsymbol{X}=\boldsymbol{x}]. One can consider a limit, \mathbb{P}\big(Y\in \mathcal{A}\big\vert X = x\big)=\lim_{\epsilon\to0}\frac{\mathbb{P}(\{Y\in \mathcal{A}\}\cap\{|X -x|\leq \epsilon\})}{\mathbb{P}(\{|X -x|\leq \epsilon\})} or \mathbb{P}\big(Y\in \mathcal{A}\big\vert X = x\big)=\lim_{\epsilon\to0}\mathbb{P}\big(Y\in \mathcal{A}\big\vert |X -x|\leq \epsilon\big), as in the law of the unconscious statistician, or as Proschan and Presnell (1998) wrote it

statisticians make liberal use of conditioning arguments to shorten what would otherwise be long proofs

We can now compute conditional frequencies, given some risk characteristics, for some quantity of interest y, such as the age at death, in life insurance contracts.

Demographic risk and heterogeneity

First, we will see some gender-based life tables, starting with the one obtained by Nicolaas Struyck (see e.g. Alberts et al. (2014))

More recently, in France, some wealth-based life tables were obtained, with various quantiles

And finally, we will see some life tables obtained 50 years ago in the US, with racial distinction

Mean and variance decomposition

About pure premiums, an important property is the law of total expectation, and a desirable property, that we will name the “balance property”
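In formulas, the law of total expectation is \mathbb{E}[\mathbb{E}[Y\vert\boldsymbol{X}]]=\mathbb{E}[Y], while the balance property states that, at the portfolio level, \sum_i\widehat{y}_i=\sum_i y_i, i.e. the sum of the (pure) premiums should equal the sum of the losses (a property automatically satisfied, in-sample, by GLMs with canonical link).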

We will also mention variance, and the variance decomposition, depending on whether we take heterogeneity into account, or not. With homogeneous pricing, we have

If we use the “true” underlying risk factor, \Theta, we have the standard variance decomposition, also called law of total variance,

i.e. \text{Var}[Y]=\mathbb{E}[\text{Var}[Y\vert\Theta]]+\text{Var}[\mathbb{E}[Y\vert\Theta]]

And finally, if we do not observe \Theta, but we have a collection of covariates, \boldsymbol{X}=(X_1,\cdots,X_k), the same decomposition holds with \Theta replaced by \boldsymbol{X}, \text{Var}[Y]=\mathbb{E}[\text{Var}[Y\vert\boldsymbol{X}]]+\text{Var}[\mathbb{E}[Y\vert\boldsymbol{X}]].

Some historical perspectives

In the textbook, Insurance, Biases, Discrimination and Fairness, I have several paragraphs with an historical perspective, starting with insurance as clubs, without segmentation. Then segmentation started, with risk classes and groups. For example, according to Issues And Needed Improvements In State Regulation Of The Insurance Business, by Harry Havens, in 1979,

The price which a person pays for automobile insurance depends on age, sex, marital status, place of residence and other factors. This risk classification system produces widely differing prices for the same coverage for different people. Questions have been raised about the fairness of this system, and especially about its reliability as a predictor of risk for a particular individual. While we have not tried to judge the propriety of these groupings, and the resulting price differences, we believe that the questions about them warrant careful consideration by the State insurance departments. In most States the authority to examine classification plans is based on the requirement that insurance rates are neither inadequate, excessive, nor unfairly discriminatory. The only criterion for approving classifications in most States is that the classifications be statistically justified — that is, that they reasonably reflect loss experience. Relative rates with respect to age, sex, and marital status are based on the analysis of national data. A youthful male driver, for example, is charged twice as much as an older driver all over the country (…) [I]t has also been claimed that insurance companies engage in redlining – the arbitrary denial of insurance to everyone living in a particular neighborhood. Community groups and others have complained that State regulators have not been diligent in preventing redlining and other forms of improper discrimination that make insurance unavailable in certain areas. In addition to outright refusals to insure, geographic discrimination can include such practices as: selective placement of agents to reduce business in some areas, terminating agents and not renewing their book of business, pricing insurance at un-affordable levels, and instructing agents to avoid certain areas. We reviewed what the State insurance departments were doing in response to these problems. To determine if redlining exists, it is necessary to collect data on a geographic basis. Such data should include current insurance policies, new policies being written, cancellations, and non-renewals. It is also important to examine data on losses by neighborhoods within existing rating territories because marked discrepancies within territories would cast doubt on the validity of territorial boundaries. Yet, not even a fifth of the States collect anything other than loss data, and that data is gathered on a territory-wide basis.

According to The Role of Risk Classification in Property and Casualty Insurance: A Study of the Risk Assessment Process: Final Report, by Barbara Casey, Jacques Pezier and Carl Spetzler, in 1976,

On the other hand, the opinion that distinctions based on sex, or any other group variable, necessarily violate individual rights reflects ignorance of the basic rules of logical inference in that it would arbitrarily forbid the use of relevant information. It would be equally fallacious to reject a classification system based on socially acceptable variables because the results appear discriminatory. For example, a classification system may be built on use of car, mileage, merit rating, and other variables, excluding sex. However, when verifying the average rates according to sex one may discover significant differences between males and females. Refusing to allow such differences would be attempting to distort reality by choosing to be selectively blind. The use of rating territories is a case in point. Geographical divisions, however designed, are often correlated with socio-demographic factors such as income level and race because of natural aggregation or forced segregation according to these factors. Again we conclude that insurance companies should be free to delineate territories and assess territorial differences as well as they can. At the same time, insurance companies should recognize that it is in their best interest to be objective and use clearly relevant factors to define territories lest they be accused of invidious discrimination by the public. (…) One possible standard does exist for exception to the counsel that particular rating variables should not be proscribed. What we have called ‘equal treatment’ standard of fairness may precipitate a societal decision that the process of differentiating among individuals on the basis of certain variables is discriminatory and intolerable. This type of decision should be made on a specific, statutory basis. Once taken, it must be adhered to in private and public transactions alike and enforced by the insurance regulator. This is, in effect, a standard for conduct that by design transcends and preempts economic considerations. Because it is not applied without economic cost, however, insurance regulators and the industry should participate in and inform legislative deliberations that would ban the use of particular rating variables as discriminatory.

And then, more recently, we started to talk about personalization, as in Barry and Charpentier (2020). And next week, we will start talking about predictive modeling, and machine learning.

Fairness and discrimination, PhD Course, #1 Motivation

This week, we will start our MAT998P course, in Montréal, entitled “équité et discrimination des modèles prédictifs“ (fairness and discrimination of predictive models). It will mainly be based on the forthcoming textbook,

I can also mention the R package

> library(devtools)
> install_github("freakonometrics/InsurFair")

And because it is the first course, I will start with some motivations this week… First of all, let me recall a definition, from Schauer (2006)

To be an actuary is to be a specialist in generalization, and actuaries engage in a form of decision making that is sometimes called actuarial. Actuaries guide insurance companies in making decisions about large categories that have the effect of attributing to the entire category certain characteristics that are probabilistically indicated by membership in the category, but that still may not be possessed by a particular member of the category.

Motivation #1 Redlining

In 1937, the HOLC (Home Owners’ Loan Corporation) produced the following map of Philadelphia, related to “residential security”

These maps were related to the concept of “redlining”. According to the Merriam-Webster dictionary,

to redline is (1) to withhold home-loan funds or insurance from neighborhoods considered poor economic risks; (2) to discriminate against in housing or insurance.

On the (fictitious) maps below, we have three variables plotted:

  • some red and green areas (risky vs. non-risky)
  • some unsanitary index (on a 0-100 scale)
  • the proportion of Black inhabitants per neighborhood

In an insurance context, risky areas (with a higher premium) should be correlated with the unsanitary index (or any risk-related variable), and those variables are legitimate predictive variables. But they can also be related to less legitimate variables, that could be racial, here. The challenge is that a lot of variables are correlated…

I could mention here that, for Glenn (2000), the insurer’s risk selection process has two sides:

  • the one presented to regulators and policyholders (numbers, statistics and objectivity),
  • the other presented to underwriters (stories, character and subjective judgment).

The rhetoric of insurance exclusion – numbers, objectivity and statistics – forms what Brian Glenn calls

the myth of the actuary (…) a powerful rhetorical situation in which decisions appear to be based on objectively determined criteria when they are also largely based on subjective ones (…) or the subjective nature of a seemingly objective process.

Glenn (2003) claimed that there are many ways to rate accurately. Insurers can rate risks in many different ways depending on the stories they tell about which characteristics are important and which are not.

The fact that the selection of risk factors is subjective and contingent upon narratives of risk and responsibility has in the past played a far larger role than whether or not someone with a wood stove is charged higher premiums (…) virtually every aspect of the insurance industry is predicated on stories first and then numbers

Motivation #2. “Gender directive”, 2004/113/EC

From the Treaty on European Union (26.10.2012)

Art. 2 The Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities. These values are common to the Member States in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail.

Art. 3 (…) It shall combat social exclusion and discrimination, and shall promote social justice and protection, equality between women and men, solidarity between generations and protection of the rights of the child.

from the Charter of Fundamental Rights of the European Union (18.12.2000)

Art. 21 (Non discrimination): Any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited.

Art. 23 (Equality between men and women) Equality between men and women must be ensured in all areas, including employment, work and pay. The principle of equality shall not prevent the maintenance or adoption of measures providing for specific advantages in favour of the under-represented sex.

and from the EU Directive 2004/113/EC (2004 version)

Art. 5 (Actuarial factors)

1. Member States shall ensure that in all new contracts concluded after 21 December 2007 at the latest, the use of sex as a factor in the calculation of premiums and benefits for the purposes of insurance and related financial services shall not result in differences in individuals’ premiums and benefits.

2. Notwithstanding paragraph 1, Member States may decide before 21 December 2007 to permit proportionate differences in individuals’ premiums and benefits where the use of sex is a determining factor in the assessment of risk based on relevant and accurate actuarial and statistical data. The Member States concerned shall inform the Commission and ensure that accurate data relevant to the use of sex as a determining actuarial factor are compiled, published and regularly updated.

There was initially (2004) an opt-out clause (Article 5(2)): where gender is a determining factor in the assessment of risk, based on relevant and accurate actuarial and statistical data, proportionate differences in individual premiums or benefits were allowed.

In March 2011, the European Court of Justice issued its judgment in the “Test-Achats case”. The ECJ ruled that Article 5(2) was invalid. Thus, insurers were no longer able to use gender as a risk factor when pricing policies.

Other legal documents in Europe can be mentioned, such as the “Ten Oever” judgment (Gerardus Cornelis Ten Oever v Stichting Bedrijfspensioenfonds voor het Glazenwassers- en Schoonmaakbedrijf). In April 1993, Advocate General Van Gerven argued that (see De Baere (2012))

the fact that women generally live longer than men has no significance at all for the life expectancy of a specific individual and it is not acceptable for an individual to be penalized on account of assumptions which are not certain to be true in his specific case,

which could be related to the concept of “injustice by generalization”.

Motivation #3. Colorado (September 27, 2023)

On September 27, 2023, the Colorado Division of Insurance released a proposed regulation entitled Concerning Quantitative Testing of External Consumer Data and Information Sources, Algorithms, and Predictive Models Used for Life Insurance Underwriting for Unfairly Discriminatory Outcomes.

Section 4 (Definitions) Bayesian Improved First Name Surname Geocoding, or “BIFSG” means, for the purposes of this regulation, the statistical methodology developed by the RAND corporation for estimating race and ethnicity.

External Consumer Data and Information Source, or “ECDIS” means, for the purposes of this regulation, a data source or an information source that is used by a life insurer to supplement or supplant traditional underwriting factors. This term includes credit scores, credit history, social media habits, purchasing habits, home ownership, educational attainment, licensures, civil judgments, court records, occupation that does not have a direct relationship to mortality, morbidity or longevity risk, consumer-generated Internet of Things data, biometric data, and any insurance risk scores derived by the insurer or third-party from the above listed or similar data and/or information source.

Then we have different sections, where insurers are asked to “estimate” the race or ethnicity of policyholders

Section 5 (Estimating Race and Ethnicity) : Insurers shall estimate the race or ethnicity of all proposed insureds that have applied for coverage on or after the insurer’s initial adoption of the use of ECDIS, or algorithms and predictive models that use ECDIS, including a third party acting on behalf of the insurer that used ECDIS, or algorithms and predictive models that used ECDIS, in the underwriting decision-making process, by utilizing:

1. BIFSG and the insureds’ or proposed insureds’ name and geolocation information included in the application(s) for life insurance shall be used to estimate the race and ethnicity of each insured or proposed insured.

2. For the purposes of BIFSG, the following racial and ethnic categories shall be used: Hispanic, Black, Asian Pacific Islander (API), and White.

Section 6 (Application Approval Decision Testing Requirements) : Using the BIFSG estimated race and ethnicity of proposed insureds and the following methodology, insurers shall calculate whether Hispanic, Black, and API proposed insureds are disapproved at a statistically significant different rate relative to White applicants for whom the insurer, or a third party acting on behalf of the insurer, used ECDIS, or an algorithm or predictive model that used ECDIS, in the underwriting decision-making process.

1. Logistic regression shall be used to model the binary underwriting outcome of either approved or denied.

2. The following factors may be accounted for as control variables in the regression model: policy type, face amount, age, gender, and tobacco use.

3. The estimated race or ethnicity of the proposed insureds shall be accounted for by including Hispanic, Black, and Asian Pacific Islander (API) as separate dummy variables in the regression model.

4. Determine if there is a statistically significant difference in approval rates for each BIFSG estimated race or ethnicity variable as indicated by a p-value of less than .05.

a. If there is not a statistically significant difference in approval rates, no further testing is required.

b. If there is a statistically significant difference in approval rates, the insurer shall determine whether the difference in approval rates is five (5) percentage points or greater as indicated by the marginal effects value of each BIFSG estimated race or ethnicity variable. (…)

or

Section 7 (Premium Rate Testing Requirements) : Using the insureds’ BIFSG estimated race and ethnicity, insurers shall determine if there is a statistically significant difference in the premium rate per $1,000 of face amount for policies issued to Hispanic, Black, and API insureds relative to White insureds for whom the insurer, or a third party acting on behalf of the insurer, used ECDIS, or an algorithm or predictive model that used ECDIS, in the underwriting decision-making process.

1. Linear regression shall be used to model the continuous numerical outcome of premium rate per $1,000 of face amount.

2. The following factors may be accounted for as control variables in the regression model: policy type, face amount, age, gender, and tobacco use.

3. The estimated race or ethnicity of the proposed insureds shall be accounted for by including Hispanic, Black, and Asian Pacific Islander (API) as separate dummy variables in the regression model.

4. Determine if there is a statistically significant difference in the premium rate per $1,000 of face amount for each BIFSG estimated race or ethnicity variable as indicated by a p-value of less than .05.

a. If there is not a statistically significant difference in premium rate per $1,000 of face amount, no further testing is required.

b. If there is a statistically significant difference in premium rate per $1,000 of face amount, determine whether the premium rate per $1,000 of face amount is at least 5% more than the average premium rate per $1,000 for all policies.

i. If the difference in premium rate per $1,000 of face amount is less than 5%, no further testing is required.

ii. If the difference in premium rate per $1,000 of face amount is 5% or greater, further testing is required as described in Section 8.

(etc.) To illustrate, we can use some data from the region of Atlanta,

We can change people’s first and last names (and keep other relevant information, including the ZIP code) and compare “predictions” of race (white, black, hispanic, asian, etc.)
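To give an idea of the mechanics (a hypothetical sketch, not the methodology mandated by the regulation, and the API may differ across versions), the wru R package implements Bayesian Improved Surname Geocoding, close in spirit to BIFSG,

library(wru)
people = data.frame(surname = c("Smith", "Garcia", "Nguyen"))  # hypothetical individuals
predict_race(voter.file = people, surname.only = TRUE)
# returns posterior probabilities (pred.whi, pred.bla, pred.his, pred.asi, ...)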

Motivation #4. Motor Insurance in the U.S.

In the context of motor insurance in the U.S., recall that legal restrictions are per state, and we can observe some diversity about what “sensitive” could mean (via thezebra)

(etc.) We will also discuss Avraham et al. (2013), which provides a long discussion across US states.

Motivation #5. Graduate Admission (UC Berkeley)

Another motivation is the popular article by Bickel, Hammel, and O’Connell (1975).

The dataset mentioned in the article is the following

the bias in the aggregated data stems not from any pattern of discrimination on the part of admissions committees, which seems quite fair on the whole, but apparently from prior screening at earlier levels of the educational system. Women are shunted by their socialization and education toward fields of graduate study that are generally more crowded, less productive of completed degrees, and less well funded, and that frequently offer poorer professional employment prospects

As we can see, if we formalize, we have (almost) \mathbb{P}[\text{admitted}\vert\text{woman}]<\mathbb{P}[\text{admitted}\vert\text{man}] overall, while \mathbb{P}[\text{admitted}\vert\text{woman},\text{dept}]\geq\mathbb{P}[\text{admitted}\vert\text{man},\text{dept}] within (almost) every department.

This is Simpson’s paradox. Another simple example is related to mortality: the (overall) mortality rate for women (picked at random in the entire population) was 0.812% in Costa Rica, lower than 0.929% in Sweden. But as we can see on the left, below, at any age, mortality rates are lower in Sweden than in Costa Rica.

The paradox can easily be explained if we look at age structures in both countries. Long story short, in Costa Rica, picking someone randomly means that the person is very likely to be (very) young, with a low mortality rate; in Sweden, the person is more likely to be older, with a higher mortality rate.
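With toy numbers (hypothetical rates and age structures), the arithmetic is straightforward,

rate_CR = c(young = .0010, old = .050)  # age-specific mortality rates, Costa Rica
rate_SW = c(young = .0009, old = .045)  # lower at every age, in Sweden
w_CR = c(young = .8, old = .2)          # Costa Rica, mostly young
w_SW = c(young = .4, old = .6)          # Sweden, mostly old
sum(w_CR * rate_CR)                     # overall rate, Costa Rica: 1.08%
sum(w_SW * rate_SW)                     # overall rate, Sweden: 2.74%, higher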

Motivation #6. ProPublica, Actuarial Justice

We will also mention actuarial justice, and Angwin et al. (2016)

Hence, looking at the same data, with different perspectives, could lead to different conclusions. More robust conclusions can be obtained when looking at distributions of scores (instead of simple binary predictions),

and we can also consider temporal processes (again, instead of simply binary variables, with temporal censoring).

Motivation #7. Insurance in Québec

Two final motivations, translated from French this time. In Québec, there is the Charte des droits et libertés de la personne (C-12), with a very clear definition of what “discrimination” means,

Art. 10 Every person has a right to full and equal recognition and exercise of his human rights and freedoms, without distinction, exclusion or preference based on race, colour, sex, gender identity or expression, pregnancy, sexual orientation, civil status, age except as provided by law, religion, political convictions, language, ethnic or national origin, social condition, a handicap or the use of any means to palliate a handicap.

Discrimination exists where such a distinction, exclusion or preference has the effect of nullifying or impairing such right.

But, interestingly, insurers can almost do anything they want,

Art. 20.1 In an insurance or pension contract, a social benefits plan, a retirement, pension or insurance plan, or a public pension or public insurance plan, a distinction, exclusion or preference based on age, sex or civil status is deemed non-discriminatory where the use thereof is warranted and the basis therefor is a risk determination factor based on actuarial data.

Motivation #8. Intention

And finally, I can mention that in many countries (such as France), “indirect discrimination” is also considered discriminatory, so “intention” has nothing to do with the problem… The Loi no 2008-496 du 27 mai 2008 states that

Art. 1 Indirect discrimination is any provision, criterion or practice that is neutral in appearance, but liable to entail, for one of the grounds mentioned in the first paragraph, a particular disadvantage for some persons compared with other persons, unless that provision, criterion or practice is objectively justified by a legitimate aim and the means of achieving that aim are necessary and appropriate.

This law is an extension of Loi no. 72-546 du 1er juillet 1972, which abolished the requirement for specific intent.

Again, following Avraham (2017), keep in mind that insurance is very specific, regarding discrimination

What is unique about insurance is that even statistical discrimination which by definition is absent of any malicious intentions, poses significant moral and legal challenges. Why? Because on the one hand, policy makers would like insurers to treat their insureds equally, without discriminating based on race, gender, age, or other characteristics, even if it makes statistical sense to discriminate (…) On the other hand, at the core of insurance business lies discrimination between risky and non-risky insureds. But riskiness often statistically correlates with the same characteristics policy makers would like to prohibit insurers from taking into account.

That will be the topic of the course…