Last fall, with Patrick Thourot and Gilles Bénéplanc, we were invited to talk about the Manuel d’Assurance with Annie’s team, which is in charge of training within the Séroni group, and News Assurances Pro.
Monthly Archives: February 2024
Fairness and discrimination, PhD Course, #8 Individual fairness
After our post on “group fairness”, it’s time to discuss so-called “individual fairness”.
Similarity
The first idea is discussed in Dwork et al. (2012)
our approach is centered around the notion of a task-specific similarity metric describing the extent to which pairs of individuals should be regarded as similar for the classification task at hand. The similarity metric expresses ground truth. When ground truth is unavailable, the metric may reflect the “best” available approximation as agreed upon by society. Following established tradition – Rawls (1971) – the metric is assumed to be public and open to discussion and continual refinement. Indeed, we envision that, typically, the distance metric would be externally imposed, for example, by a regulatory body or externally proposed by a civil rights organization
or
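Either way, the formal requirement in Dwork et al. (2012) is a Lipschitz-type condition: writing M for the (randomized) classifier, d for the task-specific similarity metric on individuals and D for a distance between output distributions, individual fairness roughly requires that

D\big(M(x),M(y)\big) \leq d(x,y), \quad \text{for all individuals } x, y,

so that two individuals deemed similar by d receive (distributions over) outcomes that are close.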
Counterfactual fairness
The second one is related to causal inference. Ensuring fairness using causal methods will produce “counterfactual fairness” (to use the term introduced in Kusner et al. (2017)), based on the idea that a decision is fair towards an individual if the outcome is the same in reality as it would be in a ‘counterfactual’ world, in which the individual belongs to the other group (with respect to the sensitive attribute).
Quite naturally, we should compare potential outcomes, either globally (the average treatment effect) or through a local version, conditional on the characteristics \boldsymbol{x} of an individual.
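In the standard potential outcomes notation (where Y(1) and Y(0) denote the outcome when the sensitive attribute is set to one group or the other), the global comparison is the average treatment effect, while the local one conditions on the characteristics of the individual,

\text{ATE} = \mathbb{E}\big[Y(1)-Y(0)\big] \quad\text{and}\quad \text{CATE}(\boldsymbol{x}) = \mathbb{E}\big[Y(1)-Y(0)\,\big|\,\boldsymbol{X}=\boldsymbol{x}\big],

the latter being the relevant quantity for individual fairness.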
Based on causal graphs (discussed previously) we can define several notions of individual fairness.
Hence, it is possible to use the approach of Plečko et al. (2021), based on transport and quantile regressions,
To illustrate, we can consider some causal graph on our toy dataset
and then, on some specific individuals in the dataset
Here, we can also get a counterfactual version of all individuals with one-to-one matching, and optimal transport
i.e.
and we can get a counterfactual version, and possibly, a different prediction, using the fairadapt R package
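As a minimal sketch of how this could look (assuming a hypothetical toy dataset toydata with a binary sensitive attribute S, two covariates X1 and X2, and an outcome y; the adjacency matrix below encodes an assumed causal graph), the fairadapt call could be

library(fairadapt)
# adjacency matrix of the assumed causal graph: S -> X1 -> y, S -> y, X2 -> y
vars = c("S", "X1", "X2", "y")
adj = matrix(0, 4, 4, dimnames = list(vars, vars))
adj["S", c("X1", "y")] = 1
adj["X1", "y"] = 1
adj["X2", "y"] = 1
# adapt the training data: descendants of S are mapped to their counterfactual values
fa = fairadapt(y ~ ., train.data = toydata, adj.mat = adj, prot.attr = "S")
adapted = adaptedData(fa)

and any predictive model fitted on the adapted data will then produce the (possibly different) counterfactual predictions.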
We can also consider the German credit dataset
or the causal graph used in Watson et al. (2021),
Then, those techniques can be used to compare the predictions for 6 fictitious individuals,
TD General Insurance Pricing Seminar
Tomorrow, I will give a talk at the TD General Insurance Pricing Seminar, on fairness and ethics in insurance. Slides are now online.
After a very general (and long) introduction, to motivate our recent work on discrimination, I will try to explain how to quantify possible discrimination (with respect to a binary sensitive attribute), using the Wasserstein distance and optimal transport
and the use of Wasserstein Barycenter to mitigate discrimination
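To give a rough idea of the computation, here is a sketch on simulated scores (the transport package and the two rbeta samples below are just placeholders for the predicted scores of the two groups, not real data),

library(transport)
set.seed(1)
# hypothetical predicted scores for the two groups
score_A = rbeta(1000, 2, 20)
score_B = rbeta(1500, 3, 20)
# Wasserstein distance between the two score distributions
wasserstein1d(score_A, score_B, p = 2)
# in dimension 1, the Wasserstein barycenter is the weighted average of the quantile functions
w = length(score_A) / (length(score_A) + length(score_B))
u = (1:99) / 100
bary_q = w * quantile(score_A, u) + (1 - w) * quantile(score_B, u)
# mitigation: map each score to the barycenter quantile of its within-group rank
fair_A = approx(u, bary_q, xout = rank(score_A) / (length(score_A) + 1), rule = 2)$y
fair_B = approx(u, bary_q, xout = rank(score_B) / (length(score_B) + 1), rule = 2)$y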
I will also mention our workshop in May, at Laval University,
History of insurance (a short review)
For the latest issue of the journal Risques (in its new format), I wrote a short review of Nicolas Da Silva’s book, La Bataille de la Sécu. I speak of the history of insurance because that is how I read it…
Talk at the 38th Annual AAAI Conference on Artificial Intelligence, in Vancouver
This week, François is in Vancouver, at the 38th Annual AAAI Conference on Artificial Intelligence,
presenting our joint work on Sequentially Fair Mechanism for Multiple Sensitive Attributes,
In the standard use case of Algorithmic Fairness, the goal is to eliminate the relationship between a sensitive variable and a corresponding score. Throughout recent years, the scientific community has developed a host of definitions and tools to solve this task, which work well in many practical applications. However, the applicability and effectiveness of these tools and definitions becomes less straightforward in the case of multiple sensitive attributes. To tackle this issue, we propose a sequential framework, which allows us to progressively achieve fairness across a set of sensitive features. We accomplish this by leveraging multi-marginal Wasserstein barycenters, which extend the standard notion of Strong Demographic Parity to the case with multiple sensitive characteristics. This method also provides a closed-form solution for the optimal, sequentially fair predictor, permitting a clear interpretation of inter-sensitive feature correlations. Our approach seamlessly extends to approximate fairness, enveloping a framework accommodating the trade-off between risk and unfairness. This extension permits a targeted prioritization of fairness improvements for a specific attribute within a set of sensitive attributes, allowing for a case-specific adaptation. A data-driven estimation procedure for the derived solution is developed, and comprehensive numerical experiments are conducted on both synthetic and real datasets. Our empirical findings decisively underscore the practical efficacy of our post-processing approach in fostering fair decision-making.
The journal Risques is changing its format…
The latest issue of the journal Risques is out, and I wrote a short article in it. The format of the journal has changed, getting closer to a magazine…
Fairness and discrimination, PhD Course, #7 Sensitive attributes and proxies
In our previous post, we discussed “group fairness”. I might have gone a bit fast, and I decided to add some material about sensitive attributes, and proxies.
Sensitive attributes?
Almost everywhere, we can find a list of variables that are considered, by law, as sensitive, since using them could lead to discrimination. As mentioned earlier, sensitive variables might change with time, and across regions…
Another issue with black boxes is that it might be hard to assess whether they rely on sensitive attributes. To extract information from pictures, in order to classify them or detect objects, an algorithm might use information that could be considered sensitive. First, recall the popular wolf-husky classifier, which actually detects snow in the background (since wolves appeared with snow in the training sample)
This can also be the case for health issues, where classifiers can be influenced by the color of the skin (or possibly some unexpected information)
Racism
The first sensitive attribute is probably race, which has been discussed in insurance for decades.
One should keep in mind that race is social information, and most of the time, it is based on self-identification
This leads to popular maps in the U.S.
Racism is usually related to “colourism” (discrimination based on skin tone)
Is it relevant in the context of insurance, and risk?
It has been observed that African Americans, in the U.S., were usually charged higher insurance premiums.
Keep in mind that discrimination has nothing to do with intention, as mentioned previously. Insurance pricing can be racist without any intention to be so. An important issue in quantifying the problem is actually being able to observe that variable.
Sexism
Sexism is another popular example of discrimination, related to sex, or gender.
Actuaries have been using gender-related life tables for more than 300 years. And indeed, it seems that women live longer than men.
Ageism
Age is another possible sensitive attribute, but it is more complicated. First, it is not a “club”, and second, it is (somewhat) clearly related to risk.
In datasets, there can also be selection bias related to age. For instance, during the COVID pandemic, triage was based on the age of patients. Treatments and tests can be related to the age of patients. So this bias will probably have an impact on observed risks.
Genetics
Another important sensitive variable is related to “genetic information”.
Such information is classified as sensitive almost everywhere.
To conclude, I wanted to mention that several important variables considered sensitive do not have much to do with genetics, but rather with social construction.
Finally, let us discuss proxies that can be related to those sensitive variables.
Names and language
The first one was discussed in the introduction: names contain information about race and ethnic origin,
Text and discussion can also reveal sensitive information.
Pictures
Pictures can also provide information. That was discussed 150 years ago, when researchers tried to identify criminals using pictures alone.
Some insurers have tried, at some point, to detect diseases from facial pictures. And it is possible to reveal information from pictures, possibly the age, and the gender.
One can also use satellite pictures, or pictures from Google Street View, to infer, for instance, the wealth of a neighborhood, and possibly sensitive information, such as the presence of an access ramp for disabled people.
Credit Scoring
Credit scores are also used by insurers, and they can be related to variables considered sensitive
Clearly, a bad credit score will have a big impact not only on mortgages and loans,
but also on insurance rates! As we explained here, it costs a lot to be poor.
Networks
Finally, insurers can use information related to friends, or family, to assess risk. And network data capture a lot of sensitive information.
We will talk a little bit about networks, to explain why using your friends’ risks to assess your own risk might not be a great idea…
It is an extension of the friendship paradox.
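To see why, here is a small simulation sketch (on a simulated scale-free network, not on insurance data), comparing the average number of friends of a random individual with the average number of friends of a random friend,

library(igraph)
set.seed(1)
# simulate a scale-free network with 1000 nodes
g = sample_pa(1000, power = 1, directed = FALSE)
deg = degree(g)
# average number of friends of a randomly chosen individual
mean(deg)
# average number of friends of a randomly chosen friend (degree-weighted): larger
friends = unlist(adjacent_vertices(g, V(g)))
mean(deg[friends])

On average, your friends have more friends than you do, so they are not a representative sample of the population, and their observed risks can therefore be a biased proxy for your own.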
Proxies
Finally, we will conclude by showing that removing a sensitive attribute from a training dataset will not mitigate discrimination.
Data Talk at Generali
Tomorrow, I will give a talk at Generali’s “data talk” internal seminar, on fairness and ethics in insurance. Slides are now online.
Discrimination by proxy (a real case study)
Yesterday, with Laurence Barry, we posted a blog post, “Who benefits from data sharing?”, explaining why data sharing, in insurance, could end mutualization. Actually, it can also be bad in the context of discrimination. Consider here the same dataset, with claim occurrence, in a real insurance portfolio,
library(InsurFair)
library(randomForest)
Consider a version of this dataset without the gender, and use variable importance to get a list of variables we can use in a predictive model
subfrenchmotor = frenchmotor[,-which(names(frenchmotor)=="sensitive")]
RF = randomForest(y ~ ., data = subfrenchmotor)
vi = varImpPlot(RF, sort = TRUE)
We sort variables based on variable importance (the first one is the “most important” one), and add splines for three continuous variables
dfvi = data.frame(nom = names(subfrenchmotor)[-15], g = as.numeric(vi))
dfvi = dfvi[rev(order(dfvi$g)),]
nom = dfvi$nom
nom[1] = "bs(LicAge)"
nom[3] = "bs(DrivAge)"
nom[7] = "bs(BonusMalus)"
Then, the idea is simple: at stage k, we keep the k most important variables, and run a logistic regression on those k variables. Again, I should stress that the gender of the driver is not among those k variables. Then, we compute the average predicted claim frequency, for men and women.
n=nrow(subfrenchmotor)
library(splines)
idx_F = which(frenchmotor$sensitive == "Female")
idx_M = which(frenchmotor$sensitive == "Male")
metric_gender= function(k =3){
if(k==0){
reg = glm(y~1, family=binomial, data=subfrenchmotor)
yp = predict(reg, type="response")
yp_F = yp[idx_F]
yp_M = yp[idx_M]
sortie = c(mean(yp_F),mean(yp_M),quantile(yp_F,c(.1,.9)),quantile(yp_M,c(.1,.9)))
names(sortie)[1:2]=c("mean_F","mean_M")
}
if(k>0){
vr = paste(nom[1:k],collapse = " + ")
fm = paste("y ~ ",vr,sep="")
reg = glm(fm, family=binomial, data=subfrenchmotor)
yp = predict(reg, type="response")
yp_F = yp[idx_F]
yp_M = yp[idx_M]
sortie = c(mean(yp_F),mean(yp_M),quantile(yp_F,c(.1,.9)),quantile(yp_M,c(.1,.9)))
names(sortie)[1:2]=c("mean_F","mean_M")
}
sortie}
Let us now compute it for all variables
N = 0:15
M = Vectorize(metric_gender)(N)
and plot it
plot(N, M[1,]*100, xlab="Number of predictive variables (without gender)",
     ylab="Average predicted claims frequency (%)", type="b", pch=19, col=COLORS[2], ylim=c(8.12,9))
lines(N, M[2,]*100, type="b", pch=15, col=COLORS[3])
Interestingly, we can clearly see that with 15 explanatory variables, even if our model is gender-blind (since gender is not in the training dataset), it reproduces the difference we can observe in the dataset: the annual claim frequency is almost 9% for men, and 8.2% for women.
Actually, it is not possible to predict the gender from our 15 variables (below is the ROC curve of the logistic regression used to predict the gender)
library(ROCR)
metric_gender_2 = function(k = 3){
if(k==0){
reg = glm((sensitive=="Female")~1, family=binomial, data=frenchmotor)
}
if(k>0){
vr = paste(nom[1:k],collapse = " + ")
fm_genre = paste('(sensitive=="Female") ~ ',vr,sep="")
reg = glm(fm_genre, family=binomial, data=frenchmotor)
}
pred = prediction(predict(reg,type="response"),(frenchmotor$sensitive=="Female"))
performance(pred,"tpr","fpr")}
plot(metric_gender_2(15))
but still, when using 15 variables, we obtain discrimination in our portfolio, since the average predictions for men and women are significantly different (even if our models are, per se, gender-blind).
Who benefits from data sharing?
This post was co-written with Laurence Barry, originally in French.
Recently, the European Commission has laid the groundwork for a new framework for accessing financial data (FIDA, or Financial Data Access), allowing consumers and businesses to authorize third parties to access their data held by financial institutions, including insurers.
One of the main arguments in favor of this regulation is transparency, or as the texts put it, ‘promoting financial transparency.’ However, it is difficult to argue against transparency unless one has something to hide. This is the famous ‘nothing to hide’ argument! As Solove (2011) reminds us, the British government used it as an argument to install surveillance cameras: ‘if you’ve got nothing to hide, you’ve got nothing to fear.’ Academic Shoshana Zuboff is much more reserved, stating, ‘if you have nothing to hide, then you are nothing…’ Sharing personal data without limits or accountability for how this information is used is dangerous, both for the individual doing so and collectively. We focus here on how insurers could potentially use more information: this opening of data access significantly compromises the very idea of risk pooling and sharing.
Data sharing: who benefits from the crime?
This post was co-written with Laurence Barry; the English version appears above.
From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration
Our paper From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration, written with Agathe Fernandes Machado, Emmanuel Flachaire, Ewen Gallic and François Hu, is now online on arXiv,
The assessment of binary classifier performance traditionally centers on discriminative ability using metrics, such as accuracy. However, these metrics often disregard the model’s inherent uncertainty, especially when dealing with sensitive decision-making domains, such as finance or healthcare. Given that model-predicted scores are commonly seen as event probabilities, calibration is crucial for accurate interpretation. In our study, we analyze the sensitivity of various calibration measures to score distortions and introduce a refined metric, the Local Calibration Score. Comparing recalibration methods, we advocate for local regressions, emphasizing their dual role as effective recalibration tools and facilitators of smoother visualizations. We apply these findings in a real-world scenario using Random Forest classifier and regressor to predict credit default while simultaneously measuring calibration during performance optimization.
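To illustrate the underlying idea, here is a sketch of calibration assessment via local regression (this is not the Local Calibration Score introduced in the paper; the data, the miscalibration and all names below are simulated placeholders),

set.seed(1)
n = 5000
p_true = runif(n)                       # hypothetical true probabilities
y = rbinom(n, 1, p_true)                # observed binary outcomes
score = plogis(1.5 * qlogis(p_true))    # hypothetical, miscalibrated predicted scores
cal = loess(y ~ score, span = 0.75)     # local regression of the outcome on the score
s = seq(0.05, 0.95, by = 0.01)
plot(s, predict(cal, newdata = data.frame(score = s)), type = "l",
     xlab = "predicted score", ylab = "smoothed observed frequency")
abline(a = 0, b = 1, lty = 2)           # the diagonal corresponds to perfect calibration

A deviation of the smoothed curve from the diagonal reveals the miscalibration that the paper proposes to measure locally, and to correct.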
A Fair price to pay: exploiting causal graphs for fairness in insurance
Our paper, A fair price to pay: exploiting causal graphs for fairness in insurance, written with Olivier Côté and Marie-Pier Côté, is now available on SSRN
In many jurisdictions, insurance companies must not discriminate on some given policyholder characteristics. Omission of prohibited variables from models prevents direct discrimination, but fails to address proxy discrimination, a phenomenon especially prevalent when powerful predictive algorithms are fed with an abundance of acceptable covariates. The lack of formal definition for key fairness concepts, in particular indirect discrimination, hinders the fairness assessment of methodologies. We review causal inference notions and introduce a causal graph tailored for fairness in insurance. Exploiting these, we discuss potential sources of bias, formally define direct and indirect discrimination, and study the properties of fairness methodologies. A novel categorization of fair methodologies into five families (best-estimate, unaware, aware, hyperaware, and corrective) is constructed based on their expected fairness properties. A comprehensive pedagogical example illustrates the practical implications of our findings: the interplay between our fair score families, group fairness criteria, and sources of discrimination.
Insurance: local authorities helpless in the face of the effects of climate change
Interesting article last week on the Politis website.