Tomorrow, I will give a talk at the Generali “data talk” internal seminar, on fairness and ethics in insurance. Slides are now online.
Discrimination by proxy (a real case study)
Yesterday, with Laurence Barry, we posted a blog post, “Who benefits from data sharing?”, explaining why data sharing in insurance could end mutualization. It can also be harmful in the context of discrimination. Consider here the same dataset, with claim occurrence, from a real insurance portfolio,
library(InsurFair)      # provides the frenchmotor portfolio data used below
library(randomForest)   # random forest, used here for variable importance
Consider a version of this dataset without the gender variable, and use variable importance to get a list of variables we can use in a predictive model,
subfrenchmotor = frenchmotor[, -which(names(frenchmotor) == "sensitive")]  # drop the gender variable
RF = randomForest(y ~ ., data = subfrenchmotor)
vi = varImpPlot(RF, sort = TRUE)   # plot and (invisibly) return the importance measures
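Since varImpPlot() invisibly returns the importance measures, we can also print the ranking (an optional check, not in the original code),
sort(vi[, 1], decreasing = TRUE)   # variables ranked by decreasing importance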
We sort the variables by importance (the first one being the “most important”), and add splines for three continuous variables,
dfvi = data.frame(nom = names(subfrenchmotor)[-15], g = as.numeric(vi))  # predictor names and importance
dfvi = dfvi[rev(order(dfvi$g)), ]   # sort by decreasing importance
nom = dfvi$nom
nom[1] = "bs(LicAge)"      # spline terms for the three continuous variables
nom[3] = "bs(DrivAge)"
nom[7] = "bs(BonusMalus)"
Then the idea is simple: at stage k, we keep the k most important variables, and run a logistic regression on those k variables. Again, I should stress that the gender of the driver is not among those k variables. Then, we compute the average predicted claim frequency, for men and for women.
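For instance (as an illustration, not in the original code), at stage k = 3 the formula passed to the logistic regression below would be built as
paste("y ~", paste(nom[1:3], collapse = " + "))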
n = nrow(subfrenchmotor)
library(splines)   # for the bs() spline terms in the formulas
idx_F = which(frenchmotor$sensitive == "Female")   # rows corresponding to women
idx_M = which(frenchmotor$sensitive == "Male")     # rows corresponding to men
metric_gender = function(k = 3){
  if(k == 0){
    # no explanatory variable: intercept-only logistic regression
    reg = glm(y ~ 1, family = binomial, data = subfrenchmotor)
  }
  if(k > 0){
    # logistic regression on the k most important variables (gender excluded)
    vr = paste(nom[1:k], collapse = " + ")
    fm = paste("y ~ ", vr, sep = "")
    reg = glm(fm, family = binomial, data = subfrenchmotor)
  }
  # predicted claim probabilities, split by the gender observed in the portfolio
  yp   = predict(reg, type = "response")
  yp_F = yp[idx_F]
  yp_M = yp[idx_M]
  sortie = c(mean(yp_F), mean(yp_M), quantile(yp_F, c(.1,.9)), quantile(yp_M, c(.1,.9)))
  names(sortie)[1:2] = c("mean_F", "mean_M")
  sortie
}
Let us now compute it for all numbers of variables, from 0 to 15,
N = 0:15
M = Vectorize(metric_gender)(N)   # average predictions (and quantiles) for k = 0, 1, ..., 15
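Before plotting, and as an optional check (not in the original post), we can glance at the gap between the two average predictions, men minus women, in percentage points, as variables are added,
round((M[2,] - M[1,]) * 100, 3)   # M[2,] is mean_M, M[1,] is mean_F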
and plot it
plot(N, M[1,]*100, xlab = "Number of predictive variables (without gender)",
     ylab = "Average predicted claims frequency (%)",
     type = "b", pch = 19, col = COLORS[2], ylim = c(8.12, 9))
lines(N, M[2,]*100, type = "b", pch = 15, col = COLORS[3])
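We can also add a small legend (not in the original code) so the two curves are easier to identify; the placement ("bottomright") is just a guess and may need adjusting,
legend("bottomright", legend = c("Women", "Men"),
       col = COLORS[2:3], pch = c(19, 15), lty = 1, bty = "n")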
Interestingly, we can clearly see that with 15 explanatory variables, even though our model is gender-blind (gender is not in the training dataset), it reproduces the difference observed in the data: the annual claim frequency is almost 9% for men, and about 8.2% for women.
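For comparison, and as a small addition to the original code, we can compute the empirical claim frequencies by gender directly from the data; this assumes y is a binary claim-occurrence indicator (possibly stored as a two-level factor),
# assumption: y has two levels/values, the second one coding a claim
y01 = as.numeric(as.factor(frenchmotor$y)) - 1
c(mean_F = mean(y01[idx_F]), mean_M = mean(y01[idx_M]))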
Actually, it is not really possible to predict the gender from our 15 variables (below is the ROC curve of the logistic regression used to predict gender),
library(ROCR)   # for prediction() and performance(), used to draw the ROC curve
metric_gender_2 = function(k = 3){
  if(k == 0){
    reg = glm((sensitive == "Female") ~ 1, family = binomial, data = frenchmotor)
  }
  if(k > 0){
    # logistic regression of gender on the k most important variables
    vr = paste(nom[1:k], collapse = " + ")
    fm_genre = paste('(sensitive=="Female") ~ ', vr, sep = "")
    reg = glm(fm_genre, family = binomial, data = frenchmotor)
  }
  # ROC curve of this gender classifier
  pred = prediction(predict(reg, type = "response"), (frenchmotor$sensitive == "Female"))
  performance(pred, "tpr", "fpr")
}
plot(metric_gender_2(15))
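To quantify this claim, a small addition (not in the original post): we can refit the gender model with all 15 variables and compute the area under the ROC curve with ROCR, an AUC close to 0.5 corresponding to a classifier that is essentially uninformative,
reg_genre  = glm(paste('(sensitive=="Female") ~ ', paste(nom[1:15], collapse = " + ")),
                 family = binomial, data = frenchmotor)
pred_genre = prediction(predict(reg_genre, type = "response"),
                        frenchmotor$sensitive == "Female")
performance(pred_genre, "auc")@y.values[[1]]   # area under the ROC curve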
but still, when using 15 variables, we obtain discrimination in our portfolio, since the average predictions for men and for women are significantly different (even if our models are, per se, gender-blind).
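As a side check (again, an addition to the original post), one can formally test that difference, for instance with a two-sample test on the predictions of the 15-variable model,
reg15 = glm(paste("y ~", paste(nom[1:15], collapse = " + ")),
            family = binomial, data = subfrenchmotor)
yp15  = predict(reg15, type = "response")
t.test(yp15[idx_F], yp15[idx_M])   # compare average predictions for women and men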
Who benefits from data sharing?
This post was co-written with Laurence Barry, originally in French.
Recently, the European Commission has laid the groundwork for a new framework for accessing financial data (FIDA, or Financial Data Access), allowing consumers and businesses to authorize third parties to access their data held by financial institutions, including insurers.
One of the main arguments in favor of this regulation is transparency, or as the texts put it, ‘promoting financial transparency.’ However, it is difficult to argue against transparency unless one has something to hide. This is the famous ‘nothing to hide’ argument! As Solove (2011) reminds us, the British government used it as an argument to install surveillance cameras: ‘if you’ve got nothing to hide, you’ve got nothing to fear.’ The academic Shoshana Zuboff is much more reserved, stating, ‘if you have nothing to hide, then you are nothing…’ Sharing personal data without limits or accountability for how this information is used is dangerous, both for the individual who shares them and for the collective. We focus here on how insurers could potentially use more information: this opening of data access significantly compromises the very idea of risk pooling and sharing.