Tag Archives: bias

Fairness and discrimination, PhD Course, #9 Mitigation, Pre-processing and In-processing

Finally, after defining (and quantifying) “group fairness” and “individual fairness”, we can now start to discuss the idea of mitigating possible discrimination. Here, we will see how, starting from some initially collected data and a pricing model, it is possible to remove the discrimination from that pricing model.

Biases everywhere

As mentioned previously, insurance pricing is based on the use of different datasets, at least one on “claims” and one on “underwriting”. And obviously, there might be biases in those data, whether conscious and intended, or not.

Somehow, the idea of tackling the problem from the end, as proposed here, may not be the right one; it might be better to tackle it from the beginning, through the biases in the data. The outcome of the models would be less discriminatory if we could get rid of sexist or racist biases in underwriting, or even in the assessment of claims costs. Unfortunately, I cannot discuss that here, since I do not have data that could be used to assess selection biases related to sensitive attributes.

On mitigation…

From a philosophical perspective, asking for mitigation might lead to some paradoxes. I mention here two statements, by two judges in the U.S., that take very opposite perspectives on the same problem,

versus

I will not say much about those philosophical aspects (discussed a bit more in the textbook); instead, we will discuss how we can achieve fairness, if required.

Interestingly, we have a nice property, on the price to pay to achieve fairness (price in terms of risk)

More precisely, we have the following result,

Interestingly, we not only have a lower bound, we can actually reach that bound (we will discuss that point next week).

Pre-processing

The first approach is related to the idea of “distorting” inputs, to get legitimate explanatory variables that are uncorrelated with sensitive ones.

While that makes sense in the context of linear models, it does not work well in the general case.

But one should not be (too) surprised: as mentioned in a previous post on independence (and correlation), there is no statistical guarantee that it is preserved under nonlinear transformations of the variables,

or

This is what we observe in our datasets.
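As a small illustration (a sketch on simulated data, not the dataset used in the course), we can orthogonalize a legitimate variable with respect to the sensitive attribute, and see that the correlation vanishes for the variable itself, but not necessarily for a nonlinear transformation of it,

# decorrelating a covariate from a (binary) sensitive attribute, linearly
set.seed(123)
n=1000
s=rbinom(n,size=1,prob=.5)              # sensitive attribute
x=rnorm(n,mean=1+2*s,sd=1+s)            # legitimate variable, correlated with s
x_tilde=residuals(lm(x~s))              # "distorted" input, orthogonal to s
cor(x,s)                                # strongly correlated
cor(x_tilde,s)                          # numerically zero
cor(x_tilde^2,s)                        # correlation is back with a nonlinear transform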

In-processing

An alternative is to use a penalized approach, where fairness is added as a constraint in the optimization procedure. For example, Zafar et al. (2017) considered the following approach, with a constraint based on the covariance between the outcome and the sensitive attribute. We can adapt it to non-linear models.
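As a rough sketch of that in-processing idea (a minimal implementation on simulated data, in the spirit of Zafar et al. (2017), not the exact code used for the slides), one can penalize the log-likelihood of a logistic regression by the squared empirical covariance between the sensitive attribute and the score, the penalty weight playing the role of the constant c below,

# covariance-penalized logistic regression (sketch)
fair_logit=function(y,X,s,lambda=0){
  negloglik=function(beta){
    eta=as.vector(X%*%beta)
    nll=sum(log(1+exp(eta))-y*eta)      # negative log-likelihood of the logistic model
    pen=cov(s,eta)^2                    # covariance between sensitive attribute and score
    nll+lambda*nrow(X)*pen
  }
  optim(rep(0,ncol(X)),negloglik,method="BFGS")$par
}

# illustration on simulated data
set.seed(1)
n=1000
s=rbinom(n,1,.5)
x=rnorm(n,1+s)
X=cbind(1,x)
y=rbinom(n,1,1/(1+exp(-(-1+x+s))))
fair_logit(y,X,s,lambda=0)     # unconstrained logistic regression
fair_logit(y,X,s,lambda=100)   # fairness-penalized coefficients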

We can look at the evolution of \widehat{\boldsymbol{\beta}} as a function of c.

We can also visualize the evolution of predictions,

(including the prediction of the unaware model, blind to the sensitive attribute)

On this slide, we can see that we have a tradeoff between accuracy and fairness.

It is also possible to visualize the distance between the distributions of the scores in the two groups. We can see that c\to0 actually yields strong fairness here, since the Wasserstein distance tends to 0.
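As a side note, a simple way to compute that distance (a sketch, assuming the scores and the group indicator are available as two vectors, with illustrative names) is to use the quantile-based expression of the Wasserstein distance on the real line,

# 1-Wasserstein distance between the score distributions of the two groups,
# using W1 = integral over (0,1) of |F_1^{-1}(u)-F_0^{-1}(u)| du
wasserstein1=function(score,group,m=1000){
  u=(1:m-.5)/m
  q0=quantile(score[group==0],probs=u)
  q1=quantile(score[group==1],probs=u)
  mean(abs(q1-q0))
}
# e.g. wasserstein1(score,s), where score and s are hypothetical vectors of
# predicted scores and sensitive-attribute values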

An alternative that can be found in the literature (which I include in the in-processing section) is based on adversarial learning,

Formally, it is a min-max problem between a predictor and an adversary,

which is related to minimax theorems
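A generic form of that objective (a sketch; the exact formulation used on the slide may differ) is

\min_{m}\;\max_{h}\;\Big\{\mathbb{E}\big[\ell\big(y,m(\boldsymbol{x})\big)\big]-\lambda\,\mathbb{E}\big[\ell_{\text{adv}}\big(s,h(m(\boldsymbol{x}))\big)\big]\Big\}

where the predictor m minimizes the prediction loss while the adversary h tries to recover the sensitive attribute s from the prediction, the trade-off being driven by \lambda.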

Standard references about adversarial learning and fairness are the following

Next week, we will discuss post-processing approaches.

Unbiased Estimators vs. Minimizing a Quadratic Loss Function

Unbiased estimators are important in statistics. I guess because of the Cramér-Rao bound, for the variance: if \mathbb{E}[\widehat{\theta}]=\theta, then \text{Var}[\widehat{\theta}]\geq I_\theta^{-1}, where I_\theta denotes the Fisher information (the proof was written in an old post).

But what would the variance be if \widehat{\theta} is not unbiased?

Consider the following simple case, with a Gaussian i.i.d. sample X_1,\dots,X_n from a \mathcal{N}(\theta,\sigma^2) distribution. We know that the method of moments estimator is the same as the maximum likelihood estimator, i.e. \widehat{\theta}=\overline{X}. And this estimator is efficient, in the sense that its variance is equal to the Cramér-Rao lower bound, \sigma^2/n.

But what if we consider another estimator? For instance \widehat{\theta}_\alpha=\alpha\overline{X}, with \alpha not necessarily equal to 1. In that case, this estimator is (usually) biased since

\mathbb{E}[\widehat{\theta}_\alpha]=\alpha\theta,\quad\text{i.e.}\quad\text{bias}[\widehat{\theta}_\alpha]=(\alpha-1)\theta

And its variance is

\text{Var}[\widehat{\theta}_\alpha]=\alpha^2\frac{\sigma^2}{n}

We can visualise those two functions (the bias and the variance) using


n=10
alpha=seq(0,2,by=.01)
b=alpha-1            # bias of alpha*Xbar, here with theta=1
v=alpha^2/n          # variance of alpha*Xbar, here with sigma^2=1
plot(alpha,b,xlab="alpha",ylab="",col="red",type="l")
par(new=TRUE)
plot(alpha,v,xlab="",ylab="",col="blue",type="l",axes=FALSE)
axis(4)
mtext("bias",side=2,line=-1,col="red")
mtext("variance",side=4,line=-1,col="blue")

Observe that if \alpha is small (smaller than 1), the variance is smaller than the Cramér-Rao lower bound. And here, the mean squared error, defined as

\text{mse}[\widehat{\theta}_\alpha]=\mathbb{E}\big[(\widehat{\theta}_\alpha-\theta)^2\big]=\text{bias}[\widehat{\theta}_\alpha]^2+\text{Var}[\widehat{\theta}_\alpha]

is, here,

\text{mse}[\widehat{\theta}_\alpha]=(\alpha-1)^2\theta^2+\alpha^2\frac{\sigma^2}{n}

The optimal value of \alpha is obtained when the first order condition is satisfied,

2(\alpha-1)\theta^2+2\alpha\frac{\sigma^2}{n}=0

i.e.

\alpha^\star=\frac{\theta^2}{\theta^2+\sigma^2/n}=\frac{n\theta^2}{n\theta^2+\sigma^2}<1

So biased estimators can be more interesting than unbiased estimators, if the goal is to minimize the mean squared error.
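As a quick sanity check (a small simulation sketch, not in the original post), we can compare the empirical mean squared errors of the unbiased estimator \overline{X} and of the shrunk estimator \alpha^\star\overline{X},

# quick Monte Carlo check: the shrunk (biased) estimator has a smaller MSE
set.seed(1)
n=10; theta=1; sigma=1
alpha_star=n*theta^2/(n*theta^2+sigma^2)
xbar=replicate(1e5,mean(rnorm(n,theta,sigma)))
mean((xbar-theta)^2)              # close to sigma^2/n = 0.1
mean((alpha_star*xbar-theta)^2)   # close to theta^2*sigma^2/(n*theta^2+sigma^2), about 0.091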

Heuristics on bias and variance for kernel density estimators

Consider the simple case of a moving histogram (which is a very simple kernel). The idea is to recall that

f(x)\approx\frac{F(x+h)-F(x-h)}{2h}

where the right-hand side is the slope of the cumulative distribution function F close to point x.

Then we use the empirical cumulative distribution function to approximate that slope, i.e.

\widehat{f}(x)=\frac{\widehat{F}_n(x+h)-\widehat{F}_n(x-h)}{2h}

which can also be written

\widehat{f}(x)=\frac{1}{2nh}\sum_{i=1}^n \mathbf{1}(X_i\in[x-h,x+h])

Consider now the density seen as a random variable

\widehat{f}(x)=\frac{Y_{n,x}}{2nh},\quad\text{where } Y_{n,x}=\sum_{i=1}^n \mathbf{1}(X_i\in[x-h,x+h])

where the \mathbf{1}(X_i\in[x-h,x+h])‘s are i.i.d. Bernoulli \mathcal{B}(p_x) variables, with

p_x=F(x+h)-F(x-h)

Thus, observe that \mathbb{E}[\widehat{f}(x)]=\dfrac{F(x+h)-F(x-h)}{2h}, but that’s not what we’re looking for… From Taylor’s expansion,

F(x+h)-F(x-h)=2h\,f(x)+\frac{h^3}{3}f''(x)+o(h^3)

thus

\mathbb{E}[\widehat{f}(x)]=f(x)+\frac{h^2}{6}f''(x)+o(h^2)

where the bias comes from the approximation of the density by the slope of a chord of the cumulative distribution function. About the variance,

\text{Var}[\widehat{f}(x)]=\frac{\text{Var}[Y_{n,x}]}{(2nh)^2}=\frac{p_x(1-p_x)}{4nh^2}

thus, since p_x\approx 2h\,f(x),

\text{Var}[\widehat{f}(x)]\approx\frac{f(x)\,[1-2h\,f(x)]}{2nh}

i.e.

\text{Var}[\widehat{f}(x)]\approx\frac{f(x)}{2nh}

We can observe that the bias, of order h^2, is decreasing as h\to0, while the variance, of order (nh)^{-1}, is increasing as h\to0. This is the standard bias-variance tradeoff in statistics.
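As a small numerical illustration (a sketch, assuming a standard Gaussian sample and estimating the density at x=0), we can check that heuristic by simulation,

# moving-histogram estimator of the density at a single point x
fhat=function(x,X,h) mean(abs(X-x)<=h)/(2*h)

set.seed(1)
n=100; x0=0; ns=5000
H=c(.05,.1,.2,.5,1)
est=sapply(H,function(h) replicate(ns,fhat(x0,rnorm(n),h)))
bias=colMeans(est)-dnorm(x0)   # smaller (in absolute value) for smaller h
vari=apply(est,2,var)          # larger for smaller h
rbind(h=H,bias=bias,variance=vari)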

Bias of Hill Estimators

In the MAT8595 course, we saw yesterday Hill’s estimator of the tail index. To be more specific, we did see that if \overline{F}(x)=C x^{-\alpha}, with \alpha>0, then Hill estimators for \alpha are given by

\widehat{\alpha}_k = \left[\frac{1}{k}\sum_{i=0}^{k-1} \log X_{n-i,n} -\log X_{n-k,n}\right]^{-1}

for k\in\{1,2,\cdots,n\}. Then we did say that \widehat{\alpha}_k satisfies some consistency property, in the sense that \widehat{\alpha}_k \overset{\mathbb{P}}{\rightarrow} \alpha if k\rightarrow\infty, but not too fast, i.e. k/n\rightarrow0 (under additional assumptions on the rate of convergence, it is possible to prove that \widehat{\alpha}_k \overset{a.s.}{\rightarrow} \alpha). Further, under additional technical conditions,

\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal L}{\rightarrow}\mathcal{N}(0,\alpha^2)

In order to illustrate this point, consider the following code. First, let us consider a Pareto survival function, and the associated quantile function

> alpha=1.5
> S=function(x){ifelse(x>1,x^(-alpha),1)}
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}

The code here is obviously too complicated, since this power function can easily be inverted. But later on, we will consider a more complex survival function. Here are the survival function and the quantile function,

> u=seq(0,5,by=.01)
> plot(u,Vectorize(S)(u),type="l",col="red")
> u=seq(0,99/100,by=.01)
> plot(u,Vectorize(Q)(u),type="l",col="blue",ylim=c(0,20))
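Incidentally, since the survival function is a pure power function here, the quantile function also has a closed form, which can be used to check the numerical inversion above (a quick sketch),

> Qexact=function(p){(1-p)^(-1/alpha)}
> max(abs(Vectorize(Q)(u)-Qexact(u)))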

Here, we need the quantile function to generate a random sample from this distribution,

> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))

The Hill plot is here

> library(evir)
> hill(X)
> abline(h=alpha,col="blue")

We can now generate thousands of random samples, and see how those estimators behave (for some specific k's).

> ns=10000
> HillK=matrix(NA,ns,10)
> for(s in 1:ns){
+ X=Vectorize(Q)(runif(n))
+ H=hill(X,plot=FALSE)
+ hillk=function(k) H$y[H$x==k]
+ HillK[s,]=Vectorize(hillk)(15*(1:10))
+ }

and if we compute the average,

> plot(15*(1:10),apply(HillK,2,mean))

we do get a series of estimators that can be considered as unbiased.

So far, so good. Now, recall that being in the max-domain of attraction of the Fréchet distribution does not mean that \overline{F}(x)=C x^{-\alpha}, with \alpha>0; it means that

\overline{F}(x)= x^{-\alpha} \mathcal{L}(x)

for some slowly varying function \mathcal{L}, not necessarily constant! In order to understand what could happen, we have to be slightly more specific. And this can be done only by looking at the second order regular variation property of the survival function. Assume here that there is some auxiliary function a such that

\lim_{t\rightarrow\infty}\frac{\overline{F}(xt)/\overline{F}(t)-x^{-\alpha}}{a(t)}=x^{-\alpha}\frac{1-x^{-\beta}}{\beta}

This (positive) constant \beta is – somehow – related to the speed of convergence of the ratio of the survival functions to the power function (see e.g. Geluk et al. (2000) for some examples).

To be more specific, assume that

\overline{F}(x)=\underbrace{C(1+x^{-\beta})}_{\mathcal{L}(x)}\cdot x^{-\alpha}

then the second order regular variation property is obtained with a(t)=\beta t^{-\beta}, and if k goes to infinity too fast, the estimator will be biased. More precisely (see Chapter 6 in Embrechts et al. (1997)), if k=O(n^{2\beta/(\alpha+2\beta)}), then, for some \lambda>0,

\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal L}{\rightarrow}\mathcal{N}\left(\frac{\alpha^3}{\beta-\alpha}\lambda,\alpha^2\right)

The intuitive interpretation of this result is that if k is too large, and if the underlying distribution is not exactly a Pareto distribution (and we do have this second order property), then Hill’s estimator is biased. This is what we mean when we say

  • if k is too large, \widehat{\alpha}_k is a biased estimator
  • if k is too small, \widehat{\alpha}_k is a volatile estimator

(the latter comes from properties of a sample mean: the more observations, the lower the volatility of the mean).

Let us run some simulations to get a better understanding of what’s going on. Using the previous code, it is actually extremely simple to generate a random sample with survival function

\overline{F}(x)=\underbrace{C(1+x^{-\beta})}_{\mathcal{L}(x)}\cdot x^{-\alpha}

> beta=.5
> S=function(x){ifelse(x>1,.5*x^(-alpha)*(1+x^(-beta)),1)}
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}

We can then use the code above. Here, with

> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))

the Hill plot becomes

> library(evir)
> hill(X)
> abline(h=alpha,col="blue")

But it’s based on one sample only. Again, consider thousands of samples, and let us see how Hill’s estimator behaves,
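using the same loop as before (repeated here, with the new quantile function),

> ns=10000
> HillK=matrix(NA,ns,10)
> for(s in 1:ns){
+ X=Vectorize(Q)(runif(n))
+ H=hill(X,plot=FALSE)
+ hillk=function(k) H$y[H$x==k]
+ HillK[s,]=Vectorize(hillk)(15*(1:10))
+ }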

so that we can look at the (empirical) mean of those estimators,
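using, as before, the following command (with a horizontal line at the true value of \alpha added for reference),

> plot(15*(1:10),apply(HillK,2,mean))
> abline(h=alpha,col="blue")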

Bias and MLE

Before leaving the office, this evening, JP decided to knock at my door to ask me a “quick and very basic question” (as he put it). This is JP’s strategy, and he knows it works. His question was – more or less – what do we know about the bias in maximum likelihood estimation when we have a small sample, from a Gamma distribution. He was surprised by some results he got. If I wanted to be naughty, too, I would say that he was surprised to see how long his student spent coding that in SAS. So he wanted to challenge me, and see how fast I could give him a valuable answer. Given the fact that I had to leave early because my elder son had a fencing competition, I tried to write a simple code to “visualize” the bias of the first parameter of a Gamma distribution, estimated by MLE.

Before showing the graph, I wanted to add that I hate one thing about mathematical statistics courses: we learn nothing interesting there. I mean, we see nice mathematical concepts, but after such a class, you can hardly say anything when you see your first dataset, with real data. For instance, those courses usually emphasize asymptotic results, based on limit theorems. When you take this course, you learn a lot of things about maximum likelihood, for instance. You can compute the asymptotic variance and derive asymptotic confidence intervals. But are those results relevant when you have 50 observations? Is it possible, with 50 observations, to have a bias of the same size as the parameter?

As usual, one possible answer is “if you don’t have a lot of observations, be Bayesian!“. Maybe. Someday. What I tried, here, is to run simulations to see how MLE estimators behave. Given an i.i.d. sample of size n, X_1,\dots,X_n, from a Gamma distribution \mathcal{G}(\alpha,\beta), let \widehat{\alpha} and \widehat{\beta} denote the maximum likelihood estimators of the two parameters.

library(fitdistrplus)
# MLE of the two Gamma parameters (shape, rate)
maxl=function(x) fitdist(x,"gamma",method="mle")$estimate
# sample sizes, from 5 to 200, on a log scale
VK=floor(exp(seq(log(5),log(200),length=25)))
V=NULL
N=5000
for(k in 1:length(VK)){
n=VK[k]
# N samples of size n from a Gamma(1,2) distribution
m=matrix(rgamma(n*N,1,2),n,N)
ss=apply(m,2,maxl)
V=rbind(V,ss)}
# odd rows of V contain the estimates of the shape parameter
y=as.vector(V[seq(1,length(VK)*2,by=2),])
x=rep(c(VK),ncol(V))
boxplot(y~x,
xlab="Nb. observations (log scale)",ylim=c(0,6))
abline(h=1,lty=2,col="blue")

Here, in our simulations, the shape parameter was 1. On the graph, we have boxplots of \widehat{\alpha} obtained on several scenarios. We clearly see the positive bias of the MLE. And the bias reduces with n (as expected, since the MLE is asymptotically unbiased). We can also visualize the distribution of \widehat{\alpha} (instead of boxplots),
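for instance with kernel density estimates of \widehat{\alpha} for a few sample sizes (one possible way to produce such a graph, reusing the matrix V and the vector VK from the code above),

# densities of the MLE of the shape parameter, for a few sample sizes
idx=c(1,13,25)                       # a small, a medium and a large sample size in VK
cols=c("red","orange","darkgreen")
dens=lapply(idx,function(i) density(V[2*i-1,]))
plot(NULL,xlim=c(0,4),ylim=c(0,max(sapply(dens,function(d) max(d$y)))),
xlab="estimate of the shape parameter",ylab="density")
for(j in 1:3) lines(dens[[j]],col=cols[j])
abline(v=1,lty=2,col="blue")         # true value of the shape parameter
legend("topright",legend=paste("n =",VK[idx]),col=cols,lty=1)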

It is also possible to derive analytical results. David Cox and Joyce Snell did the maths in 1968, and actually obtained analytical expressions for the biases. More recently, David Giles (a.k.a. @deagiles on Twitter) and Hui Feng looked at the behavior of bias-adjusted estimators. For instance, one can get that

b(\widehat{\alpha})\approx\frac{\alpha\,\psi'(\alpha)-\alpha^2\,\psi''(\alpha)-2}{2n\left[\alpha\,\psi'(\alpha)-1\right]^2}

where

\psi(x)=\frac{d}{dx}\log\Gamma(x)

is the so-called digamma function, and \psi' and \psi'' denote its first and second order derivatives (this is the approximation implemented in the code below), see e.g. Bowman and Shenton (1982) – yes, there is a book on the topic of estimating the parameters of the Gamma distribution…

Observe that the bias of \widehat{\alpha} does not depend on \beta, while the bias of \widehat{\beta} will depend on \alpha.

# numerical first and second order derivatives of the digamma function
# (R also has trigamma() and psigamma(,deriv=2), which would avoid numerical differentiation)
d1digamma=function(x,h=1e-7)
return((digamma(x+h)-digamma(x-h))/(2*h))
d2digamma=function(x,h=1e-7)
return((d1digamma(x+h)-d1digamma(x-h))/(2*h))
# first order approximation of the bias of the MLE of the shape parameter
biasalpha=function(a,n){
return((a*d1digamma(a)-a^2*d2digamma(a)
-2)/(2*n*(a*d1digamma(a)-1)^2))
}

The way I compute it is probably not optimal, so if you want to improve it, please, go ahead! If we compare the average bias obtained in our simulations with the one obtained from this first order approximation, we get

m=apply(V,1,mean)
plot(VK,m[seq(1,length(VK)*2,by=2)],type="b",col="red",xlab="Nb. observations (log scale)",log="x")
abline(h=1,lty=2,col="blue")
B=Vectorize(function(n) biasalpha(a=1,n))(1:200)
lines(1:200,B+1,col="orange")

Observe here that neglecting the higher order terms (i.e. keeping only the first order approximation) yields an underestimation of the real bias… Fun, isn’t it?

Border bias and weighted kernels

With Ewen (aka @3wen), not only have we been playing on Twitter this month, we have also been working on kernel estimation for densities of spatial processes. Actually, it is only a part of what he has been working on, but that part on kernel estimation was the opportunity to write a short paper, which can now be downloaded on hal.

The problem with kernels is that kernel density estimators suffer from a strong bias near borders. And with geographic data, it is not uncommon to have observations very close to the border (a frontier, or the ocean). With standard kernels, some weight is allocated outside the area: the density does not sum to one. And we should not look for a global correction, but for a local one. So we should use weighted kernel estimators (see the paper on hal for more details). The problem is that the weights can be difficult to derive when the shape of the support is a strange polygon. The idea is to use a property of product Gaussian kernels (with identical bandwidths): with the interpretation of having noisy observations, we can use the property of circular isodensity curves. This can be related to Ripley’s (1977) circumferential correction. And the good point is that, with R, it is extremely simple to get the area of the intersection of two polygons. But we need to load some R packages first,

require(maps)
require(sp)
require(snow)
require(ellipse)
require(ks)
require(gpclib)
require(rgeos)
require(fields)

To be clearer, let us illustrate the technique on a nice example. For instance, consider some bodily injury car accidents in France, in 2008 (I cannot upload the full dataset, but I can upload a random sample),

base_cara=read.table(
"http://freakonometrics.blog.free.fr/public/base_fin_morb.txt",
sep=";",header=TRUE)

The border of the support of our distribution of car accidents will be the contour of the Finistère département, which can be found in standard packages,

geoloc=read.csv(
"http://freakonometrics.free.fr/popfr19752010.csv",
header=TRUE,sep=",",comment.char="",check.names=FALSE,
colClasses=c(rep("character",5),rep("numeric",38)))
geoloc=geoloc[,c("dep","com","com_nom",
"long","lat","pop_2008")]
geoloc$id=paste(sprintf("%02s",geoloc$dep),
sprintf("%03s",geoloc$com),sep="")
geoloc=geoloc[,c("com_nom","long","lat","pop_2008")]
head(geoloc)
france=map('france',namesonly=TRUE,
plot=FALSE)
francemap=map('france', fill=TRUE, col="transparent",
plot=FALSE)
detpartement_bzh=france[which(france%in%
c("Finistere","Morbihan","Ille-et-Vilaine",
"Cotes-Darmor"))]
bretagne=map('france',regions=detpartement_bzh,
fill=TRUE, col="transparent", plot=FALSE,exact=TRUE)
finistere=cbind(bretagne$x[321:678],bretagne$y[321:678])
FINISTERE=map('france',regions="Finistere", fill=TRUE,
col="transparent", plot=FALSE,exact=TRUE)
monFINISTERE=cbind(FINISTERE$x[c(8:414)],FINISTERE$y[c(8:414)])

Now we need simple functions,

# polygon approximating a circle with given centre and radius
cercle=function(n=200,centre=c(0,0),rayon)
{theta=seq(0,2*pi,length=n)
m=cbind(cos(theta),sin(theta))*rayon
m[,1]=m[,1]+centre[1]
m[,2]=m[,2]+centre[2]
colnames(m)=c("x","y")
return(m)}
poids=function(x,h,POL)
{leCercle=cercle(centre=x,rayon=5/pi*h)
POLcercle=as(leCercle, "gpc.poly")
return(area.poly(intersect(POL,POLcercle))/
area.poly(POLcercle))}
lissage = function(U,polygone,optimal=TRUE,h=.1)
{n=nrow(U)
IND=which(is.na(U[,1])==FALSE)
U=U[IND,]
if(optimal==TRUE) {H=Hpi(U,binned=FALSE);
H=matrix(c(sqrt(H[1,1]*H[2,2]),0,0,
sqrt(H[1,1]*H[2,2])),2,2)}
if(optimal==FALSE){H= matrix(c(h,0,0,h),2,2)}

before defining our weights (this chunk is still part of the body of the lissage function),

poidsU=function(i,U,h,POL)
{x=U[i,]
poids(x,h,POL)}

Note that it is possible to parallelize if there are a lot of observations,

if(n>=500)
{cl <- makeCluster(4,type="SOCK")
worker.init <- function(packages)
{for(p in packages){library(p, character.only=T)}
NULL}
clusterCall(cl, worker.init, c("gpclib","sp"))
clusterExport(cl,c("cercle","poids"))
OMEGA=parLapply(cl,1:n,poidsU,U=U,h=sqrt(H[1,1]),
POL=as(polygone, "gpc.poly"))
OMEGA=do.call("c",OMEGA)
stopCluster(cl)
}else
{OMEGA=lapply(1:n,poidsU,U=U,h=sqrt(H[1,1]),
POL=as(polygone, "gpc.poly"))
OMEGA=do.call("c",OMEGA)}

Then, we can use standard bivariate kernel smoothing functions, but with the weights we just calculated, using a simple technique that can be related to the one suggested in Ripley (1977),

fhat=kde(U,H,w=1/OMEGA,xmin=c(min(polygone[,1]),
min(polygone[,2])),xmax=c(max(polygone[,1]),
max(polygone[,2])))
fhat$estimate=fhat$estimate*sum(1/OMEGA)/n
vx=unlist(fhat$eval.points[1])
vy=unlist(fhat$eval.points[2])
VX = cbind(rep(vx,each=length(vy)))
VY = cbind(rep(vy,length(vx)))
VXY=cbind(VX,VY)
Ind=matrix(point.in.polygon(VX,VY, polygone[,1],
polygone[,2]),length(vy),length(vx))
f0=fhat
f0$estimate[t(Ind)==0]=NA
return(list(
X=fhat$eval.points[[1]],
Y=fhat$eval.points[[2]],
Z=fhat$estimate,
ZNA=f0$estimate,
H=fhat$H,
W=fhat$W))}
lissage_without_c = function(U,polygone,optimal=TRUE,h=.1)
{n=nrow(U)
IND=which(is.na(U[,1])==FALSE)
U=U[IND,]
if(optimal==TRUE) {H=Hpi(U,binned=FALSE);
H=matrix(c(sqrt(H[1,1]*H[2,2]),0,0,sqrt(H[1,1]*H[2,2])),2,2)}
if(optimal==FALSE){H= matrix(c(h,0,0,h),2,2)}
fhat=kde(U,H,xmin=c(min(polygone[,1]),
min(polygone[,2])),xmax=c(max(polygone[,1]),
max(polygone[,2])))
vx=unlist(fhat$eval.points[1])
vy=unlist(fhat$eval.points[2])
VX = cbind(rep(vx,each=length(vy)))
VY = cbind(rep(vy,length(vx)))
VXY=cbind(VX,VY)
Ind=matrix(point.in.polygon(VX,VY, polygone[,1],
polygone[,2]),length(vy),length(vx))
f0=fhat
f0$estimate[t(Ind)==0]=NA
return(list(
X=fhat$eval.points[[1]],
Y=fhat$eval.points[[2]],
Z=fhat$estimate,
ZNA=f0$estimate,
H=fhat$H,
W=fhat$W))}

So, now we can play with those functions,

base_cara_FINISTERE=base_cara[which(point.in.polygon(
base_cara$long,base_cara$lat,monFINISTERE[,1],
monFINISTERE[,2])==1),]
coord=cbind(as.numeric(base_cara_FINISTERE$long),
as.numeric(base_cara_FINISTERE$lat))
nrow(coord)
map(francemap)
lissage_FIN_withoutc=lissage_without_c(coord,
monFINISTERE,optimal=TRUE)
lissage_FIN=lissage(coord,monFINISTERE,
optimal=TRUE)
lesBreaks_sans_pop=range(c(
range(lissage_FIN_withoutc$Z),
range(lissage_FIN$Z)))
lesBreaks_sans_pop=seq(min(lesBreaks_sans_pop)*.95,
max(lesBreaks_sans_pop)*1.05,length=21)

plot_article=function(lissage,breaks,
polygone,coord){
par(mar=c(3,1,3,1))
image.plot(lissage$X,lissage$Y,(lissage$ZNA),
xlim=range(polygone[,1]),ylim=range(polygone[,2]),
breaks=breaks, col=rev(heat.colors(20)),xlab="",
ylab="",xaxt="n",yaxt="n",bty="n",zlim=range(breaks),
horizontal=TRUE)
contour(lissage$X,lissage$Y,lissage$ZNA,add=TRUE,
col="grey")
points(coord[,1],coord[,2],pch=19,cex=.1,
col="dodgerblue")
polygon(polygone,lwd=2)}

plot_article(lissage_FIN_withoutc,breaks=
lesBreaks_sans_pop,polygone=monFINISTERE,
coord=coord)

plot_article(lissage_FIN,breaks=
lesBreaks_sans_pop,polygone=monFINISTERE,
coord=coord)

If we look at the graphs, we have the following densities of car accidents, with a standard kernel on the left, and our proposal on the right (with a local weight adjustment when the estimation is done next to the border of the region of interest),

Similarly, in Morbihan,

With those modified kernels, hot spots appear much more clearly. For more details, the paper is online on hal.