Category Archives: Course

More neurons in the hidden layer than predictive features in neural nets

This week, we talked about neural networks for the first time, and I mentioned that, in many illustrations of neural networks, there is a layer with fewer neurons than predictive variables,

but sometimes, it can make sense to have more neurons in the layer than predictive variables.

To illustrate, consider a simple example with a single variable x, and a binary outcome y\in\{0,1\}

set.seed(12345)
n = 100
x = c(runif(n),1+runif(n),2+runif(n))
y = rep(c(0,1,0),each=n)

We should ensure that observations are in the [0,1] interval,

minmax = function(z) (z-min(z))/(max(z)-min(z))
xm = minmax(x)
df = data.frame(x=xm,y=y)

as we can visualize below

plot(df$x,rep(0,3*n),col=1+df$y)

Here, the blue and the red dots (when y is either 0 or 1) are not linearly separable. The standard activation function in neural nets is the sigmoid

sigmoid = function(x) 1 / (1 + exp(-x))

Let us fit a neural network

library(nnet)
set.seed(1234)
model_nnet = nnet(y~x,size=2,data=df)

We can then get the weights, and we can visualize the two neurons

library(NeuralNetTools)
w = neuralweights(model_nnet)
x1 = cbind(1,df$x)%*%w$wts$"hidden 1 1"
x2 = cbind(1,df$x)%*%w$wts$"hidden 1 2"
b = w$wts$`out 1`
plot(sigmoid(x1),sigmoid(x2),col=1+df$y)

 

Now, the blue and the red dots (when y is either 0 or 1) are actually linearly separable.

abline(a=-b[1]/b[3],b=-b[2]/b[3])

If we do not specify the seed of the random generator, we can get a different outcome since, obviously, this model is not identifiable

or

If we now have

set.seed(12345)
n=100
x=c(runif(n),1+runif(n),2+runif(n),3+runif(n))
y=rep(c(0,1,0,1),each=n)
xm = minmax(x)
df = data.frame(x=xm,y=y)
plot(df$x,rep(0,4*n),col=1+df$y)

then we need more neurons (one more, at least)

set.seed(321)
model_nnet = nnet(y~x,size=3,data=df)
w = neuralweights(model_nnet)
x1 = cbind(1,df$x)%*%w$wts$"hidden 1 1"
x2 = cbind(1,df$x)%*%w$wts$"hidden 1 2"
x3 = cbind(1,df$x)%*%w$wts$"hidden 1 3"
b = w$wts$`out 1`
library(scatterplot3d)
s3d = scatterplot3d(x=sigmoid(x1),
y=sigmoid(x2), z=sigmoid(x3),color=1+df$y)

And once more, we have been able to separate (linearly) the blue and the red points (just imagine the plane, I did not manage to add it on the 3d scatterplot)
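
As a side note, here is a minimal sketch of how that separating plane could be added to the scatterplot3d output (assuming, as above, that b contains the output intercept followed by the three hidden-unit weights),

# separating plane b[1] + b[2]*z1 + b[3]*z2 + b[4]*z3 = 0, solved for z3
s3d$plane3d(Intercept = -b[1]/b[4], x.coef = -b[2]/b[4], y.coef = -b[3]/b[4])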

Finally, consider

set.seed(123)
n=500
x1=runif(n)*3-1.5
x2=runif(n)*3-1.5
y = (x1^2+x2^2)<=1
x1m = minmax(x1)
x2m = minmax(x2)
df = data.frame(x1=x1m,x2=x2m,y=y)
plot(df$x1,df$x2,col=1+df$y)

and again, with three neurons (for two explanatory variables) we can, linearly, separate the blue and the red points

set.seed(1234)
model_nnet = nnet(y~x1+x2,size=3,data=df)
w = neuralweights(model_nnet)
x1 = cbind(1,df$x1,df$x2)%*%w$wts$"hidden 1 1"
x2 = cbind(1,df$x1,df$x2)%*%w$wts$"hidden 1 2"
x3 = cbind(1,df$x1,df$x2)%*%w$wts$"hidden 1 3"
b = w$wts$`out 1`
library(scatterplot3d)
s3d = scatterplot3d(x=sigmoid(x1), y=sigmoid(x2), z=sigmoid(x3),
color=1+df$y)

Here, neural networks play the role of the kernel trick, as coined in Koutroumbas, K. & Theodoridis, S. (2008). Pattern Recognition. Academic Press

Fairness and discrimination, PhD Course, #10 Mitigation, Post-processing

For the last part, in our graduate course, we will discuss further mitigation and, after pre-processing and in-processing techniques, we will present post-processing ones. It simply means that we created a model that could be seen as discriminatory. But besides that, it was a “good” model, so we still want to use the predictions we obtained. Quite heuristically, in the context of a binary sensitive attribute, we could agree that we get two sets of predictions, for the two sensitive groups, and the fair model should probably be “in between”. And that is a natural idea when dealing with convex objects, and it is related to averages, centroids or barycenters,

As mentioned above, there are several ways to define such a quantity,

but the most interesting one will be the one based on optimization.

Interestingly, this idea can be extended to objects more complex than points in some metric space, and more generally to distributions. And since we have seen several distances on the set of distributions, we can consider for instance the Wasserstein (2) barycenter, as in Agueh and Carlier (2011),

An interesting point is that, in the univariate setting, there is a simple connection with optimal transport, where averages of push-forward mappings are considered,

(even if it remains computationally difficult to get)

In our context, if we have a model, consider two scores, m(\boldsymbol{x},s=A) and m(\boldsymbol{x},s=B), on the two sensitive groups, and consider quite naturally the fair barycenter.

The heuristic interpretation is simple and, interestingly, using that new model m^\star(\boldsymbol{x}), individuals will be ordered the same way within each group. In the Gaussian case, it is also possible to compute those barycenters.

To do so, we need to define \boldsymbol{\Sigma}^t, for t\in[0,1], including some square root of \boldsymbol{\Sigma}, i.e. \boldsymbol{\Sigma}^{1/2}, that we expect to be symmetric. For that, we simply need the exponential of matrices

and the logarithm

Then define the square root, for instance,
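
As a small illustration (a sketch only, using the expm package), the square root of a covariance matrix can indeed be obtained from the matrix exponential and logarithm,

library(expm)
S = matrix(c(1, .7, .7, 2), 2, 2)
S12 = expm(0.5 * logm(S))   # Sigma^{1/2}, via exp((1/2) log Sigma)
S12 %*% S12                 # recovers Sigma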

Here, we can prove that the barycenter of Gaussian vectors is Gaussian. The mean is the average of means, and a slightly more complex formula is used for the variance

based on matrix equations,

In dimension 2, we can write it more simply

Interestingly, the average of the variances is larger than the variance of the barycenter. And a natural property can be obtained when variances are diagonalizable in the same basis

Here we can illustrate iso-probabilities curves

More generally, use histograms

But it becomes hard to compute

An easier approach is to use numerical simulations and a kernel estimate to obtain a smooth density,
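
For instance, in the univariate case, a minimal sketch (assuming two samples xA and xB are given) is to average the quantile functions, and then smooth,

# W2 barycenter of two univariate samples, with weights 1/2: average the quantile functions
u  = runif(1e4)
xb = .5 * quantile(xA, probs = u, names = FALSE) + .5 * quantile(xB, probs = u, names = FALSE)
plot(density(xb))   # kernel estimate of the barycenter density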

We can use it on pictures. For instance, we have several observations of a handwritten “3”

We can compute their barycenter, and generate new ones based on it

Quite naturally, once we have barycenters, we can consider geodesics

and apply it again to Gaussian vectors

On our dataset, with scores, we have

We can then compute the “barycenter score”, that will be, for people in group A

and people in group B
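
In the univariate case, a minimal sketch of that construction (assuming vectors mA and mB of scores in the two groups, with weights pA and pB) is the following, where the barycenter score of an individual from group A, with score m, is pA\,m+pB\,F_B^{-1}(F_A(m)),

# barycenter score for individuals in group A (swap the roles of mA and mB for group B)
bary_score_A = function(m, mA, mB, pA, pB) {
  pA * m + pB * quantile(mB, probs = ecdf(mA)(m), names = FALSE)
}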

Consider now the score of three models, on the motor dataset

The score m^\star for people in group A is

while the score m^\star for people in group B is

We can now compare predictions for people in the two groups

Numerically, we obtain

So we are now able to mitigate unfair scores.

Fairness and discrimination, PhD Course, #9 Mitigation, Pre-processing and In-processing

Finally, after defining (and quantifying) “group fairness“ and “individual fairness“, we can now start to discuss the idea of mitigating a possible discrimination. Here, we will see how, based on some data that were initially collected and a (pricing) model, it is possible to remove the discrimination from our pricing model.

Biases everywhere

As mentioned previously, insurance pricing is based on the use of different datasets, at least one from “claims” and one from “underwriting”. And obviously, there might be biases in those data, conscious or not, intended or not.

Somehow, the idea of tackling the problem from the end, as proposed here, may not be the right one, and it might be better to tackle it from the beginning, through the biases in the data. The outcome of models would be less discriminatory if we could get rid of sexist or racist biases in underwriting, or even in the assessment of claims costs. Unfortunately, I cannot discuss that here since I do not have data that could be used to assess selection biases related to sensitive attributes.

On mitigation…

From a philosophical perspective, asking for mitigation might lead to some paradoxes. I mention here two statements, by two judges in the U.S., who have very opposite perspectives on the same problem,

versus

I will not talk much about those philosophical aspects (discussed a bit more in the textbook); here, we will discuss how we can achieve fairness, if required.

Interestingly, we have a nice property, on the price to pay to achieve fairness (price in terms of risk)

More precisely, we have the following result,

Interestingly, we have not only a lower bound, we can actually reach that bound (we will discuss that point next week).

Pre-processing

The first approach is related to the idea of “distorting” inputs, to get legitimate explanatory variables that are uncorrelated with sensitive ones.

While that makes sense in the context of linear models, it does not work well in the general case.

But one should not be (too) surprised: as mentioned in a previous post on independence (and correlation), there is no statistical guarantee that it is preserved under nonlinear transforms of the variables

or

It is what we observe on our datasets.

In-processing

An alternative is to use a penalized approach, where fairness is added as a constraint in the optimization procedure. For example, Zafar et al. (2017) considered the following approach, with a constraint based on the covariance between the outcome and the sensitive attribute. We can adapt it to non-linear models.
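
A minimal sketch of such an in-processing approach (a penalized, not constrained, version, with hypothetical objects X, y and a sensitive attribute s coded 0/1, and a penalty weight lambda) could be,

# negative log-likelihood of a logistic regression, penalized by the (absolute) covariance
# between the sensitive attribute s and the linear score X beta
fair_obj = function(beta, X, y, s, lambda) {
  eta = drop(X %*% beta)
  loglik = sum(y * eta - log(1 + exp(eta)))
  - loglik + lambda * abs(cov(s, eta))
}
# beta_hat = optim(rep(0, ncol(X)), fair_obj, X = X, y = y, s = s, lambda = 10)$par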

We can look at the evolution of \widehat{\boldsymbol{\beta}} as a function of c.

We can also visualize the evolution of predictions,

(including the prediction for the unaware model, blind to the sensitive attribute)

On this slide, we can see that we have a tradeoff between accuracy and fairness.

It is also possible to visualize the distance between the distributions of scores in the two groups. We can see that c\to0 actually gives strong fairness here, since the Wasserstein distance tends to 0.

An alternative that we can find in the literature (that I include in the in-processing section) is based on adversarial learning,

Formally, it is

which is related to minimax theorems

Standard references about adversarial learning and fairness are the following

Next week, we will discuss post-processing approaches.

Fairness and discrimination, PhD Course, #8 Individual fairness

After our post on “group fairness“, it’s time to discuss so-called “individual fairness“.

Similarity

The first idea is discussed in Dwork et al. (2012)

our approach is centered around the notion of a task-specific similarity metric describing the extent to which pairs of individuals should be regarded as similar for the classification task at hand. The similarity metric expresses ground truth. When ground truth is unavailable, the metric may reflect the “best” available approximation as agreed upon by society. Following established tradition – Rawls (1971) – the metric is assumed to be public and open to discussion and continual refinement. Indeed, we envision that, typically, the distance metric would be externally imposed, for example, by a regulatory body or externally proposed by a civil rights organization

or

Counterfactual fairness

The second one is related to causal inference. Ensuring fairness using causal methods will produce “counterfactual fairness” (to use the term introduced in Kusner et al. (2017)), based on the idea that a decision is fair towards an individual if the outcome is the same in reality as it would be in a ‘counterfactual’ world, in which the individual belongs to the other group (with respect to the sensitive attribute).

Quite naturally, we should compare potential outcomes, either globally (average treatment effect) or through a local version, conditional on the characteristics \boldsymbol{x} of an individual.

Based on causal graphs (discussed previously) we can define several notions of individual fairness.

Hence, it is possible to use Plečko et al. (2021), based on transport, and quantile regressions,

To illustrate, we can consider some causal graph on our toy dataset

and then, on some specific individuals in the dataset

Here, we can also get a counterfactual version of all individuals with one-to-one matching, and optimal transport

i.e.

and we can get a counterfactual version, and possibly, a different prediction, using the fairadapt R package

We can also consider the German credit dataset

or the causal graph used in Watson et al. (2021),

Then, those techniques can be used to compare the predictions for 6 fictitious individuals,

Fairness and discrimination, PhD Course, #7 Sensitive attributes and proxies

In our previous post, we discussed “group fairness“. I might have gone a bit fast, and I decided to add some material about sensitive attributes, and proxies.

Sensitive attributes ?

Almost everywhere, we can find a list of variables that are considered, by law, as sensitive, since using them will lead to discrimination. As mentioned earlier, sensitive variables might change over time, and across regions…

Another issue with black boxes is that it might be hard to assess whether they rely on sensitive attributes. In order to classify or detect objects in pictures, algorithms might extract information that could be considered sensitive. First, recall the popular wolf-husky classifier, that detects snow in the background (since wolves were pictured with snow in the training sample)

This can also be the case for health issues, where classifiers can be influenced by the color of the skin (or possibly some unexpected information)

Racism

The first sensitive attribute is probably race, which has been discussed in insurance for decades.

One should keep in mind that race is a social construct and, most of the time, it is based on self-identification

This leads to popular maps in the U.S.

Racism is usually related to “colourism” (discrimination based on skin tone)

Is it relevant in the context of insurance, and risk ?

It has been observed that African Americans, in the U.S., were usually charged higher insurance premiums.

Keep in mind that discrimination has nothing to do with intention, as mentioned previously. Insurance pricing can be racist without any intention to be so. An important issue in quantifying that problem is actually to observe that variable.

Sexism

Sexism is another popular example of discrimination, related to sex, or gender.

Actuaries have been using life tables that are gender related for more than 300 years. And indeed, it seems that women live longer than men.

Ageism

Age is another possible sensitive attribute, but it is more complicated. First, it is not a “club” and second, it is (somehow) clearly related to risk.

In datasets, there can also be selection bias related to age. For instance, during the COVID pandemic, triage was based on the age of patients. Treatments and tests can be related to the age of patients. So this bias will probably have an impact on observed risks.

Genetics

Another important sensitive variable is related to “genetic information”.

Such information is usually classified as sensitive everywhere.

To conclude, I wanted to mention that several important variables considered as sensitive do not have much to do with genetics, but more with a social construction.

Finally, let us discuss proxies that can be related to those sensitive variables.

Names and language

The first one was discussed in the introduction: names contain information about race and ethnic origin,

Text and discussion can also reveal sensitive information.

Pictures

Pictures can also provide information. That was discussed 150 years ago, when researchers tried to identify criminals using solely pictures.

Some insurers have been trying, at some point, to detect diseases from facial pictures. And it is possible to reveal information from pictures, possibly the age, and the gender.

One can also use satellite pictures, or pictures from Google Street View, to infer, for instance, the wealth of the neighborhood. And possibly sensitive information, such as the presence of an access ramp for disabled people.

Credit Scoring

Credit scoring is also a variable used by insurers, that can be related to variables considered as sensitive

Clearly, a bad credit score will have a big impact not only on mortgages and loans,

but also on insurance rates ! As we explained here, it costs a lot to be poor.

Networks

Finally, insurers can use information related to friends, or family, to assess the risk. And network data capture a lot of sensitive information.

We will talk a little bit about networks, to explain why using your friends' risks to assess your own risk might not be a great idea…

It is an extension of the friendship paradox.

Proxies

Finally, we will conclude by showing that removing a sensitive attribute from a training dataset will not mitigate discrimination.

Fairness and discrimination, PhD Course, #6 Group fairness

As mentioned previously, the first idea is to be blind to sensitive attributes.

Before investigating popular definitions of fairness, it could be nice to get back to some historical approaches. Inspired by Cleary (1968), Darlington (1971) suggested some formal definitions of what was called “cultural fairness”, based on properties on correlation. As we will see, most of those definitions have been translated into properties based on independence.

Statisticians have also been investigating discrimination, as have economists, with The Economics of Discrimination, by Gary Becker

A popular technique, among economists, to quantify discrimination is the one introduced by Evelyn Kitagawa in her 1955 paper, called Blinder–Oaxaca decomposition

(usually used to quantify a gender gap in salaries, or gaps based on ethnic variables)

In the classification of group fairness criteria, the first one is related to the independence between the prediction \widehat{y} and the sensitive attribute s

This is “demographic parity“, usually defined on the classifier \widehat{y}\in\{0,1\}

Quite naturally, we can compute those ratios, for various thresholds t and see how far away we are from 1

We can plot those ratios

with, here, the probability of predicting a 1, or below, of predicting a 0
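
A minimal sketch of that computation (assuming a vector of scores m_hat and a binary sensitive attribute s, with levels "A" and "B") is,

# ratio P[m_hat > t | s = A] / P[m_hat > t | s = B], for a grid of thresholds t
dp_ratio = function(t) mean(m_hat[s == "A"] > t) / mean(m_hat[s == "B"] > t)
t_grid = seq(.05, .95, by = .01)
plot(t_grid, sapply(t_grid, dp_ratio), type = "l", xlab = "threshold t", ylab = "ratio")
abline(h = 1, lty = 2)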

This classification on the class – \widehat{y}\in\{0,1\} – corresponds to a weak version of demographic parity, where we assume, in a regression context, that averages are equal in the two groups. A stronger concept can be considered, asking for equality of distributions (of the score, if we consider a classification problem with \widehat{y}\in[0,1])

The latter corresponds to the total distance between the distributions of scores in the two groups

And since we compare distributions, quite naturally, we can use Wasserstein distance (that offers a nice visualization too)

For instance, in our motor insurance dataset, we have the following distributions for the score, the probability to claim a loss

and we can compute the optimal transport mapping
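
In the univariate case, a minimal sketch of that mapping (assuming vectors mA and mB of scores in the two groups) is \mathcal{T}(x)=F_B^{-1}(F_A(x)),

# optimal transport map from the distribution of scores in group A to the one in group B
T_AB = function(x, mA, mB) quantile(mB, probs = ecdf(mA)(x), names = FALSE)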

Based on those distances, or divergences, we can define some unfairness measures

A weaker form can be considered

A second criterion is related to the independence between the prediction \widehat{y} and the sensitive attribute s, conditional on the outcome y, and will be coined “separation”

With that conditioning, we can see that this concept will be related to true positive rates,

or false positive rates, when considering classifiers

Therefore, the standard tool here will be ROC curves

The popular fairness criterion associated with that separation is called “equalized odds”

As previously, we can visualize those conditional probabilities, as a function of threshold t
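
A minimal sketch of that visualization (again assuming m_hat, y and s are given) is,

# true positive rate within each group, as a function of the threshold t
tpr = function(m_hat, y, t) mean(m_hat[y == 1] > t)
t_grid = seq(0, 1, by = .01)
plot(t_grid, sapply(t_grid, function(t) tpr(m_hat[s == "A"], y[s == "A"], t)),
     type = "l", xlab = "threshold t", ylab = "true positive rate")
lines(t_grid, sapply(t_grid, function(t) tpr(m_hat[s == "B"], y[s == "B"], t)), lty = 2)
# the same can be done with the false positive rate, mean(m_hat[y == 0] > t)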

Other concepts have been introduced in the literature

Asking for equality of ROC curves could be seen as strong, and a weaker version would be based on AUC.

As discussed previously, recall that if we have a score m, obtained e.g. from a logistic regression, then m^\alpha could be another score, and it will lead to the same ROC curve. But if \alpha>1, the prediction in the second group is always smaller than in the first group (with a first order stochastic dominance); nevertheless, there is fairness with respect to the ROC curve.
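
A minimal sketch of that phenomenon, with simulated data and the pROC package (my choice here, not necessarily the package used in the course),

library(pROC)
set.seed(1)
y = rbinom(500, 1, .3)
m = plogis(rnorm(500) + 2 * y)   # some score in (0,1)
auc(roc(y, m, quiet = TRUE))
auc(roc(y, m^3, quiet = TRUE))   # same ROC curve and AUC, although m^3 < m on (0,1)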

The third criterion is related to the independence between the outcome y and the sensitive attribute s, conditional on the prediction \widehat{y}

This is related to the concept of calibration, discussed previously

Another concept, introduced in the literature, is that we should not be able to predict the sensitive attribute

Statistical versions of the quantities have been discussed (with a tolerance band)

We will illustrate using real data, for instance different scores obtained on the German Credit dataset,

with on top, empirical cumulative distribution function of scores, conditional on y (discrimination based on the risk) on the left, and conditional on s (discrimination based on gender) on the right. A summary of all fairness metrics is in the table below, for a threshold t at 40%

Here is the distribution of scores on the motor dataset

To conclude, we should mention that there are impossibility theorems, so it will be impossible to satisfy all those concepts at the same time…

Finally, here are some references on those various concepts of group fairness… Next step will be individual fairness

Fairness and discrimination, PhD Course, #5 Models and Data

For the fifth course, we will discuss machine learning and standard techniques used to get predictive models, and to assess accuracy of those models.

GLM (possibly constrained)

Classically, we use a penalized version of least squares (but this can be adapted to GLMs, when penalizing the negative log-likelihood).  Because of Karush–Kuhn–Tucker conditions, having a constraint on the parameter is equivalent to the following penalized problem, when the constraint is on the \ell_2 norm of \boldsymbol{\beta},

We can also consider the \ell_1 norm of \boldsymbol{\beta},

Those two approaches can be seen as a trade-off between accuracy (here the empirical risk on the left) and complexity of the model (on the right). And we can also consider a mixture of the two norms,

As we will see, it will also be possible to consider a penalty related to fairness and discrimination measures (in-processing).

Classifier and ROC Curves

We will also recall metrics used in the context of classification, such as the ROC curve

Each point of the curve can be related to two areas related to the distributions of the scores (in the two groups), for the same threshold – namely the false positive rate and true positive rate

Based on the ROC curve, we can define the AUC, the area under the ROC curve,

But for classifiers, the important challenge is to have calibrated scores, meaning that we want the score to be interpreted as the true underlying probability.

Calibration

Well-calibration is defined as follows

or (with different notations)

It is a well-known property in several applications.

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-07.png

The plot on the right is the calibration plot,

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-10.png

We can easily get that plot

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-09.png
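
A minimal sketch of such a calibration plot (assuming a vector of predicted scores m_hat and observed binary outcomes y) is,

# average observed outcome vs. average predicted score, per decile of the score
brk = unique(quantile(m_hat, probs = seq(0, 1, by = .1)))
bin = cut(m_hat, breaks = brk, include.lowest = TRUE)
plot(tapply(m_hat, bin, mean), tapply(y, bin, mean),
     xlim = 0:1, ylim = 0:1, xlab = "predicted", ylab = "observed")
abline(a = 0, b = 1, lty = 2)   # diagonal = perfect calibration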

This concept is related to the question “do probabilities returned by some model represent real probabilities?” For instance, below, we have pictures generated as some sort of geodesic between two pictures, with a woman on the top left, and a man on the bottom right, published in the New York Times. And below, “probabilities” given by https://www.picpurify.com/demo-face-gender-age.html.

We could agree that it is rather strange that the probabilities (of being a man) do not increase continuously; on top, with extremely high confidence, the model predicts that the picture is that of a woman, and below, also with extremely high confidence, that the person is a man…

Data, observations vs. experiments

Then, after concepts and notation related to models, we will talk about data. More specifically, the distinction between observations and experiments.

Another popular classification is the one discussed by Judea Pearl.

So we will talk about association, correlation, causal inference, and counterfactuals.

“Correlated variables” or proxies

One important issue, is that with massive data, one can easily get a (good) proxy of almost any sensitive variable.

The concept is related to comonotonicity, or perfect correlation.

But this is clearly too strong, so we will discuss dependence measures, too.

Independence properties

Recall that independence is defined as follows

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-11.png

and we can consider a weaker form, based on null-covariance

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-12.png

or null-correlation

(sidenote, this correlation measure is bounded, and those bounds are related to Hardy-Littlewood inequality and optimal transport)

An interesting measure is the maximal correlation

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-13.png

or we can consider a weaker version, without considering all possible transformations, but only a subset

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-14.png
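
A minimal numerical sketch of (empirical) maximal correlation, using the ACE algorithm from the acepack package,

library(acepack)
set.seed(1)
x = rnorm(1000)
y = x^2 + rnorm(1000, sd = .1)   # almost uncorrelated with x, yet strongly dependent
cor(x, y)
a = ace(x, y)
cor(a$tx, a$ty)                  # correlation after optimal transformations, close to 1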

Another important concept is the one of conditional independence

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-16.png

(the latter will be used in the context of causal graphs).

Causality

Before talking about causality, recall what non-independence means…

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-17.png

We can then construct causal graphs, or “directed acyclic graphs”

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-20.png

where nodes are the variables used in the model, and the outcome (usually at the end of the causal graph). Then we define paths

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-18.png

and the concept of d-separation

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-19.png

This concept is related to the statistical property of conditional independence

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-21.png

More precisely, we have the following Markov property on causal graphs

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-22.png

For example, for such a graphical model,

the joint distribution is \mathbb{P}[x_1,x_2,x_3,x_4]=\mathbb{P}[x_1]\times \mathbb{P}[x_2|x_1]\times \mathbb{P}[x_3|x_2]\times \mathbb{P}[x_4|x_3], and for the graphical model below

we have \mathbb{P}[x_1,x_2,x_3,x_4]=\mathbb{P}[x_1]\times \mathbb{P}[x_2]\times \mathbb{P}[x_3|x_1,x_2]\times \mathbb{P}[x_4|x_3]. Those graphs can be related to structural models (with idiosyncratic noise denoted U), since

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-23.png

Potential outcome

Another important concept is the concept of counterfactuals, and potential outcomes. In an ideal world, we would have observed the outcome in both cases, with and without the treatment

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-24.png

but in real life, it’s only one of them,

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-25.png

And the goal will be, somehow, to estimate what the non-observed outcome would have been. Then, classical quantities we wish to estimate are the average treatment effect, and its conditional version, based on some covariates.

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-26.png

This concept will actually be related to counterfactual fairness, when the “treatment” is the sensitive attribute.

Twin network representation of the counterfactual

Finally, we will consider a so-called “twin network representation”. Consider a DAG, associated with some simple structural model

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-27.png

Based on a structural model, we can get values of idiosyncratic noise component

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-28.png

Then, we use those values in the twin representation, where the treatment is not 0, but 1. Counterfactuals are created by using the same noises

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-29.png

The difference between the two outcomes is the treatment effect, or the disparate treatment

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-30.png

or more generally, we write

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-31.png

This is an idea used in Plecko & Meinshausen, 2019, in the context of fairness, but we will discuss this more, later on…

Fairness and discrimination, PhD Course, #4 Wasserstein Distances and Optimal Transport

For the fourth course, we will discuss Wasserstein distance and Optimal Transport. Last week, we mentioned distances, dissimilarity and divergences. But before talking about Wasserstein, we should mention Cramer distance.

Cramer and Wasserstein distances

The definition of Cramér distance, for k\geq1, is

while Wasserstein will be (also for k\geq1)

If we consider cumulative distribution functions, for the first one (Cramer), we consider some sort of “vertical” distance, while for the second one (Wasserstein), we consider some “horizontal” one,

Obviously, when k=1, the two distances are identical

c1 = function(x) abs(pnorm(x,0,1)-pnorm(x,1,2))
w1 = function(x) abs(qnorm(x,0,1)-qnorm(x,1,2))
integrate(c1,-Inf,Inf)$value
[1] 1.166631
integrate(w1,0,1)$value
[1] 1.166636

But when k>1, it is no longer the case.

c2 = function(x) (pnorm(x,0,1)-pnorm(x,1,2))^2
w2 = function(u) (qnorm(u,0,1)-qnorm(u,1,2))^2
sqrt(integrate(c2,-Inf,Inf)$value)
[1] 0.5167714
sqrt(integrate(w2,0,1)$value)
[1] 1.414214

For instance, we can illustrate with a simple multinomial distribution, and its distance to some binomial one, with parametric inference based on distance minimization, \theta^\star=\text{argmin}\{d(p,q_{\theta})\} (where the multinomial distribution has parameters \boldsymbol{p}=(.5,.1,.4), taking values respectively in \{0,1,10\}, while the binomial distribution has probabilities \boldsymbol{q}_{\theta}=(1-\theta,\theta), taking values in \{0,10\})
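
A minimal numerical sketch (for k=1, where the Cramér and Wasserstein distances coincide, and where the integral of |F_p-F_{q_\theta}| can be written explicitly) is,

# d_1(p, q_theta): |F_p - F_q| is |.5-(1-theta)| on [0,1) and |.6-(1-theta)| on [1,10)
d1 = function(theta) abs(.5 - (1 - theta)) * 1 + abs(.6 - (1 - theta)) * 9
optimize(d1, interval = c(0, 1))$minimum   # the minimum is attained near theta = .4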

One can prove that

while

When k=1, observe that the distance is easy to compute when distributions are ordered

When k=2, the two distances are not equal

In the Gaussian (and the Bernoulli) case, we can get an expression for the distance (and much more as we will see later on)

There are several representations for W_2

And finally, we can also discuss W_{\infty}

Wasserstein distances, and optimal transport

Wasserstein distance can also be written using some sort of expected value, when considering random variables instead of distributions, and some best-case scenario, or cheapest transportation cost,

which leads to the so-called Kantorovich problem

An alternative way to look at this problem is to consider a transport map, and a push-forward measure

This is simply

Of course, such mappings exist

We can then consider the Monge problem

And interestingly, those two problems are (somehow) equivalent

Discrete case

If \boldsymbol{a}_{{A}}\in\mathbb{R}_+^{\color{red}{n_{{A}}}} and \boldsymbol{a}_{{B}}\in\mathbb{R}_+^{\color{blue}{n_{{B}}}}, define U(\boldsymbol{a}_{{A}},\boldsymbol{a}_{{B}})=\big\lbrace M\in\mathbb{R}_+^{\color{red}{n_{{A}}}\times\color{blue}{n_{{B}}}}:M\boldsymbol{1}_{\color{blue}{n_{{B}}}}=\boldsymbol{a}_{A}\text{ and }{M}^\top\boldsymbol{1}_{\color{red}{n_{{A}}}}=\boldsymbol{a}_{B}\big\rbrace. For convenience, let U_{\color{red}{n_{{A}}},\color{blue}{n_{{B}}}} denote \displaystyle{U\left(\boldsymbol{1}_{n_{{A}}},\frac{\color{red}{n_{{A}}}}{\color{blue}{n_{{B}}}}\boldsymbol{1}_{n_{{B}}}\right)} (so that U_{\color{red}{n},\color{blue}{n}} is the set of permutation matrices associated with \mathcal{S}_n). Let C_{i,j}=d(x_i,y_{j})^k, so that W_k^k(\boldsymbol{x},\boldsymbol{y}) = \underset{P\in U_{\color{red}{n_{{A}}},\color{blue}{n_{{B}}}}}{\text{min}} \Big\lbrace \langle P,C\rangle \Big\rbrace, where \langle P,C\rangle = \sum_{i=1}^{\color{red}{n_{{A}}}} \sum_{j=1}^{\color{blue}{n_{{B}}}} P_{i,j}C_{i,j}, and then consider P^* \in \underset{P\in U_{\color{red}{n_A},\color{blue}{n_B}}}{\text{argmin}} \Big\lbrace \langle P,C\rangle \Big\rbrace. For the slides, if we have the same sample sizes in the two groups, we have

we can illustrate below (with costs, or distances)

And with different group sizes,

i.e., if we consider real datasets

And as usual, we can consider some penalized version. Define \mathcal{E}(P) = -\sum_{i=1}^{\color{red}{n_{{A}}}} \sum_{j=1}^{\color{blue}{n_{{B}}}} P_{i,j}\log P_{i,j} or \mathcal{E}'(P) = -\sum_{i=1}^{\color{red}{n_{{A}}}} \sum_{j=1}^{\color{blue}{n_{{B}}}} P_{i,j}\big[\log P_{i,j}-1\big], and then define P^*_\gamma = \underset{P\in U_{\color{red}{n_{{A}}},\color{blue}{n_{{B}}}}}{\text{argmin}} \Big\lbrace \langle P,C\rangle -\gamma \mathcal{E}(P) \Big\rbrace. The problem is strictly convex.

Sinkhorn relaxation

This idea is related to the following theorem

Consider a simple optimal transportation problem between 6 points and 6 other points,

set.seed(123)
x = (1:6)/7
y = runif(9)
x
[1] 0.14 0.29 0.43 0.57 0.71 0.86
y[1:6]
[1] 0.29 0.79 0.41 0.88 0.94 0.05
library(T4transport)
Wxy = wasserstein(x,y[1:6])
Wxy$plan

that we can visualize below (the first observation of \boldsymbol{x} is matched with the last one of \boldsymbol{y}, the second observation of \boldsymbol{x} is matched with the first one of \boldsymbol{y}, etc)

We observe that we simply match according to ranks.

But we can also use a penalized version

Sxy = sinkhorn(x, y[1:6], p = 2, lambda = 0.001)
Sxy$plan

here with a very small penalty

or a larger one

Sxy = sinkhorn(x, y[1:6], p = 2, lambda = 0.05)
Sxy$plan

In the discrete case, optimal transport can be related to Hardy-Littlewood-Polya inequality, that is related to the idea of matching based on ranks (corresponding to a monotone mapping function)

We have then

In the bivariate discrete case, we have

Optimal mapping

We have mentioned that, in the univariate setting

and clearly, \mathcal{T}^\star is increasing. In the Gaussian case, for example, x_{{B}}=\mathcal{T}^\star(x_{{A}})= \mu_{{B}}+\sigma_{{B}}\sigma_{{A}}^{-1} (x_A-\mu_{{A}}). In the multivariate case, we need a more general concept of increasingness to define an “increasing” mapping \mathcal{T}^\star:\mathbb{R}^k\to\mathbb{R}^k.

In the Gaussian case, for example, we have a linear mapping, \boldsymbol{x}_{{B}} = \mathcal{T}^\star(\boldsymbol{x}_{{A}})=\boldsymbol{\mu}_{{B}} + \boldsymbol{A}(\boldsymbol{x}_{{A}}-\boldsymbol{\mu}_{{A}}), where \boldsymbol{A} is a symmetric positive matrix that satisfies \boldsymbol{A}\boldsymbol{\Sigma}_{{A}}\boldsymbol{A}=\boldsymbol{\Sigma}_{{B}}, which has a unique solution given by \boldsymbol{A}=\boldsymbol{\Sigma}_{{A}}^{-1/2}\big(\boldsymbol{\Sigma}_{{A}}^{1/2}\boldsymbol{\Sigma}_{{B}}\boldsymbol{\Sigma}_{{A}}^{1/2}\big)^{1/2}\boldsymbol{\Sigma}_{{A}}^{-1/2}, where \boldsymbol{M}^{1/2} is the square root of the square (symmetric) positive matrix \boldsymbol{M} based on the Schur decomposition (\boldsymbol{M}^{1/2} is a positive symmetric matrix). In R, for example, use the expm package,

M = expm::sqrtm(matrix(c(1,1.2,1.2,2),2,2))
M
[,1] [,2]
[1,] 0.8244771 0.5658953
[2,] 0.5658953 1.2960565
M %*% M
[,1] [,2]
[1,] 1.0 1.2
[2,] 1.2 2.0
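
and a minimal sketch of the full mapping, for two (arbitrary, made-up) covariance matrices,

SA = matrix(c(1, .5, .5, 1), 2, 2)
SB = matrix(c(2, .8, .8, 1.5), 2, 2)
SA12  = expm::sqrtm(SA)            # Sigma_A^{1/2}
SA12i = solve(SA12)                # Sigma_A^{-1/2}
A = SA12i %*% expm::sqrtm(SA12 %*% SB %*% SA12) %*% SA12i
A %*% SA %*% A                     # recovers Sigma_B (up to numerical error)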

Optimal mapping, on real data

To illustrate, it is possible to consider the optimal matching, between the height of n men and n women,

Another example (discussed in Optimal Transport for Counterfactual Estimation: A Method for Causal Inference – with a nice R notebook created by Ewen), consider Black and non-Black mothers in the U.S.

or the joint mapping, in dimension 2

We will spend more time on those functions (and the related concept) in a few weeks, when discussing barycenters and geodesics… More details in the slides (online) and in the forthcoming textbook,

Fairness and discrimination, PhD Course, #3 Machine Learning, losses and distances

For the third course, we will get back a little bit on machine learning (slides are still online on the github repository). The starting point will be loss functions and risk.

Loss functions and risk

A general definition for a loss is that it is positive, and null when we consider \ell(y,y). As we will discuss further, it is neither a distance, nor a dissimilarity measure

Then, define the empirical risk (and the associated empirical risk minimization principle, as coined in Vapnik (1991))

Given a loss \ell and some probabilistic space, define the optimal decision, also called Bayes decision rule

And instead of the risk of a model, define the excess of risk.

A classical loss for a classifier is \ell_{0/1},

In that case, Bayes decision rule is m^\star(\boldsymbol{x}) = \boldsymbol{1}(\mu(\boldsymbol{x})>1/2) =\begin{cases}1 \text{ if }\mu(\boldsymbol{x})>1/2\\0 \text{ if }\mu(\boldsymbol{x})\leq1/2\end{cases}, where (of course) one needs to know \mu; otherwise, we can consider some plug-in estimator based on \widehat\mu. For continuous variables y, consider the quadratic loss \ell_2,

In that case, Bayes decision rule (the optimal model) is the conditional expectation

Observe that we can also define the quantile loss (or the expectile loss)

Observe that this loss is not symmetric…
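
A minimal sketch of that (pinball) loss, for a probability level tau,

# quantile loss: weights positive and negative errors asymmetrically (symmetric only when tau = 1/2)
quantile_loss = function(y, yhat, tau = .5) {
  u = y - yhat
  mean(ifelse(u >= 0, tau * u, (tau - 1) * u))
}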

From loss functions to distances

Let us discuss a bit more the fact that losses are not distances. As mentioned, a loss is neither necessarily symmetric nor separable,

Furthermore, it has no reason to satisfy the triangle inequality. Actually, if d is a distance, it is very likely that d^2 is not (since taking powers is not a subadditive transformation)

Another related concept could be the concept of similarity, or dissimilarity.

Another one is the concept of divergence, that we will use much more. For instance, Bregman divergence is

which satisfies desirable properties.
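
A minimal sketch, in the univariate case, for a convex function phi with derivative dphi (phi(x)=x^2 gives the squared Euclidean distance, phi(x)=x\log x a generalized Kullback–Leibler divergence),

bregman = function(x, y, phi, dphi) phi(x) - phi(y) - dphi(y) * (x - y)
bregman(2, 1, phi = function(x) x^2, dphi = function(x) 2 * x)   # = (2 - 1)^2 = 1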

Interestingly, it is possible to define “projections” even if we have neither an orthogonal projection (since there is no notion of orthogonality without an inner product), nor a distance. But still

One can use a nice algorithm to estimate that quantity, if the convex set can be expressed simply

When considering “distances” between distributions, instead of y‘s, among other interesting properties in statistics, we can mention the one of unbiased gradients,

and Müller (1997) defined integral probability metrics

Standard “distances” between distributions

The first one will be Hellinger distance

that can lead to simple expressions for standard parametric distributions, such as Beta distributions,

or (multivariate) Gaussian ones
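
A minimal numerical sketch, for two Gaussian densities (using the squared Hellinger distance H^2(f,g)=1-\int\sqrt{f\,g}),

h2 = function(f, g) 1 - integrate(function(x) sqrt(f(x) * g(x)), -Inf, Inf)$value
h2(function(x) dnorm(x, 0, 1), function(x) dnorm(x, 1, 2))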

We can also mention Pearson divergence

More interesting (and popular in probability theory), total variation

There are several ways to express that distance.

If, instead of general sets \mathcal{A}, we consider half lines (-\infty,t], we obtain the Kolmogorov distance (or Kolmogorov-Smirnov)

Another important one in statistics is Kullback–Leibler divergence

For instance, with Gaussian vectors
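
A minimal sketch of the closed-form expression, for two Gaussian vectors \mathcal{N}(\boldsymbol{\mu}_0,\boldsymbol{\Sigma}_0) and \mathcal{N}(\boldsymbol{\mu}_1,\boldsymbol{\Sigma}_1),

# KL(N(mu0,S0) || N(mu1,S1)) = (tr(S1^{-1} S0) + (mu1-mu0)' S1^{-1} (mu1-mu0) - k + log det(S1)/det(S0)) / 2
kl_gauss = function(mu0, S0, mu1, S1) {
  k = length(mu0)
  S1i = solve(S1)
  .5 * drop(sum(diag(S1i %*% S0)) + t(mu1 - mu0) %*% S1i %*% (mu1 - mu0) - k + log(det(S1) / det(S0)))
}
kl_gauss(c(0, 0), diag(2), c(1, 1), matrix(c(2, .5, .5, 1), 2, 2))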

Observe that the measure is actually a dissimilarity measure

If we want a symmetric version, we can consider Jeffreys divergence

or Jensen–Shannon divergence

Finally, we will mention f-divergence

and Rényi divergence

We will discuss a little bit more those "distances" (yes, I usually use that term, abusively) and next week, we will present the most interesting distance, that will be Wasserstein's.

Fairness and discrimination, PhD Course, #2 Insurance and risk classes

For the second course, we will get back a little bit on insurance pricing in a context of heterogeneous portfolio, and risk classification (slides are still online on the github repository). The starting point will be the pure premium.

See our online textbook, with Michel Denuit, Non Life Insurance Mathematics, for additional motivation. If we have some risk related variables \boldsymbol{x}=(x_1,\cdots,x_k), the pure premium will be the conditional expectation,

Here also, we have some law of large numbers, for the conditional expected value,

This relationship, which defines the conditional expected value using the limiting value of a conditional frequency, cannot be used to properly define \mathbb{P}[Y|\boldsymbol{X}=\boldsymbol{x}] and \mathbb{E}[Y|\boldsymbol{X}=\boldsymbol{x}]. One can consider a limit, \mathbb{P}\big(Y\in \mathcal{A}\big\vert X = x\big)=\lim_{\epsilon\to0}\frac{\mathbb{P}(\{Y\in \mathcal{A}\}\cap\{|X -x|\leq \epsilon\})}{\mathbb{P}(\{|X -x|\leq \epsilon\})} or \mathbb{P}\big(Y\in \mathcal{A}\big\vert X = x\big)=\lim_{\epsilon\to0}\mathbb{P}\big(Y\in \mathcal{A}\big\vert |X -x|\leq \epsilon\big), as in the law of the unconscious statistician, or as Proschan and Presnell (1998) wrote it

statisticians make liberal use of conditioning arguments to shorten what would otherwise be long proofs

We can now compute conditional frequency, given some risk characteristics, for some quantity of interest y, such as the age of death, in life insurance contracts.

Demographic risk and heterogeneity

First, we will see some gender-based life tables, starting with the one obtained by Nicolaas Struyck (see e.g. Alberts et al. (2014))

More recently, in France, some wealth based life tables were obtained, with various quantiles

And finally, we will see some life tables obtained 50 years ago in the US, with racial distinction

Mean and variance decomposition

About pure premiums, an important property is the law of total expectations, and a desirable property, that we will name “balance property”

We will also mention the variance, and variance decompositions, depending on whether we take heterogeneity into account, or not. With homogeneous pricing, we have

If we use the “true” underlying risk factor, \Theta, we have the standard variance decomposition, also called law of total variance

i.e.

And finally, if we do not observe \Theta, but we have a collection of covariates, \boldsymbol{X}=(X_1,\cdots,X_k),
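
A minimal simulation sketch of that variance decomposition, with a (made-up) Poisson mixture where the heterogeneity \Theta is Gamma distributed,

set.seed(1)
n = 1e6
theta = rgamma(n, shape = 2, rate = 2)   # unobserved heterogeneity, E[Theta] = 1, Var[Theta] = 1/2
y = rpois(n, lambda = theta)
var(y)                                   # total variance
mean(theta) + var(theta)                 # E[Var(Y|Theta)] + Var(E[Y|Theta]), for a Poisson mixture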

Some historical perspectives

In the textbook, Insurance: Biases, Discrimination and Fairness, I have several paragraphs about a historical perspective, starting with insurance as clubs, without segmentation. Then segmentation started, with risk classes and groups. For example, according to Issues And Needed Improvements In State Regulation Of The Insurance Business, by Harry Havens, in 1979,

The price which a person pays for automobile insurance depends on age, sex, marital status, place of residence and other factors. This risk classification system produces widely differing prices for the same coverage for different people. Questions have been raised about the fairness of this system, and especially about its reliability as a predictor of risk for a particular individual. While we have not tried to judge the propriety of these groupings, and the resulting price differences, we believe that the questions about them warrant careful consideration by the State insurance departments. In most States the authority to examine classification plans is based on the requirement that insurance rates are neither inadequate, excessive, nor unfairly discriminatory. The only criterion for approving classifications in most States is that the classifications be statistically justified — that is, that they reasonably reflect loss experience. Relative rates with respect to age, sex, and marital status are based on the analysis of national data. A youthful male driver, for example, is charged twice as much as an older driver all over the country (…) It has also been claimed that insurance companies engage in redlining – the arbitrary denial of insurance to everyone living in a particular neighborhood. Community groups and others have complained that State regulators have not been diligent in preventing redlining and other forms of improper discrimination that make insurance unavailable in certain areas. In addition to outright refusals to insure, geographic discrimination can include such practices as: selective placement of agents to reduce business in some areas, terminating agents and not renewing their book of business, pricing insurance at un-affordable levels, and instructing agents to avoid certain areas. We reviewed what the State insurance departments were doing in response to these problems. To determine if redlining exists, it is necessary to collect data on a geographic basis. Such data should include current insurance policies, new policies being written, cancellations, and non-renewals. It is also important to examine data on losses by neighborhoods within existing rating territories because marked discrepancies within territories would cast doubt on the validity of territorial boundaries. Yet, not even a fifth of the States collect anything other than loss data, and that data is gathered on a territory-wide basis.

According to The Role of Risk Classification in Property and Casualty Insurance: A Study of the Risk Assessment Process : Final Report, by Barbara Casey, Jacques Pezier and Carl Spetzler, in 1976,

On the other hand, the opinion that distinctions based on sex, or any other group variable, necessarily violate individual rights reflects ignorance of the basic rules of logical inference in that it would arbitrarily forbid the use of relevant information. It would be equally fallacious to reject a classification system based on socially acceptable variables because the results appear discriminatory. For example, a classification system may be built on use of car, mileage, merit rating, and other variables, excluding sex. However, when verifying the average rates according to sex one may discover significant differences between males and females. Refusing to allow such differences would be attempting to distort reality by choosing to be selectively blind. The use of rating territories is a case in point. Geographical divisions, however designed, are often correlated with socio-demographic factors such as income level and race because of natural aggregation or forced segregation according to these factors. Again we conclude that insurance companies should be free to delineate territories and assess territorial differences as well as they can. At the same time, insurance companies should recognize that it is in their best interest to be objective and use clearly relevant factors to define territories lest they be accused of invidious discrimination by the public. (…) One possible standard does exist for exception to the counsel that particular rating variables should not be proscribed. What we have called ‘equal treatment’ standard of fairness may precipitate a societal decision that the process of differentiating among individuals on the basis of certain variables is discriminatory and intolerable. This type of decision should be made on a specific, statutory basis. Once taken, it must be adhered to in private and public transactions alike and enforced by the insurance regulator. This is, in effect, a standard for conduct that by design transcends and preempts economic considerations. Because it is not applied without economic cost, however, insurance regulators and the industry should participate in and inform legislative deliberations that would ban the use of particular rating variables as discriminatory.

And then, more recently, we started to talk about personalization, as in Barry and Charpentier (2020). And next week, we will start talking about predictive modeling, and machine learning.

Fairness and discrimination, PhD Course, #1 Motivation

This week, we will start our MAT998P course, in Montréal, entitled “équité et discrimination des modèles prédictifs“ (fairness and discrimination of predictive models). It will mainly be based on the forthcoming textbook,

I can also mention the R package

> library(devtools)
> install_github("freakonometrics/InsurFair")

And because it is the first course, I will start this week with some motivations… First of all, let me recall a definition, from Schauer (2006)

To be an actuary is to be a specialist in generalization, and actuaries engage in a form of decision making that is sometimes called actuarial. Actuaries guide insurance companies in making decisions about large categories that have the effect of attributing to the entire category certain characteristics that are probabilistically indicated by membership in the category, but that still may not be possessed by a particular member of the category.

Motivation #1 Redlining

In 1937, the HOLC (Home Owners’ Loan Corporation) produced the following map of Philadelphia, related to “residential security”

These maps were related to concept of “redlining”. According to Merriam Webster dictionary,

to redline is (1) to withhold home-loan funds or insurance from neighborhoods considered poor economic risks; (2) to discriminate against in housing or insurance.

On the (fictitious) maps below, we have three variables plotted:

  • some red and green areas (risky-non risky)
  • some unsanitary index (on a 0-100 scale)
  • the proportion of Black inhabitants per neighborhood

In an insurance context, risky areas (with a higher premium) should be correlated with the unsanitary index (or any risk-related variable), and those variables are legitimate predictive variables. But they can also be related to less legitimate variables, that could be racial, here. The challenge here is that a lot of variables are correlated…

I could mention here that, for  Glenn (2000), insurer’s risk selection process has two sides:

  • the one presented to regulators and policyholders (numbers, statistics and objectivity),
  •  the other presented to underwriters (stories, character and subjective judgment).

The rhetoric of insurance exclusion – numbers, objectivity and statistics – forms what Brian Glenn calls

the myth of the actuary (…) a powerful rhetorical situation in which decisions appear to be based on objectively determined criteria when they are also largely based on subjective ones (…) or the subjective nature of a seemingly objective process.

Glenn (2003) claimed that there are many ways to rate accurately. Insurers can rate risks in many different ways, depending on the stories they tell about which characteristics are important and which are not.

The fact that the selection of risk factors is subjective and contingent upon narratives of risk and responsibility has in the past played a far larger role than whether or not someone with a wood stove is charged higher premiums (…) virtually every aspect of the insurance industry is predicated on stories first and then numbers

Motivation #2. “Gender directive”, 2004/113/EC

From the Treaty on European Union (26.10.2012)

Art. 2 The Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities. These values are common to the Member States in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail.

Art. 3 (…) It shall combat social exclusion and discrimination, and shall promote social justice and protection, equality between women and men, solidarity between generations and protection of the rights of the child.

from the Charter of Fundamental Rights of the European Union (18.12.2000)

Art. 21 (Non discrimination): Any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited.

Art. 23 (Equality between men and women) Equality between men and women must be ensured in all areas, including employment, work and pay. The principle of equality shall not prevent the maintenance or adoption of measures providing for specific advantages in favour of the under-represented sex.

and from the EU Directive 2004/113/EC (2004 version)

Art. 5 (Actuarial factors)

1. Member States shall ensure that in all new contracts concluded after 21 December 2007 at the latest, the use of sex as a factor in the calculation of premiums and benefits for the purposes of insurance and related financial services shall not result in differences in individuals’ premiums and benefits.

2. Notwithstanding paragraph 1, Member States may decide before 21 December 2007 to permit proportionate differences in individuals’ premiums and benefits where the use of sex is a determining factor in the assessment of risk based on relevant and accurate actuarial and statistical data. The Member States concerned shall inform the Commission and ensure that accurate data relevant to the use of sex as a determining actuarial factor are compiled, published and regularly updated.

There was initially (2004) an opt-out clause (Article 5(2)): where gender is a determining factor in the assessment of risk based on relevant and accurate actuarial and statistical data, then proportionate differences in individual premiums or benefits are allowed.

In March 2011, the European Court of Justice issued its judgement into the “Test-Achats case”. The ECJ ruled Article 5(2) was invalid. Thus, insurers were no longer able to use gender as a risk factor when pricing policies.

Other legal documents in Europe can be mentioned, such as the “Ten Oever” judgement (Gerardus Cornelis Ten Oever v Stichting Bedrijfspensioenfonds voor het Glazenwassers — en Schoonmaakbedrijf). In April 1993, Advocate General Van Gerven argued that (see De Baere (2012))

the fact that women generally live longer than men has no significance at all for the life expectancy of a specific individual and it is not acceptable for an individual to be penalized on account of assumptions which are not certain to be true in his specific case,

which could be related to the concept of “injustice by generalization”.

Motivation #3. Colorado (September 27, 2023)

On September 27, 2023, the Colorado Division of Insurance exposed a new proposed regulation entitled Concerning Quantitative Testing of External Consumer Data and Information Sources, Algorithms, and Predictive Models Used for Life Insurance Underwriting for Unfairly Discriminatory Outcomes.

Section 4 (Definitions) Bayesian Improved First Name Surname Geocoding, or “BIFSG” means, for the purposes of this regulation, the statistical methodology developed by the RAND corporation for estimating race and ethnicity.

External Consumer Data and Information Source, or “ECDIS” means, for the purposes of this regulation, a data source or an information source that is used by a life insurer to supplement or supplant traditional underwriting factors. This term includes credit scores, credit history, social media habits, purchasing habits, home ownership, educational attainment, licensures, civil judgments, court records, occupation that does not have a direct relationship to mortality, morbidity or longevity risk, consumer-generated Internet of Things data, biometric data, and any insurance risk scores derived by the insurer or third-party from the above listed or similar data and/or information source.

Then we have different sections, where insurers are asked to “estimate” the race or ethnicity of policyholders

Section 5 (Estimating Race and Ethnicity) : Insurers shall estimate the race or ethnicity of all proposed insureds that have applied for coverage on or after the insurer’s initial adoption of the use of ECDIS, or algorithms and predictive models that use ECDIS, including a third party acting on behalf of the insurer that used ECDIS, or algorithms and predictive models that used ECDIS, in the underwriting decision-making process, by utilizing:

1. BIFSG and the insureds’ or proposed insureds’ name and geolocation information included in the application(s) for life insurance shall be used to estimate the race and ethnicity of each insured or proposed insured.

2. For the purposes of BIFSG, the following racial and ethnic categories shall be used: Hispanic, Black, Asian Pacific Islander (API), and White.

Section 6 (Application Approval Decision Testing Requirements) : Using the BIFSG estimated race and ethnicity of proposed insureds and the following methodology, insurers shall calculate whether Hispanic, Black, and API proposed insureds are disapproved at a statistically significant different rate relative to White applicants for whom the insurer, or a third party acting on behalf of the insurer, used ECDIS, or an algorithm or predictive model that used ECDIS, in the underwriting decision-making process.

1. Logistic regression shall be used to model the binary underwriting outcome of either approved or denied.

2. The following factors may be accounted for as control variables in the regression model: policy type, face amount, age, gender, and tobacco use.

3. The estimated race or ethnicity of the proposed insureds shall be accounted for by including Hispanic, Black, and Asian Pacific Islander (API) as separate dummy variables in the regression model.

4. Determine if there is a statistically significant difference in approval rates for each BIFSG estimated race or ethnicity variable as indicated by a p-value of less than .05.

a. If there is not a statistically significant difference in approval rates, no further testing is required.

b. If there is a statistically significant difference in approval rates, the insurer shall determine whether the difference in approval rates is five (5) percentage points or greater as indicated by the marginal effects value of each BIFSG estimated race or ethnicity variable. (…)

or

Section 7 (Premium Rate Testing Requirements) : Using the insureds’ BIFSG estimated race and ethnicity, insurers shall determine if there is a statistically significant difference in the premium rate per $1,000 of face amount for policies issued to Hispanic, Black, and API insureds relative to White insureds for whom the insurer, or a third party acting on behalf of the insurer, used ECDIS, or an algorithm or predictive model that used ECDIS, in the underwriting decision-making process.

1. Linear regression shall be used to model the continuous numerical outcome of premium rate per $1,000 of face amount.

2. The following factors may be accounted for as control variables in the regression model: policy type, face amount, age, gender, and tobacco use.

3. The estimated race or ethnicity of the proposed insureds shall be accounted for by including Hispanic, Black, and Asian Pacific Islander (API) as separate dummy variables in the regression model.

4. Determine if there is a statistically significant difference in the premium rate per $1,000 of face amount for each BIFSG estimated race or ethnicity variable as indicated by a p-value of less than .05.

a. If there is not a statistically significant difference in premium rate per $1,000 of face amount, no further testing is required.

b. If there is a statistically significant difference in premium rate per $1,000 of face amount, determine whether the premium rate per $1,000 of face amount is at least 5% more than the average premium rate per $1,000 for all policies.

i. If the difference in premium rate per $1,000 of face amount is less than 5%, no further testing is required.

ii. If the difference in premium rate per $1,000 of face amount is 5% or greater, further testing is required as described in Section 8.

(etc). To illustrate, we can use some data from the Atlanta region

 

We can change people’s first and last names (while keeping other relevant information, including the ZIP code) and compare the “predictions” of race (White, Black, Hispanic, Asian, etc.)
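To make the Section 6 methodology more concrete, here is a minimal sketch of the approval-rate test on purely simulated data (the variable names, the simulated underwriting outcome and the use of glm are my own assumptions, not the regulator’s code, and certainly not the Atlanta data),

# minimal sketch of the Section 6 test, on purely simulated data
set.seed(123)
n = 1e4
db = data.frame(
  race    = sample(c("White","Hispanic","Black","API"), n, replace=TRUE), # BIFSG estimate
  age     = round(runif(n, 25, 65)),
  gender  = sample(c("F","M"), n, replace=TRUE),
  tobacco = rbinom(n, 1, .2),
  face    = round(exp(rnorm(n, 12, .5))))
db$denied = rbinom(n, 1, .1)                     # placeholder underwriting outcome
db$race = relevel(factor(db$race), ref="White")  # White as the reference category
reg = glm(denied ~ race + age + gender + tobacco + face,
          data=db, family=binomial)
summary(reg)$coefficients[c("raceHispanic","raceBlack","raceAPI"),]

If one of those p-values falls below .05, the regulation then asks whether the corresponding marginal effect is five percentage points or more (marginal effects can be obtained, e.g., with the margins package); the Section 7 premium-rate test is analogous, with a linear regression (lm) of the premium rate per $1,000 of face amount on the same variables.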

Motivation #4. Motor Insurance in the U.S.

In the context of motor insurance in the U.S., recall that legal restrictions are set at the state level, and we can observe some diversity in what “sensitive” can mean (via thezebra)

(etc). We will also discuss Avraham et al. (2013), which provides a long discussion across US states.

Motivation #5. Graduate Admission (UC Berkeley)

Another motivation is the popular article by Bickel, Hammel, and O’Connell (1975)

The dataset mentioned in the article is the following
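Assuming it corresponds to the aggregate table shipped with base R as UCBAdmissions (applications to the six largest departments, in 1973), we can look at it with

apply(UCBAdmissions, c(1,2), sum)                        # admitted / rejected counts, by gender
prop.table(apply(UCBAdmissions, c(1,2), sum), margin=2)  # overall admission rates, by gender
prop.table(UCBAdmissions, margin=c(2,3))                 # admission rates by gender, within each department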

the bias in the aggregated data stems not from any pattern of discrimination on the part of admissions committees, which seems quite fair on the whole, but apparently from prior screening at earlier levels of the educational system. Women are shunted by their socialization and education toward fields of graduate study that are generally more crowded, less productive of completed degrees, and less well funded, and that frequently offer poorer professional employment prospects

As we can see, if we formalize, we have (almost)
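\mathbb{P}[\text{admitted}\mid\text{female}] < \mathbb{P}[\text{admitted}\mid\text{male}], while \mathbb{P}[\text{admitted}\mid\text{female},\text{dept}=d] \geq \mathbb{P}[\text{admitted}\mid\text{male},\text{dept}=d] within each department d.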

This is Simpson’s paradox. Another simple example is related to mortality: the (overall) mortality rate for women (picked at random in the entire population) was 0.812% in Costa Rica, lower than the 0.929% observed in Sweden. But, as we can see on the left below, at any age, mortality rates are lower in Sweden than in Costa Rica.

The paradox is easily explained by the age structures of the two countries. Long story short, in Costa Rica, a randomly picked person is very likely to be (very) young, with a low mortality rate; in Sweden, that person is more likely to be older, with a higher mortality rate.
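A tiny numerical illustration of that weighting effect, with made-up mortality rates and age structures (none of these numbers are the actual Costa Rican or Swedish figures),

q_young = c(CR=.001, SE=.0008)       # hypothetical mortality rates, young
q_old   = c(CR=.100, SE=.080)        # hypothetical mortality rates, old
w_young = c(CR=.90,  SE=.60)         # hypothetical proportion of young people
w_young*q_young + (1-w_young)*q_old  # overall rates: 0.0109 (CR) vs 0.0325 (SE)

Here the second country has the lower rate at each age, but the higher overall rate, simply because its population is older.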

Motivation #6. Propublica, Actuarial Justice

We will also mention actuarial justice, and the ProPublica investigation of Angwin et al. (2016)

Hence, looking at the same data from different perspectives can lead to different conclusions. More robust conclusions can be obtained when we look at distributions of scores (instead of simple binary predictions)

and we can also consider temporal processes (again, instead of simple binary variables, with temporal censoring)

Motivation #7. Insurance in Québec

Two final motivations, based this time on French-language legal texts. In Québec, the Charte des droits et libertés de la personne (C-12) gives a very clear definition of what “discrimination” means,

Art. 10 Every person has a right to full and equal recognition and exercise of his human rights and freedoms, without distinction, exclusion or preference based on race, colour, sex, gender identity or expression, pregnancy, sexual orientation, civil status, age except as provided by law, religion, political convictions, language, ethnic or national origin, social condition, a handicap or the use of any means to palliate a handicap.

Discrimination exists where such a distinction, exclusion or preference has the effect of nullifying or impairing such right.

But, interestingly, insurers can almost do anything they want,

Art. 20.1 In an insurance or pension contract, a social benefits plan, a retirement, pension or insurance plan, or a public pension or public insurance plan, a distinction, exclusion or preference based on age, sex or civil status is deemed non-discriminatory where the use thereof is warranted and the basis therefor is a risk determination factor based on actuarial data.

Motivation #8. Intention

And finally, I can mention that in many countries (such as France), “indirect discrimination” is considered discriminatory, so “intention” has nothing to do with the problem… The Loi no 2008-496 du 27 mai 2008 states that

Art. 1 Indirect discrimination is constituted by a provision, criterion or practice that is neutral in appearance, but liable to entail, for one of the grounds mentioned in the first paragraph, a particular disadvantage for some persons compared with other persons, unless that provision, criterion or practice is objectively justified by a legitimate aim and the means of achieving that aim are necessary and appropriate.

This law is an extension of Loi no. 72-546 du 1er juillet 1972, which abolished the requirement for specific intent.

Again, following Avraham (2017), keep in mind that insurance is very specific when it comes to discrimination

What is unique about insurance is that even statistical discrimination which by definition is absent of any malicious intentions, poses significant moral and legal challenges. Why? Because on the one hand, policy makers would like insurers to treat their insureds equally, without discriminating based on race, gender, age, or other characteristics, even if it makes statistical sense to discriminate (…) On the other hand, at the core of insurance business lies discrimination between risky and non-risky insureds. But riskiness often statistically correlates with the same characteristics policy makers would like to prohibit insurers from taking into account.

That will be the topic of the course…

Trees and forests

For my ACT6100 weekly quiz, I usually generate some datasets, and then ask students to compare various predictive algorithms. Last week, it was about classification trees and random forests. And students were surprised to see such differences (they had to estimate the probability of a specific label, at the barycenter of the covariates).

Usually, I use the following to generate some (here 12) covariates that could be correlated

library(FactoMineR)
n=279
library(clusterGeneration)
library(mnormt)
k=12
# draw a random correlation structure, then n rows of k correlated Gaussian covariates
S=genPositiveDefMat("unifcorrmat",dim=k)
X=round(rmnorm(n,varcov=S$Sigma)+8,2)
rownames(X)=1:n
colnames(X)=LETTERS[1:k]

Then I need to generate some data, based on some covariates (5 out of 12), with various strengths

# pick 5 covariates (out of k) and give them signed weights
idx = sample(1:k,size=5)
u = sample(c(-(4:1),1:4),5)
beta = rep(0,k)
beta[idx] = u
# rescale the linear score to [-3,3], then use a logistic transform for the probabilities
U = X%*%beta
U = U-min(U)
U = U/max(U)*6-3
p = exp(U)/(1+exp(U))
Y = rbinom(n,size=1,prob=p)
df = data.frame(Y=as.factor(Y),X)
levels(df$Y) = c("blue","red")

We can run a classification tree

library(rpart)
arbre = rpart(Y~., data=df)

and a random forest,

library(randomForest)
set.seed(1)
arbres = randomForest(Y~., data=df)

Here are the partial dependence plots for four of the explanatory variables that actually have an impact, obtained with calls such as

partialPlot(arbres,pred.data = df, x.var = "A")


Predictions for the “average” point of the dataset are here

(parbre = predict(arbre,newdata=data.frame(t(apply(df[,-1],2,mean))),type = "prob"))
       blue       red
1 0.8064516 0.1935484
(parbres = predict(arbres,newdata=data.frame(t(apply(df[,-1],2,mean))),type = "prob"))
   blue   red
1 0.422 0.578
attr(,"class")
[1] "matrix" "votes"

and there is a substantial difference, with a probability of 19% for a single tree, and 58% with 500 trees (500 being the default number of trees in randomForest).

To understand why we can have such a difference, we should not only focus on the bagging strategy, but also look at the variability of the predictions obtained with individual trees,

# bootstrap the dataset, refit a single tree each time, and store
# the predicted probability of "red" at the average point m
B=1e4
parbres = rep(NA,B)
m=data.frame(t(apply(df[,-1],2,mean)))
for(b in 1:B){
  idx = sample(1:nrow(df),size=nrow(df),replace=TRUE)
  arbre = rpart(Y~., data=df[idx,])
  parbres[b] = predict(arbre,newdata=m,type = "prob")[2]
}
hist(parbres)

Surprisingly, we have here a bimodal distribution for \hat{y}, which is either very small for some trees, or very large for others. On average, we have a value close to 55%… I think I will use that generative algorithm more for future quizzes…
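And indeed, averaging these bootstrapped tree predictions gives back something close to the forest prediction obtained above (the exact value depends on the seed),

mean(parbres)   # close to the ~55% obtained with the random forest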

GLM, STT5100

Final stretch in the STT5100 course on applied linear models. The course material is online at https://github.com/freakonometrics/STT5100. Since this session is taught remotely, the course is asynchronous, and I regularly post short videos. The videos related to generalized linear models (GLM) are now online,

  1. general introduction video pdf (10:15)
  2. Bernoulli, binomial and multinomial distributions video pdf (29:23)
  3. logistic regression (Bernoulli) video pdf (23:04)
  4. multinomial regression video pdf (21:45)
  5. logistic regression on categorical variables video pdf (30:24)
  6. logistic regression on continuous variables video pdf (21:35)
  7. discriminant analysis and ROC curve video pdf (56:53)
  8. count models and the Poisson distribution video pdf (19:28)
  9. Poisson regression video pdf (25:38)
  10. Poisson regression and interpretations video pdf (40:15)
  11. Poisson regression and the method of marginal totals video pdf (25:29)
  12. Poisson regression and insurance applications video pdf (36:23)
  13. exponential family video pdf (30:01)
  14. exponential family and GLM video pdf (41:02)
  15. distribution and link function video pdf (36:05)
  16. deviance and residuals video pdf (15:10)
  17. Tweedie model and weights video pdf (29:33)
  18. overdispersion video pdf (21:01)
  19. tests and GLM video pdf (19:41)
  20. GLM in small dimension video pdf (23:34)
  21. stepwise methods video pdf (17:57)
  22. Poisson vs. binomial, an application in demography video pdf (22:25)
  23. Example (1) video + pdf