This Thursday, I will be speaking at a conference organized by Covéa on the ethics of AI and data science, with Astrid Bertrand and Chaouki Boutharouite. The slides are now online.
Insurance and discrimination: what role for actuaries?
The essential role of an actuary in charge of pricing is the segmentation of the portfolio (or "insurance classification"), which is an activity of discrimination (mathematically speaking), in the sense that the actuary looks for the most "discriminating" variables in order to explain another one (related to claims experience). But in the legal sense, discriminating is prohibited by law, which often puts the actuary in a delicate and complex position.
Classification from scratch, linear discrimination 8/8
Eighth post of our series on classification from scratch. The latest one was on the SVM, and today I want to get back to some very old material, again with a linear separation of the space, using Fisher’s linear discriminant analysis.
Bayes (naive) classifier
Consider the following naive classification rule
m^\star(\mathbf{x})=\text{argmax}_y\{\mathbb{P}[Y=y\vert\mathbf{X}=\mathbf{x}]\}
or
m^\star(\mathbf{x})=\text{argmax}_y\left\{\frac{\mathbb{P}[\mathbf{X}=\mathbf{x}\vert Y=y]\mathbb{P}[Y=y]}{\mathbb{P}[\mathbf{X}=\mathbf{x}]}\right\}
(where \mathbb{P}[\mathbf{X}=\mathbf{x}] is the density in the continuous case).
In the case where y takes two values, here the standard \{0,1\}, one can rewrite the latter as
m^\star(\mathbf{x})=\begin{cases}1\text{ if }\mathbb{E}(Y\vert \mathbf{X}=\mathbf{x})>\displaystyle{\frac{1}{2}}\\0\text{ otherwise}\end{cases}
and the set
\mathcal{D}_S=\left\{\mathbf{x},\mathbb{E}(Y\vert \mathbf{X}=\mathbf{x})=\frac{1}{2}\right\}
is called the decision boundary.
Assume that
\mathbf{X}\vert Y=0\sim\mathcal{N}(\mathbf{\mu}_0,\mathbf{\Sigma}_0) and \mathbf{X}\vert Y=1\sim\mathcal{N}(\mathbf{\mu}_1,\mathbf{\Sigma}_1),
then explicit expressions can be derived,
m^\star(\mathbf{x})=\begin{cases}1\text{ if }r_1^2< r_0^2+2\displaystyle{\log\frac{\mathbb{P}(Y=1)}{\mathbb{P}(Y=0)}+\log\frac{\vert\mathbf{\Sigma}_0\vert}{\vert\mathbf{\Sigma}_1\vert}}\\0\text{ otherwise}\end{cases}
where r_y^2 is the Mahalanobis distance,
r_y^2=[\mathbf{x}-\mathbf{\mu}_y]^{\text{T}}\mathbf{\Sigma}_y^{-1}[\mathbf{x}-\mathbf{\mu}_y]
Let \delta_y be defined as
\delta_y(\mathbf{x})=-\frac{1}{2}\log\vert\mathbf{\Sigma}_y\vert-\frac{1}{2}[{\color{blue}{\mathbf{x}}}-\mathbf{\mu}_y]^{\text{T}}\mathbf{\Sigma}_y^{-1}[{\color{blue}{\mathbf{x}}}-\mathbf{\mu}_y]+\log\mathbb{P}(Y=y)
The decision boundary of this classifier is \{\mathbf{x}\text{ such that }\delta_0(\mathbf{x})=\delta_1(\mathbf{x})\}, which is quadratic in {\color{blue}{\mathbf{x}}}. This is the quadratic discriminant analysis. This can be visualized below.
The decision boundary is here
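To fix ideas, here is a small sketch (on simulated data, not taken from the post) of these two quadratic scores computed by hand, and of the resulting classification of a new point:

# sketch: homemade quadratic discriminant scores delta_y(x), on simulated data
library(MASS)                      # for mvrnorm
set.seed(1)
X0 = mvrnorm(100, c(0,0), matrix(c(1,.4,.4,1),2,2))
X1 = mvrnorm(100, c(2,1), matrix(c(1.5,-.3,-.3,.8),2,2))
m0 = colMeans(X0); S0 = var(X0); p0 = .5
m1 = colMeans(X1); S1 = var(X1); p1 = .5
delta = function(x, m, S, p)
  -log(det(S))/2 - t(x-m) %*% solve(S) %*% (x-m)/2 + log(p)
x = c(1,.5)                        # a new point to classify
as.numeric(delta(x, m1, S1, p1) > delta(x, m0, S0, p0))   # predict 1 when delta_1 > delta_0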
But that can’t be the linear discriminant analysis, right? I mean, the frontier is not linear… Actually, in Fisher’s seminal paper, it was assumed that \mathbf{\Sigma}_0=\mathbf{\Sigma}_1.
In that case, actually, \delta_y(\mathbf{x})={\color{blue}{\mathbf{x}}}^{\text{T}}\mathbf{\Sigma}^{-1}\mathbf{\mu}_y-\frac{1}{2}\mathbf{\mu}_y^{\text{T}}\mathbf{\Sigma}^{-1}\mathbf{\mu}_y+\log\mathbb{P}(Y=y), and the decision frontier is now linear in {\color{blue}{\mathbf{x}}}. This is the linear discriminant analysis. This can be visualized below.
Here the two samples have the same variance matrix and the frontier is
Link with the logistic regression
Assume as previously that \mathbf{X}\vert Y=0\sim\mathcal{N}(\mathbf{\mu}_0,\mathbf{\Sigma}) and \mathbf{X}\vert Y=1\sim\mathcal{N}(\mathbf{\mu}_1,\mathbf{\Sigma}), then
\log\frac{\mathbb{P}(Y=1\vert \mathbf{X}=\mathbf{x})}{\mathbb{P}(Y=0\vert \mathbf{X}=\mathbf{x})}=\mathbf{x}^{\text{T}}\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]-\frac{1}{2}[\mathbf{\mu}_1+\mathbf{\mu}_0]^{\text{T}}\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]+\log\frac{\mathbb{P}(Y=1)}{\mathbb{P}(Y=0)}
which is linear in \mathbf{x},
\log\frac{\mathbb{P}(Y=1\vert \mathbf{X}=\mathbf{x})}{\mathbb{P}(Y=0\vert \mathbf{X}=\mathbf{x})}=\beta_0+\mathbf{x}^{\text{T}}\mathbf{\beta}
Hence, when each group has a Gaussian distribution with the same variance matrix, LDA and the logistic regression lead to the same classification rule.
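As a quick sanity check (a sketch on simulated data, not from the original post), one can simulate two Gaussian groups with a common variance matrix and verify that the fitted logistic slope is close to \mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]:

# sketch: logistic regression slope vs Sigma^{-1}(mu1 - mu0), on simulated data
library(MASS)                      # for mvrnorm
set.seed(123)
n = 1000
mu0 = c(0,0); mu1 = c(2,2)
Sigma = matrix(c(1,.5,.5,1),2,2)
X = rbind(mvrnorm(n, mu0, Sigma), mvrnorm(n, mu1, Sigma))
db = data.frame(x1 = X[,1], x2 = X[,2], y = rep(0:1, each = n))
fit = glm(y ~ x1 + x2, data = db, family = binomial)
cbind(theory = as.numeric(solve(Sigma) %*% (mu1 - mu0)),
      glm    = coef(fit)[c("x1","x2")])   # the two columns should be close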
Observe furthermore that the slope is proportional to \mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0], as stated in Fisher’s article. But to obtain such a relationship, he observed that the ratio of between and within variances (in the two groups) was
\frac{\text{variance between}}{\text{variance within}}=\frac{[\mathbf{\omega}^{\text{T}}\mathbf{\mu}_1-\mathbf{\omega}^{\text{T}}\mathbf{\mu}_0]^2}{\mathbf{\omega}^{\text{T}}\mathbf{\Sigma}_1\mathbf{\omega}+\mathbf{\omega}^{\text{T}}\mathbf{\Sigma}_0\mathbf{\omega}}
which is maximal when \mathbf{\omega} is proportional to \mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0], when \mathbf{\Sigma}_0=\mathbf{\Sigma}_1.
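And a small numerical check of that last claim (again a sketch on simulated data, not from the post): a grid search over directions \mathbf{\omega} on the unit circle that maximizes the ratio above returns, up to sign, the direction of \mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]:

# sketch: maximizing Fisher's between/within ratio over directions, on simulated data
library(MASS)                      # for mvrnorm
set.seed(1)
X0 = mvrnorm(500, c(0,0), matrix(c(1,.5,.5,1),2,2))
X1 = mvrnorm(500, c(1,2), matrix(c(1,.5,.5,1),2,2))
m0 = colMeans(X0); m1 = colMeans(X1)
S0 = var(X0); S1 = var(X1)
ratio = function(theta) {
  w = c(cos(theta), sin(theta))    # direction on the unit circle
  (sum(w*(m1-m0)))^2 / (t(w) %*% (S0+S1) %*% w)
}
theta = seq(0, pi, length = 501)
r = sapply(theta, ratio)
w_star = c(cos(theta[which.max(r)]), sin(theta[which.max(r)]))
omega = solve((S0+S1)/2) %*% (m1-m0)
rbind(grid = w_star, theory = as.numeric(omega)/sqrt(sum(omega^2)))   # same direction, up to sign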
Homebrew linear discriminant analysis
To compute the vector \mathbf{\omega}, here on the myocarde dataset,
m0 = apply(myocarde[myocarde$PRONO=="0",1:7],2,mean)
m1 = apply(myocarde[myocarde$PRONO=="1",1:7],2,mean)
Sigma = var(myocarde[,1:7])
omega = solve(Sigma)%*%(m1-m0)
omega
                [,1]
FRCAR -0.012909708542
INCAR  1.088582058796
INSYS -0.019390084344
PRDIA -0.025817110020
PAPUL  0.020441287970
PVENT -0.038298291091
REPUL -0.001371677757
For the constant – in the equation \mathbf{\omega}^{\text{T}}\mathbf{x}=b – if the two classes are equiprobable, use
b = (t(m1)%*%solve(Sigma)%*%m1-t(m0)%*%solve(Sigma)%*%m0)/2
Application (on the small dataset)
In order to visualize what’s going on, consider the small dataset, with only two covariates,
x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
m0 = apply(df[df$y=="0",1:2],2,mean)
m1 = apply(df[df$y=="1",1:2],2,mean)
Sigma = var(df[,1:2])
omega = solve(Sigma)%*%(m1-m0)
omega
           [,1]
x1 -2.640613174
x2  4.858705676
Using the standard R function, we get
library(MASS)
fit_lda = lda(y ~ x1+x2, data=df)
fit_lda
Coefficients of linear discriminants:
            LD1
x1 -2.588389554
x2  4.762614663
which gives coefficients proportional to the ones we obtained with our own code (same direction, only the normalization differs). For the constant, use
b = (t(m1)%*%solve(Sigma)%*%m1-t(m0)%*%solve(Sigma)%*%m0)/2
If we plot it, we get the red straight line
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")])
abline(a=b/omega[2],b=-omega[1]/omega[2],col="red")
As we can see (with the blue segment and point added below), our red line crosses the middle of the segment joining the two barycenters
points(m0["x1"],m0["x2"],pch=4)
points(m1["x1"],m1["x2"],pch=4)
segments(m0["x1"],m0["x2"],m1["x1"],m1["x2"],col="blue")
points(.5*m0["x1"]+.5*m1["x1"],.5*m0["x2"]+.5*m1["x2"],col="blue",pch=19)
Of course, we can also use the R function directly
vu = seq(0,1,length=101)   # grid for the contour (not defined in this excerpt)
predlda = function(x,y) predict(fit_lda, data.frame(x1=x,x2=y))$class==1
vv = outer(vu,vu,predlda)
contour(vu,vu,vv,add=TRUE,lwd=2,levels=.5)
One can also consider quadratic discriminant analysis, since it might be difficult to argue that \mathbf{\Sigma}_0=\mathbf{\Sigma}_1.
fit_qda = qda(y ~ x1+x2, data=df)
The separation curve is here
plot(df$x1,df$x2,pch=19,col=c("blue","red")[1+(df$y=="1")])
predqda = function(x,y) predict(fit_qda, data.frame(x1=x,x2=y))$class==1
vv = outer(vu,vu,predqda)
contour(vu,vu,vv,add=TRUE,lwd=2,levels=.5)
Want to say one thing and the exact opposite with strong confidence?
No need to go into politics. Just take a statistics course. And I am not talking about the misinterpretation of statistics, but about the mathematical foundations of statistical tests.
Consider the following parametric test, with a one-dimensional parameter: H_0:\theta\leq\theta_0 against H_1:\theta>\theta_0, for some (fixed) \theta_0. A standard way of doing such a test is to consider a rejection region \mathcal{R}. The test works as follows: consider a sample \{x_1,\cdots,x_n\},
- if \{x_1,\cdots,x_n\}\notin\mathcal{R}, then we accept H_0
- if \{x_1,\cdots,x_n\}\in\mathcal{R}, then we reject H_0
For instance, consider the case of a Bernoulli sample, with probability \theta. The standard idea is to define
Z=\frac{\bar{x}_n-\theta_0}{\sqrt{\theta_0(1-\theta_0)/n}}
The rejection region is then based on the statistic Z,
- if Z\leq z_{1-\alpha}, then we accept H_0
- if Z>z_{1-\alpha}, then we reject H_0
where the threshold z_{1-\alpha} is taken so that the probability of making a type I error is \alpha (say 5%), using the Gaussian approximation for Z. Here
z_{1-\alpha}=\Phi^{-1}(1-\alpha)\approx 1.64
Thus, the acceptance region is then the green area below, while the rejection region is the red one, for the statistic Z.
Consider now the exact opposite test (with the same \theta_0), H_0:\theta\geq\theta_0 against H_1:\theta<\theta_0. Here, we use the same statistic, and the test is
- if Z\geq z_{\alpha}, then we accept H_0
- if Z<z_{\alpha}, then we reject H_0
where now
z_{\alpha}=\Phi^{-1}(\alpha)\approx -1.64
Thus, now, the acceptance region is the green area below, while the rejection region is the red one.
So if we summarize what we just said,
- in the region on the left below (small values of Z), both tests agree that \theta<\theta_0
- in the region on the right below (large values of Z), both tests agree that \theta>\theta_0
- and in the region in blue, in the middle, the two tests disagree: one accepts H_0:\theta\leq\theta_0 while the other accepts H_0:\theta\geq\theta_0 (a small numerical sketch of this overlap region is given below)
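To make that blue zone concrete, here is a small sketch (with the plain Gaussian approximation, no continuity correction, and hypothetical values n=100 and \alpha=5%) of the set of observed frequencies for which both null hypotheses are simultaneously accepted:

# sketch: frequencies for which BOTH null hypotheses are accepted (Gaussian approximation)
n = 100; theta0 = .5; alpha = .05
xbar = seq(0, 1, by = .001)                      # possible observed frequencies
Z = (xbar - theta0) / sqrt(theta0*(1-theta0)/n)  # test statistic
both = (Z >= qnorm(alpha)) & (Z <= qnorm(1-alpha))
range(xbar[both])   # roughly theta0 +/- 1.64*sqrt(theta0*(1-theta0)/n)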
Here is the evolution of this region as a function of n (the size of the sample) when the sample frequency is 20%. With a small sample size, we can hardly say anything.
n = seq(1,100)
p = 0.2
x1 = p+qnorm(.95)*sqrt(p*(1-p)/n)
x2 = p+qnorm(.05)*sqrt(p*(1-p)/n)
plot(n,x1,type="l",ylim=c(0,1))
polygon(c(n,rev(n)),c(x1,rev(x2)),col="light blue",border=NA)
lines(n,x1,lwd=2,col="red")
lines(n,x2,lwd=2,col="red")
If we compare with the exact quantiles of the binomial distribution (rather than the Gaussian approximation),
y1 = qbinom(.95,size=n,prob=p)/n
y2 = qbinom(.05,size=n,prob=p)/n
polygon(c(n,rev(n)),c(y1,rev(y2)),col="blue",border=NA)
lines(n,y1,lwd=2,col="red")
lines(n,y2,lwd=2,col="red")
and we get
This is what we can observe if we use R’s statistical procedures, either the asymptotic one,
> prop.test(2,10,.5,alternative="less")

        1-sample proportions test with continuity correction

data:  2 out of 10, null probability 0.5
X-squared = 2.5, df = 1, p-value = 0.05692
alternative hypothesis: true p is less than 0.5
95 percent confidence interval:
 0.0000000 0.5100219
sample estimates:
  p
0.2

> prop.test(2,10,.5,alternative="greater")

        1-sample proportions test with continuity correction

data:  2 out of 10, null probability 0.5
X-squared = 2.5, df = 1, p-value = 0.943
alternative hypothesis: true p is greater than 0.5
95 percent confidence interval:
 0.04368507 1.00000000
sample estimates:
  p
0.2
or the exact (binomial) one,
> binom.test(2,10,.5,alternative="less")

        Exact binomial test

data:  2 and 10
number of successes = 2, number of trials = 10, p-value = 0.05469
alternative hypothesis: true probability of success is less than 0.5
95 percent confidence interval:
 0.0000000 0.5069013
sample estimates:
probability of success
                   0.2

> binom.test(2,10,.5,alternative="greater")

        Exact binomial test

data:  2 and 10
number of successes = 2, number of trials = 10, p-value = 0.9893
alternative hypothesis: true probability of success is greater than 0.5
95 percent confidence interval:
 0.03677144 1.00000000
sample estimates:
probability of success
                   0.2
Here, when the sample frequency is 20% and n is equal to 10, we accept at the same time that \theta is higher than 50% and that \theta is lower than 50%.
And it is not only a theoretical problem: it has some strong practical implications. This morning, a good friend mentioned a post published some months ago, online here, about discrimination and the lack of women in academic positions in mathematics, in France. As claimed by the author of the post, “At Paris VI, the best French university according to its president, out of 11 maître de conférences positions, 5 women ranked first. So there are excellent women? In Toulouse, out of 4 positions, 2 women ranked first. Perfect parity. But next to that: Bordeaux, 4 positions, 0 women ranked first. Littoral, 3 positions, 0 women. Nice, 5 positions, 0 women. Rennes, 7 positions, 0 women…”.
Consider the latter case: in Rennes, out of 7 people hired last year, not a single woman. So in some sense, it looks obvious that there is some kind of discrimination! Zero out of seven! Well, if we take into account the fact that around 30% of PhD theses in mathematics were defended by women in those years, we can also test whether there is some “positive discrimination”, i.e. test H_0:\theta\geq 30\% against H_1:\theta<30\%, where \theta is the probability of hiring a woman (just to be a little bit provocative).
> prop.test(0,7,.3,alternative="less")

        1-sample proportions test with continuity correction

data:  0 out of 7, null probability 0.3
X-squared = 1.7415, df = 1, p-value = 0.09347
alternative hypothesis: true p is less than 0.3
95 percent confidence interval:
 0.0000000 0.3719021
sample estimates:
p
0

Warning message:
In prop.test(0, 7, 0.3, alternative = "less") :
  Chi-squared approximation may be incorrect

> binom.test(0,7,.3,alternative="less")

        Exact binomial test

data:  0 and 7
number of successes = 0, number of trials = 7, p-value = 0.08235
alternative hypothesis: true probability of success is less than 0.3
95 percent confidence interval:
 0.0000000 0.3481637
sample estimates:
probability of success
                     0
With no woman hired that year, we can still claim that there was some kind of “positive discrimination”. And note that we do accept – with more confidence – the assumption of “positive discrimination” if we look at all universities together,
> prop.test(5+2,11+4+4+3+5+7,.3,alternative="less")

        1-sample proportions test with continuity correction

data:  5 + 2 out of 11 + 4 + 4 + 3 + 5 + 7, null probability 0.3
X-squared = 1.021, df = 1, p-value = 0.1561
alternative hypothesis: true p is less than 0.3
95 percent confidence interval:
 0.0000000 0.3556254
sample estimates:
        p
0.2058824

> binom.test(5+2,11+4+4+3+5+7,.3,alternative="less")

        Exact binomial test

data:  5 + 2 and 11 + 4 + 4 + 3 + 5 + 7
number of successes = 7, number of trials = 34, p-value = 0.1558
alternative hypothesis: true probability of success is less than 0.3
95 percent confidence interval:
 0.0000000 0.3521612
sample estimates:
probability of success
             0.2058824
So clearly, with small samples, almost anything can be claimed!
On the quality of a classification score
A quick word about so-called ROC curves, for Receiver Operating Characteristic. For that, assume we have a predictor of a variable taking values 0 and 1 (to keep things simple), or better yet, “positive” and “negative”. The predictor can be anything: a logistic regression, a discriminant analysis, a nonparametric classifier… In short, for each of our observations, we have an observed value Y_i and a predicted value \widehat{Y}_i. In fact, as I explained here, we have more precisely a score \widehat{S}_i. The assignment rule is then simple: we fix a threshold s, and
- if \widehat{S}_i>s, then \widehat{Y}_i is “positive”
- if \widehat{S}_i\leq s, then \widehat{Y}_i is “negative”
We can then build a so-called confusion matrix, which is simply a contingency table,
                                observed value Y_i
                            "positive"     "negative"
predicted      "positive"       TP             FP
value Y_i-hat  "negative"       FN             TN
where TP denotes the true positives, TN the true negatives, FP the false positives, also called type I errors (in the terminology of decision theory, or of statistical testing), and FN the false negatives, or type II errors.
We can then define a whole battery of indicators to assess the quality of our predictor (or rather of our score); all of them are computed in the short sketch following the list,
- TPR = TP / P = TP / (TP + FN), called sensitivity, the true positive rate
- FPR = FP / N = FP / (FP + TN), the false positive rate
- ACC = (TP + TN) / (P + N), the accuracy
- SPC = TN / N = TN / (FP + TN) = 1 − FPR, called specificity, the true negative rate
- PPV = TP / (TP + FP), the positive predictive value (also called precision)
- NPV = TN / (TN + FN), the negative predictive value
- FDR = FP / (FP + TP), the false discovery rate
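As a small sketch (with two hypothetical 0/1 vectors obs and pred, not taken from the post), all these quantities can be computed directly from the counts of the confusion matrix:

# sketch: indicators computed from a confusion matrix (hypothetical obs and pred vectors)
obs  = c(1,1,0,1,0,0,1,0,0,1)      # observed classes (1 = "positive")
pred = c(1,0,0,1,0,1,1,0,0,1)      # predicted classes
TP = sum(pred==1 & obs==1); FP = sum(pred==1 & obs==0)
FN = sum(pred==0 & obs==1); TN = sum(pred==0 & obs==0)
c(TPR = TP/(TP+FN),                # sensitivity
  FPR = FP/(FP+TN),
  ACC = (TP+TN)/length(obs),
  SPC = TN/(FP+TN),                # specificity
  PPV = TP/(TP+FP),
  NPV = TN/(TN+FN),
  FDR = FP/(FP+TP))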
In short, we convert this matrix into conditional probabilities, and many notions can be defined by changing the conditioning.
We can also try to visualize these quantities. The best-known graphical representation is probably the ROC curve, for Receiver Operating Characteristic. The idea is simple: the goal is to assess the model independently of the chosen threshold s, or rather, to provide a tool to choose that threshold s. We then define the sensitivity function
\text{Se}(s)=\mathbb{P}(\widehat{S}>s\vert Y=1)
and the specificity function
\text{Sp}(s)=\mathbb{P}(\widehat{S}\leq s\vert Y=0)
The ROC curve is then the curve s\mapsto(1-\text{Sp}(s),\text{Se}(s)), i.e. the false positive rate plotted against the true positive rate, as the threshold s varies.
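Here is a minimal sketch of this construction (with a hypothetical score S and observed labels Y, not taken from the post), including the area under the curve computed through its probabilistic interpretation:

# sketch: ROC curve by hand, from a hypothetical score S and labels Y
set.seed(1)
n = 200
Y = rbinom(n, size = 1, prob = .4)
S = runif(n)^(2-Y)                 # scores, stochastically larger when Y = 1
s = sort(unique(S))
TPR = sapply(s, function(u) mean(S > u & Y == 1)/mean(Y == 1))   # sensitivity Se(s)
FPR = sapply(s, function(u) mean(S > u & Y == 0)/mean(Y == 0))   # 1 - specificity
plot(FPR, TPR, type = "s", xlab = "1 - Sp(s)", ylab = "Se(s)")
abline(a = 0, b = 1, lty = 2)      # random classifier
mean(outer(S[Y==1], S[Y==0], ">")) # area under the curve, P(S_i > S_j | Y_i = 1, Y_j = 0)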
If this curve coincides with the diagonal, the model performs no better than a random one (where the class is assigned at random). The closer the ROC curve gets to the upper left corner, the better the model, since it captures as many true positives as possible while keeping the number of false positives as small as possible. Note that, by construction, the ROC curve is invariant under any increasing monotone transformation of the score function (which can be particularly useful if one wants to “normalize” the score function).
Moreover, note that the area under the ROC curve can be seen as a measure of the quality of the fit. The picture below shows the construction of the ROC curve for a sample classified in a fairly simple way, by discriminant analysis.
If we look at a probabilistic re-reading of it, we obtain the following curve,
There is another fairly classical curve, called the lift curve. It corresponds to the Lorenz curve (which I mentioned here). For each threshold s, we then set
x(s)=\mathbb{P}(\widehat{S}>s) \text{ and } y(s)=\mathbb{P}(\widehat{S}>s\vert Y=1)
and the lift curve is then the curve s\mapsto(x(s),y(s)). Here again, an area computation gives an indicator of quality, and we recover the Gini index.
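And here is a sketch of that lift (gains) curve, on the same kind of hypothetical score: for each threshold, the proportion of observations classified as positive is plotted against the proportion of positives captured, and the area between this curve and the diagonal can then be used as a Gini-type indicator.

# sketch: lift (gains) curve from a hypothetical score S and labels Y
set.seed(1)
n = 200
Y = rbinom(n, size = 1, prob = .4)
S = runif(n)^(2-Y)                 # scores, stochastically larger when Y = 1
s = sort(unique(S), decreasing = TRUE)
x = sapply(s, function(u) mean(S > u))                        # proportion predicted "positive"
y = sapply(s, function(u) mean(S > u & Y == 1)/mean(Y == 1))  # proportion of positives captured
plot(x, y, type = "l", xlab = "proportion targeted", ylab = "proportion of positives captured")
abline(a = 0, b = 1, lty = 2)      # random targeting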