Tag Archives: Gini

KNN and K-means in Gini Prametric Spaces

With Cassandra Mussard, who was an intern last summer, and Stéphane Mussard, we uploaded a paper entitled KNN and K-means in Gini Prametric Spaces on arXiv.

This paper introduces innovative enhancements to the K-means and K-nearest neighbors (KNN) algorithms based on the concept of Gini prametric spaces. Unlike traditional distance metrics, Gini-based measures incorporate both value-based and rank-based information, improving robustness to noise and outliers. The main contributions of this work include: proposing a Gini-based measure that captures both rank information and value distances; presenting a Gini K-means algorithm that is proven to converge and demonstrates resilience to noisy data; and introducing a Gini KNN method that performs competitively with state-of-the-art approaches such as Hassanat’s distance in noisy environments. Experimental evaluations on 14 datasets from the UCI repository demonstrate the superior performance and efficiency of Gini-based algorithms in clustering and classification tasks. This work opens new avenues for leveraging rank-based measures in machine learning and statistical analysis.

Principal Component Analysis: A Generalized Gini Approach

Our paper with Stéphane Mussard and Téa Ouraga, entitled Principal Component Analysis: A Generalized Gini Approach, is finally out in the European Journal of Operational Research.

A principal component analysis based on the generalized Gini correlation index is proposed (Gini PCA). The Gini PCA generalizes the standard PCA based on the variance. It is shown, in the Gaussian case, that the standard PCA is equivalent to the Gini PCA. It is also proven that the dimensionality reduction based on the generalized Gini correlation matrix, that relies on city-block distances, is robust to outliers. Monte Carlo simulations and an application on cars data (with outliers) show the robustness of the Gini PCA and provide different interpretations of the results compared with the variance PCA.

Principal Component Analysis: A Generalized Gini Approach

With Stéphane Mussard and Téa Ouraga, we recently uploaded a paper, Principal Component Analysis: A Generalized Gini Approach, on arXiv.

A principal component analysis based on the generalized Gini correlation index is provided. It is proven that the dimensionality reduction based on the generalized Gini correlation index, which relies on city-block distances, is robust to outliers.

Some code is also available in a dedicated GitHub repository.

Gini Regressions and Heteroskedasticity

Our joint paper “Gini Regressions and Heteroskedasticity” with Ndéné Ka, Stéphane Mussard and Oumar Hamady Ndiaye just appeared in Econometrics.

We propose an Aitken estimator for Gini regression. The suggested A-Gini estimator is proven to be a U-statistic. Monte Carlo simulations are provided to deal with heteroskedasticity and to make some comparisons between the generalized least squares and the Gini regression. A Gini-White test is proposed and shows that better power is obtained compared with the usual White test when outlying observations contaminate the data.

Classification from scratch, trees 9/8

Ninth post of our series on classification from scratch. Today, we'll look at the heuristics behind the algorithm used in classification trees. And yes, I promised eight posts in that series, but clearly that was not enough… sorry for the poor prediction.

Decision Tree

Decision trees are easy to read. So easy to read that they are everywhere.

We start from the top and go down, with a binary choice at each step, i.e. at each node. Let us see how it works on our dataset.

library(rpart)
cart = rpart(PRONO~.,data=myocarde)
library(rpart.plot)
prp(cart,type=2,extra=1)


We start here with one single leaf. If we have two explanatory variables (the x-axis and the y-axis if we want to plot it), we will check what happens if we cut the leaf according to the value of the first variable (and there will be two subgroups, the one on the left and the one on the right)

or if we cut according to the second one (and there will be two subgroups, the one on top and the one below).

Why and where do we cut? Let us formalize a little bit. A node (a leaf) contains observations, i.e. \{(y_i,\mathbf{x}_i)\} for some i\in\mathcal{I}\subset\{1,\cdots,n\}. Hence, a leaf is characterized by \mathcal{I}. For instance, the first node in the tree is \mathcal{I}=\{1,\cdots,n\}. A (binary) split is based on one specific variable – say x_j – and a cutoff, say s. Then, there are two options:

  • either x_{i,j}\leq s, then observation i goes on the left, in \mathcal{I}_L
  • or x_{i,j}> s, then observation i goes on the right, in \mathcal{I}_R

Thus, \mathcal{I}=\mathcal{I}_L\cup\mathcal{I}_R.
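With the myocarde dataset used throughout this post, such a split can be written explicitly. A small illustration (using the variable and cutoff that we will actually end up selecting below):

# illustration of a split: indices going left and right
# (assumes the myocarde dataset of this post is loaded)
x_j = myocarde$INSYS    # splitting variable
s   = 19                # cutoff
I_L = which(x_j <= s)   # observations going to the left leaf
I_R = which(x_j >  s)   # observations going to the right leaf
length(I_L) + length(I_R) == nrow(myocarde)   # the two leaves partition the node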

Now, define some impurity index on a node. In the context of a classification tree, the most popular impurity index is the Gini index: for node \mathcal{I}, it is defined as G(\mathcal{I})=-\sum_{y\in\{0,1\}}p_y(1-p_y) where p_y is the proportion of individuals of type y in the leaf. I use this notation here because it can be extended to the case of more than two classes; here, we consider only binary classification. Now, why p_y(1-p_y)? Because we want leaves that are extremely homogeneous. In our dataset, out of 71 individuals, 42 died and 29 survived. A perfect classification would be obtained if we could split into two leaves, with the 29 survivors on the left and the 42 deaths on the right. In that case, leaves would be perfectly homogeneous. So, when p_0\approx1 or p_1\approx1, we have strong homogeneity. If we want an index to maximize, -p_y(1-p_y) is an interesting candidate. Furthermore, the worst case is a leaf with p_0\approx1/2, which is exactly what we have here, at the root. Note that we can also write G(\mathcal{I})=-\sum_{y\in\{0,1\}}\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\left(1-\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\right) where n_{y,\mathcal{I}} is the number of individuals of type y in leaf \mathcal{I}, and n_{\mathcal{I}} is the number of individuals in leaf \mathcal{I}.
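For instance, at the root node, the two proportions are 29/71 and 42/71, so the index should be -2 × (29/71) × (42/71), which we can check quickly:

p = 29/71
-2 * p * (1 - p)
[1] -0.4832375

which is the value we will obtain again below, without any split.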

If we do not split, the index is G(\mathcal{I})=-\sum_{y\in\{0,1\}}\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\left(1-\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\right) while, if we split, define the index G(\mathcal{I}_L,\mathcal{I}_R)=-\sum_{x\in\{L,R\}}\frac{n_{\mathcal{I}_x}}{n_{\mathcal{I}}}\sum_{y\in\{0,1\}}\frac{n_{y,\mathcal{I}_x}}{n_{\mathcal{I}_x}}\left(1-\frac{n_{y,\mathcal{I}_x}}{n_{\mathcal{I}_x}}\right) The code to compute it would be

gini = function(y,classe){
  T = table(y,classe)                       # 2 x (number of leaves) contingency table
  nx = apply(T,2,sum)                       # number of observations in each leaf
  n = sum(T)                                # total number of observations
  pxy = T/matrix(rep(nx,each=2),nrow=2)     # within-leaf proportions
  omega = matrix(rep(nx,each=2),nrow=2)/n   # leaf weights
  g = -sum(omega*pxy*(1-pxy))
  return(g)}

Actually, one can consider other indices, like the entropic measure E(\mathcal{I})=-\sum_{y\in\{0,1\}}\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\log\left(\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\right) while, if we split, E(\mathcal{I}_L,\mathcal{I}_R)=-\sum_{x\in\{L,R\}}\frac{n_{\mathcal{I}_x}}{n_{\mathcal{I}}}\sum_{y\in\{0,1\}}\frac{n_{y,\mathcal{I}_x}}{n_{\mathcal{I}_x}}\log\left(\frac{n_{y,\mathcal{I}_x}}{n_{\mathcal{I}_x}}\right)

entropy = function(y,classe){
  T = table(y,classe)
  nx = apply(T,2,sum)
  n = sum(T)
  pxy = T/matrix(rep(nx,each=2),nrow=2)
  omega = matrix(rep(nx,each=2),nrow=2)/n
  g = sum(omega*pxy*log(pxy))   # negative entropy, so that (as with gini) larger is better
  return(g)}                    # note: returns NaN if a leaf is pure, since 0*log(0) is NaN in R

This index was originally used in the C4.5 algorithm.
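As a quick (illustrative) comparison, both functions can be called on the same candidate split, for instance the one considered just below; keep in mind the NaN caveat mentioned in the comment above when a leaf is pure.

gini(y=myocarde$PRONO, classe=(myocarde[,1]<=100))
entropy(y=myocarde$PRONO, classe=(myocarde[,1]<=100))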

Dividing a leaf (or not)

For instance, consider the very first split, and assume that we want to split according to the very first variable, with a cutoff at 100,

CLASSE = myocarde[,1] <=100
table(CLASSE)
CLASSE
FALSE  TRUE 
   13    58

In that case, there will be 13 individuals on one side (those with a value above 100) and 58 on the other side (those with a value of at most 100).

gini(y=myocarde$PRONO,classe=CLASSE)
[1] -0.4640415

Initially, without any split, it was

-2*mean(myocarde$PRONO)*(1-mean(myocarde$PRONO))
[1] -0.4832375

which can actually also be obtained with

CLASSE = myocarde[,1]<=Inf
gini(y=myocarde$PRONO,classe=CLASSE)
[1] -0.4832375

There is a net gain in splitting of

gini(y=myocarde$PRONO,classe=(myocarde[,1]<=100))-
gini(y=myocarde$PRONO,classe=(myocarde[,1]<=Inf))
[1] 0.01919591

Now, how do we split? Which variable and which cutoff? Well… let’s try all possible splits… Here, we have 7 variables. We can consider all possible values, using

sort(unique(myocarde[,1]))

But with massive datasets, this can take a lot of time. Here, I prefer

seq(min(myocarde[,1]),max(myocarde[,1]),length=101)

so that we try 101 possible cutoff values. Overall, the number of computations remains rather low, with 7 × 101 = 707 Gini indices to compute. Again, I won't get back here to the motivations for such a technique to create partitions, I will keep that for the course in Barcelona, but it is fast.

mat_gini = mat_v = matrix(NA,7,101)
for(v in 1:7){
  variable = myocarde[,v]
  v_seuil = seq(quantile(myocarde[,v], 6/length(myocarde[,v])),
                quantile(myocarde[,v], 1-6/length(myocarde[,v])),
                length=101)
  mat_v[v,] = v_seuil
  for(i in 1:101){
    CLASSE = variable<=v_seuil[i]
    mat_gini[v,i] = gini(y=myocarde$PRONO,classe=CLASSE)}}

Actually, the range of possible values is slightly different: I do not want cutoffs too far to the left or to the right, since having a leaf with only one or two observations is not the idea here. Now, if we plot all those functions, we get

par(mfrow=c(3,2))
for(v in 2:7){
  plot(mat_v[v,],mat_gini[v,],type="l",
  ylim=range(mat_gini),xlab="",ylab="",
  main=names(myocarde)[v]) 
  abline(h=max(mat_gini),col="blue")
}


Here, the most homogeneous leaves, when cutting into two parts, are obtained using variable ‘INSYS’, and the optimal cutoff value is close to 19. So far, that's the only information we use. Well, actually no: if the gain is sufficiently large, we go for a split. Here, the gain is

gini(y=myocarde$PRONO,classe=(myocarde[,3]<19))-
gini(y=myocarde$PRONO,classe=(myocarde[,3]<=Inf))
[1] 0.2832801

which is large. Sufficiently large to go for it, and to split in two. Actually, we look at the relative gain

-(gini(y=myocarde$PRONO,classe=(myocarde[,3]<19))-
gini(y=myocarde$PRONO,classe=(myocarde[,3]<=Inf)))/
gini(y=myocarde$PRONO,classe=(myocarde[,3]<=Inf))
[1] 0.5862131

If that relative gain exceeds 1% (the default value of the complexity parameter cp in rpart), we split in two.
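Since this relative gain will be computed several times below, it can be convenient to wrap it in a small helper function (a sketch, not something used in the original code; it assumes the gini() function above, and uses a non-strict cutoff):

relative_gain = function(x, s, y=myocarde$PRONO){
  g0 = gini(y=y, classe=(x<=Inf))   # index without any split
  g1 = gini(y=y, classe=(x<=s))     # index with the split
  -(g1-g0)/g0
}
relative_gain(myocarde$INSYS, 19)   # should be close to the 0.586 obtained above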

Then we do it again. Twice. First, we go to the leaf on the left, with 27 observations, and we try to see if we can split it.

idx = which(myocarde$INSYS<19)
mat_gini = mat_v = matrix(NA,7,101)
for(v in 1:7){
  variable = myocarde[idx,v]
  v_seuil = seq(quantile(myocarde[idx,v], 7/length(myocarde[idx,v])),
                quantile(myocarde[idx,v], 1-7/length(myocarde[idx,v])),
                length=101)
  mat_v[v,] = v_seuil
  for(i in 1:101){
    CLASSE = variable<=v_seuil[i]
    mat_gini[v,i] = gini(y=myocarde$PRONO[idx],classe=CLASSE)}}
par(mfrow=c(3,2))
for(v in 2:7){
  plot(mat_v[v,],mat_gini[v,],type="l",
       ylim=range(mat_gini),xlab="",ylab="",
       main=names(myocarde)[v]) 
  abline(h=max(mat_gini),col="blue")
}

The graph is now the following,

and observe that the best split is obtained using ‘REPUL’, with a cutoff around 1585. We check that the (relative) gain is sufficiently large, and then we go for it.
And then, we consider the other leaf, and we run the same code

idx = which(myocarde$INSYS>=19)
mat_gini = mat_v = matrix(NA,7,101)
for(v in 1:7){
  variable = myocarde[idx,v]
  v_seuil = seq(quantile(myocarde[idx,v], 6/length(myocarde[idx,v])),
                quantile(myocarde[idx,v], 1-6/length(myocarde[idx,v])),
                length=101)
  mat_v[v,] = v_seuil
  for(i in 1:101){
    CLASSE = variable<=v_seuil[i]
    mat_gini[v,i] = gini(y=myocarde$PRONO[idx],classe=CLASSE)}}
par(mfrow=c(3,2))
for(v in 2:7){
  plot(mat_v[v,],mat_gini[v,],type="l",
       ylim=range(mat_gini),xlab="",ylab="",
       main=names(myocarde)[v]) 
  abline(h=max(mat_gini),col="blue")
}


Here, we should split according to ‘REPUL’, and the cutoff is about 1094. Here again, we have to make sure that the split is worth it. And we cut.

Now we have four leaves, and we should run the same code again. Actually, not on the very first one, which is already homogeneous, but we should do the same for the other three. If we do, we can see that we cannot split them any further: the gains would not be sufficiently interesting.

Now guess what… that’s exactly what we have obtained with our initial code
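For completeness, the whole procedure can be summarized in a rough recursive sketch (a toy version only, assuming the gini() function and the myocarde data above; it is not what rpart actually does, since there are no surrogate splits, no pruning, and a simplified stopping rule):

grow = function(idx, depth=0, max_depth=3, min_gain=0.01){
  y = myocarde$PRONO[idx]
  if(length(unique(y)) == 1)                        # pure leaf, nothing to split
    return(paste("leaf, mean(PRONO) =", round(mean(y),3)))
  g0 = gini(y=y, classe=rep(TRUE,length(idx)))      # impurity without splitting
  best = list(gain=-Inf)
  for(v in 1:7){                                    # try each covariate
    x = myocarde[idx,v]
    for(s in seq(min(x), max(x), length=101)){      # and a grid of cutoffs
      cl = (x <= s)
      if(length(unique(cl)) < 2) next               # skip degenerate splits
      g1 = gini(y=y, classe=cl)
      if(g1-g0 > best$gain) best = list(gain=g1-g0, var=names(myocarde)[v], cutoff=s)
    }
  }
  if(depth >= max_depth || best$gain/abs(g0) < min_gain)   # split not worth it
    return(paste("leaf, mean(PRONO) =", round(mean(y),3)))
  list(split = paste(best$var, "<=", round(best$cutoff,2)),
       left  = grow(idx[myocarde[idx,best$var] <= best$cutoff], depth+1, max_depth, min_gain),
       right = grow(idx[myocarde[idx,best$var] >  best$cutoff], depth+1, max_depth, min_gain))
}
grow(1:nrow(myocarde))

The output should be close to the tree above, up to ties and the coarse grid of cutoffs.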

Note that the case of categorical explanatory variables has been discussed in a previous post, a few years ago.

Application on our small dataset

On our small dataset, we obtain the following (after changing the default values, since by default R requires leaves with a minimum number of observations, around ten, and here the dataset is too small).

tree = rpart(y ~ x1+x2,data=df, 
control = rpart.control(cp = 0.25,
minsplit = 7))
prp(tree,type=2,extra=1)

u = seq(0,1,length=101)
p = function(x,y){predict(tree,newdata=data.frame(x1=x,x2=y),type="prob")[,2]}
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)

We have a nice and simple cut

With less observations in the leaves, we can easily get a perfect model here

tree = rpart(y ~ x1+x2,data=df, 
control = rpart.control(cp = 0.25,
minsplit = 2))
prp(tree,type=2,extra=1)

u = seq(0,1,length=101)
p = function(x,y){predict(tree,newdata=data.frame(x1=x,x2=y),type="prob")[,2]}
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Nice, isn’t it? Now, just two little additional comments before growing some more trees…

Pruning

I did not mention pruning here, because there are two possible strategies when growing trees. Either we keep splitting until we obtain only homogeneous leaves, and once we have a big, deep tree, we prune it. Or we use the strategy mentioned here: at each step, we check whether the split is worth it; if not, we stop.
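With rpart, the first strategy (grow a deep tree, then prune it back) would look like the following sketch; the cp value used in prune() is arbitrary here, and would normally be chosen from the cross-validation table printed by printcp().

library(rpart)
library(rpart.plot)
deep_tree = rpart(PRONO~., data=myocarde,
                  control=rpart.control(cp=0, minsplit=2))   # deliberately deep tree
printcp(deep_tree)                        # cross-validated error of each subtree
pruned_tree = prune(deep_tree, cp=0.05)   # 0.05 is only an illustrative value
prp(pruned_tree, type=2, extra=1)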

Variable Importance

An interesting tool is the variable importance function. The heuristic idea is that if we use variable ‘INSYS’ to split, it is an important variable. And its importance is related to the gain in Gini index. If we get back to the visualization of the tree, it seems that two variables are interesting here: ‘INSYS’ and ‘REPUL’. And we should get back to previous computation to quantify how important both are.

This will be used in our next post, on random forests. But actually, with one single tree, things are not that simple. Let us get back to the graph of the initial node.

Indeed, ‘INSYS’ is important, since we decided to use it. But what about ‘INCAR’ or ‘REPUL’? They were very close… And actually, in R, those surrogate splits are considered in the computation, as briefly explained in the vignette. Let us look more carefully at the output of the R function

cart = rpart(PRONO~., myocarde)
split = summary(cart)$splits

If we look at the first part of that object, we get

split
      count ncat    improve    index       adj
INSYS    71   -1 0.58621312   18.850 0.0000000
REPUL    71    1 0.55440034 1094.500 0.0000000
INCAR    71   -1 0.54257020    1.690 0.0000000
PRDIA    71    1 0.27284114   17.000 0.0000000
PAPUL    71    1 0.20466714   23.250 0.0000000

So indeed, ‘INSYS’ was the most important variable, but surrogate splits can also be considered, and ‘INCAR’ and ‘REPUL’ are indeed very important. The gain was 58% (as we obtained) using ‘INSYS’, but the others reached gains around 55% (nothing to be ashamed of). So it would be unfair to claim that they have no importance at all. And it is the same for the other leaves that we split,

REPUL    27    1 0.18181818 1585.000 0.0000000
PVENT    27   -1 0.10803571   14.500 0.0000000
PRDIA    27    1 0.10803571   18.500 0.0000000
PAPUL    27    1 0.10803571   22.500 0.0000000
INCAR    27    1 0.04705882    1.195 0.0000000

On the left, we did use ‘REPUL’ (with 18% gain), but ‘PVENT’, ‘PRDIA’ and ‘PAPUL’ were not that bad, with (almost) 11% gain… We can obtain variable importance by summing all those values, and we have

cart$variable.importance
     INSYS      REPUL      INCAR      PAPUL      PRDIA      FRCAR      PVENT 
10.3649847 10.0510872  8.2121267  3.2441501  2.8276121  1.8623046  0.3373771

that we can visualize using

barplot(t(cart$variable.importance),horiz=TRUE)


To be continued with more trees…

The Parade of Dwarfs, or Visualizing Inequalities

In my previous post, on the distribution of the population over the French territory, I used the Gini index as a measure of inequality, and I noted that the index went from 0.45 at the beginning of the 19th century to 0.75 today. But one must admit that this index is not very visual (unless you are comfortable with Lorenz curves). In 1970, Jan Pen introduced the “parade of dwarfs” to illustrate inequality: to describe income inequality, individuals are drawn with a height proportional to their income.

There are a few (rare) giants, and many dwarfs. We can draw the curve associated with this parade of dwarfs: formally, for p\in(0,1), we plot p\mapsto F^{-1}(p), the quantile function, with heights normalized by the average income,

where F is the cumulative distribution function of the variable under study. Recall that for a lognormal distribution with parameters (\mu,\sigma^2), the Gini index is G=2\Phi(\sigma/\sqrt{2})-1, where \Phi is the standard normal cdf.

Conversely, from a Gini index, one can recover the \sigma parameter of an underlying lognormal distribution, \sigma=\sqrt{2}\,\Phi^{-1}((1+G)/2). For instance, for a lognormal distribution associated with a Gini index of about 0.25, we get

> library(ineq)
> n=1e4
> gini=0.25
> sigma=qnorm((1+gini)/2)*sqrt(2)
> x=rlnorm(n,0,sigma)
> Pen(x)

For Gini indices of 0.50 and 0.75, we get the following curves

> gini=0.5
> sigma=qnorm((1+gini)/2)*sqrt(2)
> n=1e4
> y1=qlnorm((1:n)/(n+1),0,sigma)/
+ exp(0+sigma^2/2)

> gini=0.75
> sigma=qnorm((1+gini)/2)*sqrt(2)
> y2=qlnorm((1:n)/(n+1),0,sigma)/
+ exp(0+sigma^2/2)
> plot((1:n)/(n+1),y1,ylim=c(-.5,10))
> rect(0,0,1,1,col="light yellow",border=NA)
> polygon(c(0,(1:n)/(n+1),1),c(0,y1,0),
+ col=rgb(1,0,0,.5))
> polygon(c(0,(1:n)/(n+1),1),c(0,y2,0),
+ col=rgb(0,0,1,.5))
> abline(h=1,lty=2,col="grey")

The yellow curve corresponds to a perfectly egalitarian distribution. The red curve is obtained with a Gini index of about 0.50, while the blue one is obtained with an index of 0.75.

To get back to our example of a population spread over a territory, we can also imagine a linear territory, 1 km long, and assume that the area on the left is the least dense, while the area on the right is the densest.

  • with a uniform distribution, suppose that everyone can be housed in 5-storey buildings;
  • with a Gini index of 0.50, the first 200 m are 1-storey buildings, the next 200 m are 2-storey buildings, and so on; at 700 m we reach the last 5-storey buildings, and at the very end there are 15-storey buildings on the last 50 metres, preceded by a 12-storey building, and before that 10-storey buildings;
  • with a Gini index of about 0.75, there is nothing on the first 25% of the territory, then 1-storey buildings on the next 25%, and so on; 80% of the dwellings do not exceed 5 storeys, but at the very end there are relatively tall towers, exceeding 20 storeys.
> n=19
> gini=0.5
> sigma=qnorm((1+gini)/2)*sqrt(2)
> y1=qlnorm((1:n)/(n+1),0,sigma)/
+ exp(0+sigma^2/2)

> gini=0.75
> sigma=qnorm((1+gini)/2)*sqrt(2)
> y2=qlnorm((1:n)/(n+1),0,sigma)/
+ exp(0+sigma^2/2)

> plot((1:n)/(n+1),y1*5,ylim=c(-.5,20))
> for(i in 1:n){
+ rect(i/(n+1)-.02,0,i/(n+1)+.02,1*5,
+ col="light yellow",border=NA)
+ rect(i/(n+1)-.02,0,i/(n+1)+.02,round(y1[i]*5),
+ col=rgb(1,0,0,.5),border=NA)
+ rect(i/(n+1)-.02,0,i/(n+1)+.02,round(y2[i]*5),
+ col=rgb(0,0,1,.5),border=NA)
+ }
> abline(h=1*5,lty=2,col="grey")
> axis(2)

or, if we zoom in, we get

The red curve is the distribution of the population in France in the 19th century, or in Germany today (as described in my previous post). The blue curve represents France today.

French Centralization, and the Distribution of the Population over the Territory

Following my post from last night on the distribution of the population in France, I received several comments (on Twitter) noting that it was not surprising that France is so concentrated, in terms of population, given the importance of centralization in France (as opposed to Germany, for instance). And as Wikipedia notes, on its page about centralism, “since the French Revolution (and even before), France has been a very centralist state”.

A few years ago, Mattia Bunel (aka @mattiabunel) wrote a nice master's thesis on the influence of environmental constraints on the distribution of the French population between 1793 and 1999. Beyond the writing itself, there is above all a lot of work on the data: Mattia reshaped the data from cassini.ehess.fr, and he was kind enough to send me his dataset, with one row per French village, its area, and its population at several dates between 1793 and 1999. To recover the area (Mattia spent quite some time updating it, in particular for villages that merged), the code is simply

> base=read.csv("/Cassini/export.csv")
> dim(base)
[1] 41409    40
> S=base$Superficie
> n=nchar(as.character(S))
> S=substr(S,1,n-3)
> Surface=as.numeric(gsub(" ", "", as.character(S), fixed = TRUE))
> idx=which(!is.na(Surface))
> B=base[idx,]
> dim(B)
[1] 36576    40

We thus have the 36,000 or so French communes. Then, for each year, we can extract the population. In order to get a Gini index computation consistent with what I did yesterday (in Non-Uniform Population Density in some European Countries), the idea is the following: suppose there are 2 villages, one with area 4 and population 4, and another with area 1 and population 3. In yesterday's post, I was reasoning per spatial unit (namely, a small square),

and we will do the same here. In other words, we will spread the population uniformly within the village of size 4, which gives 4 “unit villages” with 1 inhabitant each,

We thus have 5 units, 4 with a population of 1, and 1 with a population of 3. From this sample, {1,1,1,1,3}, we can compute a Gini index (a quick numerical check is given right after the function below). The function to extract the population for a given year, and to compute the Gini index, is

> library(ineq)
> LC=function(annee){
+   nom=paste("X",annee,sep="")
+   P=B[,nom]
+   P=gsub(" ", "", as.character(P), fixed = TRUE)
+   Pop=as.numeric(substr(P,3,nchar(P)))
+   P=Pop[which(!is.na(Pop))]
+   S=Surface[which(!is.na(Pop))]
+   D=P/S
+   S1=round(S/20)
+   X=rep(NA,sum(S1))
+   s=0
+   for(i in 1:length(S1)){
+     X[s+1:S1[i]]=D[i]
+     s=s+S1[i]
+   }
+   Gini(X)
+ }
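As a quick check of the toy example above, the Gini index of the sample {1,1,1,1,3} can be computed directly with the ineq package (it should be close to 0.23):

> library(ineq)
> Gini(c(1,1,1,1,3))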

If we compute our Gini index for all the years, we get

> Y=names(base)[7:ncol(base)]
> Y=as.numeric(substr(Y,2,5))
> gini=Vectorize(LC)(Y)
> plot(Y,gini,type="b")

We find again a Gini index above 0.7 today (which we had obtained yesterday, with a rather different methodology), but above all, we can see that the Gini index has kept increasing since the French Revolution… The quick interpretation would be that French centralism has not stopped growing for 200 years.

How Could Classification Trees Be So Fast on Categorical Variables?

I think that over the past few months, I have been saying incorrect things about classification with categorical covariates, because I never took the time to look at it carefully. Consider some simulated dataset, based on a logistic regression,

> n=1e3
> set.seed(1)
> X1=runif(n)
> q=quantile(X1,(0:26)/26)
> q[1]=0
> X2=cut(X1,q,labels=LETTERS[1:26])
> p=exp(-.1+qnorm(2*(abs(.5-X1))))/(1+exp(-.1+qnorm(2*(abs(.5-X1)))))
> Y=rbinom(n,size=1,p)
> df=data.frame(X1=X1,X2=X2,p=p,Y=Y)

Here, we use a continuous covariate, except that it is considered as unobserved. Instead, we observe a categorical covariate with 26 categories. The (theoretical) relationship between the continuous covariate and the probability is given below,

> vx1=seq(0,1,by=.001)
> vp=exp(-.1+qnorm(2*(abs(.5-vx1))))/(1+exp(-.1+qnorm(2*(abs(.5-vx1)))))
> plot(vx1,vp,type="l")

and the empirical probability, for each modality, is the following.
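Those empirical frequencies can be computed, for instance, as follows (a small illustration, mirroring the aggregate() computation used later in this post):

> emp_prob=aggregate(df$Y,by=list(X2=df$X2),mean)
> barplot(emp_prob$x,names.arg=as.character(emp_prob$X2),las=2)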

If we run a classification tree, we get

> library(rpart)
> tree=rpart(Y~X2,data=df)
> library(rpart.plot)
> prp(tree, type=2, extra=1)

To be more specific, the output is here

> tree
1) root 1000 249.90000 0.4900000  
  2) X2=F,G,H,I,J,K,L,M,N,O,P,Q,R 499 105.3 0.302
    4) X2=J,K,L,M,N,O,P,Q,R 346  65.12 0.25144  *
    5) X2=F,G,H,I 153  37.22876 0.4183007       *
  3) X2=A,B,C,D,E,S,T,U,V,W,X,Y,Z 501 109.61 0.67
    6) X2=B,C,D,E,S,T,U,V,W,X 385  90.38 0.623  *
    7) X2=A,Y,Z 116  14.50862 0.8534483         *

 

Note that it takes less than a second to get that output. So clearly, we did not look at all possible combinations of modalities. For the first node alone, there are something like 2^{26} possible groups (subsets of the 26 letters), i.e.

> 2^26
[1] 67108864

It is big… not huge, but too big to try all combinations, since that’s only the first node, and we have to do it again on the two leaves, etc. Antoine (aka @ly_antoine) told me – while we were having a coffee after lunch today – the trick to get a fast algorithm, on categories. And as usual, the idea is very clever…

First, we need a function to compute Gini index

> gini=function(y,classe){
+    T=table(y,classe)
+    nx=apply(T,2,sum)
+    n=sum(T)
+    pxy=T/matrix(rep(nx,each=2),nrow=2)
+    omega=matrix(rep(nx,each=2),nrow=2)/n
+    g=-sum(omega*pxy*(1-pxy))
+    return(g)}

For the first node, the idea is very simple:

  • Compute the empirical average \overline{y}_A of the response within each category A of the covariate,
> cond_prob=aggregate(df$Y,by=list(df$X2),mean)
  • Then sort those 26 values, \overline{y}_{(1)}\leq\cdots\leq\overline{y}_{(26)},
  • Based on that ordering of the letters, consider
> Group_Letters=cond_prob[order(cond_prob$x),2]

  • Then consider (only) the 26 candidate partitions obtained by grouping the first k reordered letters (k=1,\dots,26) against the remaining ones, instead of the 2^{26} possible subsets (a classical result, going back to the CART book by Breiman et al., guarantees that for a binary response the optimal two-group split is one of those),

> v_gini=rep(NA,26)
> for(v in 1:26){
+   CLASSE=df$X2 %in% Group_Letters[1:v]
+   v_gini[v]=gini(y=df$Y,classe=CLASSE)
+ }

If we plot them, we get

> plot(1:26,v_gini,type="b")

As with continuous variables, we look for the maximum value, and then we have our two groups,

> sort(Group_Letters[1:which.max(v_gini)])
 [1] F G H I J K L M N O P Q R

That’s exactly what we got with the tree function in R,

1) root 1000 249.90000 0.4900000  
  2) X2=F,G,H,I,J,K,L,M,N,O,P,Q,R 499 105.30 0.30

Now, consider the leaf on the left (for instance)

> sub_df=df[df$X2 %in% sort(Group_Letters[1:which.max(v_gini)]),]

Then use the same algorithm as before: sort the conditional means,

> cond_prob=aggregate(sub_df$Y,by=
+ list(sub_df$X2),mean)
> s_Group_Letters=cond_prob[order(cond_prob$x),2]

Then compute Gini indices based on groups obtained from that ordering,

> v_gini=rep(NA,length(s_Group_Letters))
> for(v in 1:length(s_Group_Letters)){
+   CLASSE=sub_df$X2 %in% s_Group_Letters[1:v]
+   v_gini[v]=gini(y=sub_df$Y,classe=CLASSE)
+ }

If we plot it, we get our two groups,

> plot(1:length(s_Group_Letters),v_gini,type="b")

And the first group is here

> sort(s_Group_Letters[1:which.max(v_gini)])
[1] J K L M N O P Q R

Again, that’s exactly what we got with the R function

1) root 1000 249.90000 0.4900000  
  2) X2=F,G,H,I,J,K,L,M,N,O,P,Q,R 499 105.30 0.30
    4) X2=J,K,L,M,N,O,P,Q,R 346  65.12 0.25144  *

Clever, isn’t it?

‘Variable Importance Plot’ and Variable Selection

Classification trees are nice. They provide an interesting alternative to a logistic regression. I started to include them in my courses maybe 7 or 8 years ago. The question is nice (how to get an optimal partition), the algorithmic procedure is nice (the trick of splitting according to one variable, and only one, at each node, and then moving forward, never backward), and the visual output is just perfect (with that tree structure). But the prediction can be rather poor: the performance of that algorithm can hardly compete with a (well-specified) logistic regression.

Then I discovered forests (see Leo Breiman's page for a detailed presentation). Being a huge fan of bootstrap procedures, I loved the idea. In regression models, I usually mention the bootstrap to avoid asymptotic approximations: we bootstrap the rows (the observations). In the case of random forests, I have to admit that the idea of randomly selecting a set of possible variables at each node is very clever. The performance is much better, but interpretation is usually more difficult. And something that I love when there are a lot of covariates is the variable importance plot, which is something we can hardly get with econometric models (please let me know if I'm wrong).

In order to illustrate, let us generate a large dataset. Not necessarily huge, but large, so that we really have to select variables.  Since it is more interesting if we have possibly correlated variables, we need a covariance matrix. There is a nice package in R to randomly generate covariance matrices.

> set.seed(1)
> n=500
> library(clusterGeneration)
> library(mnormt)
> S=genPositiveDefMat("eigen",dim=15)
> S=genPositiveDefMat("unifcorrmat",dim=15)
> X=rmnorm(n,varcov=S$Sigma)
> library(corrplot)
> corrplot(cor(X), order = "hclust")

See Ghosh & Henderson (2003) for more details on the methodology.
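Just as an illustration of what such a variable importance plot looks like (this is not the model used in the rest of the post: the response below is generated from an arbitrary logistic model using only two of the fifteen covariates):

> P=exp(X[,1]+X[,3])/(1+exp(X[,1]+X[,3]))
> Y=rbinom(n,size=1,prob=P)
> df_rf=data.frame(Y=as.factor(Y),X)
> library(randomForest)
> fit=randomForest(Y~.,data=df_rf,importance=TRUE)
> varImpPlot(fit)
> importance(fit)

With such correlated covariates, variables correlated with the two used in that hypothetical model should also show up with non-negligible importance, which is precisely the kind of issue discussed in this post.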

Continue reading ‘Variable Importance Plot’ and Variable Selection

Splitting a Node in a Tree

If we grow a tree with standard functions in R, on the same dataset used to introduce classification trees in a previous post,

> MYOCARDE=read.table(
+ "http://freakonometrics.free.fr/saporta.csv",
+ head=TRUE,sep=";")
> library(rpart)
> cart<-rpart(PRONO~.,data=MYOCARDE)

we get

> library(rpart.plot)
> library(rattle)
> prp(cart,type=2,extra=1)

Continue reading Splitting a Node in a Tree

Inequality, Poverty and Welfare

For the fourth course on Inequalities, we will get back to quantile regression, and discuss welfare functions as well as poverty indices. Slides are now online.

To illustrate, we will use the following datasets

uk88 <- read.csv("http://www.vcharite.univ-mrs.fr/pp/lubrano/cours/fes88.csv",sep=";",header=FALSE)$V1
uk92 <- read.csv("http://www.vcharite.univ-mrs.fr/pp/lubrano/cours/fes92.csv",sep=";",header=FALSE)$V1
uk96 <- read.csv("http://www.vcharite.univ-mrs.fr/pp/lubrano/cours/fes96.csv",sep=";",header=FALSE)$V1
cpi <- c(421.7, 546.4, 602.4)
y88 <- uk88/cpi[1]
y92 <- uk92/cpi[2]
y96 <- uk96/cpi[3]
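For instance (just a quick illustration, not necessarily what is done in the slides), Lorenz curves and Gini indices for the three years can be obtained with the ineq package:

library(ineq)
plot(Lc(y88))                             # Lorenz curve for 1988
lines(Lc(y92)$p, Lc(y92)$L, col="blue")   # 1992
lines(Lc(y96)$p, Lc(y96)$L, col="red")    # 1996
c(Gini(y88), Gini(y92), Gini(y96))        # associated Gini indices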

For the part on applications of quantile regression, we will also use

salary <- read.table("http://data.princeton.edu/wws509/datasets/salary.dat",header=TRUE)

Visualizing Inequalities in a 3-Person Economy

Yesterday, in the course on inequalities, I mentioned (too) briefly the 3-person economy. I wanted to spend some time, in a short post, on visualizations of inequality in such a context. As mentioned in the slides, it is possible to use a ternary plot representation when we believe that the scale independence principle makes sense, i.e. I(\lambda\boldsymbol{x})=I(\boldsymbol{x}). A distribution of incomes can then be represented as a barycenter in an equilateral triangle (also called a de Finetti diagram). The midpoint is the equal situation: the three agents share the same wealth. Because of the scale independence property, we can look at the distribution of wealth on the simplex. A wealth distribution is a vector \boldsymbol{\omega}=(\omega_A,\omega_B,\omega_C) where each component is one of the (red) distances below. A is at the top of the triangle, and the vertical distance is proportional to the wealth of A: the closer to the bottom line, the poorer A is.

To visualize this distribution of wealth, we can use the trifield package. To add the point and the three segments, the code is

tripoint=function(s){
  p=s/sum(s)
  p1=c(0,s[2]+s[1]/2,s[3]+s[1]/2)/sum(s)
  p2=c(s[1]+s[2]/2,0,s[3]+s[2]/2)/sum(s)
  p3=c(s[1]+s[3]/2,s[2]+s[3]/2,0)/sum(s)  
  C <- abc2xy(matrix(p,1,3))
  points(C,pch=19,col="red",cex=2)
  C1 <- abc2xy(matrix(p1,1,3))
  C2 <- abc2xy(matrix(p2,1,3))
  C3 <- abc2xy(matrix(p3,1,3))
  segments(C1[1],C1[2],C[1],C[2],lwd=2,col="red")
  segments(C2[1],C2[2],C[1],C[2],lwd=2,col="red")
  segments(C3[1],C3[2],C[1],C[2],lwd=2,col="red")
}

For instance, to visualize the equal case (inequality indices are defined as a distance to this situation)

tripoint(c(1,1,1))

For a case where there is inequality, use for instance

tripoint(c(1,2,3))
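If the trifield package (and its abc2xy function) is not available, a small generic barycentric-to-Cartesian conversion can be used instead; note that the vertex convention below (A on top, B bottom-left, C bottom-right) is an arbitrary choice, not necessarily the one used by the package:

abc2xy_alt = function(abc){
  abc = abc/sum(abc)                        # normalize to the simplex
  A = c(1/2, sqrt(3)/2); B = c(0,0); C = c(1,0)
  matrix(abc[1]*A + abc[2]*B + abc[3]*C, nrow=1)
}
abc2xy_alt(c(1,1,1))   # the equal-share point, at the center of the triangle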

Continue reading Visualizing Inequalities in a 3-Person Economy

Inequalities, course 3

Tomorrow, we will discuss inequality indices, from a statistical perspective, and also from an axiomatic point of view. In order to illustrate, we will use the following dataset,

> income <- read.csv("http://www.vcharite.univ-mrs.fr/pp/lubrano/cours/fes96.csv",sep=";",header=FALSE)$V1

Slides can be found online. Since it is the first year I give this course, all comments are welcome…