Tag Archives: Gini

The Parade of Dwarfs, or Visualizing Inequalities

In my previous post, on the distribution of the population across the French territory, I used the Gini index as a measure of inequality. And I noted that this index went from 0.45 at the beginning of the 19th century to 0.75 today. But one must admit that this index is not very visual (unless one is comfortable with Lorenz curves). In 1970, Jan Pen introduced the "parade of dwarfs" to illustrate inequalities: to describe income inequalities, individuals are represented with a height proportional to their income.

There are a few (rare) giants, and a lot of dwarfs. We can draw the curve associated with this parade of dwarfs. Formally, for $u\in(0,1)$, we plot
$$u\mapsto \frac{F^{-1}(u)}{\mathbb{E}[X]}$$

where $F$ is the cumulative distribution function of the variable of interest (heights are normalized, so that the average height is 1). Recall that for a lognormal distribution $LN(\mu,\sigma^2)$, the Gini index is
$$G=2\Phi\!\left(\frac{\sigma}{\sqrt{2}}\right)-1$$

Conversely, from a Gini index, we can recover the parameter of an underlying lognormal distribution, since $\sigma=\sqrt{2}\,\Phi^{-1}((1+G)/2)$. For instance, for a lognormal distribution associated with a Gini index of about 0.25, we obtain

> library(ineq)
> gini=0.25
> sigma=qnorm((1+gini)/2)*sqrt(2)
> n=1e4
> x=rlnorm(n,0,sigma)
> Pen(x)
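As a quick sanity check, the empirical Gini index of that simulated sample should be close to the 0.25 we started from,

> Gini(x)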

For Gini indices of 0.50 and 0.75, we obtain the following curves

> gini=0.5
> sigma=qnorm((1+gini)/2)*sqrt(2)
> n=1e4
> y1=qlnorm((1:n)/(n+1),0,sigma)/
+ exp(0+sigma^2/2)

> gini=0.75
> sigma=qnorm((1+gini)/2)*sqrt(2)
> y2=qlnorm((1:n)/(n+1),0,sigma)/
+ exp(0+sigma^2/2)
> plot((1:n)/(n+1),y1,ylim=c(-.5,10))
> rect(0,0,1,1,col="light yellow",border=NA)
> polygon(c(0,(1:n)/(n+1),1),c(0,y1,0),
+ col=rgb(1,0,0,.5))
> polygon(c(0,(1:n)/(n+1),1),c(0,y2,0),
+ col=rgb(0,0,1,.5))
> abline(h=1,lty=2,col="grey")

The yellow area corresponds to a perfectly egalitarian distribution. The red curve is obtained with a Gini index of about 0.50, while the blue one is obtained with an index of 0.75.

To get back to our example of the population spread over the territory, we can also imagine a linear territory, 1 km long. Suppose that the area on the left is the least dense, and the one on the right the densest.

  • with a uniform distribution, suppose that everyone can be housed in 5-storey buildings.
  • with a Gini index of 0.50, the first 200 m are 1-storey buildings, the next 200 m are 2-storey buildings, etc. At 700 m, we reach the last 5-storey buildings. At the very end, there are 15-storey buildings on the last 50 meters, preceded by a 12-storey building, and before that by 10-storey buildings.
  • with a Gini index of about 0.75, there is nothing on 25% of the territory, then 1-storey buildings on the next 25%, etc. 80% of the dwellings do not exceed 5 storeys. On the other hand, at the very end, there are relatively tall towers, exceeding 20 storeys.
> n=19
> gini=0.5
> sigma=qnorm((1+gini)/2)*sqrt(2)
> y1=qlnorm((1:n)/(n+1),0,sigma)/
+ exp(0+sigma^2/2)

> gini=0.75
> sigma=qnorm((1+gini)/2)*sqrt(2)
> y2=qlnorm((1:n)/(n+1),0,sigma)/
+ exp(0+sigma^2/2)

> plot((1:n)/(n+1),y1*5,ylim=c(-.5,20))
> for(i in 1:n){
+ rect(i/(n+1)-.02,0,i/(n+1)+.02,1*5,
+ col="light yellow",border=NA)
+ rect(i/(n+1)-.02,0,i/(n+1)+.02,round(y1[i]*5),
+ col=rgb(1,0,0,.5),border=NA)
+ rect(i/(n+1)-.02,0,i/(n+1)+.02,round(y2[i]*5),
+ col=rgb(0,0,1,.5),border=NA)
+ }
> abline(h=1*5,lty=2,col="grey")
> axis(2)

or, if we zoom in, we obtain

The red curve is the distribution of the population in France in the 19th century, or in Germany today (as described in my previous post). The blue curve represents France today.

French Centralization, and the Distribution of the Population over the Territory

Following my post from last night on the population distribution in France, I got several comments – on Twitter – noting that it was not surprising that France is so concentrated, in terms of population, given the importance of centralization in France (as opposed to Germany, for example). And as Wikipedia notes, on the page about centralism, "since the French Revolution (and even before), France has been a highly centralist state".

A few years ago, Mattia Bunel (aka @mattiabunel) wrote a nice master's thesis on the influence of environmental constraints on the distribution of the French population between 1793 and 1999. Beyond the writing itself, there is above all a big piece of work on the data: Mattia reshaped the data from cassini.ehess.fr. And Mattia was kind enough to send me his data, with one row per French village, with its area and its population, at several dates between 1793 and 1999. To extract the area (which Mattia spent time updating, in particular for villages that merged), the code is simply

> base=read.csv("/Cassini/export.csv")
> dim(base)
[1] 41409    40
> S=base$Superficie
> n=nchar(as.character(S))
> S=substr(S,1,n-3)
> Surface=as.numeric(gsub(" ", "", as.character(S), fixed = TRUE))
> idx=which(!is.na(Surface))
> B=base[idx,]
> dim(B)
[1] 36576    40

We thus have the 36,000 or so French communes. Then, for each year, we can extract the population. In order to get a Gini index consistent with what I did yesterday – in Non-Uniform Population Density in some European Countries – the idea is the following: suppose there are 2 villages, one with area 4 and population 4, and another with area 1 and population 3. In yesterday's post, I was reasoning per spatial unit (in that case, a small square)

and we will do the same here. In other words, the population is spread uniformly within the village of size 4, which yields 4 "unit villages" with 1 inhabitant each,

We then have 5 units, 4 with a population of 1, and 1 with a population of 3. From this sample – {1,1,1,1,3} – we can compute the Gini index.
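For the record, a quick check with the ineq package,

> library(ineq)
> Gini(c(1,1,1,1,3))
[1] 0.2285714

The function to extract the population, and compute the Gini index for a given year, is then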

> library(ineq)
> LC=function(annee){
+   nom=paste("X",annee,sep="")  # name of the population column for that year
+   P=B[,nom]
+   P=gsub(" ", "", as.character(P), fixed = TRUE)
+   Pop=as.numeric(substr(P,3,nchar(P)))
+   P=Pop[which(!is.na(Pop))]
+   S=Surface[which(!is.na(Pop))]
+   D=P/S                        # population density of each village
+   S1=round(S/20)               # number of spatial units per village
+   X=rep(NA,sum(S1))
+   s=0
+   for(i in 1:length(S1)){      # each unit inherits the village density
+     X[s+1:S1[i]]=D[i]
+     s=s+S1[i]
+   }
+   Gini(X)
+ }

If we compute this Gini index for all the years, we obtain

> Y=names(base)[7:ncol(base)]
> Y=as.numeric(substr(Y,2,5))
> gini=Vectorize(LC)(Y)
> plot(Y,gini,type="b")

We recover a Gini index above 0.7 today (as obtained yesterday, with a rather different methodology), but above all, we can see that the Gini index has kept increasing since the French Revolution... The quick interpretation would be that French centralism has not stopped growing for 200 years.

How Could Classification Trees Be So Fast on Categorical Variables?

I think that over the past months, I have been saying incorrect things about classification with categorical covariates, because I never took the time to look at it carefully. Consider a simulated dataset, with a logistic regression,

> n=1e3
> set.seed(1)
> X1=runif(n)
> q=quantile(X1,(0:26)/26)
> q[1]=0
> X2=cut(X1,q,labels=LETTERS[1:26])
> p=exp(-.1+qnorm(2*(abs(.5-X1))))/(1+exp(-.1+qnorm(2*(abs(.5-X1)))))
> Y=rbinom(n,size=1,p)
> df=data.frame(X1=X1,X2=X2,p=p,Y=Y)

Here, we use a continuous covariate, except that it is considered as unobserved. Instead, we have a categorical covariate with 26 categories. The (theoretical) relationship between the covariate and the probability is given below,

> vx1=seq(0,1,by=.001)
> vp=exp(-.1+qnorm(2*(abs(.5-vx1))))/(1+exp(-.1+qnorm(2*(abs(.5-vx1)))))
> plot(vx1,vp,type="l")

and the empirical probability, for each modality is
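That graph can be obtained, for instance, with something like

> bp=aggregate(df$Y,by=list(X2=df$X2),mean)
> barplot(bp$x,names.arg=as.character(bp$X2))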

If we run a classification tree, we get

> library(rpart)
> tree=rpart(Y~X2,data=df)
> library(rpart.plot)
> prp(tree, type=2, extra=1)

To be more specific, the output is here

> tree
1) root 1000 249.90000 0.4900000  
  2) X2=F,G,H,I,J,K,L,M,N,O,P,Q,R 499 105.3 0.302
    4) X2=J,K,L,M,N,O,P,Q,R 346  65.12 0.25144  *
    5) X2=F,G,H,I 153  37.22876 0.4183007       *
  3) X2=A,B,C,D,E,S,T,U,V,W,X,Y,Z 501 109.61 0.67
    6) X2=B,C,D,E,S,T,U,V,W,X 385  90.38 0.623  *
    7) X2=A,Y,Z 116  14.50862 0.8534483         *


Note that it takes less than a second to get that output. So clearly, we did not look at all combinations between modalities. For the first node alone, there are like $2^{26}$ possible groups, i.e.

> 2^26
[1] 67108864

It is big… not huge, but too big to try all combinations, since that’s only the first node, and we have to do it again on the two leaves, etc. Antoine (aka @ly_antoine) told me – while we were having a coffee after lunch today – the trick to get a fast algorithm, on categories. And as usual, the idea is very clever…

First, we need a function to compute the Gini index

> gini=function(y,classe){
+    T=table(y,classe)                     # contingency table
+    nx=apply(T,2,sum)                     # size of each class
+    n=sum(T)
+    pxy=T/matrix(rep(nx,each=2),nrow=2)   # conditional frequencies
+    omega=matrix(rep(nx,each=2),nrow=2)/n # weight of each class
+    g=-sum(omega*pxy*(1-pxy))
+    return(g)}

For the first node, the idea is very simple:

  • Compute the empirical averages $\hat{p}_x=\mathbb{P}(Y=1\vert X=x)$, for each of the 26 categories,
> cond_prob=aggregate(df$Y,by=list(df$X2),mean)
  • Then sort those values, $\hat{p}_{(1)}\leq\hat{p}_{(2)}\leq\cdots\leq\hat{p}_{(26)}$,
  • Based on that ordering, consider the ordered list of categories,
> Group_Letters=cond_prob[order(cond_prob$x),2]

  • Then consider (only) the 26 possible partitions that respect that ordering (the first $v$ categories against the remaining ones),

against the $2^{26}$ possible ones,

> v_gini=rep(NA,26)
> for(v in 1:26){
+   CLASSE=df$X2 %in% Group_Letters[1:v]
+   v_gini[v]=gini(y=df$Y,classe=CLASSE)
+ }

If we plot them, we get

> plot(1:26,v_gini,type="b")

As for continuous variables, we look for the value that maximizes the index, and then we have our two groups,

> sort(Group_Letters[1:which.max(v_gini)])
 [1] F G H I J K L M N O P Q R

That’s exactly what we got with the tree function in R,

1) root 1000 249.90000 0.4900000  
  2) X2=F,G,H,I,J,K,L,M,N,O,P,Q,R 499 105.30 0.30

Now, consider the leaf on the left (for instance)

> sub_df=df[df$X2 %in% sort(Group_Letters[1:which.max(v_gini)]),]

Then use the same algorithm as before: sort the conditional means,

> cond_prob=aggregate(sub_df$Y,by=
+ list(sub_df$X2),mean)
> s_Group_Letters=cond_prob[order(cond_prob$x),2]

Then compute Gini indices based on groups obtained from that ordering,

> v_gini=rep(NA,length(s_Group_Letters))
> for(v in 1:length(s_Group_Letters)){
+   CLASSE=sub_df$X2 %in% s_Group_Letters[1:v]
+   v_gini[v]=gini(y=sub_df$Y,classe=CLASSE)
+ }

If we plot it, we get our two groups,

> plot(1:length(s_Group_Letters),v_gini,type="b")

And the first group is here

> sort(s_Group_Letters[1:which.max(v_gini)])
[1] J K L M N O P Q R

Again, that’s exactly what we got with the R function

1) root 1000 249.90000 0.4900000  
  2) X2=F,G,H,I,J,K,L,M,N,O,P,Q,R 499 105.30 0.30
    4) X2=J,K,L,M,N,O,P,Q,R 346  65.12 0.25144  *

Clever, isn't it?

‘Variable Importance Plot’ and Variable Selection

Classification trees are nice. They provide an interesting alternative to logistic regression. I started to include them in my courses maybe 7 or 8 years ago. The question is nice (how to get an optimal partition), the algorithmic procedure is nice (the trick of splitting according to one variable, and only one, at each node, and then moving forward, never backward), and the visual output is just perfect (with that tree structure). But the prediction can be rather poor: the performance of that algorithm can hardly compete with a (well specified) logistic regression.

Then I discovered forests (see Leo Breiman's page for a detailed presentation). Being a huge fan of bootstrap procedures, I loved the idea. In regression models, I usually mention the bootstrap to avoid asymptotic approximations: we bootstrap the rows (the observations). In the case of random forests, I have to admit that the idea of randomly selecting a set of possible variables at each node is very clever. The performance is much better, but interpretation is usually more difficult. And something that I love when there are a lot of covariates: the variable importance plot. Which is something that we can hardly get with econometric models (please let me know if I'm wrong).

In order to illustrate, let us generate a large dataset. Not necessarily huge, but large enough that we really have to select variables. Since it is more interesting if we have possibly correlated variables, we need a covariance matrix. There is a nice package in R to randomly generate covariance matrices.

> set.seed(1)
> n=500
> library(clusterGeneration)
> library(mnormt)
> S=genPositiveDefMat("eigen",dim=15)
> S=genPositiveDefMat("unifcorrmat",dim=15)  # overwrites the previous matrix
> X=rmnorm(n,varcov=S$Sigma)
> library(corrplot)
> corrplot(cor(X), order = "hclust")

See Ghosh & Henderson (2003) for more details on the methodology.
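As a minimal sketch of the kind of output discussed above (with, for illustration only, a hypothetical response generated from the first two columns), the variable importance plot can be obtained with the randomForest package,

> library(randomForest)
> Y=factor(X[,1]+X[,2]+rnorm(n)>0)  # hypothetical response, for illustration
> fit=randomForest(Y~.,data=data.frame(Y,X))
> varImpPlot(fit)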

Continue reading ‘Variable Importance Plot’ and Variable Selection

Splitting a Node in a Tree

If we grow a tree with standard functions in R, on the same dataset used to introduce classification trees in a previous post,

> MYOCARDE=read.table(
+ "http://freakonometrics.free.fr/saporta.csv",
+ head=TRUE,sep=";")
> library(rpart)
> cart<-rpart(PRONO~.,data=MYOCARDE)

we get

> library(rpart.plot)
> library(rattle)
> prp(cart,type=2,extra=1)

Continue reading Splitting a Node in a Tree

Inequality, Poverty and Welfare

For the fourth course on Inequalities, we will get back to quantile regression, and discuss welfare functions as well as poverty indices. Slides are now online

To illustrate, we will use the following datasets

uk88 <- read.csv("http://www.vcharite.univ-mrs.fr/pp/lubrano/cours/fes88.csv",sep=";",header=FALSE)$V1
uk92 <- read.csv("http://www.vcharite.univ-mrs.fr/pp/lubrano/cours/fes92.csv",sep=";",header=FALSE)$V1
uk96 <- read.csv("http://www.vcharite.univ-mrs.fr/pp/lubrano/cours/fes96.csv",sep=";",header=FALSE)$V1
cpi <- c(421.7, 546.4, 602.4)
y88 <- uk88/cpi[1]
y92 <- uk92/cpi[2]
y96 <- uk96/cpi[3]
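Inequality indices can then be compared across the three years, for instance with the ineq package,

library(ineq)
Gini(y88)
Gini(y92)
Gini(y96)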

and for the part on applications of quantile regression

salary <- read.table("http://data.princeton.edu/wws509/datasets/salary.dat",header=TRUE)
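As a sketch of a possible application (assuming the usual columns of that dataset, sl for the salary and yr for the number of years in rank), quantile regression lines can be fitted with the quantreg package,

library(quantreg)
fit <- rq(sl~yr,tau=c(.1,.5,.9),data=salary)
plot(salary$yr,salary$sl)
for(i in 1:3) abline(coef(fit)[1,i],coef(fit)[2,i],col="red")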

Visualizing Inequalities in a 3-Person Economy

Yesterday, in the course on inequalities, I mentioned (too) briefly the 3-person economy. I wanted to spend some time, in a short post, on visualisations of inequalities in such a context. As mentioned in the slides, it is possible to use a ternary plot representation, in the case where we believe that the scale independence principle makes sense, i.e. $I(\lambda\boldsymbol{x})=I(\boldsymbol{x})$. A distribution of incomes can then be represented as a barycenter in an equilateral triangle (also called a de Finetti diagram). The midpoint is the equal situation: the three agents share the same wealth. Because of the scale independence property, we can look at distributions of wealth on the simplex. A wealth distribution is a vector $\boldsymbol{\omega}=(\omega_A,\omega_B,\omega_C)$ where each component is one of the (red) distances below. A is at the top of the triangle, and the vertical distance is proportional to the wealth of A. The closer to the bottom line, the poorer A is.

To visualize this distribution of wealth, we can use the trifield package. To add the point and the three segments, the code is

tripoint=function(s){
  p=s/sum(s)   # normalized wealth shares
  # endpoints of the three segments, on the edges of the triangle
  p1=c(0,s[2]+s[1]/2,s[3]+s[1]/2)/sum(s)
  p2=c(s[1]+s[2]/2,0,s[3]+s[2]/2)/sum(s)
  p3=c(s[1]+s[3]/2,s[2]+s[3]/2,0)/sum(s)  
  C <- abc2xy(matrix(p,1,3))
  points(C,pch=19,col="red",cex=2)
  C1 <- abc2xy(matrix(p1,1,3))
  C2 <- abc2xy(matrix(p2,1,3))
  C3 <- abc2xy(matrix(p3,1,3))
  segments(C1[1],C1[2],C[1],C[2],lwd=2,col="red")
  segments(C2[1],C2[2],C[1],C[2],lwd=2,col="red")
  segments(C3[1],C3[2],C[1],C[2],lwd=2,col="red")
}

For instance, to visualize the equal case (inequality indices are defined as a distance to this situation)

tripoint(c(1,1,1))

For a case where there is inequality, use for instance

tripoint(c(1,2,3))
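For the record, the corresponding inequality indices can be computed with the ineq package,

library(ineq)
Gini(c(1,1,1))   # 0, the equal situation
Gini(c(1,2,3))   # 0.2222222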

Continue reading Visualizing Inequalities in a 3-Person Economy

Inequalities, course 3

Tomorrow, we will discuss inequality indices, from a statistical perspective, and also from an axiomatic point of view. In order to illustrate, we will use the following dataset,

> income <- read.csv("http://www.vcharite.univ-mrs.fr/pp/lubrano/cours/fes96.csv",sep=";",header=FALSE)$V1

Slides can be found online. Since it is the first year I give this course, all comments are welcome…

Modeling Incomes and Inequalities

Last week, in our Inequality course, we looked at data. We started with some simulated data, just a few observations,

> library("ineq")
> load(url("http://freakonometrics.free.fr/income_5.RData"))
> (income=sort(income))
[1]  19233  23707  53297  61667 218662

How could we say that there is inequality in this sample? If we look at the wealth owned by the poorest, the poorest person (1 out of 5) owns 5% of the wealth; the bottom two (2 out of 5) own 11%, etc

> income[1]/sum(income)
[1] 0.05107471
> sum(income[1:2])/sum(income)
[1] 0.1140305
> sum(income[1:3])/sum(income)
[1] 0.2555648
> sum(income[1:4])/sum(income)
[1] 0.4193262

If we plot those values, we get the Lorenz curve

> plot(Lc(income))
> points(c(0:5)/5,c(0,cumsum(income)/sum(income)),pch=19,col="blue")
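For the record, the Gini index of that small sample, which summarizes the gap between the Lorenz curve and the diagonal, is around 0.46,

> Gini(income)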

Continue reading Modeling Incomes and Inequalities

Welfare, Inequality and Poverty

This week, we will start the crash course on Welfare, Inequality and Poverty. I will upload the slides soon. References for the course are the following,

See also Emmanuel Flachaire’s ECON-473 webpage, as well as Michel Lubrano’s notes. In the introductory course, I will also mention Le Monde, 2012 (on poverty), with the pdf. An interesting video is based on Norton & Ariely, 2011

Income distribution and Tour de France

A few days ago, Jean-François Mignot published an interesting article entitled Tour de France 2014 : pourquoi le vainqueur gagne 100 fois plus que le 10e. In this article, we have the following graph, with the income of each cyclist as a function of his final ranking (the data were downloaded from http://sportbuzzbusiness.fr/)

> bike=read.csv(
+ "http://freakonometrics.free.fr/tourdefrance.csv",
+ sep=";",header=TRUE,dec=" ")

> bike[1:19,"prime"]=bike[1:19,"prime"]*1000
> plot(bike,log="y",type="b",
+ xlab="(Final) rank",ylab="Bonus")

As pointed out by Jean-François, if the winner gets a lot of money, the bonus decreases fast, very fast actually. The Gini index is very high here

> library(ineq)
> ineq(bike$prime,type="Gini")
[1] 0.910461

and if we look at the Lorenz curve, indeed, the Tour de France is not very egalitarian,
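The curve itself can be obtained with the ineq package (using, again, the prime column),

> plot(Lc(bike$prime))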

Continue reading Income distribution and Tour de France

Regression tree using Gini’s index

In order to illustrate the construction of a regression tree (using the CART methodology), consider the following simulated dataset,

> set.seed(1)
> n=200
> X1=runif(n)
> X2=runif(n)
> P=.8*(X1<.3)*(X2<.5)+
+   .2*(X1<.3)*(X2>.5)+
+   .8*(X1>.3)*(X1<.85)*(X2<.3)+
+   .2*(X1>.3)*(X1<.85)*(X2>.3)+
+   .8*(X1>.85)*(X2<.7)+
+   .2*(X1>.85)*(X2>.7) 
> Y=rbinom(n,size=1,P)  
> B=data.frame(Y,X1,X2)

with one dichotomous variable (the variable of interest, $Y$), and two continuous ones (the explanatory variables $X_1$ and $X_2$).

> tail(B)
    Y        X1        X2
195 0 0.2832325 0.1548510
196 0 0.5905732 0.3483021
197 0 0.1103606 0.6598210
198 0 0.8405070 0.3117724
199 0 0.3179637 0.3515734
200 1 0.7828513 0.1478457

The theoretical partition is the following
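That partition can be sketched directly from the definition of P above,

> plot(c(0,1),c(0,1),col="white",xlab="X1",ylab="X2")
> abline(v=c(.3,.85),lty=2)      # splits on X1
> segments(0,.5,.3,.5,lty=2)     # split of X2 at .5, when X1<.3
> segments(.3,.3,.85,.3,lty=2)   # split of X2 at .3, when .3<X1<.85
> segments(.85,.7,1,.7,lty=2)    # split of X2 at .7, when X1>.85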

Here, the sample can be plotted below (be careful, the first variate is on the y-axis above, and on the x-axis below) with blue dots when $Y$ equals one, and red dots when $Y$ is null,

> plot(X1,X2,col="white")
> points(X1[Y=="1"],X2[Y=="1"],col="blue",pch=19)
> points(X1[Y=="0"],X2[Y=="0"],col="red",pch=19)

In order to construct the tree, we need a partition criterion. The most standard one is probably Gini's index, which can be written, when the $x_i$'s are split in two classes, denoted here $\{A,B\}$,

$$-\sum_{x\in\{A,B\}}\frac{n_x}{n}\sum_{y\in\{0,1\}}\frac{n_{x,y}}{n_x}\left(1-\frac{n_{x,y}}{n_x}\right)$$

or, when the $x_i$'s are split in three classes, denoted $\{A,B,C\}$,

$$-\sum_{x\in\{A,B,C\}}\frac{n_x}{n}\sum_{y\in\{0,1\}}\frac{n_{x,y}}{n_x}\left(1-\frac{n_{x,y}}{n_x}\right)$$

etc. Here, $n_{x,y}$ is simply the number of observations that belong to partition $x$ and for which $Y$ takes value $y$ (with $n_x=n_{x,0}+n_{x,1}$). But it is possible to consider other criteria, such as the chi-square distance,

$$\chi^2=\sum_{x}\sum_{y\in\{0,1\}}\frac{\left(n_{x,y}-n^{\perp}_{x,y}\right)^2}{n^{\perp}_{x,y}}$$

where, classically, $n^{\perp}_{x,y}$ denotes the count expected under independence,
$$n^{\perp}_{x,y}=\frac{n_x\,n_y}{n}$$
the sum being over $x\in\{A,B\}$ when we consider two classes (one knot), or over $x\in\{A,B,C\}$ in the case of three classes (two knots).

Here again, the idea is to maximize that distance: the idea is to discriminate, so we want the two samples to be as far from independence as possible. To compute Gini's index, consider

> GINI=function(y,i){
+ T=table(y,i)           # contingency table
+ nx=apply(T,2,sum)      # size of each class
+ pxy=T/matrix(rep(nx,each=2),2,ncol(T))
+ vxy=pxy*(1-pxy)
+ zx=apply(vxy,2,sum)    # impurity of each class
+ n=sum(T)
+ -sum(nx/n*zx)          # (minus) the weighted average impurity
+ }
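For instance, with the simulated sample above, the index associated with a split of the second variable at 0.3 is obtained with

> GINI(Y,(X2<.3))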

We simply construct the contingency table, and then compute the quantity given above. Assume, first, that there is only one explanatory variable. We split the sample in two, with all possible splitting values $u$, i.e. all the midpoints between consecutive observations,

Then, we compute Gini's index for all those values. The knot is the value that maximizes Gini's index. Once we have our first knot, we keep it. And we reiterate, by seeking the best second choice: given one knot, consider the value that splits the sample in three, and gives the highest Gini's index. Thus, we consider either the following partition

or this one

That is, we cut either below, or above the previous knot. And we iterate. The code can be something like this,

> X=X2
> u=(sort(X)[2:n]+sort(X)[1:(n-1)])/2
> knot=NULL
> for(s in 1:4){
+ vgini=rep(NA,length(u))
+ for(i in 1:length(u)){
+ kn=c(knot,u[i])
+ F=function(x){sum(x<=kn)}
+ I=Vectorize(F)(X)
+ vgini[i]=GINI(Y,I)
+ }
+ plot(u,vgini)
+ k=which.max(vgini)
+ cat("knot",k,u[k],"\n")
+ knot=c(knot,u[k])
+ u=u[-k]
+ }
knot 69 0.3025479 
knot 133 0.5846202 
knot 72 0.3148172 
knot 111 0.4811517

At the first step, the value of Gini’s index was the following,

which was maximal around 0.3. Then, this value is considered as fixed. And we try to construct a partition in three parts (splitting either below or above 0.3). We get the following plot for Gini's index (as a function of this second knot)

which is maximal when we split the sample around 0.6 (which becomes our second knot). Etc. Now, let us compare our code with the standard R function,

> library(tree)
> tree(Y~X2,method="gini")
node), split, n, deviance, yval
      * denotes terminal node

 1) root 200 49.8800 0.4750  
   2) X2 < 0.302548 69 12.8100 0.7536 *
   3) X2 > 0.302548 131 28.8900 0.3282  
     6) X2 < 0.58462 65 16.1500 0.4615  
      12) X2 < 0.324591 7  0.8571 0.1429 *
      13) X2 > 0.324591 58 14.5000 0.5000 *
     7) X2 > 0.58462 66 10.4400 0.1970 *

We do obtain similar knots: the first one is 0.302 and the second one 0.584. So, constructing a tree is not that difficult...

Now, what if we consider our two explanatory variables? The story remains the same, except that the partition is now a bit more complex to write. To find the first knot, we consider all values of the two components, and again, keep the one that maximizes Gini's index,

> n=nrow(B)
> u1=(sort(X1)[2:n]+sort(X1)[1:(n-1)])/2
> u2=(sort(X2)[2:n]+sort(X2)[1:(n-1)])/2
> gini=matrix(NA,nrow(B)-1,2)
> for(i in 1:length(u1)){
+ I=(X1<u1[i])
+ gini[i,1]=GINI(Y,I)
+ I=(X2<u2[i])
+ gini[i,2]=GINI(Y,I)
+ }
> mg=max(gini)
> i=1+sum(mg==max(gini[,2]))  # i is 2 if the overall maximum is reached on X2
> par(mfrow = c(1, 2))
> plot(u1,gini[,1],ylim=range(gini),col="green",type="b",xlab="X1",ylab="Gini index")
> abline(h=mg,lty=2,col="red")
> if(i==1){points(u1[which.max(gini[,1])],mg,pch=19,col="red")
+          segments(u1[which.max(gini[,1])],mg,u1[which.max(gini[,1])],-100000)}
> plot(u2,gini[,2],ylim=range(gini),col="green",type="b",xlab="X2",ylab="Gini index")
> abline(h=mg,lty=2,col="red")
> if(i==2){points(u2[which.max(gini[,2])],mg,pch=19,col="red")
+          segments(u2[which.max(gini[,2])],mg,u2[which.max(gini[,2])],-100000)}
> u2[which.max(gini[,2])]
[1] 0.3025479

The graphs are the following: either we split on the first component (and we obtain the partition on the right, below),

or we split on the second one (and we get the following partition),

Here, it is optimal to split on the second variate first. And actually, we get back to the one-dimensional case discussed previously: as expected, it is optimal to split around 0.3. This is confirmed by the code below,

> library(tree)
> arbre=tree(Y~X1+X2,data=B,method="gini")
> arbre$frame[1:4,]
     var   n       dev      yval splits.cutleft splits.cutright
1     X2 200 49.875000 0.4750000      <0.302548       >0.302548
2     X1  69 12.811594 0.7536232      <0.800113       >0.800113
4 <leaf>  57  8.877193 0.8070175                               
5 <leaf>  12  3.000000 0.5000000

For the second knot, four cases should be considered: splitting on the second variable (again), either above or below the previous knot (see below on the left), or splitting on the first one. Then we have either a partition below or above the previous knot (see below on the right),

Etc. To visualize the tree, the code is the following

> plot(arbre)
> text(arbre)
> partition.tree(arbre)


Note that we can also visualize the partition. Nice, isn’t it?

To go further, the book Classification and Regression Trees by Leo Breiman (and co-authors) is awesome. Note that there are also interesting sections in the bible Elements of Statistical Learning: Data Mining, Inference, and Prediction by Trevor Hastie, Robert Tibshirani and Jerome Friedman (which can be downloaded from http://www.stanford.edu/~hastie/…)