Principal Component Analysis: A Generalized Gini Approach

With Stéphane Mussard and Téa Ouraga, we recently uploaded on arXiv our paper Principal Component Analysis: A Generalized Gini Approach.

A principal component analysis based on the generalized Gini correlation index is provided. It is proven that dimensionality reduction based on the generalized Gini correlation index, which relies on city-block distances, is robust to outliers.

Some code is also available on a dedicated GitHub repo.

On abuse of notation in regression models

Somewhat ritually, I always start my regression course by coming back to an important point in statistics: abuse of notation! Everyone uses the same letters (especially the Greek ones) to denote objects of very different natures. In most textbooks, one can read on the same page that \widehat{\theta}=2.35 and that \text{Var}(\widehat{\theta})=1.07; in other words, \widehat{\theta} denotes both a number (in the first case) and a random variable (in the second). That is confusing, to say the least! The reason is actually quite simple. Statistics always starts from a sample \{y_1,y_2,\cdots,y_n\}, data, numbers. If we stop there, we are doing descriptive statistics. The classical next step is to assume that the observations y_i are realizations of random variables Y_i, usually assumed to be independent and identically distributed. Then \widehat{\theta} is a statistic, that is, a function of the observations. I can define \widehat{\theta}=t(y_1,\cdots,y_n) as the statistic observed on my sample, but I can also consider \widehat{\theta}=t(Y_1,\cdots,Y_n), which is then a random variable, with the very same notation. If we really wanted to help the reader, we would write \widehat{\Theta}, but well, things are what they are… And in econometrics, it quickly becomes a nightmare once we start talking about residuals… Another peculiarity of statistics is that, while we distinguish the expectation from the (empirical) mean, we have a single word, variance, whether we talk about a random variable or a vector of \mathbb{R}^n. We thus have \mathbb{E}[Y]=\int y\,dF(y) and \overline{y}=\widehat{\mathbb{E}}[\boldsymbol{y}]=\frac{1}{n}\sum_{i=1}^n y_i, while \text{Var}[Y]=\int \big(y-\mathbb{E}[Y]\big)^2 dF(y) and \widehat{\text{Var}}[\boldsymbol{y}]=\frac{1}{n}\sum_{i=1}^n (y_i-\overline{y})^2.

Now consider a regression problem, with a model of the form y_i=\boldsymbol{x}_i^\top\boldsymbol{\beta}+\varepsilon_i. Here, \varepsilon_i is a real number, unknown. In matrix form, \boldsymbol{y}=\boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\varepsilon}, where this time \boldsymbol{\varepsilon} is a vector of \mathbb{R}^n (and yes, I am sorry, but here \boldsymbol{X} denotes the matrix of covariates, not a random vector… I will write a post some day about the fact that sometimes the \boldsymbol{x}'s are said to be given, and sometimes, since we condition on \boldsymbol{X}, they are seen as random). We can sometimes make an assumption about the distribution of those residuals. In other words, the \varepsilon_i are seen as realizations of random variables \varepsilon_i (same notation again), and similarly for \boldsymbol{\varepsilon}. We then write \boldsymbol{\varepsilon}\sim\mathcal{N}(\boldsymbol{0},\boldsymbol{\Sigma}). Oh, and one more point, just to lose the students: \text{Var}(\boldsymbol{\varepsilon})=\boldsymbol{\Sigma} while \text{Var}(\varepsilon_i)=\sigma^2… Here, since we assume the observations to be independent and identically distributed, we will assume that \text{Var}(\boldsymbol{\varepsilon})=\boldsymbol{\Sigma}=\sigma^2\mathbb{I}.

Once again, \boldsymbol{\varepsilon} is (by definition) not observable. However, we can estimate those residuals: from an estimator \widehat{\boldsymbol{\beta}} of \boldsymbol{\beta}, we can define \widehat{\boldsymbol{\varepsilon}}=\boldsymbol{y}-\widehat{\boldsymbol{y}}=\boldsymbol{y}-\boldsymbol{X}\widehat{\boldsymbol{\beta}}. To clarify things, I will rather write \widehat{\boldsymbol{e}} for these estimated residuals, using the least squares estimator of \boldsymbol{\beta}. Note that \widehat{\boldsymbol{e}}=(\mathbb{I}-\boldsymbol{H})\boldsymbol{y}, where, classically, \boldsymbol{H}=\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top is the projection matrix onto the space spanned by all linear combinations of the explanatory variables. But here again, we can see the (numerical) vector \widehat{\boldsymbol{e}} as the realization of a random vector \widehat{\boldsymbol{E}}. In particular, \widehat{\boldsymbol{E}}=(\mathbb{I}-\boldsymbol{H})\boldsymbol{Y}=(\mathbb{I}-\boldsymbol{H})\boldsymbol{\varepsilon}, where \boldsymbol{\varepsilon} is our random vector, centered, with variance-covariance matrix \text{Var}(\boldsymbol{\varepsilon})=\sigma^2\mathbb{I}. We can then deduce that \mathbb{E}[\widehat{\boldsymbol{E}}]=(\mathbb{I}-\boldsymbol{H})\mathbb{E}[\boldsymbol{\varepsilon}]=\boldsymbol{0} and \text{Var}[\widehat{\boldsymbol{E}}]=(\mathbb{I}-\boldsymbol{H})\text{Var}[\boldsymbol{\varepsilon}](\mathbb{I}-\boldsymbol{H})^\top=\sigma^2(\mathbb{I}-\boldsymbol{H}) (since \mathbb{I}-\boldsymbol{H} is idempotent). This last relation is particularly important, since it shows that \text{Var}(\widehat{\boldsymbol{E}})\neq\sigma^2\mathbb{I}. In particular, if we pick an estimated residual at random, \text{Var}(\widehat{E}_i)=\sigma^2(1-\boldsymbol{H}_{i,i}). We discussed \boldsymbol{H}_{i,i} at length in a recent post, on leverage; in particular, we saw that \boldsymbol{H}_{i,i}\in[0,1] (we discussed the lower bound, which can actually be improved, so that \boldsymbol{H}_{i,i}\in(0,1]), and therefore \text{Var}(\widehat{E}_i)\leq\sigma^2. Going a bit further, we can look at the sum of squared estimated residuals, and note that \mathbb{E}\big[\sum_{i=1}^n \widehat{E}_i^2\big]=\mathbb{E}[\text{trace}(\widehat{\boldsymbol{E}}\widehat{\boldsymbol{E}}^\top)]=\text{trace}(\mathbb{E}[\widehat{\boldsymbol{E}}\widehat{\boldsymbol{E}}^\top]), i.e. \mathbb{E}\big[\sum_{i=1}^n \widehat{E}_i^2\big]=\sigma^2\text{trace}(\mathbb{I}-\boldsymbol{H}); since \text{trace}(\mathbb{I}-\boldsymbol{H})=n-p, it follows that \widehat{\sigma}^2=\frac{1}{n-p}\sum_{i=1}^n \widehat{E}_i^2 is an unbiased estimator of \sigma^2. And classically, we consider the Studentized residuals \widehat{R}_i=\frac{\widehat{E}_i}{\widehat{\sigma}\sqrt{1-\boldsymbol{H}_{i,i}}}. To sum up, \text{Var}(\boldsymbol{\varepsilon})=\sigma^2\mathbb{I} and \widehat{\text{Var}}(\boldsymbol{\varepsilon})=\widehat{\sigma}^2\mathbb{I}, while \text{Var}(\widehat{\boldsymbol{E}})=\sigma^2(\mathbb{I}-\boldsymbol{H}) and \widehat{\text{Var}}(\widehat{\boldsymbol{E}})=\widehat{\sigma}^2(\mathbb{I}-\boldsymbol{H}). Hoping this clarifies things a bit…(?)
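To make those relations concrete, here is a small R sketch (the simulated data and sample size below are arbitrary, just for illustration); the hand-made Studentized residuals should match what base R returns with rstandard():

set.seed(123)
n = 100
x = runif(n)
y = 1 + 2*x + rnorm(n, 0, .5)
reg = lm(y ~ x)
X = model.matrix(reg)
H = X %*% solve(t(X) %*% X) %*% t(X)   # hat matrix
e = residuals(reg)                     # estimated residuals (a numeric vector)
p = ncol(X)
sigma2 = sum(e^2)/(n - p)              # unbiased estimator of sigma^2
c(sigma2, summary(reg)$sigma^2)        # the two should match
r = e/sqrt(sigma2*(1 - diag(H)))       # Studentized residuals, by hand
max(abs(r - rstandard(reg)))           # numerically zero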

Machine Learning and Econometrics

This weekend, the Canadian Econometric Study Group is organising a conference in Montréal, on Machine Learning Econometrics. Since I was on the scientific committee, I have read some of the papers that will be presented, and it should be extremely interesting. There will be two invited speakers, Gregory Duncan (Amazon and University of Washington) and Dacheng Xiu (University of Chicago). I will be around at the poster session on Friday, and I should chair a session on Saturday! See you there!

Combining automatically factor levels with trees

Last year, in a post, I discussed how to merge levels of factor variables using combinatorial techniques (it was for my STT5100 course, and trees are not in the syllabus), with an extension on trees at the end of the post.

Consider the following (simulated) dataset

n=200
set.seed(1)
x1=runif(n)
x2=runif(n)
y=1+2*x1-x2+rnorm(n,0,.2)
LB=sample(LETTERS[1:10])
b=data.frame(y=y,x1=x1,
  x2=cut(x2,breaks=
  c(-1,.05,.1,.2,.35,.4,.55,.65,.8,.9,2),
  labels=LB))
str(b)
'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: Factor w/ 10 levels "I","A","H","F",..: 4 4 6 4 3 6 7 3 4 8 ...
table(b$x2)[LETTERS[1:10]]
 
 A  B  C  D  E  F  G  H  I  J 
11 12 23 34 23 36 12 32  3 14

Just by looking at the data (see the previous post), we could easily get the feeling that 10 levels were too many.

Following my post, Przemyslaw sent a comment suggesting the use of

library(factorMerger)

It is indeed a nice package (unless you have a really big dataset with many categories in your factor variables, as I experienced recently), and you can get great graphs

MF = mergeFactors(response = b$y, 
             factor = b$x2, 
             family = "gaussian")
plot(MF)

Here it suggests creating three categories. Recall the groups we got, in the previous post, with Student t-tests (changing the reference level).
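For completeness, a minimal sketch of that t-test comparison (the relevel() loop idea below is my own quick illustration, not the code of the previous post):

b2 = b
b2$x2 = relevel(b2$x2, ref = "A")       # compare every level to reference "A"
summary(lm(y ~ x1 + x2, data = b2))
# changing ref to "B", "C", etc. gives the other pairwise comparisons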

Another interesting package, by Piro Polo, is

library(tree.bins)

To use it, we simply call the following function, and our dataset is transformed automatically: the continuous variables remain unchanged, and categories of the categorical variables are (possibly) merged

b.bins = tree.bins(data=b, y=y)
str(b.bins)
Classes ‘data.table’ and 'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: chr  "Group.4" "Group.4" "Group.4" "Group.4" ...
 - attr(*, ".internal.selfref")= 
table(b.bins$x2)

Group.1 Group.2 Group.3 Group.4 
     23      35      26     116

here into four groups. To get the correspondence, use

tree.bins(data=b, y=y, return = "lkup.list")
[[1]]
   x2 Categories
1   E    Group.1
2   G    Group.2
3   C    Group.2
4   B    Group.3
5   J    Group.3
6   I    Group.4
7   A    Group.4
8   H    Group.4
9   F    Group.4
10  D    Group.4

(we have a list with one element, one dataframe, since there is only one factor variable). Cool, isn’t it? I miss Przemyslaw’s plot, but this is rather quick, and efficient.
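Since the point of merging levels is to get a more parsimonious regression, a quick sanity check (my own, not from the package documentation) is to refit the model on the merged factor and compare, for instance with AIC:

reg_full   = lm(y ~ x1 + x2, data = b)       # ten levels
reg_merged = lm(y ~ x1 + x2, data = b.bins)  # four merged groups
AIC(reg_full, reg_merged)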

 

On leverage

Last week, in our STT5100 (applied linear models) class, I introduced the hat matrix, and the notion of leverage. In a classical regression model, \boldsymbol{y}=\boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\varepsilon} (in matrix form), the ordinary least squares estimator of the parameter \boldsymbol{\beta} is \widehat{\boldsymbol{\beta}}=(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top\boldsymbol{y}. The prediction can then be written \widehat{\boldsymbol{y}}=\boldsymbol{X}\widehat{\boldsymbol{\beta}}=\underbrace{\color{blue}{\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top}}_{\color{blue}{\boldsymbol{H}}}\boldsymbol{y}, where \color{blue}{\boldsymbol{H}} is called the hat matrix.

The matrix is idempotent, i.e. \boldsymbol{H}^2=\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\textcolor{grey}{\boldsymbol{X}^\top\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}}\boldsymbol{X}^\top=\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top=\boldsymbol{H}, so it can be interpreted as a projection matrix. Furthermore, since \boldsymbol{H}\boldsymbol{X}=\boldsymbol{X} (just do the maths), the projection is onto the subspace containing all linear combinations of the columns of \boldsymbol{X}. One can also observe that \mathbb{I}-\boldsymbol{H} is a projection matrix. And we can write \boldsymbol{y}=\underbrace{\boldsymbol{H}\boldsymbol{y}}_{\widehat{\boldsymbol{y}}}+\underbrace{(\mathbb{I}-\boldsymbol{H})\boldsymbol{y}}_{\widehat{\boldsymbol{\varepsilon}}}, where \widehat{\boldsymbol{y}} is the orthogonal projection of \boldsymbol{y} onto the (linear) space of linear combinations of the columns of \boldsymbol{X}, and \widehat{\boldsymbol{y}}\perp\widehat{\boldsymbol{\varepsilon}}, which gives the classical interpretation of the residuals, being unpredictable (at least with a linear model using the variables in \boldsymbol{X}).
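A quick numerical check of those properties, on an arbitrary simulated design (the data below are just for illustration):

set.seed(42)
n = 50
X = cbind(1, runif(n), runif(n))       # design matrix, with an intercept
y = X %*% c(1, 2, -1) + rnorm(n, 0, .2)
H = X %*% solve(t(X) %*% X) %*% t(X)   # hat matrix
max(abs(H %*% H - H))                  # idempotence (numerically zero)
max(abs(H %*% X - X))                  # H leaves the columns of X unchanged
y_hat = H %*% y
e_hat = (diag(n) - H) %*% y
sum(y_hat * e_hat)                     # fitted values and residuals are orthogonal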

Let’s move a bit faster now (we’ve seen many other properties last week), and consider the elements on the diagonal of the matrix \boldsymbol{H}. Recall that we have \widehat{\boldsymbol{y}}_i=\boldsymbol{H}_{i,i}\boldsymbol{y}_i+\sum_{j\neq i}\boldsymbol{H}_{i,j}\boldsymbol{y}_j, so entry \boldsymbol{H}_{i,i} is a measure of the influence of entry \boldsymbol{y}_i on its own prediction \widehat{\boldsymbol{y}}_i.

We have seen that \sum_{i=1}^n\boldsymbol{H}_{i,i}=\text{trace}(\boldsymbol{H})=\text{trace}(\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top), which can be written \sum_{i=1}^n\boldsymbol{H}_{i,i}=\text{trace}(\boldsymbol{X}^\top\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1})=\text{trace}(\mathbb{I}_p)=p, where classically p=k+1, k being the number of explanatory variables. Further, since \boldsymbol{H} is idempotent, we can write (from \boldsymbol{H}=\boldsymbol{H}^2) that \boldsymbol{H}_{i,i}=\boldsymbol{H}_{i,i}^2+\sum_{j\neq i}\boldsymbol{H}_{i,j}\boldsymbol{H}_{j,i}=\boldsymbol{H}_{i,i}^2+\sum_{j\neq i}\boldsymbol{H}_{i,j}^2. On the one hand, since the second term is nonnegative, \boldsymbol{H}_{i,i}\geq\boldsymbol{H}_{i,i}^2, i.e. \boldsymbol{H}_{i,i}(1-\boldsymbol{H}_{i,i})\geq 0. On the other hand, \boldsymbol{H}_{i,i} is a sum of squares, hence nonnegative, so \boldsymbol{H}_{i,i}\in[0,1]. And there was a question in class on the sharpness of those bounds.
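Continuing with the simulated design X and hat matrix H from the sketch above, both the trace identity and the bounds can be checked directly:

sum(diag(H))       # trace of H, equal to p (here, 3 columns)
range(diag(H))     # all the leverages lie in [0,1]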

Using Anscombe’s dataset, we’ve seen that it was possible to get a leverage of 1. Using something rather similar

df = data.frame(x = c(rep(1,10),6), y = c(1:10,8))
plot(df)

we obtain

model = lm(y~x,data=df)
abline(model,col="red",lwd=2)
H = lm.influence(model)$hat
plot(1:11,H,type="h")

The very last observation, the one on the right, is extremely influential here: if we remove it, the model is completely different! And here, we reach the upper bound, \boldsymbol{H}_{11,11}=1. Observe that all other points are equally influential and, because of the constraint on the trace of the matrix, \boldsymbol{H}_{i,i}=1/10 when i\in\{1,2,\cdots,10\}.

Now, what about the lower bound? In order to get some sort of “non-influential” observation, consider the following two cases.

  • the case where one observation (below the first one) is such that \widehat{\boldsymbol{y}}_{i}=\boldsymbol{y}_{i} (perfect prediction)
  • the case where one observation (below the tenth one) is such that \boldsymbol{x}_{i}=\overline{\boldsymbol{x}} and \boldsymbol{y}_{i}=\overline{\boldsymbol{y}}: from the first-order condition (the normal equation), the fitted regression line always goes through the point (\overline{\boldsymbol{x}},\overline{\boldsymbol{y}})

Let us move two observations from our dataset,

mean(c(4,rep(1,8),6))
[1] 1.8
df = data.frame(x = c(4,rep(1,8),6,1.8),
y = c(predict(model,newdata=data.frame(x=4)),
2:9,8,
predict(model,newdata=data.frame(x=1.8))))

We now have

If we compute the leverages, we obtain

model = lm(y~x,data=df)
H = lm.influence(model)$hat
plot(1:11,H,type="h")

so, for the first observation, the leverage actually increased (the blue part), and for the tenth one, we have the lowest influence, but it is not zero. Is it possible to reach zero?

Here, observe that for the tenth observation, \boldsymbol{H}_{i,i}=1/n. And actually, that’s the best we can do… One can prove that, in the case of a simple regression (as above), \boldsymbol{H}_{i,i}=\frac{1}{n}+\frac{(x_i-\overline{x})^2}{n\text{Var}(x)}, which is minimal when x_i=\overline{x}, and then \boldsymbol{H}_{i,i}=1/n; otherwise \boldsymbol{H}_{i,i}>1/n. This property is also valid in a multiple regression (as soon as an intercept is included in the regression, which should always be the case). To prove that result, let \tilde{\boldsymbol{X}} denote the matrix of centered variables \boldsymbol{X}; then we can prove that \boldsymbol{H}_{i,i}=\frac{1}{n}+\big[\tilde{\boldsymbol{X}}(\tilde{\boldsymbol{X}}^\top\tilde{\boldsymbol{X}})^{-1}\tilde{\boldsymbol{X}}^\top\big]_{i,i} (which is basically a matrix version of the previous equation).
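A quick check of that closed form on the last dataset, reusing df and the leverages H computed just above (note that \text{Var}(x) here is the empirical variance, with an n denominator, so n\text{Var}(x) is simply the sum of squared deviations):

h_formula = 1/nrow(df) + (df$x - mean(df$x))^2/sum((df$x - mean(df$x))^2)
max(abs(H - h_formula))    # numerically zero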

I can maybe add another comment on Anscombe’s data. We’ve seen, on the right, that we did reach 1, but I did not prove it. One way to prove it is actually to focus on the remaining n-1 points, on the left, which all share the same x value. We can prove that if \boldsymbol{X}_{i_1}=\boldsymbol{X}_{i_2}, then \boldsymbol{H}_{i_1,i_2}=\boldsymbol{X}_{i_1}^\top(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}_{i_2}=\boldsymbol{H}_{i_1,i_1}, hence, using the relationship obtained from the fact that the hat matrix is idempotent, \boldsymbol{H}_{i_1,i_1}=2\boldsymbol{H}_{i_1,i_1}^2+\sum_{j\notin\{i_1,i_2\}}\boldsymbol{H}_{i_1,j}^2. Thus \boldsymbol{H}_{i_1,i_1}\big(1-2\boldsymbol{H}_{i_1,i_1}\big)\geq 0, i.e. \boldsymbol{H}_{i_1,i_1}\in[0,1/2]; more generally, the upper bound becomes 1/(n-1) when there are n-1 “duplicates”. So the n-1 leverages on the left are each at most 1/(n-1), the last one is at most 1, and the sum has to be p=2: the only possibility is that the n-1 leverages all equal 1/(n-1) and the last one equals 1. So we have the values of all n \boldsymbol{H}_{i,i}‘s.
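To double-check this numerically, we can rebuild the first dataset (df was modified above) and look at its leverages; the ten duplicated points should each get 1/10, and the isolated one should get 1:

df1 = data.frame(x = c(rep(1,10), 6), y = c(1:10, 8))
hatvalues(lm(y ~ x, data = df1))
# the first ten leverages are all 1/10, and the last one is (numerically) 1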