Category Archives: M1-Data Science

Classification on the German Credit Database

In this morning's data science course, we used random forests to improve predictions on the German Credit dataset. The dataset is

> url="http://freakonometrics.free.fr/german_credit.csv"
> credit=read.csv(url, header = TRUE, sep = ",")

Almost all variables are read as numeric but, actually, most of them are factors,

> str(credit)
'data.frame':	1000 obs. of  21 variables:
 $ Creditability   : int  1 1 1 1 1 1 1 1 1 1 ...
 $ Account.Balance : int  1 1 2 1 1 1 1 1 4 2 ...
 $ Duration        : int  18 9 12 12 12 10 8  ...
 $ Purpose         : int  2 0 9 0 0 0 0 0 3 3 ...

(etc.) Let us convert the categorical variables into factors,

> F=c(1,2,4,5,7,8,9,10,11,12,13,15,16,17,18,19,20)
> for(i in F) credit[,i]=as.factor(credit[,i])

Let us now create our training/calibration and validation/testing datasets, using roughly two thirds of the observations for calibration and one third for testing,

> i_test=sample(1:nrow(credit),size=333)
> i_calibration=(1:nrow(credit))[-i_test]

The first model we can fit is a logistic regression, on selected covariates

> LogisticModel <- glm(Creditability ~ Account.Balance + Payment.Status.of.Previous.Credit + Purpose + 
Length.of.current.employment + 
Sex...Marital.Status, family=binomial, 
data = credit[i_calibration,])

Based on that model, it is possible to draw the ROC curve and to compute the AUC (on the validation dataset)

> fitLog <- predict(LogisticModel,type="response",
+                   newdata=credit[i_test,])
> library(ROCR)
> pred = prediction( fitLog, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCLog1=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCLog1,"\n")
AUC:  0.7340997

An alternative is to consider a logistic regression on all explanatory variables

> LogisticModel <- glm(Creditability ~ ., 
+  family=binomial, 
+  data = credit[i_calibration,])

We might overfit here, and we should be able to see it on the ROC curve

> fitLog <- predict(LogisticModel,type="response",
+                   newdata=credit[i_test,])
> pred = prediction( fitLog, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCLog2=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCLog2,"\n")
AUC:  0.7609792

There is a slight improvement here,  compared with the previous model, where only five explanatory variables were considered.

Consider now a classification tree (on all covariates)

> library(rpart)
> ArbreModel <- rpart(Creditability ~ ., 
+  data = credit[i_calibration,])

We can visualize the tree using

> library(rpart.plot)
> prp(ArbreModel,type=2,extra=1)

The ROC curve for that model is

> fitArbre <- predict(ArbreModel,
+                     newdata=credit[i_test,],
+                     type="prob")[,2]
> pred = prediction( fitArbre, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCArbre=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCArbre,"\n")
AUC:  0.7100323

As expected, a single tree has a lower performance than the logistic regression. A natural idea is then to grow several trees using some bootstrap procedure, and to aggregate those predictions.

> library(randomForest)
> RF <- randomForest(Creditability ~ .,
+ data = credit[i_calibration,])
> fitForet <- predict(RF,
+                     newdata=credit[i_test,],
+                     type="prob")[,2]
> pred = prediction( fitForet, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCRF=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCRF,"\n")
AUC:  0.7682367

Here this model is (slightly) better than the logistic regression. Actually, if we create many training/validation samples and compare the AUCs, we can observe that, on average, random forests perform better than logistic regressions,

> AUC=function(i){
+   set.seed(i)
+   i_test=sample(1:nrow(credit),size=333)
+   i_calibration=(1:nrow(credit))[-i_test]
+   LogisticModel <- glm(Creditability ~ ., 
+    family=binomial, 
+    data = credit[i_calibration,])
+   summary(LogisticModel)
+   fitLog <- predict(LogisticModel,type="response",
+                     newdata=credit[i_test,])
+   library(ROCR)
+   pred = prediction( fitLog, credit$Creditability[i_test])
+   AUCLog2=performance(pred, measure = "auc")@y.values[[1]] 
+   RF <- randomForest(Creditability ~ .,
+   data = credit[i_calibration,])
+   fitForet <- predict(RF,
+                       newdata=credit[i_test,],
+                       type="prob")[,2]
+   pred = prediction( fitForet, credit$Creditability[i_test])
+   AUCRF=performance(pred, measure = "auc")@y.values[[1]]
+   return(c(AUCLog2,AUCRF))
+ }
> A=Vectorize(AUC)(1:200)
> plot(t(A))
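To summarise those 200 simulations, a minimal sketch, reusing the matrix A computed above (the row names are mine): compare the two distributions of AUCs, and count how often the random forest beats the logistic regression,

> rownames(A)=c("GLM","RF")
> apply(A,1,mean)          # average AUC of each model over the 200 splits
> boxplot(t(A),ylab="AUC") # side-by-side distributions of the 200 AUCs
> mean(A["RF",]>A["GLM",]) # proportion of splits where the random forest wins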

Clustering French Cities (based on Temperatures)

In order to illustrate hierarchical clustering techniques and k-means, I borrowed François Husson's dataset, with monthly average temperatures in several French cities.

> temp=read.table(
+ "http://freakonometrics.free.fr/FR_temp.txt",
+ header=TRUE,dec=",")

We have 15 cities, with monthly observations

> X=temp[,1:12]
> boxplot(X)

Since the variance seems to be rather stable, we will not ‘normalize’ the variables here,

> apply(X,2,sd)
    Janv     Fevr     Mars     Avri 
2.007296 1.868409 1.529083 1.414820 
     Mai     Juin     juil     Aout 
1.504596 1.793507 2.128939 2.011988 
    Sept     Octo     Nove     Dece 
1.848114 1.829988 1.803753 1.958449

In order to get a hierarchical cluster analysis, use for instance

> h <- hclust(dist(X), method = "ward")
> plot(h, labels = rownames(X), sub = "")
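Note that in more recent versions of R, Ward's criterion is requested with method = "ward.D" or "ward.D2"; a minimal variant of the call above (the dendrogram should look very similar),

> h <- hclust(dist(X), method = "ward.D2")  # "ward.D2" implements Ward's criterion (dissimilarities squared internally)
> plot(h, labels = rownames(X), sub = "")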

An alternative is to use

> library(FactoMineR)
> h2=HCPC(X)
> plot(h2)

Here, the observations are visualised using a principal component analysis, and the number of classes is selected automatically (here, 3). We can get a description of the groups using

> h2$desc.ind

or directly

> cah=hclust(dist(X))
> groups.3 <- cutree(cah,3)

We can also visualise those classes by ourselves,

> acp=PCA(X,scale.unit=FALSE)
> plot(acp$ind$coord[,1:2],col="white")
> text(acp$ind$coord[,1],acp$ind$coord[,2],
+ rownames(acp$ind$coord),col=groups.3)

It is possible to plot the centroids of those clusters

> PT=aggregate(acp$ind$coord,list(groups.3),mean)
> points(PT$Dim.1,PT$Dim.2,pch=19)

If we add Voronoi cells around those centroids, we can hardly see them here (actually, we only see the point, in the middle, that lies exactly at the intersection of the three regions),

> library(tripack)
> V <- voronoi.mosaic(PT$Dim.1,PT$Dim.2)
> plot(V,add=TRUE)

To visualize those regions, use

> p=function(x,y){
+   which.min((PT$Dim.1-x)^2+(PT$Dim.2-y)^2)
+ }
> vx=seq(-10,12,length=251)
> vy=seq(-6,8,length=251)
> z=outer(vx,vy,Vectorize(p))
> image(vx,vy,z,col=c(rgb(1,0,0,.2),
+ rgb(0,1,0,.2),rgb(0,0,1,.2)))
> CL=c("red","black","blue")
> text(acp$ind$coord[,1],acp$ind$coord[,2],
+ rownames(acp$ind$coord),col=CL[groups.3])

Actually, those three groups (and those three regions) are also the ones we obtain using a k-means algorithm,

> km=kmeans(acp$ind$coord[,1:2],3)
> km
K-means clustering 
with 3 clusters of sizes 3, 7, 5

(etc.) But since we have spatial data here, it is also possible to visualise the clusters on a map

> library(maps)
> map("france")
> points(temp$Long,temp$Lati,col=groups.3,pch=19)

or, to visualize the regions, use e.g.

> library(car)
> for(i in 1:3) 
+ dataEllipse(temp$Long[groups.3==i],
+ temp$Lati[groups.3==i], levels=.7,add=TRUE,
+ col=i+1,fill=TRUE)

Those three regions actually make sense, geographically speaking.
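As a quick sanity check of the claim above that the k-means partition matches the hierarchical one, we can cross-tabulate the two partitions (a minimal sketch, reusing groups.3 and km from above; the cluster labels may of course be permuted between the two methods),

> table(hierarchical=groups.3, kmeans=km$cluster)  # a permutation matrix means identical clusters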

Clusters of Texts

Another popular application of classification techniques is text mining (see e.g. an old post on French presidents' speeches). Consider the following example, inspired by Norbert Ryciak's post, with twelve Wikipedia pages on various topics,

> library(tm)
> library(stringi)
> library(proxy)
> titles = c("Boosting_(machine_learning)",
+            "Random_forest",
+            "K-nearest_neighbors_algorithm",
+            "Logistic_regression",
+            "Boston_Bruins",
+            "Los_Angeles_Lakers",
+            "Game_of_Thrones",
+            "House_of_Cards_(U.S._TV_series)",
+            "True_Detective_(TV_series)",
+            "Picasso",
+            "Henri_Matisse",
+            "Jackson_Pollock")
> articles = character(length(titles))
> wiki = "http://en.wikipedia.org/wiki/"
> for (i in 1:length(titles)) {
+   articles[i] = stri_flatten(readLines(stri_paste(wiki, titles[i])), col = " ")
+ }

Here, we store all the contents of the pages in a corpus (from the text mining package).

> docs = Corpus(VectorSource(articles))

This is what we have in that corpus

> a = stri_flatten(readLines(stri_paste(wiki, titles[1])), col = " ")
> a = Corpus(VectorSource(a))
> a[[1]]

Thoughts on Hypothesis Boosting</i></a>, Unpublished manuscript (Machine Learning class project, December 1988)</span></li> <li id="cite_note-4"><span class="mw-cite-backlink"><b><a href="#cite_ref-4">^</a></b></span> <span class="reference-text"><cite class="citation journal"><a href="/wiki/Michael_Kearns" title="Michael Kearns">Michael Kearns</a>; <a href="/wiki/Leslie_Valiant" title="Leslie Valiant">Leslie Valiant</a> (1989). <a rel="nofollow" class="external text" href="http://dl.acm.org/citation.cfm?id=73049">"Crytographic limitations on learning Boolean formulae and finite automata"</a>. <i>Symposium on T

This is because we read an html page.

> a = tm_map(a, function(x) stri_replace_all_regex(x, "<.+?>", " "))
> a = tm_map(a, function(x) stri_replace_all_fixed(x, "\t", " "))
> a = tm_map(a, PlainTextDocument)
> a = tm_map(a, stripWhitespace)
> a = tm_map(a, removeWords, stopwords("english"))
> a = tm_map(a, removePunctuation)
> a = tm_map(a, tolower)
> a 

can  set  weak learners create  single strong learner  a weak learner  defined    classifier    slightly correlated   true classification  can label examples better  random guessing in contrast  strong learner   classifier   arbitrarily wellcorrelated   true classification robert 

Now we have the text of the Wikipedia page. What we did was to

  • replace all "<…>" elements (the HTML tags) with a space, since they are not part of the text but HTML code;
  • replace all "\t" (tabulations) with a space;
  • convert the previous result (a character string) into a "PlainTextDocument", so that we can apply the other functions from the tm package, which require this type of argument;
  • remove extra whitespace from the documents;
  • remove punctuation marks;
  • remove words that are redundant for text mining (e.g. pronouns, conjunctions), using stopwords("english"), a built-in list for the English language (this argument is passed to the removeWords function);
  • transform characters to lower case.

Now we can do it on the entire corpus

> docs2 = tm_map(docs, function(x) stri_replace_all_regex(x, "<.+?>", " "))
> docs3 = tm_map(docs2, function(x) stri_replace_all_fixed(x, "\t", " "))
> docs4 = tm_map(docs3, PlainTextDocument)
> docs5 = tm_map(docs4, stripWhitespace)
> docs6 = tm_map(docs5, removeWords, stopwords("english"))
> docs7 = tm_map(docs6, removePunctuation)
> docs8 = tm_map(docs7, tolower)

Now, we simply count words in each page,

> dtm <- DocumentTermMatrix(docs8)
> dtm2 <- as.matrix(dtm)
> dim(dtm2)
[1] 12 13683
> frequency <- colSums(dtm2)
> frequency <- sort(frequency, decreasing=TRUE)
> mots=frequency[frequency>20]
> s=dtm2[1,which(colnames(dtm2) %in% names(mots))]
> for(i in 2:nrow(dtm2)) s=cbind(s,dtm2[i,which(colnames(dtm2) %in% names(mots))])
> colnames(s)=titles

 

Once we have that dataset, we can use a PCA to visualise the ‘variables’ i.e. the pages

> library(FactoMineR)
> PCA(s)

We can also use unsupervised classification to group the pages. But first, let us normalize the dataset

> s0=s/apply(s,1,sd)

Then, we can build a cluster dendrogram, using Ward's criterion

> h <- hclust(dist(t(s0)), method = "ward")
> plot(h, labels = titles, sub = "")

Groups are consistent with intuition: painters are in the same cluster, as well as TV series, sports teams, and statistical techniques.
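To make that explicit, we can cut the dendrogram into four groups and list their members (a minimal sketch, reusing h and titles from above; groups.txt is just an illustrative name),

> groups.txt <- cutree(h, k=4)
> split(titles, groups.txt)   # list the pages belonging to each of the four clusters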

Clusters of (French) Regions

For tomorrow's data science course, I just wanted to post a few functions to illustrate cluster analysis. Consider the dataset of the French 2012 presidential elections

> elections2012=read.table(
"http://freakonometrics.free.fr/elections_2012_T1.csv",sep=";",dec=",",header=TRUE)
> voix=which(substr(names(
+ elections2012),1,11)=="X..Voix.Exp")
> elections2012=elections2012[1:96,]
> X=as.matrix(elections2012[,voix])
> colnames(X)=c("JOLY","LE PEN","SARKOZY","MÉLENCHON","POUTOU","ARTHAUD","CHEMINADE","BAYROU","DUPONT-AIGNAN","HOLLANDE")
> rownames(X)=elections2012[,1]

The hierarchical cluster analysis is obtained using

> cah=hclust(dist(X))
> plot(cah,cex=.6)

To get five groups, we can cut the tree

> rect.hclust(cah,k=5)
> groups.5 <- cutree(cah,5)
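To see how large those five clusters are, and which rows they contain, a minimal sketch (reusing groups.5 and X from above),

> table(groups.5)                # size of each of the five clusters
> split(rownames(X), groups.5)   # the rows (départements) belonging to each cluster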

We have to zoom-in to visualize the French regions,

It is also possible to use

> library(dendroextras)
> plot(colour_clusters(cah,k=5))

And again, if we zoom-in, we get

The interpretation of the clusters can be obtained using

> aggregate(X,list(groups.5),mean)
  Group.1     JOLY   LE PEN  SARKOZY
1       1 2.185000 18.00042 28.74042
2       2 1.943824 23.22324 25.78029
3       3 2.240667 15.34267 23.45933
4       4 2.620000 21.90600 34.32200
5       5 3.140000  9.05000 33.80000

It is also possible to visualize those clusters on a map, using

> library(RColorBrewer)
> CL=brewer.pal(8,"Set3")
> carte_classe <- function(groupes){
+ library(stringr)
+ elections2012$dep <- elections2012[,2]
+ elections2012$dep <- tolower(elections2012$dep)
+ elections2012$dep <- str_replace_all(elections2012$dep, pattern = " |-|'|/", replacement = "")
+ library(maps)
+ france<-map(database="france")
+ france$dep <- france$names
+ france$dep <- tolower(france$dep)
+ france$dep <- str_replace_all(france$dep, pattern = " |-|'|/", replacement = "")
+ corresp_noms <- elections2012[, c(1,2, ncol(elections2012))]
+ corresp_noms$dep[which(corresp_noms$dep %in% "corsesud")] <- "corsedusud"
+ col2001<-groupes+1
+ names(col2001) <- corresp_noms$dep[match(names(col2001), corresp_noms[,1])]
+ color <- col2001[match(france$dep, names(col2001))]
+ map(database="france", fill=TRUE, col=CL[color])
+ }
> carte_classe(cutree(cah,5))

or, if we simply want 4 clusters

> carte_classe(cutree(cah,4))

 

Heuristics on Correspondence Analysis

This week, in the course on unsupervised techniques for data science, we used a dataset with candidates for the 2002 presidential elections (in rows) and newspapers (in columns). In order to visualize that dataset, consider three candidates and three newspapers

> base=read.table(
"http://freakonometrics.free.fr/election2002.txt",header=TRUE)
> sb=base[,c(2,3,4)]
> sb=sb[c(4,12,7),]
> (N=sb)
       LeFigaro Liberation LeMonde
Jospin        7         41      26
Chirac       35          9      18
Mamere        1         10       7

The first part is based on a description of the rows. Rows are seen here as conditional probabilities, in the set of newspapers,

> (L=N/apply(N,1,sum))
         LeFigaro Liberation   LeMonde
Jospin 0.09459459  0.5540541 0.3513514
Chirac 0.56451613  0.1451613 0.2903226
Mamere 0.05555556  0.5555556 0.3888889

The “average row” is the marginal distribution of newspapers

> (Lbar=apply(N,2,sum)/sum(N))
  LeFigaro Liberation    LeMonde 
 0.2792208  0.3896104  0.3311688

If we visualize those individuals in the set of newspapers (in the simplex of the newspaper space), we have

Here it is,

But actually, we will not stay in the simplex. A PCA is considered, with weights on individuals that take into account the importance of the different candidates, and weights on the scalar product (in order to use a distance related to the chi-square distance, and not the standard Euclidean distance)

> matL0=t(t(L)-Lbar)
> library(FactoMineR)
> acpL=PCA(matL0,scale.unit=FALSE,
+   row.w=(apply(N,1,sum)),
+   col.w=1/(apply(N,2,sum)))
> plot.PCA(acpL,choix="ind",ylim=c(-.02,.02))

 

The second part is based on a description of the columns. Here, columns are conditional probabilities, in the set of candidates,

> (C=t(t(N)/apply(N,2,sum)))
         LeFigaro Liberation   LeMonde
Jospin 0.16279070  0.6833333 0.5098039
Chirac 0.81395349  0.1500000 0.3529412
Mamere 0.02325581  0.1666667 0.1372549

Here again, we can compute the “average column”

> (Cbar=apply(N,1,sum)/sum(N))
   Jospin    Chirac    Mamere 
0.4805195 0.4025974 0.1168831

In the simplex, points are

i.e.

But here again, we will not use that simplex. We consider a PCA, with two vectors of weights: one to take into account the weights of the newspapers, and one to get a chi-square distance

> Cbar=apply(N,1,sum)/sum(N)
> matC0=C-Cbar
> acpC=PCA(t(matC0),scale.unit=FALSE,
+          row.w=(apply(N,2,sum)),
+          col.w=1/(apply(N,1,sum)))

 

Now, we can almost overlap the two projections. Almost, because we might sometimes need to switch right and left, or top and bottom: if \boldsymbol{u} is a (unit) eigenvector, so is -\boldsymbol{u}. Here, for instance, we should switch them

> CA(N)
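To overlay the two clouds by hand, a minimal sketch, reusing acpL and acpC from above (matL, matC, xr and yr are just illustrative names; one axis may still need a sign flip, since eigenvectors are only defined up to their sign),

> matL=acpL$ind$coord   # projected rows (candidates)
> matC=acpC$ind$coord   # projected columns (newspapers)
> xr=range(c(matL[,1],matC[,1])); yr=range(c(matL[,2],matC[,2]))
> plot(matL[,1:2],col="blue",xlim=xr,ylim=yr)
> text(matL[,1],matL[,2],rownames(matL),col="blue",pos=3)
> points(matC[,1:2],col="red")
> text(matC[,1],matC[,2],rownames(matC),col="red",pos=3)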

Correspondence Analysis and PCA

Consider the following data, on occupations within couples. We have the following contingency table

> base=read.table(
"http://freakonometrics.free.fr/epoux.csv",
sep=";",header=TRUE)
> rownames(base)=base$Nom

Classically, with this kind of data, we are used to working with the chi-square distance, and with the contributions to the chi-square statistic, to see which categories are strongly related

> M=base[1:9,2:10]
> CT=chisq.test(M)

This can be visualised as follows,

> mosaicplot(t(M),main="",las=2,shade=TRUE)

with husbands in rows and wives in columns. Large contributions are shown in dark blue or dark red, the colours corresponding respectively to a 'positive' association (joint probability larger than under independence) or a 'negative' one (joint probability smaller than under independence),

or the other way around

> mosaicplot(M,main="",las=2,shade=TRUE)

but with the same conclusion as before: there are large blue values on the diagonal.

In other words, couples are relatively homogeneous in terms of occupations.

In correspondence analysis, we look at our contingency table by rows, or by columns. For instance, we can define the row profiles, which are the probability vectors \boldsymbol{L}_i=\left(\frac{p_{i1}}{p_{i\cdot}},\cdots,\frac{p_{iJ}}{p_{i\cdot}}\right)\in\mathbb{R}^J

> N=M 
> L=N/apply(N,1,sum)

Writing \boldsymbol{D}_I=\text{diag}(p_{1\cdot},\cdots,p_{I\cdot}), we have \boldsymbol{L}=\boldsymbol{D}_I^{-1}\boldsymbol{N}. The barycentre of our row vectors is here \overline{\boldsymbol{L}}=\left(p_{\cdot 1},\cdots,p_{\cdot J}\right)\in\mathbb{R}^J

Similarly, writing \boldsymbol{D}_J=\text{diag}(p_{\cdot 1},\cdots,p_{\cdot J}), this barycentre can be written in matrix form, \overline{\boldsymbol{L}}=\mathbb{I}_{IJ}\boldsymbol{D}_J. The centred row profiles are then \boldsymbol{L}_{0,i}=\boldsymbol{L}_i-\overline{\boldsymbol{L}}

> Lbar=apply(N,2,sum)/sum(N)
> matL0=t(t(L)-Lbar)

Each point is weighted by its (relative) frequency, p_{i\cdot}, which corresponds to using the matrix \boldsymbol{D}_I. To measure the distance between two points, we weight the Euclidean distance by the inverse of the probabilities, p_{\cdot j}, which amounts to using the matrix \boldsymbol{D}_J^{-1}. The distance between two rows is d(\boldsymbol{L}_{i_1},\boldsymbol{L}_{i_2})=\sum_{j=1}^J\frac{1}{p_{\cdot j}}\left(\frac{p_{i_1j}}{p_{i_1\cdot}}-\frac{p_{i_2j}}{p_{i_2\cdot}}\right)^2
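For instance, the (squared) chi-square distance between the first two rows of the table can be computed directly (a minimal sketch, reusing L and Lbar defined above; l1 and l2 are just illustrative names),

> l1=unlist(L[1,]); l2=unlist(L[2,])
> sum((l1-l2)^2/Lbar)   # (squared) chi-square distance between the first two row profiles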

We then run a principal component analysis with these weights. From a matrix point of view, this amounts to the spectral analysis of \boldsymbol{H}=\boldsymbol{D}_J^{-1/2}\boldsymbol{L}_0^{\text{T}}\boldsymbol{D}_I\boldsymbol{L}_0\boldsymbol{D}_J^{-1/2}

In particular, denoting by \boldsymbol{U}_1,\cdots,\boldsymbol{U}_k the eigenvectors, we define the principal components \boldsymbol{C}=\boldsymbol{L}_0\boldsymbol{D}_J^{-1/2}\boldsymbol{U}
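A rough way to check this (a minimal sketch, reusing matL0 from above; pI, pJ and H are my own names): build H explicitly and look at its eigenvalues, whose non-zero values should correspond to the principal inertias returned by CA,

> pI=apply(N,1,sum)/sum(N); pJ=apply(N,2,sum)/sum(N)   # margins p_i. and p_.j
> H=diag(1/sqrt(pJ))%*%t(matL0)%*%diag(pI)%*%matL0%*%diag(1/sqrt(pJ))
> eigen(H)$values    # non-zero values should match the principal inertias of CA(N)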

The projection of the rows onto the first two components gives

> library(FactoMineR)
> acpL=PCA(matL0,scale.unit=FALSE,
+          row.w=(apply(N,1,sum)),
+          col.w=1/(apply(N,2,sum)))

The idea is to visualise the cloud of individuals corresponding to the rows. In a second step, we do exactly the same thing, but on the columns, \boldsymbol{C}_j=\left(\frac{p_{1j}}{p_{\cdot j}},\cdots,\frac{p_{Ij}}{p_{\cdot j}}\right)\in\mathbb{R}^I

> C=t(t(N)/apply(N,2,sum))

The barycentre is here \overline{\boldsymbol{C}}=\left(p_{1\cdot},\cdots,p_{I\cdot}\right)\in\mathbb{R}^I

> Cbar=apply(N,1,sum)/sum(N)
> matC0=C-Cbar

and we can then run a principal component analysis

> acpC=PCA(t(matC0),scale.unit=FALSE,
+          row.w=(apply(N,2,sum)),
+          col.w=1/(apply(N,1,sum)))

which gives, if we look at the cloud of individuals,

The magic of correspondence analysis is that we 'can' represent the two projections of individuals on the same plane,

> matC=acpC$ind$coord
> matL=acpL$ind$coord
> plot(matC[,1:2],col="red",xlim=c(-.002,.015))
> text(matC[,1],matC[,2],rownames(matC),col="red",pos=4)
> points(matL[,1:2],col="blue")
> text(matL[,1],matL[,2],rownames(matL),col="blue")

This is exactly what the following function does

> afc=CA(N)

Note: actually, if we look at the coordinates of the rows obtained with the PCA above, we get

acpL$ind$coord
                    Dim.1     Dim.2     Dim.3     Dim.4     Dim.5
Agriculteur (M) -0.000056 -0.000414  0.002940  0.000150  0.000011
Artisan (M)     -0.000014  0.000152 -0.000057  0.000323  0.001279
Cadres (M)      -0.000262  0.001734 -0.000015  0.000359 -0.000219
Prof_Int (M)    -0.000194  0.000450 -0.000003 -0.000504  0.000041
Employe (M)     -0.000104 -0.000317 -0.000083 -0.000188  0.000011
Ouvrier (M)     -0.000052 -0.000764 -0.000080  0.000133 -0.000141
Retraite (M)     0.007771  0.000502 -0.000001 -0.000089 -0.000065
Inactif (M)     -0.000018 -0.000150 -0.000163  0.000751 -0.000007
Inconnu(M)       0.000261 -0.000492 -0.000554  0.000398  0.000648

whereas

CA(M, graph = FALSE)$row$coord
                 Dim 1  Dim 2  Dim 3  Dim 4  Dim 5
Agriculteur (M) -0.028 -0.209  1.481  0.076  0.005
Artisan (M)     -0.007  0.077 -0.029  0.163  0.644
Cadres (M)      -0.132  0.873 -0.007  0.181 -0.110
Prof_Int (M)    -0.098  0.227 -0.001 -0.254  0.020
Employe (M)     -0.053 -0.160 -0.042 -0.095  0.006
Ouvrier (M)     -0.026 -0.385 -0.040  0.067 -0.071
Retraite (M)     3.914  0.253 -0.001 -0.045 -0.033
Inactif (M)     -0.009 -0.075 -0.082  0.378 -0.004
Inconnu(M)       0.132 -0.248 -0.279  0.201  0.327

which gives a constant ratio,

acpL$ind$coord/CA(M, graph = FALSE)$row$coord
                      Dim.1       Dim.2       Dim.3       Dim.4       Dim.5
Agriculteur (M) 0.001985182 0.001985182 0.001985182 0.001985182 0.001985182
Artisan (M)     0.001985182 0.001985182 0.001985182 0.001985182 0.001985182
Cadres (M)      0.001985182 0.001985182 0.001985182 0.001985182 0.001985182
Prof_Int (M)    0.001985182 0.001985182 0.001985182 0.001985182 0.001985182
Employe (M)     0.001985182 0.001985182 0.001985182 0.001985182 0.001985182
Ouvrier (M)     0.001985182 0.001985182 0.001985182 0.001985182 0.001985182
Retraite (M)    0.001985182 0.001985182 0.001985182 0.001985182 0.001985182
Inactif (M)     0.001985182 0.001985182 0.001985182 0.001985182 0.001985182
Inconnu(M)      0.001985182 0.001985182 0.001985182 0.001985182 0.001985182

which corresponds to 1/\sqrt{n}

1/sqrt(sum(M))
[1] 0.001985182

Duality between Individuals and Variables in PCA

In the data analysis course, we will start principal component analysis. As already mentioned several times, there is a duality between the individuals in the space of variables, and the variables in the space of individuals. To be able to draw pictures, we will restrict ourselves to 3 individuals and 3 variables,

> BD=read.table("https://perso.univ-rennes1.fr/arthur.charpentier/securite.txt",
header=TRUE)
>  M=as.matrix(BD[,2:10])
>  colnames(M)=names(BD)[2:10]
>  rownames(M)=BD$dep
>  M=M[1:3,1:3]
>  n=nrow(M)  
>  M_s=scale(M,scale=TRUE)

 

We can start by visualising the individuals

The principal directions are here

> U=eigen(t(M_s)%*%(M_s))$vectors

 

The first two give the principal plane

The projection onto that plane gives
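A minimal sketch of that projection, reusing M_s and U from above (coord_ind is just an illustrative name),

> coord_ind=M_s%*%U[,1:2]    # individuals projected onto the principal plane
> plot(coord_ind,xlab="Axis 1",ylab="Axis 2")
> text(coord_ind[,1],coord_ind[,2],rownames(M_s),pos=3)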

But we can also visualise the data in the other space.

Recall that the variables have been centred and scaled. Here again, the spectral analysis gives the principal vectors (those that maximise the projected inertia),

>  V=eigen((M_s)%*%t(M_s))$vectors

 

Here again, we can look at the principal plane

and at the projection in that space,
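Similarly, a minimal sketch of the projection of the (centred and scaled) variables onto the first two principal axes of the individuals' space, reusing M_s and V from above (coord_var is my own name); dividing by the norm of the columns, here sqrt(n-1), puts the points inside the unit circle,

> coord_var=t(M_s)%*%V[,1:2]/sqrt(nrow(M_s)-1)   # rescaled variable coordinates
> plot(coord_var,xlim=c(-1,1),ylim=c(-1,1),asp=1)
> text(coord_var[,1],coord_var[,2],colnames(M_s),pos=3)
> symbols(0,0,circles=1,inches=FALSE,add=TRUE)   # the unit (correlation) circle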

Since the variables have been centred and scaled, the 'correlation circle' appears.

The duality principle says that the two projections can be superimposed

 

which is what will allow us to do a real data analysis

Matrix Approximation

In the data analysis course, we will start principal component analysis, in relation with the spectral analysis of M^{\text{T}}M or of MM^{\text{T}}, or directly with the singular value decomposition of M.

Let us go back to the data from the survey published in the daily newspaper L'Express,

> BD=read.table("https://perso.univ-rennes1.fr/arthur.charpentier/securite.txt",
header=TRUE)
>  M=as.matrix(BD[,2:10])
>  colnames(M)=names(BD)[2:10]
>  rownames(M)=BD$dep

Recall that we have a matrix of dimension

> dim(M)
[1] 94  9

The singular value decomposition gives matrices

> SVD=svd(M)
> dim(M)
[1] 94  9
> dim(SVD$u)
[1] 94  9
> dim(SVD$v)
[1] 9 9

such that M=U\Delta V^{\text{T}}, where \Delta=\text{diag}(d_1,\cdots,d_9) contains the singular values. To check it, let us compute the product matrix

> M0 = SVD$u %*% diag(SVD$d) %*% t(SVD$v)

To check that the two matrices are (very, very) close, we can choose among several distances. For instance,

we can use the (squared) Frobenius norm, \|M_0-M\|_F^2=\sum_{i,j}(M_{0,ij}-M_{ij})^2.

> sum( (M0-M)^2 )
[1] 2.160588e-25

We can also use the largest eigenvalue of (M_0-M)^{\text{T}}(M_0-M), which is the square of the largest singular value of M_0-M.

> max(eigen(t(M0-M)%*%(M0-M))$value)
[1] 2.018236e-25

To (simply) approximate the original matrix by a matrix of lower rank, we can use an approximation of the form M_k=U_k\Delta_kV_k^{\text{T}}, where only the first k columns of U and V (and the first k singular values) are kept,

> k=4
> M4 = SVD$u[,1:k] %*% diag(SVD$d[1:k]) %*% t(SVD$v[,1:k])

The two matrices look close. For instance, if we look at the first row, we have

> M[1,1:7]
infra  vols   eco  crim   vma   vvi  camb
44.11 29.51  4.14  2.79  0.08  0.27  5.91 
> M4[1,1:6]
[1] 44.07698034 29.57490613  4.18441060   
     2.86142461  0.06670419  0.50994705

If we look at the distance between those two matrices, it is indeed relatively small

> sum( (M4-M)^2 )
[1] 240.5894
> max(eigen(t(M4-M)%*%(M4-M))$value)
[1] 130.3316

the latter to be compared with

> max(eigen(t(M)%*%M)$value)
[1] 447176.6

This matrix is indeed of lower rank (namely rank 4)

> svd(M4)$d[1:8]
[1] 6.687127e+02 3.520604e+01 1.860897e+01 1.328171e+01 3.017803e-14 9.930232e-15 8.511710e-15 5.410929e-15

We can also consider a rank-2 approximation,

> k=2
> M2 = SVD$u[,1:k] %*% diag(SVD$d[1:k]) %*% t(SVD$v[,1:k])
> svd(M2)$d
[1] 6.687127e+02 3.520604e+01 1.012972e-13 9.873132e-15 6.152361e-15 4.691758e-15 3.289106e-15 1.206215e-15 1.248952e-16
> sum( (M2-M)^2 )
[1] 763.2868
> max(eigen(t(M2-M)%*%(M2-M))$value)
[1] 346.2936
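Those values are no coincidence: the (squared) Frobenius error of the rank-k approximation is the sum of the squared discarded singular values, and the largest eigenvalue of (M_k-M)^{\text{T}}(M_k-M) is the square of the first discarded singular value (the Eckart–Young property). A quick check, reusing SVD from above,

> sum(SVD$d[5:9]^2)   # should match sum((M4-M)^2), about 240.59
> SVD$d[5]^2          # should match the largest eigenvalue above, about 130.33
> sum(SVD$d[3:9]^2)   # same check for the rank-2 approximation, about 763.29
> SVD$d[3]^2          # about 346.29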

We will continue in class.

Oh, and to get more illustrations, we can also use another dataset, in order to work on ranks (and not on real values)

> BD = read.table("http://freakonometrics.free.fr/acp-villes.csv",sep=";",header=TRUE)
>  M=as.matrix(BD[,2:12])
>  colnames(M)=names(BD)[2:12]
>  rownames(M)=BD$Agglo

 

Manipulating Data Matrices

To practice matrix manipulation a little, in the context of the data analysis course, here are a few lines of code.

The data matrix is here

> BD=read.table("https://perso.univ-rennes1.fr/arthur.charpentier/securite.txt",header=TRUE)
> M=as.matrix(BD[,2:10])
> colnames(M)=names(BD)[2:10]
> rownames(M)=BD$dep
> head(M[,1:5])
    infra  vols  eco crim  vma
D1  44.11 29.51 4.14 2.79 0.08
D2  45.97 26.29 3.97 4.54 0.07
D3  38.83 21.90 4.61 3.33 0.03
D4  49.68 30.11 5.28 4.54 0.07
D5  47.67 27.19 4.94 3.91 0.05
D6 109.21 77.71 6.87 5.25 0.24

The spectral decomposition will be based on the matrix M^{\text{T}}M,

> round(t(M)%*%M)
       infra   vols   eco  crim vma  vvi  camb  
infra 308599 191433 29179 21290 642 7042 30951 
vols  191433 119961 17859 12967 404 4427 19335 
eco    29179  17859  2946  2042  59  648  2942  
crim   21290  12967  2042  1564  42  448  2129  
vma      642    404    59    42   2   19    62   
vvi     7042   4427   648   448  19  237   665  
camb   30951  19335  2942  2129  62  665  3200  
roul   55363  34745  5089  3792 113 1217  5612 
auto   24841  15747  2214  1676  53  545  2520

(I removed the last two columns for presentation purposes). Its spectral decomposition is

> eigen(t(M)%*%M)
$values
[1] 4.471766e+05 1.239465e+03 3.462936e+02 1.764038e+02 1.303316e+02 6.623592e+01 2.624739e+01 1.761801e+01 1.564292e-01
 
$vectors

(etc.) As a reminder, the link between diagonalisation and the spectral decomposition can be seen by writing M^{\text{T}}M=P\,\text{diag}(\lambda_1,\cdots,\lambda_9)\,P^{\text{T}},

> P=eigen(t(M)%*%M)$vectors
> P%*%diag(eigen(t(M)%*%M)$values)%*%t(P)

Now, two remarks. First, the link between the spectral decomposition of this matrix and the singular value decomposition,

> sqrt(eigen(t(M)%*%M)$values)
[1] 668.7126681  35.2060415  18.6089656  13.2817097  11.4162872   8.1385454   5.1232210   4.1973809   0.3955113
> svd(M)
$d
[1] 668.7126681  35.2060415  18.6089656  13.2817097  11.4162872   8.1385454   5.1232210   4.1973809   0.3955113
 
$u
             [,1]          [,2]          [,3]          [,4]         [,5]          [,6]         [,7]         [,8]          [,9]
 [1,] -0.08156446 -0.0632514738 -2.596265e-02  0.0960587227 -0.064759395 -0.0430947134 -0.024719699 -0.074347281  0.0550356487
 [2,] -0.08087892  0.0541859480 -5.777492e-02 -0.0628249751 -0.099084701 -0.1311683858  0.083484434  0.011872601 -0.0195034904

(etc.), and second, the spectral decomposition of the other matrix product, MM^{\text{T}},

> sqrt(eigen(M%*%t(M))$values)
 [1] 6.687127e+02 3.520604e+01 1.860897e+01 1.328171e+01 1.141629e+01 8.138545e+00 5.123221e+00 4.197381e+00 3.955113e-01 7.546505e-06 6.656212e-06 5.285131e-06 4.270035e-06

(etc).

Now, to understand the search for principal components a bit better, consider the following two variables

> sM=M[,c(1,3)]
> plot(sM)

 

We should center and scale the variables (or change the metric, as seen in class)

> sMcr=sM
> for(j in 1:2) sMcr[,j]=(sMcr[,j]-mean(sMcr[,j]))/sd(sMcr[,j])
> plot(sMcr)

Before projecting onto an axis, let us introduce two functions: prod_scal computes the scalar product of each row of x with a vector u, and proj returns the orthogonal projection of the rows of x onto the axis spanned by u (taken of unit norm in what follows),

> prod_scal=function(x,u){
+   ps=rep(NA,nrow(x))
+   for(i in 1:nrow(x)) ps[i]=sum(x[i,]*u)
+   return(ps)
+ }

> proj=function(x,u){
+   px=x
+   for(j in 1:length(u)){
+     px[,j]=prod_scal(x,u)/sqrt(sum(u^2))*u[j]  
+   }
+   return(px)
+ }

For instance, if we project onto the horizontal axis,

> points(proj(sMcr,c(1,0)),col="blue")

We can then look for the direction of the axis that gives the projected cloud with the largest inertia

> inertie=function(x) sum(x^2)
> Theta=seq(0,3.141592657,length=101)
> V=unlist(lapply(Theta,function(theta) inertie(proj(sMcr,c(cos(theta),sin(theta))))))
> plot(Theta,V,type='l')

> (angle=optim(0,function(theta) -inertie(proj(sMcr,c(cos(theta),
sin(theta)))))$par)
[1] 0.7853516

Visually, we get

> plot(sMcr)
> psMcr=proj(sMcr,c(cos(angle),sin(angle)))
> points(psMcr,col="blue")

https://freakonometrics.hypotheses.org/files/2016/01/proj1.png

Note that the axis giving the maximal inertia is related to the eigenvectors of the spectral decomposition (namely, the one associated with the largest eigenvalue)

> c(cos(angle),sin(angle))
[1] 0.7071397 0.7070738
> eigen(t(sMcr)%*%sMcr)
$values
[1] 159.64663  26.35337
 
$vectors
          [,1]       [,2]
[1,] 0.7071068 -0.7071068
[2,] 0.7071068  0.7071068
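The same direction can be recovered with a built-in PCA function (a minimal sketch, reusing sM from above; pca is just an illustrative name),

> pca=prcomp(sM,center=TRUE,scale.=TRUE)
> pca$rotation[,1]    # first principal axis, close to (0.707, 0.707) up to sign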

We will continue tomorrow, manipulating data matrices and computing projections, before starting principal component analysis properly, next week.

Data Analysis and a Linear Algebra Refresher

This week, the Data Analysis course starts. In the first lecture, we will review the basics of linear algebra (eigenvalues, diagonalisation, singular value decomposition, etc.). For those who want to review a bit, I refer to Lara Thomas's lecture notes, which contain all the necessary results. The rest will be done on the board, in class.

To illustrate the first concepts, we will use the securite dataset, extracted from the ranking of French départements ("palmarès des départements: où vit-on en sécurité ?") published in the newspaper L'Express (No. 2589, February 15, 2001).

Visualising a Classification in High Dimension, part 2

A few weeks ago, I published a post on Visualising a Classification in High Dimension, based on the use of a principal component analysis, to get a projection on the first two components. Following that post, I was wondering what could be done in the context of a classification on categorical covariates. A natural idea would be to consider a correspondence analysis, and to run similar code.

Consider here the dataset used in a recent post,

> source("http://freakonometrics.free.fr/import_data_credit.R")

If we consider a correspondance analysis, we get

> library(FactoMineR)
> acm=MCA(train.db,quali.sup = 
+ which(names(train.db)=="class"),ncp=10)

For the covariates (including the variable we want to model, treated here as a supplementary variable), the visualisation, on the first two components, is

and for the individuals
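For that last plot, a minimal sketch colouring the individuals according to the class variable (assuming, as in the code above, that the response is called class in train.db),

> plot(acm$ind$coord[,1],acm$ind$coord[,2],
+      col=as.numeric(as.factor(train.db$class)),
+      xlab="Dim 1",ylab="Dim 2")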

Continue reading Visualising a Classification in High Dimension, part 2

Classification with Categorical Variables (the fuzzy side)

The Gaussian and the (log) Poisson regressions share a very interesting property,

\frac{1}{n}\sum_{i=1}^n\widehat{y}_i=\frac{1}{n}\sum_{i=1}^n y_i

i.e. the average predicted value is the empirical mean of our sample.

> mean(predict(lm(dist~speed,data=cars)))
[1] 42.98
> mean(cars$dist)
[1] 42.98
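The same identity holds for a (log) Poisson regression, as a consequence of the first-order condition on the intercept; a minimal sketch on the same cars dataset (my own illustration, not from the original post),

> reg_pois=glm(dist~speed,family=poisson,data=cars)
> mean(predict(reg_pois,type="response"))   # should also return 42.98, up to numerical precision
> mean(cars$dist)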

For the linear model, one can also prove that it is the prediction for the average individual in our sample

> predict(lm(dist~speed,data=cars),
+ newdata=data.frame(speed=mean(cars$speed))) 
42.98

The geometric interpretation is that the regression line passes through the centroid,

> plot(cars)
> abline(lm(dist~speed,data=cars),col="red")
> abline(h=mean(cars$dist),col="blue")
> abline(v=mean(cars$speed),col="blue")
> points(mean(cars$speed),mean(cars$dist))

But in general, this is no longer true. Consider for instance a logistic regression. And to make things even more complicated, suppose we only have categorical explanatory variables. In that context, it is more difficult to get a prediction for the "average individual" — unless we consider some fuzzy interpretation of the regression.
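To illustrate, with a logistic regression on a single continuous covariate, the prediction for the "average individual" is no longer the average prediction; a minimal sketch on the mtcars dataset (my own example, not from the original post),

> reg_logit=glm(am~wt,family=binomial,data=mtcars)
> mean(predict(reg_logit,type="response"))          # average predicted probability
> predict(reg_logit,newdata=data.frame(wt=mean(mtcars$wt)),
+         type="response")                          # prediction for the average individual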

Continue reading Classification with Categorical Variables (the fuzzy side)

Splitting a Node in a Tree

If we grow a tree with standard R functions, on the same dataset used to introduce classification trees in a previous post,

> MYOCARDE=read.table(
+ "http://freakonometrics.free.fr/saporta.csv",
+ head=TRUE,sep=";")
> library(rpart)
> cart<-rpart(PRONO~.,data=MYOCARDE)

we get

> library(rpart.plot)
> library(rattle)
> prp(cart,type=2,extra=1)

Continue reading Spliting a Node in a Tree

Growing some Trees

Consider here the dataset used in a previous post, about visualising a classification (with more than 2 features),

> MYOCARDE=read.table(
+ "http://freakonometrics.free.fr/saporta.csv",
+ header=TRUE,sep=";")

The default classification tree is

> library(rpart)
> library(rpart.plot)
> arbre = rpart(factor(PRONO)~.,data=MYOCARDE)
> rpart.plot(arbre,type=4,extra=6)

We can change the options here, such as the minimum number of observations per node

> arbre = rpart(factor(PRONO)~.,data=MYOCARDE,
+       control=rpart.control(minsplit=10))
> rpart.plot(arbre,type=4,extra=6)

or

> arbre = rpart(factor(PRONO)~.,data=MYOCARDE,
+        control=rpart.control(minsplit=5))
> rpart.plot(arbre,type=4,extra=6)

Continue reading Growing some Trees