Tag Archives: cluster

ACT6100, unsupervised learning

We are making steady progress in the ACT6100 course on data analysis in actuarial science. The course material is online at https://github.com/freakonometrics/ACT6100. Since this session (as well as the winter one) is taught remotely, the course is asynchronous, and I regularly post short video capsules. The capsules presenting the main unsupervised learning methods are now online:

  1. PCA (1) video pdf (48:42)
  2. PCA (2) video pdf (31:10)
  3. PCA (3) video pdf (47:51)
  4. PCA (4) video pdf (39:59)
  5. PCA (5) video pdf (28:53)
  6. CA (1) video pdf (38:33)
  7. CA (2) video pdf (46:51)
  8. MCA (1) video pdf (28:03)
  9. Clusters (k-means) video pdf (48:20)
  10. Clusters (hierarchical) video pdf (37:38)
  11. NA & Imputation (k-NN) video pdf (17:47)
  12. NA & Imputation (PCA) video pdf (15:28)

If the video links do not work, all the capsules for the course are also available here.

INF7100, statistics

The second part of my lectures on data science, for the INF7100 course, will deal with statistics, both univariate and multivariate. The outline is as follows:

  • 201: From Statistics to Data Science pdf video (14:24)
  • 211: Standard Functions in Statistics (cumulative distribution function, density, histogram) pdf video (28:37)
  • 221: Statistical Summaries: Central Tendency (mean) pdf video (32:56)
  • 222: Statistical Summaries: Dispersion (variance, inequalities) pdf video (22:21)
  • 223: Statistical Summaries: Approximations (normal approximation) pdf video (18:42)
  • 224: Statistical Summaries: Quantiles pdf video (24:54)
  • 231: Inference (Bayesian statistics) pdf video (39:33)
  • 241: Statistical Tests (1) (tests, significance, p-values) pdf video (43:41)
  • 242: Statistical Tests (2) (errors) pdf video (16:51)
  • 261: Bivariate Statistics pdf video (25:16)
  • 271: Multivariate Statistics: Projections pdf video (29:06)
  • 272: Multivariate Statistics: Clusters pdf video (32:21)
  • 281: Networks and Graphs pdf video (32:40)
  • 291: Time Series pdf video (29:01)


Clustering French Cities (based on Temperatures)

In order to illustrate hierarchical clustering techniques and k-means, I borrowed François Husson‘s dataset, with monthly average temperatures in several French cities.

> temp=read.table(
+ "http://freakonometrics.free.fr/FR_temp.txt",
+ header=TRUE,dec=",")

We have 15 cities, with monthly observations

> X=temp[,1:12]
> boxplot(X)

Since the variance seems to be rather stable, we will not ‘normalize’ the variables here,

> apply(X,2,sd)
    Janv     Fevr     Mars     Avri 
2.007296 1.868409 1.529083 1.414820 
     Mai     Juin     juil     Aout 
1.504596 1.793507 2.128939 2.011988 
    Sept     Octo     Nove     Dece 
1.848114 1.829988 1.803753 1.958449
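
If the monthly variances had been less homogeneous, standardising the columns first would have been the natural move; a one-line sketch using base R's scale function (not used in what follows):

> Xs=scale(X)   # center each month and divide by its standard deviation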

In order to get a hierarchical cluster analysis, use for instance

> h <- hclust(dist(X), method = "ward.D") # "ward" was renamed "ward.D" in recent versions of R
> plot(h, labels = rownames(X), sub = "")

An alternative is to use

> library(FactoMineR)
> h2=HCPC(X)
> plot(h2)

Here, observations are visualised using a principal component analysis, and the number of classes is selected automatically (three, in this case). We can get a description of the groups using

> h2$desc.ind

or directly

> cah=hclust(dist(X))
> groups.3 <- cutree(cah,3)
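
A quick way to see which cities fall in each group is to split the row names by cluster label (a small base-R check):

> split(rownames(X), groups.3)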

We can also visualise those classes by ourselves,

> acp=PCA(X,scale.unit=FALSE)
> plot(acp$ind$coord[,1:2],col="white")
> text(acp$ind$coord[,1],acp$ind$coord[,2],
+ rownames(acp$ind$coord),col=groups.3)

It is possible to plot the centroids of those clusters

> PT=aggregate(acp$ind$coord,list(groups.3),mean)
> points(PT$Dim.1,PT$Dim.2,pch=19)

If we add the Voronoï partition generated by those centroids, we can hardly see it here (actually, we only see the point, in the middle, that lies exactly at the intersection of the three regions),

> library(tripack)
> V <- voronoi.mosaic(PT$Dim.1,PT$Dim.2)
> plot(V,add=TRUE)

To visualize those regions, use

> p=function(x,y){
+   which.min((PT$Dim.1-x)^2+(PT$Dim.2-y)^2)
+ }
> vx=seq(-10,12,length=251)
> vy=seq(-6,8,length=251)
> z=outer(vx,vy,Vectorize(p))
> image(vx,vy,z,col=c(rgb(1,0,0,.2),
+ rgb(0,1,0,.2),rgb(0,0,1,.2)))
> CL=c("red","black","blue")
> text(acp$ind$coord[,1],acp$ind$coord[,2],
+ rownames(acp$ind$coord),col=CL[groups.3])

Actually, those three groups (and those three regions) are also the ones we obtain using the k-means algorithm,

> km=kmeans(acp$ind$coord[,1:2],3)
> km
K-means clustering 
with 3 clusters of sizes 3, 7, 5
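
(the rest of the output is omitted). To check that the k-means partition matches the hierarchical one, up to a relabelling of the clusters, we can cross-tabulate the two sets of labels:

> table(groups.3, km$cluster)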

Since we are, once again, dealing with spatial data, it is possible to visualise those clusters on a map,

> library(maps)
> map("france")
> points(temp$Long,temp$Lati,col=groups.3,pch=19)

or, to visualize the regions, use e.g.

> library(car)
> for(i in 1:3) 
+ dataEllipse(temp$Long[groups.3==i],
+ temp$Lati[groups.3==i], levels=.7,add=TRUE,
+ col=i+1,fill=TRUE)

Those three regions actually make sense, geographically speaking.

Clusters of Texts

Another popular application of classification techniques is text mining (see e.g. an old post on French presidential speeches). Consider the following example, inspired by Norbert Ryciak’s post, with 12 Wikipedia pages on various topics,

> library(tm)
> library(stringi)
> library(proxy)
> titles = c("Boosting_(machine_learning)",
+            "Random_forest",
+            "K-nearest_neighbors_algorithm",
+            "Logistic_regression",
+            "Boston_Bruins",
+            "Los_Angeles_Lakers",
+            "Game_of_Thrones",
+            "House_of_Cards_(U.S._TV_series)",
+            "True_Detective_(TV_series)",
+            "Picasso",
+            "Henri_Matisse",
+            "Jackson_Pollock")
> articles = character(length(titles))
> wiki = "http://en.wikipedia.org/wiki/"
> for (i in 1:length(titles)) {
+   articles[i] = stri_flatten(readLines(stri_paste(wiki, titles[i])), col = " ")
+ }
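
Note that Wikipedia now redirects plain http requests to https, so the loop above may fail with the http prefix; with a recent version of R (which reads https URLs natively), switching the prefix should be enough, a minimal tweak:

> wiki = "https://en.wikipedia.org/wiki/"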

Here, we store all the contents of the pages in a corpus (from the text mining package).

> docs = Corpus(VectorSource(articles))

This is what we have in that corpus

> a = stri_flatten(readLines(stri_paste(wiki, titles[1])), col = " ")
> a = Corpus(VectorSource(a))
> a[[1]]

Thoughts on Hypothesis Boosting</i></a>, Unpublished manuscript (Machine Learning class project, December 1988)</span></li> <li id="cite_note-4"><span class="mw-cite-backlink"><b><a href="#cite_ref-4">^</a></b></span> <span class="reference-text"><cite class="citation journal"><a href="/wiki/Michael_Kearns" title="Michael Kearns">Michael Kearns</a>; <a href="/wiki/Leslie_Valiant" title="Leslie Valiant">Leslie Valiant</a> (1989). <a rel="nofollow" class="external text" href="http://dl.acm.org/citation.cfm?id=73049">"Crytographic limitations on learning Boolean formulae and finite automata"</a>. <i>Symposium on T

This is because we read a raw HTML page. Let us clean it,

> a = tm_map(a, function(x) stri_replace_all_regex(x, "<.+?>", " "))
> a = tm_map(a, function(x) stri_replace_all_fixed(x, "\t", " "))
> a = tm_map(a, PlainTextDocument)
> a = tm_map(a, stripWhitespace)
> a = tm_map(a, removeWords, stopwords("english"))
> a = tm_map(a, removePunctuation)
> a = tm_map(a, tolower)
> a

can  set  weak learners create  single strong learner  a weak learner  defined    classifier    slightly correlated   true classification  can label examples better  random guessing in contrast  strong learner   classifier   arbitrarily wellcorrelated   true classification robert 

Now we have the text of the Wikipedia page. What we did was:

  • replace all "<...>" HTML tags with a space, since they are not part of the text itself, but HTML markup;
  • replace all "\t" (tab characters) with a space;
  • convert the previous result (a character string) to a "PlainTextDocument", so that we can apply the other functions from the tm package, which require this type of argument;
  • remove extra whitespace from the documents;
  • remove words which are redundant for text mining (e.g. pronouns, conjunctions); we use stopwords("english"), a built-in list for the English language (this argument is passed to the function removeWords);
  • remove punctuation marks;
  • transform all characters to lower case.

Now we can do it on the entire corpus

> docs2 = tm_map(docs, function(x) stri_replace_all_regex(x, "<.+?>", " "))
> docs3 = tm_map(docs2, function(x) stri_replace_all_fixed(x, "\t", " "))
> docs4 = tm_map(docs3, PlainTextDocument)
> docs5 = tm_map(docs4, stripWhitespace)
> docs6 = tm_map(docs5, removeWords, stopwords("english"))
> docs7 = tm_map(docs6, removePunctuation)
> docs8 = tm_map(docs7, tolower)
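
Note that with recent versions of the tm package, custom functions and base-R functions such as tolower have to be wrapped in content_transformer() when passed to tm_map(), and the PlainTextDocument step is no longer needed; a sketch of the same pipeline under that assumption:

> docs_clean = tm_map(docs, content_transformer(function(x) stri_replace_all_regex(x, "<.+?>", " ")))
> docs_clean = tm_map(docs_clean, content_transformer(function(x) stri_replace_all_fixed(x, "\t", " ")))
> docs_clean = tm_map(docs_clean, stripWhitespace)
> docs_clean = tm_map(docs_clean, removeWords, stopwords("english"))
> docs_clean = tm_map(docs_clean, removePunctuation)
> docs8 = tm_map(docs_clean, content_transformer(tolower))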

Now, we simply count words in each page,

> dtm <- DocumentTermMatrix(docs8)
> dtm2 <- as.matrix(dtm)
> dim(dtm2)
[1] 12 13683
> frequency <- colSums(dtm2)
> frequency <- sort(frequency, decreasing=TRUE)
> mots=frequency[frequency>20]
> s=dtm2[1,which(colnames(dtm2) %in% names(mots))]
> for(i in 2:nrow(dtm2)) s=cbind(s,dtm2[i,which(colnames(dtm2) %in% names(mots))])
> colnames(s)=titles
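
The same matrix can be built directly, without the loop, by subsetting the columns of the document-term matrix and transposing, a shorter base-R equivalent:

> s = t(dtm2[, colnames(dtm2) %in% names(mots)])
> colnames(s) = titles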


Once we have that dataset, we can use a PCA to visualise the ‘variables’, i.e. the pages,

> library(FactoMineR)
> PCA(s)

We can also use unsupervised classification to group the pages. But first, let us normalise the dataset,

> s0=s/apply(s,1,sd)   # scale each word (row) by its standard deviation

Then, we can plot a cluster dendrogram, using Ward's criterion,

> h <- hclust(dist(t(s0)), method = "ward.D") # "ward" was renamed "ward.D" in recent versions of R
> plot(h, labels = titles, sub = "")

Groups are consistent with intuition: painters are in the same cluster, as well as TV series, sports teams, and statistical techniques.
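
To make those groups explicit, we can cut the dendrogram, here into four clusters, and list the pages in each one:

> split(titles, cutree(h, k = 4))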

Clusters of (French) Regions

For tomorrow's data science course, I just wanted to post a few functions to illustrate cluster analysis. Consider the dataset of the 2012 French presidential elections (first round),

> elections2012=read.table(
+ "http://freakonometrics.free.fr/elections_2012_T1.csv",
+ sep=";",dec=",",header=TRUE)
> voix=which(substr(names(
+ elections2012),1,11)=="X..Voix.Exp")
> elections2012=elections2012[1:96,]
> X=as.matrix(elections2012[,voix])
> colnames(X)=c("JOLY","LE PEN","SARKOZY","MÉLENCHON","POUTOU","ARTHAUD","CHEMINADE","BAYROU","DUPONT-AIGNAN","HOLLANDE")
> rownames(X)=elections2012[,1]

The hierarchical cluster analysis is obtained using

> cah=hclust(dist(X))
> plot(cah,cex=.6)

To get five groups, we have to cut the tree,

> rect.hclust(cah,k=5)
> groups.5 <- cutree(cah,5)

We have to zoom in to read the labels (one per French département),

It is also possible to use

> library(dendroextras)
> plot(colour_clusters(cah,k=5))

And again, if we zoom in, we get

The interpretation of the clusters can be obtained using

> aggregate(X,list(groups.5),mean)
  Group.1     JOLY   LE PEN  SARKOZY
1       1 2.185000 18.00042 28.74042
2       2 1.943824 23.22324 25.78029
3       3 2.240667 15.34267 23.45933
4       4 2.620000 21.90600 34.32200
5       5 3.140000  9.05000 33.80000
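
To compare the profiles of the five clusters across all ten candidates at once, a grouped barplot of those means is one option (a quick sketch):

> M=aggregate(X,list(groups.5),mean)
> barplot(t(as.matrix(M[,-1])),beside=TRUE,
+ names.arg=M$Group.1,legend.text=colnames(X))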

It is also possible to visualize those clusters on a map, using

> library(RColorBrewer)
> CL=brewer.pal(8,"Set3")
> carte_classe <- function(groupes){
+ library(stringr)
+ # normalise the département names in the election dataset
+ elections2012$dep <- elections2012[,2]
+ elections2012$dep <- tolower(elections2012$dep)
+ elections2012$dep <- str_replace_all(elections2012$dep, pattern = " |-|'|/", replacement = "")
+ library(maps)
+ # normalise the polygon names of the map the same way
+ france<-map(database="france")
+ france$dep <- france$names
+ france$dep <- tolower(france$dep)
+ france$dep <- str_replace_all(france$dep, pattern = " |-|'|/", replacement = "")
+ # match the two sets of names (one manual fix is needed for Corse-du-Sud)
+ corresp_noms <- elections2012[, c(1,2, ncol(elections2012))]
+ corresp_noms$dep[which(corresp_noms$dep %in% "corsesud")] <- "corsedusud"
+ # shift cluster labels by one to index the colour palette CL
+ col2001<-groupes+1
+ names(col2001) <- corresp_noms$dep[match(names(col2001), corresp_noms[,1])]
+ color <- col2001[match(france$dep, names(col2001))]
+ map(database="france", fill=TRUE, col=CL[color])
+ }
> carte_classe(cutree(cah,5))

or, if we simply want 4 clusters

> carte_classe(cutree(cah,4))


The odds of a cluster of airplane accidents

Recently, there have been a lot of airplane accidents.

  • July 17th, 2014, Hrabove, Ukraine, Malaysia Airlines, Boeing 777, fatalities 298 (/298)
  • July 23rd, 2014, Magong, Taiwan, TransAsia Airways, ATR 72-500, fatalities 47 (/58)
  • July 24th, 2014, Aguelhok, Mali, Air Algérie, McDonnell Douglas MD-83, fatalities 116 (/116)

It is easy to find datasets about airplane crashes, for instance on http://ntsb.gov/aviationquery. The dataset is nice, with a lot of information,

> planes=read.table(
+ "cbad3ca6-6b8f-4c98-9ee0-601faAviationData.txt",
+ sep="|",header=TRUE)

for instance the exact location of the crashes,

> library(maps)
> map("world", interior = FALSE)
> points(planes$Longitude,planes$Latitude,
+ pch=19,cex=planes$Total.Fatal.Injuries/50,
+ col="red")
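
Coming back to the question in the title, a rough way to start is to count accidents per calendar week, and look at how often three (or more) occur within the same week. A minimal sketch, assuming the file has an Event.Date column that as.Date can parse (both assumptions should be checked against the actual file):

> dates=as.Date(planes$Event.Date)  # assumed column name and date format
> weeks=cut(dates,"week")           # calendar week of each accident
> mean(table(weeks)>=3)             # share of weeks with 3+ accidents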
