Clustering French Cities (based on Temperatures)

In order to illustrate hierarchical clustering techniques and k-means, I borrowed François Husson‘s dataset, with monthly average temperatures in several French cities.

> temp=read.table(
+ "http://freakonometrics.free.fr/FR_temp.txt",
+ header=TRUE,dec=",")

We have 15 cities, with monthly observations

> X=temp[,1:12]
> boxplot(X)

Since the variance seems to be rather stable, we will not ‘normalize’ the variables here,

> apply(X,2,sd)
    Janv     Fevr     Mars     Avri 
2.007296 1.868409 1.529083 1.414820 
     Mai     Juin     juil     Aout 
1.504596 1.793507 2.128939 2.011988 
    Sept     Octo     Nove     Dece 
1.848114 1.829988 1.803753 1.958449

In order to get a hierarchical cluster analysis, use for instance

> h <- hclust(dist(X), method = "ward")
> plot(h, labels = rownames(X), sub = "")

An alternative is to use

> library(FactoMineR)
> h2=HCPC(X)
> plot(h2)
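Note that, assuming a recent version of FactoMineR (where the plot method for HCPC objects has a choice argument), we can also ask for one view at a time, e.g. the dendrogram alone, or the factor map,

> plot(h2, choice = "tree")
> plot(h2, choice = "map")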

Here, observations are visualised using a principal component analysis, and the number of classes is selected automatically (here, 3). We can get the description of the groups using

> h2$desc.ind

or directly

> cah=hclust(dist(X))
> groups.3 <- cutree(cah,3)
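As a quick sanity check, we can look at the sizes of the three groups, and cross-tabulate this partition with the one returned by HCPC (a small sketch, assuming h2$data.clust contains, as usual with FactoMineR, a clust column with the class of each city; the two partitions need not coincide exactly, since hclust uses complete linkage by default while HCPC relies on Ward's criterion),

> table(groups.3)
> table(groups.3[rownames(h2$data.clust)], h2$data.clust$clust)

Indexing by row names simply ensures that the cities are matched in the same order in both partitions.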

We can also visualise those classes ourselves,

> acp=PCA(X,scale.unit=FALSE)
> plot(acp$ind$coord[,1:2],col="white")
> text(acp$ind$coord[,1],acp$ind$coord[,2],
+ rownames(acp$ind$coord),col=groups.3)

It is possible to plot the centroids of those clusters

> PT=aggregate(acp$ind$coord,list(groups.3),mean)
> points(PT$Dim.1,PT$Dim.2,pch=19)

If we add the Voronoï cells around those centroids, we can hardly see them here (actually, we only see the point, right in the middle, that is exactly at the intersection of the three regions),

> library(tripack)
> V <- voronoi.mosaic(PT$Dim.1,PT$Dim.2)
> plot(V,add=TRUE)

To visualize those regions, use

> p=function(x,y){
+   which.min((PT$Dim.1-x)^2+(PT$Dim.2-y)^2)
+ }
> vx=seq(-10,12,length=251)
> vy=seq(-6,8,length=251)
> z=outer(vx,vy,Vectorize(p))
> image(vx,vy,z,col=c(rgb(1,0,0,.2),
+ rgb(0,1,0,.2),rgb(0,0,1,.2)))
> CL=c("red","black","blue")
> text(acp$ind$coord[,1],acp$ind$coord[,2],
+ rownames(acp$ind$coord),col=CL[groups.3])

Actually, those three groups (and those three regions) are also the ones we obtain using a k-means algorithm,

> km=kmeans(acp$ind$coord[,1:2],3)
> km
K-means clustering with 3 clusters of sizes 3, 7, 5
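(the rest of the output is omitted). Since k-means labels its clusters arbitrarily (and starts from random centers, so one may want to set a seed for reproducibility), a quick way to check that this partition is indeed the same as the hierarchical one is to cross-tabulate the two sets of labels; each row should then match a single column,

> table(km$cluster, groups.3)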

But since we have spatial data here, it is also possible to visualize those clusters on a map

> library(maps)
> map("france")
> points(temp$Long,temp$Lati,col=groups.3,pch=19)

or, to visualize the regions, use e.g.

> library(car)
> for(i in 1:3) 
+ dataEllipse(temp$Long[groups.3==i],
+ temp$Lati[groups.3==i], levels=.7,add=TRUE,
+ col=i+1,fill=TRUE)

Those three regions actually make sense, geographically speaking.
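To make that geographic reading explicit, we can simply list the cities in each cluster (the city names being stored as the row names of the dataset, as used for the labels above),

> split(rownames(X), groups.3)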

Clusters of Texts

Another popular application of classification techniques is text mining (see e.g. an old post on French presidents' speeches). Consider the following example, inspired by Norbert Ryciak’s post, with 12 Wikipedia pages on various topics,

> library(tm)
> library(stringi)
> library(proxy)
> titles = c("Boosting_(machine_learning)",
+            "Random_forest",
+            "K-nearest_neighbors_algorithm",
+            "Logistic_regression",
+            "Boston_Bruins",
+            "Los_Angeles_Lakers",
+            "Game_of_Thrones",
+            "House_of_Cards_(U.S._TV_series)",
+            "True_Detective_(TV_series)",
+            "Picasso",
+            "Henri_Matisse",
+            "Jackson_Pollock")
> articles = character(length(titles))
> wiki = "http://en.wikipedia.org/wiki/"
> for (i in 1:length(titles)) {
+   articles[i] = stri_flatten(readLines(stri_paste(wiki, titles[i])), col = " ")
+ }

Here, we store all the contents of the pages in a corpus (from the text mining package).

> docs = Corpus(VectorSource(articles))

This is what we have in that corpus

> a = stri_flatten(readLines(stri_paste(wiki, titles[1])), col = " ")
> a = Corpus(VectorSource(a))
> a[[1]]

Thoughts on Hypothesis Boosting</i></a>, Unpublished manuscript (Machine Learning class project, December 1988)</span></li> <li id="cite_note-4"><span class="mw-cite-backlink"><b><a href="#cite_ref-4">^</a></b></span> <span class="reference-text"><cite class="citation journal"><a href="/wiki/Michael_Kearns" title="Michael Kearns">Michael Kearns</a>; <a href="/wiki/Leslie_Valiant" title="Leslie Valiant">Leslie Valiant</a> (1989). <a rel="nofollow" class="external text" href="http://dl.acm.org/citation.cfm?id=73049">"Crytographic limitations on learning Boolean formulae and finite automata"</a>. <i>Symposium on T

This is because we read an HTML page.

> a = tm_map(a, function(x) stri_replace_all_regex(x, "<.+?>", " "))
> a = tm_map(a, function(x) stri_replace_all_fixed(x, "\t", " "))
> a = tm_map(a, PlainTextDocument)
> a = tm_map(a, stripWhitespace)
> a = tm_map(a, removeWords, stopwords("english"))
> a = tm_map(a, removePunctuation)
> a = tm_map(a, tolower)
> a 

can  set  weak learners create  single strong learner  a weak learner  defined    classifier    slightly correlated   true classification  can label examples better  random guessing in contrast  strong learner   classifier   arbitrarily wellcorrelated   true classification robert 

Now we have the text of the Wikipedia page. What we did was the following:

  • replace all “<...>” elements (i.e. the HTML tags) with a space, since they are not part of the text itself, but HTML code.
  • replace all “\t” (tabs) with a space.
  • convert the previous result (a character string) to a “PlainTextDocument”, so that we can apply the other functions from the tm package, which require this type of argument.
  • remove extra whitespaces from the documents.
  • remove punctuation marks.
  • remove words which are redundant for text mining (e.g. pronouns, conjunctions). We use stopwords(“english”), a built-in list for the English language (this argument is passed to the removeWords function).
  • transform characters to lower case.

Now we can do it on the entire corpus

> docs2 = tm_map(docs, function(x) stri_replace_all_regex(x, "<.+?>", " "))
> docs3 = tm_map(docs2, function(x) stri_replace_all_fixed(x, "\t", " "))
> docs4 = tm_map(docs3, PlainTextDocument)
> docs5 = tm_map(docs4, stripWhitespace)
> docs6 = tm_map(docs5, removeWords, stopwords("english"))
> docs7 = tm_map(docs6, removePunctuation)
> docs8 = tm_map(docs7, tolower)

Now, we simply count words in each page,

> dtm <- DocumentTermMatrix(docs8)
> dtm2 <- as.matrix(dtm)
> dim(dtm2)
[1] 12 13683
> frequency <- colSums(dtm2)
> frequency <- sort(frequency, decreasing=TRUE)
> mots=frequency[frequency>20]
> s=dtm2[1,which(colnames(dtm2) %in% names(mots))]
> for(i in 2:nrow(dtm2)) s=cbind(s,dtm2[i,which(colnames(dtm2) %in% names(mots))])
> colnames(s)=titles
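As a side note, the loop above can be avoided: since all the frequent terms appear in the column names of dtm2, we can subset the matrix directly and transpose it, which should give the same matrix (up to the ordering of the rows),

> s = t(dtm2[, names(mots)])
> colnames(s) = titles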


Once we have that dataset, we can use a PCA to visualise the ‘variables’, i.e. the pages,

> library(FactoMineR)
> PCA(s)

We can also use unsupervised classification to group pages. But first, let us normalize the dataset

> s0=s/apply(s,1,sd)
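As a quick check, every row of s0 (i.e. every word) should now have unit standard deviation across the twelve pages (barring the unlikely case of a word with identical counts on every page, which would yield NaN),

> summary(apply(s0, 1, sd))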

Then, we can draw a cluster dendrogram, using Ward's criterion,

> h <- hclust(dist(t(s0)), method = "ward")
> plot(h, labels = titles, sub = "")
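To turn the visual impression into explicit groups, we can again cut the tree, here into four clusters to match the four topics (a small sketch; the exact composition may depend on when the pages were downloaded),

> groups.4 = cutree(h, k = 4)
> split(titles, groups.4)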

Groups are consistent with intuition: painters are in the same cluster, as well as TV series, sports teams, and statistical techniques.