Tag Archives: k-means

ACT6100, unsupervised learning

We are moving along steadily in the ACT6100 course on data analysis in actuarial science. The course material is online at https://github.com/freakonometrics/ACT6100. Since this term (as well as the winter one) is taught remotely, the course is asynchronous, and I regularly post video capsules. The capsules presenting the main unsupervised learning methods are now online

  1. ACP (1) (PCA) video pdf (48:42)
  2. ACP (2) video pdf (31:10)
  3. ACP (3) video pdf (47:51)
  4. ACP (4) video pdf (39:59)
  5. ACP (5) video pdf (28:53)
  6. CA (1) video pdf (38:33)
  7. CA (2) video pdf (46:51)
  8. MCA (1) video pdf (28:03)
  9. Clusters (k-means) video pdf (48:20)
  10. Clusters (CAH) video pdf (37:38)
  11. NA & Imputations (k-nn) video pdf (17:47)
  12. NA & Imputations (ACP) video pdf (15:28)

If the video links do not work, I refer you to the full set of capsules for the course, here.

Clustering French Cities (based on Temperatures)

In order to illustrate hierarchical clustering techniques and k-means, I borrowed François Husson's dataset, with monthly average temperatures in several French cities.

> temp=read.table(
+ "http://freakonometrics.free.fr/FR_temp.txt",
+ header=TRUE,dec=",")

We have 15 cities, with monthly observations

> X=temp[,1:12]
> boxplot(X)

Since the variance seems to be rather stable, we will not ‘normalize’ the variables here,

> apply(X,2,sd)
    Janv     Fevr     Mars     Avri 
2.007296 1.868409 1.529083 1.414820 
     Mai     Juin     juil     Aout 
1.504596 1.793507 2.128939 2.011988 
    Sept     Octo     Nove     Dece 
1.848114 1.829988 1.803753 1.958449

In order to get a hierarchical cluster analysis, use for instance

> h <- hclust(dist(X), method = "ward.D")
> plot(h, labels = rownames(X), sub = "")

An alternative is to use

> library(FactoMineR)
> h2=HCPC(X)
> plot(h2)

Here, observations are visualised on the first principal components, and the number of classes is selected automatically (3, in this case). We can get the description of the groups using

> h2$desc.ind

or directly

> cah=hclust(dist(X))
> groups.3 <- cutree(cah,3)

We can also visualise those classes ourselves,

> acp=PCA(X,scale.unit=FALSE)
> plot(acp$ind$coord[,1:2],col="white")
> text(acp$ind$coord[,1],acp$ind$coord[,2],
+ rownames(acp$ind$coord),col=groups.3)

It is possible to plot the centroids of those clusters

> PT=aggregate(acp$ind$coord,list(groups.3),mean)
> points(PT$Dim.1,PT$Dim.2,pch=19)

If we add the Voronoi sets around those centroids, we cannot really see them here (actually, we only see the point, in the middle, that lies exactly at the intersection of the three regions),

> library(tripack)
> V <- voronoi.mosaic(PT$Dim.1,PT$Dim.2)
> plot(V,add=TRUE)

To visualize those regions, use

> p=function(x,y){
+   which.min((PT$Dim.1-x)^2+(PT$Dim.2-y)^2)
+ }
> vx=seq(-10,12,length=251)
> vy=seq(-6,8,length=251)
> z=outer(vx,vy,Vectorize(p))
> image(vx,vy,z,col=c(rgb(1,0,0,.2),
+ rgb(0,1,0,.2),rgb(0,0,1,.2)))
> CL=c("red","black","blue")
> text(acp$ind$coord[,1],acp$ind$coord[,2],
+ rownames(acp$ind$coord),col=CL[groups.3])

Actually, those three groups (and those three regions) are also the ones we obtain using a k-means algorithm,

> km=kmeans(acp$ind$coord[,1:2],3)
> km
K-means clustering with 3 clusters of sizes 3, 7, 5
(etc)
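Since the labels returned by kmeans and cutree are arbitrary, an informal check (not shown in the original post) that the two partitions coincide is to cross-tabulate them: if they do match, each row of the table contains a single non-zero count,

> table(km$cluster, groups.3)   # rows: k-means labels, columns: hierarchical groups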

And since we also have spatial data (the longitude and the latitude of each city), it is possible to visualize those clusters on a map

> library(maps)
> map("france")
> points(temp$Long,temp$Lati,col=groups.3,pch=19)

or, to visualize the regions, use e.g.

> library(car)
> for(i in 1:3) 
+ dataEllipse(temp$Long[groups.3==i],
+ temp$Lati[groups.3==i], levels=.7,add=TRUE,
+ col=i+1,fill=TRUE)

Those three regions actually make sense, geographically speaking.

Visualizing Clusters

Consider the following dataset, with (only) ten points

x=c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y=c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
plot(x,y,pch=19,cex=2)

We want to get – say – two clusters. Or more specifically, two sets of observations, each of them sharing some similarities.

Since the number of observations is rather small, it is actually possible to get an exhaustive list of all partitions, and to minimize some criterion, such as the within-cluster variance. Given a vector of cluster labels (0s and 1s), we compute the within-cluster variance using

within_var = function(I){
  # indices of the observations in each cluster (labels 0 and 1)
  I0=which(I==0)
  I1=which(I==1)
  # centroids of the two clusters
  xbar0=mean(x[I0])
  xbar1=mean(x[I1])
  ybar0=mean(y[I0])
  ybar1=mean(y[I1])
  # size-weighted within-cluster sum of squared distances to the centroids
  w=length(I0)*sum( (x[I0]-xbar0)^2+(y[I0]-ybar0)^2 )+
    length(I1)*sum( (x[I1]-xbar1)^2+(y[I1]-ybar1)^2 )
  return(c(I,w))
}

Then, to enumerate all possible partitions (through the binary representation of the integers from 1 to 2^10), use

base2=function(z,n=10){
  # binary representation of z, as a vector of n zeros and ones
  Base.b=rep(0,n)
  ndigits=(floor(logb(z, base=2))+1)
  for(i in 1:ndigits){
    Base.b[n-i+1]=(z%%2)
    z=(z%/%2)}
  return(Base.b)}
# within-cluster variance of the partition encoded by the integer x
L=function(x) within_var(base2(x))
S=sapply(1:(2^10),L)

The cluster labels at the minimum are then

n=length(x)
I=S[1:n,which.min(S[n+1,])]

To visualize those clusters, use

cluster_viz = function(indices){
library(RColorBrewer)
CL2palette=rev(brewer.pal(n = 9, name = "RdYlBu"))
CL2f=CL2palette[c(1,9)]
plot(x,y,pch=19,xlab="",ylab="",xlim=0:1,ylim=0:1,cex=2,col=CL2f[1+indices])
CL2c=CL2palette[c(3,7)]
I0=which(indices==0)
I1=which(indices==1)
xbar0=mean(x[I0])
xbar1=mean(x[I1])
ybar0=mean(y[I0])
ybar1=mean(y[I1])
segments(x[I0],y[I0],xbar0,ybar0,col=CL2c[1])
segments(x[I1],y[I1],xbar1,ybar1,col=CL2c[2])
points(xbar0,ybar0,pch=19,cex=1.5,col=CL2c[1])
points(xbar1,ybar1,pch=19,cex=1.5,col=CL2c[2])}

and then, simply

cluster_viz(I)

But that was possible only because n is not too large (since the total number of scenarios, with only 2 clusters, is 2^n, or 2^{n-1} if we take into account that swapping the zeros and the ones yields the same partition).
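Just to give an order of magnitude (a quick back-of-the-envelope check, not part of the original post), the enumeration above involves

2^10      # 1024 label vectors, i.e. 512 distinct partitions in two clusters
2^30      # already about a billion, with only 30 observations

so the brute-force approach stops being an option very quickly.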

Continue reading Visualizing Clusters

k-means clustering and Voronoi sets

In the context of k-means, we want to partition the space of our observations into k classes: each observation belongs to the cluster with the nearest mean. Here "nearest" is in the sense of some norm, usually the ℓ2 (Euclidean) norm.
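As a minimal illustration of that assignment rule (a sketch, not code from the post: the two means M and the sample points are arbitrary), each observation simply receives the label of its nearest mean, in the Euclidean sense,

# two arbitrary means (illustration only), one in the upper part, one in the lower part
M = rbind(c(0, 1), c(0.5, -1))
# index of the nearest mean, using squared Euclidean distances
nearest = function(z) which.min(colSums((t(M) - z)^2))
# assign a few random points to their nearest mean, and plot them
pts = cbind(runif(20, -2, 2), runif(20, -2, 2))
cl = apply(pts, 1, nearest)
plot(pts, col = cl, pch = 19)
points(M, pch = 19, cex = 2)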

Consider the case where we have 2 classes, the means being the 2 black dots. If we partition based on the nearest mean, the ℓ2 (Euclidean) norm gives the graph on the left, and the ℓ1 (Manhattan) norm the one on the right,

Points in the red region are closer to the mean in the upper part, while points in the blue region are closer to the mean in the lower part. Here, we will always use the standard ℓ2 (Euclidean) norm. Note that the graph above is related to Voronoi diagrams (or Voronoy, from Вороний in Ukrainian, or Вороно́й in Russian) with 2 points, the 2 means.
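For completeness, here is a minimal sketch of how this kind of figure can be reproduced (the two means below are arbitrary placeholders, not the ones of the original graph; red is used for the region of the upper mean and blue for the lower one): each point of a grid is coloured according to its nearest mean, once with the Euclidean distance and once with the Manhattan distance,

# two arbitrary means, the upper one and the lower one
m1 = c(0, 1)
m2 = c(0.5, -1)
vx = seq(-3, 3, length = 201)
vy = seq(-3, 3, length = 201)
# 1 if (x,y) is closer to m1, 2 if closer to m2, for the l_p distance
closest = function(x, y, p){
  d1 = (abs(x - m1[1])^p + abs(y - m1[2])^p)^(1/p)
  d2 = (abs(x - m2[1])^p + abs(y - m2[2])^p)^(1/p)
  ifelse(d1 < d2, 1, 2)
}
par(mfrow = c(1, 2))
# Euclidean (l2) norm
image(vx, vy, outer(vx, vy, closest, p = 2),
  col = c(rgb(1,0,0,.2), rgb(0,0,1,.2)), main = "l2")
points(rbind(m1, m2), pch = 19)
# Manhattan (l1) norm
image(vx, vy, outer(vx, vy, closest, p = 1),
  col = c(rgb(1,0,0,.2), rgb(0,0,1,.2)), main = "l1")
points(rbind(m1, m2), pch = 19)

With the Euclidean norm the boundary between the two regions is a straight line (the perpendicular bisector of the segment joining the two means), while with the Manhattan norm it is only piecewise linear, which is why the two partitions differ.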

Continue reading k-means clustering and Voronoi sets