In the context of k-means clustering, we want to partition the space of our observations into k classes: each observation belongs to the cluster with the nearest mean. Here “nearest” is in the sense of some norm, usually the ℓ2 (Euclidean) norm.
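In other words, writing μi for the i-th mean, the region attached to μi is the set of points at least as close to it as to any other mean,

$$C_i = \left\{ x \,:\, \| x - \mu_i \| \le \| x - \mu_j \| \text{ for all } j \right\}$$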
Consider the case where we have 2 classes, the means being the 2 black dots below. If we partition based on the nearest mean, with the ℓ2 (Euclidean) norm we get the graph on the left, and with the ℓ1 (Manhattan) norm, the one on the right.
Points in the red region are closer to the mean in the upper part, while points in the blue region are closer to the mean in the lower part. Here, we will always use the standard ℓ2 (Euclidean) norm. Note that the graph above is related to Voronoi diagrams (or Voronoy, from Вороний in Ukrainian, or Вороно́й in Russian) with 2 points, the 2 means.
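To reproduce graphs of that kind, one can simply classify a fine grid of points by its nearest mean under each norm; a sketch only, since the two means below are arbitrary (the original coordinates are not given),

mu <- rbind(c(.4, .7), c(.6, .3))  # arbitrary means, standing for the 2 black dots
grid <- expand.grid(x = seq(0, 1, length = 201), y = seq(0, 1, length = 201))
d2 <- function(g, m) sqrt((g$x - m[1])^2 + (g$y - m[2])^2)  # ℓ2 (Euclidean)
d1 <- function(g, m) abs(g$x - m[1]) + abs(g$y - m[2])      # ℓ1 (Manhattan)
par(mfrow = c(1, 2))
plot(grid, pch = 15, cex = .3, xlab = "", ylab = "",
     col = ifelse(d2(grid, mu[1, ]) < d2(grid, mu[2, ]), "red", "blue"))
points(mu, pch = 19)
plot(grid, pch = 15, cex = .3, xlab = "", ylab = "",
     col = ifelse(d1(grid, mu[1, ]) < d1(grid, mu[2, ]), "red", "blue"))
points(mu, pch = 19)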
In order to illustrate the k-means clustering algorithm (here Lloyd’s algorithm), consider the following dataset
Here, we have 5 groups, so let us run a 5-means algorithm.
- we randomly draw 5 points in the space (initial values for the means),
- in the assignment step, we assign each point to the nearest mean,
- in the update step, we compute the new centroids of the clusters (a rough sketch of these two steps is given below).
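To make the two steps concrete, here is a minimal, non-optimized implementation of Lloyd’s iterations; it is only a sketch, assuming pts is a two-column matrix, drawing the initial means from the observations, and ignoring the empty-cluster corner case.

lloyd <- function(pts, k, iter = 10) {
  # initialization: k points drawn at random among the observations
  mu <- pts[sample(nrow(pts), k), ]
  for (i in 1:iter) {
    # assignment step: index of the nearest mean (squared Euclidean distance)
    cl <- apply(pts, 1, function(p) which.min(colSums((t(mu) - p)^2)))
    # update step: the means become the centroids of the current clusters
    mu <- apply(pts, 2, function(z) tapply(z, cl, mean))
  }
  list(cluster = cl, centers = mu)
}

Calling lloyd(pts, 5) should then mimic the kmeans call below.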
To visualize the iterations, see the graphs below.
The code to get the clusters is
kmeans(pts, centers=5, nstart = 1, algorithm = "Lloyd")
Observe that the assignment step is based on computations of Voronoi sets. In R, this can be done using the voronoi.mosaic function from the tripack package (see the code below).
This is what we can visualize below.
The code to visualize the means and the clusters (or regions) is
km1 <- kmeans(pts, centers = 5, nstart = 1, algorithm = "Lloyd")
library(tripack)
library(RColorBrewer)
CL5 <- brewer.pal(5, "Pastel1")
# Voronoi tessellation generated by the 5 estimated means
V <- voronoi.mosaic(km1$centers[, 1], km1$centers[, 2])
P <- voronoi.polygons(V)
# observations colored by cluster, means as crosses, Voronoi edges on top
plot(pts, pch = 19, xlim = 0:1, ylim = 0:1, xlab = "", ylab = "", col = CL5[km1$cluster])
points(km1$centers[, 1], km1$centers[, 2], pch = 3, cex = 1.5, lwd = 2)
plot(V, add = TRUE)
Here, starting points are drawn randomly. If we run it again, we might get
or
On that dataset, it is difficult to get clusters that match the five groups we can actually see. If we use several random starting points instead of a single one,
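for instance keeping the best of 20 random starts (the value 20 is an arbitrary choice for illustration; the assumption is only that nstart is set well above 1),

km1 <- kmeans(pts, centers = 5, nstart = 20, algorithm = "Lloyd")  # best of 20 random initializations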
we usually get something better
Colors are obtained from the clusters returned by the k-means function, while the additional lines are obtained as outputs of the Voronoi diagram functions.
I play around with this kind of stuff in connection with the Vehicle Routing Problem (it’s on Wikipedia). I use (capacitated) k-means to initialise routes. I have a procedure to choose the initial seeds (a sketch of it follows the list):
– First two seeds are the nodes that are furthest apart
– Next seed is the one that maximises the sum of the distances to the first 2 (i.e. furthest from the ones we already have)
– Next seed maximises sum of distances to first 3
– etc.
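In case it helps, here is one way to code that seeding rule; a sketch only, my reading of the procedure above, assuming pts is a two-column matrix of node coordinates.

farthest_seeds <- function(pts, k) {
  D <- as.matrix(dist(pts))  # pairwise Euclidean distances between nodes
  # first two seeds: the pair of nodes that are furthest apart
  seeds <- as.vector(which(D == max(D), arr.ind = TRUE)[1, ])
  while (length(seeds) < k) {
    cand <- setdiff(seq_len(nrow(pts)), seeds)  # nodes not yet selected
    # next seed: the candidate maximizing the sum of distances to current seeds
    seeds <- c(seeds, cand[which.max(rowSums(D[cand, seeds, drop = FALSE]))])
  }
  pts[seeds, ]  # coordinates of the k seeds
}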
In a quick test, this procedure at least gave me a seed in each cluster – but still, the k-means clusters crossed the obvious boundaries.
Interestingly, solving an uncapacitated VRP with the depot at (0.5, -1) gave me the clusters you are after 🙂 Ahh, the interchangeability of NP-hard problems 🙂
Hello, I am interested in this subject. I am a student and I investigate clustering methods, specifically the k-means algorithm. I find it very interesting that you never get the 5 clusters right. Could I maybe chat with you about this someday?
How did you calculate the means for the clusters to provide as input for plotting the Voronoi cells?