K-Means Clustering

K-means clustering is one of the simplest clustering algorithms one can use to find natural groupings of an unlabeled data set.

Another way of stating this is that k-means clustering is an unsupervised learning algorithm, i.e. “learning the structure of X without being given Y”.

In words:

K-means clustering finds “k” different means (surprise surprise) which represent the centers of k clusters and assigns each data point to one of these clusters.

The cluster it is assigned to is the one where the distance (usually Euclidean) from the point to the mean is smallest.

Using a little math:

Given “k” means – m(1), m(2), m(3), …, m(k) – each data point “x” is assigned the cluster label c* = argmin || x – m(c) || over c = 1..k. A tiny numpy version of this rule is sketched below.
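
For instance, here is a minimal sketch of that assignment rule for one data point; the point and the three candidate means are made-up values, purely for illustration:

```python
import numpy as np

# Hypothetical example: one 2-D data point and k = 3 candidate means.
x = np.array([1.0, 2.0])
means = np.array([[0.0, 0.0],
                  [3.0, 3.0],
                  [1.5, 1.5]])

# Euclidean distance from x to each mean, then pick the smallest one.
distances = np.linalg.norm(means - x, axis=1)
c_star = np.argmin(distances)   # index of the closest mean (0-based)
print(c_star, distances)
```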

The algorithm:

The k-means algorithm is extremely simple, just 2 steps, repeated until the cluster assignments stop changing:

1. Assign each data point to one of the k means. (Each data point gets assigned to the closest mean)

2. Recalculate the means from every point assigned to its cluster. (The new mean of center “c” is the average of all the points currently assigned to center “c”)

The k-means are initialized by choosing k random points from the input data set.
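
Putting the two steps together, here is a minimal numpy sketch of the whole loop. The function name, the early-stopping check, and the empty-cluster guard are my own additions for illustration, not taken from the linked code:

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Plain k-means on an (n, d) array X; returns (means, labels)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialize the k means by picking k distinct points from the data set.
    means = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iters):
        # Step 1: assign each point to the closest mean (Euclidean distance).
        distances = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Step 2: recompute each mean as the average of its assigned points.
        new_means = means.copy()
        for c in range(k):
            members = X[labels == c]
            if len(members) > 0:  # keep the old mean if a cluster goes empty
                new_means[c] = members.mean(axis=0)
        # Stop once the means no longer move.
        if np.allclose(new_means, means):
            break
        means = new_means
    return means, labels

# Example usage: means, labels = kmeans(X, k=5)
```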

Back and forth concept:

At the 10,000-foot level, there is sort of an “opposite” relationship between the two steps. In one, we’re using the data in a cluster to calculate a new mean for that cluster. In the other, we’re using the mean of the cluster to calculate where the data should go.

In fact, many machine learning algorithms involve an idea like this – one part of the algorithm involves going in the “forward” direction, and the other part of the algorithm involves going in the “backward” direction.

For neural networks, we have “backpropagation”.

For Hidden Markov Models, we have the forward-backward algorithm.

Something to think about as you journey through the world of machine learning.

The code:

Can be found here: https://github.com/lazyprogrammer/machine_learning_examples/blob/master/kmeans_clustering.py

In pictures:

I generated 5 true (target) centers: (0, 0) and the four points (±3, ±3).

From these I generated random points by adding spherical Gaussian noise to the centers (50 points for each center).

I then ran k-means clustering to see the centers the algorithm found, and you can confirm that they look pretty similar:

[Figures: the generated data and the centers found by k-means]
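
If you want to reproduce this kind of experiment, here is one way to set it up with numpy and scikit-learn. The random seed and the use of sklearn.cluster.KMeans are my choices for this sketch, not necessarily what the linked code does:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Five true centers: the origin plus the four (+/-3, +/-3) corners.
true_centers = np.array([[0, 0], [3, 3], [3, -3], [-3, 3], [-3, -3]], dtype=float)

# 50 points per center with spherical (identity-covariance) Gaussian noise.
X = np.vstack([c + rng.standard_normal((50, 2)) for c in true_centers])

# n_init restarts k-means several times and keeps the best result.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)  # should land near the five true centers
```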

Pitfalls:

Of course, the example I provide is doctored and in real life your clusters won’t look as pretty.

Some drawbacks of k-means:

  • You must choose k.
  • The algorithm is dependent on the initial choice of centers. You can end up with really bad clusters.
  • The clusters found are typically only locally optimal; the algorithm can converge to a poor solution depending on where it started.

Modifications:

The above problems can be mitigated somewhat via some modifications:

  • Run it multiple times from different random initializations and keep the best result (see the sketch after this list).
  • Fuzzy clustering: make the cluster assignment a size-k vector, i.e. for k = 3 we might have (0.2, 0.5, 0.3) for some data point, which can be interpreted as “the data point is 50% part of cluster 2”.
  • Use a criterion function to judge how good the clustering is: the variance within each cluster should be small compared to the variance between clusters.
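
To make the first and third points concrete, here is a rough sketch that runs k-means from several random initializations and scores each run by the ratio of between-cluster to within-cluster variance. The toy data, the number of restarts, and the exact scoring formula are illustrative assumptions, not a prescription:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Toy data: three blobs (a hypothetical stand-in for your real data set).
X = np.vstack([c + rng.standard_normal((50, 2))
               for c in ([0, 0], [5, 5], [0, 5])])
overall_mean = X.mean(axis=0)

best = None
for seed in range(10):
    # A single k-means run from one random initialization.
    km = KMeans(n_clusters=3, n_init=1, init='random', random_state=seed).fit(X)
    labels, centers = km.labels_, km.cluster_centers_

    # Within-cluster variance: spread of each cluster around its own mean.
    within = sum(((X[labels == c] - centers[c]) ** 2).sum() for c in range(3))
    # Between-cluster variance: spread of the cluster means around the overall mean.
    between = sum(len(X[labels == c]) * ((centers[c] - overall_mean) ** 2).sum()
                  for c in range(3))

    score = between / within  # bigger is better
    if best is None or score > best[0]:
        best = (score, centers)

print(best[0])   # criterion value for the best of the 10 runs
print(best[1])   # its cluster centers
```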