Lazy Programmer

Your source for the latest in deep learning, big data, data science, and artificial intelligence.

New machine learning course! Cluster Analysis and Unsupervised Machine Learning in Python

April 20, 2016


[Scroll to the bottom if you want to jump straight to the coupon]

Cluster analysis is a staple of unsupervised machine learning and data science.

It is very useful for data mining and big data because, unlike supervised machine learning, it automatically finds patterns in the data without the need for labels.

In a real-world environment, you can imagine that a robot or an artificial intelligence won’t always have access to the optimal answer, or maybe there isn’t an optimal correct answer. You’d want that robot to be able to explore the world on its own, and learn things just by looking for patterns.

Do you ever wonder how we get the data that we use in our supervised machine learning algorithms?

We always seem to have a nice CSV or a table, complete with Xs and corresponding Ys.

If you haven’t been involved in acquiring data yourself, you might not have thought about this, but someone has to make this data!

Those “Y”s have to come from somewhere, and a lot of the time that involves manual labor.

Sometimes, you don’t have access to this kind of information or it is infeasible or costly to acquire.

But you still want to have some idea of the structure of the data. If you’re doing data analytics, automating pattern recognition in your data would be invaluable.

This is where unsupervised machine learning comes into play.

In this course we are first going to talk about clustering. This is where instead of training on labels, we try to create our own labels! We’ll do this by grouping together data that looks alike.

There are 2 methods of clustering we’ll talk about: k-means clustering and hierarchical clustering.

Next, because in machine learning we like to talk about probability distributions, we’ll go into Gaussian mixture models and kernel density estimation, where we talk about how to “learn” the probability distribution of a set of data.

One interesting fact is that under certain conditions, Gaussian mixture models and k-means clustering are exactly the same! You can think of GMMs as a “souped up” version of k-means. We’ll show exactly how this is the case.
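To give a hedged preview of that argument (the notation here is mine, not necessarily the course’s): suppose every Gaussian component shares the same spherical covariance σ²I. The EM “responsibility” of cluster c for a point x is then

r_c(x) = π_c exp( −||x − μ_c||² / 2σ² ) / Σ_c′ π_c′ exp( −||x − μ_c′||² / 2σ² )

and as σ → 0, the component with the smallest ||x − μ_c|| dominates the sum, so the soft assignment collapses into the hard nearest-mean assignment that k-means uses.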

All the algorithms we’ll talk about in this course are staples in machine learning and data science, so if you want to know how to automatically find patterns in your data with data mining and pattern extraction, without needing someone to put in manual work to label that data, then this course is for you.

All the materials for this course are FREE. You can download and install Python, NumPy, and SciPy with simple commands on Windows, Linux, or Mac.

50% OFF COUPON: https://www.udemy.com/cluster-analysis-unsupervised-machine-learning-python/?couponCode=EARLYBIRD

#agglomerative clustering #cluster analysis #data mining #data science #expectation-maximization #gaussian mixture model #hierarchical clustering #k-means clustering #kernel density estimation #pattern recognition #udemy #unsupervised machine learning



K-Means Clustering

December 31, 2014

K-means clustering is one of the simplest clustering algorithms one can use to find natural groupings of an unlabeled data set.

Another way of stating this is that k-means clustering is an unsupervised learning algorithm, a.k.a. “learning the structure of X without being given Y”.

In words:

K-means clustering finds “k” different means (surprise surprise) which represent the centers of k clusters and assigns each data point to one of these clusters.

The cluster it is assigned to is the one where the distance (usually Euclidean) from the point to the mean is smallest.

Using a little math:

Given “k” means – m(1), m(2), m(3), …, m(k) – for each data point “x” we assign the cluster label c* = arg min || x – m(c) || over c = 1..k.
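As a quick illustration, here is a minimal sketch of that assignment rule in NumPy (the point and means are made-up examples, not from this post):

```python
import numpy as np

# Made-up example: one data point x and k = 3 candidate means.
x = np.array([2.0, 2.0])
means = np.array([[0.0, 0.0],
                  [3.0, 3.0],
                  [-3.0, 3.0]])

# c* = arg min_c ||x - m(c)||, using Euclidean distance
distances = np.linalg.norm(x - means, axis=1)
c_star = np.argmin(distances)
print(c_star)  # 1, since (3, 3) is the closest mean to (2, 2)
```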

The algorithm:

The k-means algorithm is extremely simple, just 2 steps:

1. Assign each data point to one of the k means. (Each data point gets assigned to the closest mean.)

2. Recalculate the means from every point assigned to its cluster. (The new mean of center “c” is the average of all the points currently assigned to center “c”.)

The k-means are initialized by choosing k random points from the input data set, and the two steps are repeated until the cluster assignments no longer change.
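Here is a minimal NumPy sketch of those two steps (my own illustrative re-implementation, not the code linked below; it ignores edge cases like a cluster going empty):

```python
import numpy as np

def kmeans(X, k, max_iter=100):
    # Initialize the k means by choosing k random points from the data set.
    idx = np.random.choice(len(X), k, replace=False)
    means = X[idx].copy()

    for _ in range(max_iter):
        # Step 1: assign each point to the closest mean (Euclidean distance).
        distances = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        labels = np.argmin(distances, axis=1)

        # Step 2: recalculate each mean as the average of its assigned points.
        new_means = np.array([X[labels == c].mean(axis=0) for c in range(k)])

        # Stop when the means no longer move, i.e. the assignments converged.
        if np.allclose(new_means, means):
            break
        means = new_means

    return means, labels
```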

Back and forth concept:

At the 10,000-foot level, there is sort of an “opposite” relationship between the two steps. In one, we’re using the data in a cluster to calculate a new mean for that cluster. In the other, we’re using the mean of the cluster to calculate where the data should go.

In fact, many machine learning algorithms involve an idea like this – one part of the algorithm involves going in the “forward” direction, and the other part of the algorithm involves going in the “backward” direction.

For neural networks, we have “backpropagation”.

For Hidden Markov Models, we have the forward-backward algorithm.

Something to think about as you journey through the world of machine learning.

The code:

Can be found here: https://github.com/lazyprogrammer/machine_learning_examples/blob/master/kmeans_clustering.py

In pictures:

I generated 5 true (target) centers: one at (0, 0) and four at (±3, ±3).

From these I generated random points by adding spherical Gaussian noise to the centers (50 points for each center).

I then ran k-means clustering to see the centers the algorithm found, and you can confirm that they look pretty similar:

[Images: the generated data with the true centers, and the centers found by k-means]
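If you want to reproduce the experiment, here is a sketch (it reuses the kmeans() function from the sketch above; the exact noise and random seed will differ from my original run):

```python
import numpy as np

# 5 true centers: (0, 0) and (+/-3, +/-3).
true_centers = np.array([[0, 0], [3, 3], [3, -3], [-3, 3], [-3, -3]],
                        dtype=float)

# 50 points per center, with spherical Gaussian noise added.
X = np.vstack([c + np.random.randn(50, 2) for c in true_centers])

found_centers, labels = kmeans(X, k=5)
print(found_centers)  # should land near the 5 true centers
```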

Pitfalls:

Of course, the example I provide is doctored, and in real life your clusters won’t look as pretty.

Some drawbacks of k-means:

  • You must choose k.
  • The algorithm is dependent on the initial choice of centers. You can end up with really bad clusters.
  • The algorithm converges to a local optimum, so the clusters it finds are often suboptimal.

Modifications:

The above problems can be mitigated somewhat via some modifications:

  • Run it multiple times with different random initializations and keep the best result (see the sketch after this list).
  • Fuzzy clustering: Make the cluster assignment a size-k vector, i.e. for k = 3 we might have (0.2, 0.5, 0.3) for some data point, which could be interpreted as the data point is “50% part of cluster 2”.
  • Use a criterion function to judge how good the clustering is: the variance within each cluster should be small compared to the variance between clusters.
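As an example of the first and third points together, here is a hedged sketch (the function names are mine) that runs plain k-means several times and keeps the run with the smallest within-cluster variance:

```python
import numpy as np

def within_cluster_variance(X, means, labels):
    # Total squared distance from each point to its assigned mean --
    # the "within" part of the criterion; smaller is better.
    return sum(((X[labels == c] - m) ** 2).sum()
               for c, m in enumerate(means))

def kmeans_restarts(X, k, n_restarts=10):
    # Run k-means with several random initializations and keep the best.
    best_score, best_result = np.inf, None
    for _ in range(n_restarts):
        means, labels = kmeans(X, k)  # kmeans() from the sketch above
        score = within_cluster_variance(X, means, labels)
        if score < best_score:
            best_score, best_result = score, (means, labels)
    return best_result
```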
#algorithms #clustering #facebook #google #k-means clustering #linkedin #machine learning #programming
