Tutorial on Collaborative Filtering and Matrix Factorization in Python

April 25, 2016

UPDATE: I now have a massive course all about Recommender Systems which teaches this technique and more in-depth: https://deeplearningcourses.com/c/recommender-systems

This article will be of interest to you if you want to learn about recommender systems and predicting movie ratings (or book ratings, or product ratings, or any other kind of rating).

Contests like the $1 million Netflix Challenge are an example of what collaborative filtering can be used for.


Problem Setup

Let’s use the “users rating movies” example for this tutorial. After some Internet searching, we can determine that there are approximately 500,000 movies in existence. Let’s also suppose that your very popular movie website has 1 billion users (Facebook has 1.6 billion users as of 2015, so this number is plausible).

How many possible user-movie ratings can you have? That is \( 10^9 \times 5 \times 10^5 = 5 \times 10^{14} \). That’s a lot of ratings! Far too many to fit into your RAM, in fact.

But that’s just one problem.

How many movies have you seen in your life? Of those movies, what percentage of them have you rated? The number is minuscule. In fact, most users have not rated most movies.

This is why recommender systems exist in the first place – so we can recommend you movies that you haven’t seen yet, that we know you’ll like.

So if you were to create a user-movie matrix of movie ratings, most of it would just have missing values.

However, that’s not to say there isn’t a pattern to be found.

Suppose we look at a subset of movie ratings, and we find the following:

        Batman   Batman Returns   Batman Begins   The Dark Knight   Batman v. Superman
Guy A   N/A      4                5               5                 2
Guy B   4        N/A              5               5                 1

 

Where we’ve used N/A to show that a movie has not yet been rated by a user.

If we used the “cosine similarity” ( \( \frac{u^T v}{ |u||v| } \) ) on the vectors created by looking at only the common movies, we would see that Guy A and Guy B have similar tastes. We could then surmise, based on this closeness, that Guy A might rate Batman a “4”, and Guy B might rate Batman Returns a “4”. And since this is a pretty high rating, we might want to recommend these movies to these users.
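For concreteness, here is a quick NumPy sketch of that similarity calculation, using only the three movies both guys have rated (Batman Begins, The Dark Knight, and Batman v. Superman):

import numpy as np

# ratings on the movies both users have in common (from the table above)
guy_a = np.array([5, 5, 2])
guy_b = np.array([5, 5, 1])

similarity = guy_a.dot(guy_b) / (np.linalg.norm(guy_a) * np.linalg.norm(guy_b))
print(similarity)  # about 0.99, i.e. very similar tastes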

This is the idea behind collaborative filtering.

 

Enter Matrix Factorization

Matrix factorization solves the above problems by reducing the number of free parameters (so the total number of parameters is much smaller than #users times #movies), and by fitting these parameters to the data (ratings) that do exist.

What is matrix factorization?

Think of factorization in general:

15 = 3 x 5 (15 is made up of the factors 3 and 5)

\( x^2 + x = x(x + 1) \)

We can do the same thing with matrices:

$$\left( \begin{matrix}3 & 4 & 5 \\ 6 & 8 & 10 \end{matrix} \right) = \left( \begin{matrix}1 \\ 2 \end{matrix} \right) \left( \begin{matrix}3 & 4 & 5 \end{matrix} \right) $$
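You can verify this little factorization with a couple of lines of NumPy:

import numpy as np

u = np.array([[1], [2]])     # 2 x 1 column
v = np.array([[3, 4, 5]])    # 1 x 3 row
print(u.dot(v))
# [[ 3  4  5]
#  [ 6  8 10]]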

In fact, this is exactly what we do in matrix factorization. We “pretend” the big ratings matrix (the one that can’t fit into our RAM) is actually made up of 2 smaller matrices multiplied together.

Remember that to do a valid matrix multiply, the inner dimensions must match. What is the size of this inner dimension? We call it “K”. It is a free parameter (a hyperparameter) that we can choose, for example via cross-validation, so that our model generalizes well.

If we have \( M \) users and \( N \) movies, then the total number of parameters in our model is \( MK + NK \). If we set \( K = 10 \), the total number of parameters we’d have for the user-movie problem would be \( 10^{10} + 5 \times 10^6 \), which is still approximately \( 10^{10} \), which is a factor of about \( 10^4 \) smaller than before.

This is a big improvement!

So now we have:

$$ A \simeq \hat{ A } = UV $$

If you were to picture the matrices themselves, they would look like this:

[Image: matrix factorization diagram]

Because I am lazy and took this image from elsewhere on the Internet, the “d” here is what I am calling “K”. And their “R” is my “A”.

You know that with any machine learning algorithm we have 2 procedures – the fitting procedure and the prediction procedure.

For the fitting procedure, we want every known \( A_{ij} \) to be as close to \( \hat{A}_{ij} = u_i^Tv_j \) as possible. \( u_i \) is the ith row of \( U \). \( v_j \) is the jth column of \( V \).

For the prediction procedure, we won’t have an \( A_{ij} \), but we can use \( \hat{A}_{ij} = u_i^Tv_j \) to tell us what user i might rate movie j given the existing patterns.
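In code, a predicted rating is just a dot product. A minimal sketch, assuming U is an M x K array and V is K x N (the same layout used in the implementation further down):

def predict(i, j, U, V):
  # predicted rating of movie j by user i: u_i^T v_j
  return U[i,:].dot(V[:,j])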

 

The Cost Function

A natural cost function for this problem is the squared error. Think of it as a regression. This is just:

$$ J = \sum_{(i, j) \in \Omega} (A_{ij} - \hat{A}_{ij})^2 $$

Where \( \Omega \) is the set of all pairs \( (i, j) \) where user i has rated movie j.

Later, we will use \( \Omega_i \) to be the set of all j’s (movies) that user i has rated, and we will use \( \Omega_j \) to be the set of all i’s (users) that have rated movie j.
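In code, if the known ratings are stored as a list of (i, j, r) triples, one per pair in \( \Omega \) (this layout is just an assumption for the sketch), the cost is:

def cost(ratings, U, V):
  # ratings: list of (user i, movie j, rating r) for every (i, j) in Omega
  return sum((r - U[i,:].dot(V[:,j]))**2 for i, j, r in ratings)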

 

Coordinate Descent

What do you do when you want to minimize a function? Take the derivative and set it to 0, of course. No need to use anything more complicated if the simple approach is solvable and performs well. It is also possible to use gradient descent on this problem, by taking the derivative and then taking small steps in the direction of the negative gradient.

You will notice that there are 2 derivatives to take here. The first is \( \partial{J} / \partial{u} \).
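To fill in the intermediate step for a single \( u_i \): only the ratings in \( \Omega_i \) involve \( u_i \), so

$$ \frac{\partial J}{\partial u_i} = -2 \sum_{j \in \Omega_i} (A_{ij} - u_i^T v_j) v_j = 0 \quad \Rightarrow \quad \left( \sum_{j \in \Omega_i} v_j v_j^T \right) u_i = \sum_{j \in \Omega_i} A_{ij} v_j $$

which is a standard linear system in \( u_i \); the same steps work for \( v_j \) by symmetry.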

The other is \( \partial{J} / \partial{v} \). After calculating the derivatives and solving for \( u \) and \( v \), you get:

$$ u_i = ( \sum_{j \in \Omega_i} v_j v_j^T )^{-1} \sum_{j \in \Omega_i} A_{ij} v_j $$

$$ v_j = ( \sum_{i \in \Omega_j} u_i u_i^T )^{-1} \sum_{i \in \Omega_j} A_{ij} u_i $$

So you take both derivatives. You set both to 0. You solve for the optimal u and v. Now what?

The answer is: coordinate descent.

You first update \( u \) using the current setting of \( v \), then you update \( v \) using the current setting of \( u \). The order doesn’t matter, just that you alternate between the two.

There is a mathematical guarantee that J will not increase on any iteration, since each update exactly minimizes J with respect to one block of parameters while the other is held fixed.

This technique is also known as alternating least squares. (This makes sense because we’re minimizing the squared error and updating \( u \) and \( v \) in an alternating fashion.)

 

Bias Parameters

As with other methods like linear regression and logistic regression, we can add bias parameters to our model to improve accuracy. In this case our model becomes:

$$ \hat{A}_{ij} = u_i^T v_j + b_i + c_j + \mu $$

Where \( \mu \) is the global mean (average of all known ratings).

You can interpret \( b_i \) as the bias of a user. A negative bias means this user just hates movies more than the average person. A positive bias would mean the opposite. Similarly, \( c_j \) is the bias of a movie. A positive bias would mean, “Wow, this movie is good, regardless of who is watching it!” A negative bias would be a movie like Avatar: The Last Airbender.
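In code, the prediction just picks up three extra terms. A sketch, using the same array names as the implementation below (B for user biases, C for movie biases, mu for the global mean):

def predict(i, j, U, V, B, C, mu):
  # u_i^T v_j + b_i + c_j + global mean
  return U[i,:].dot(V[:,j]) + B[i] + C[j] + mu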

We can re-calculate the optimal settings for each parameter (again by taking the derivatives and setting them to 0) to get:

$$ u_i = ( \sum_{j \in \Omega_i} v_j v_j^T )^{-1} \sum_{j \in \Omega_i} (A_{ij} - b_i - c_j - \mu ) v_j $$

$$ v_j = ( \sum_{i \in \Omega_j} u_i u_i^T )^{-1} \sum_{i \in \Omega_j} (A_{ij} - b_i - c_j - \mu ) u_i $$

$$ b_i = \frac{1}{| \Omega_i |} \sum_{j \in \Omega_i} ( A_{ij} - u_i^T v_j - c_j - \mu ) $$

$$ c_j = \frac{1}{| \Omega_j |} \sum_{i \in \Omega_j} ( A_{ij} - u_i^T v_j - b_i - \mu ) $$

 

Regularization

With the above model, you may encounter what is called the “singular covariance” problem. This is what happens when you can’t invert the matrix that appears in the updates for \( u \) and \( v \).

The solution is, again, similar to what you would do in linear regression or logistic regression: add a squared-magnitude penalty term, with a weight \( \lambda \), that keeps the parameters small.

In terms of the likelihood, the previous formulation assumes that the difference between \( A_{ij} \) and \( \hat{A}_{ij} \) is normally distributed, while the cost function with regularization is like adding a normally-distributed prior on each parameter centered at 0.

i.e. \( u_i, v_j, b_i, c_j \sim N(0, 1/\lambda) \).

So the cost function becomes:

$$ J = \sum_{(i, j) \in \Omega} (A_{ij} - \hat{A}_{ij})^2 + \lambda( ||U||_F^2 + ||V||_F^2 + ||b||^2 + ||c||^2 ) $$

Where \( ||X||_F \) is the Frobenius norm of \( X \).
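In code, the regularized cost is the earlier squared error plus the penalty term (again assuming the ratings are a list of (i, j, r) triples, and "reg" plays the role of \( \lambda \)):

def regularized_cost(ratings, U, V, B, C, mu, reg):
  # squared error over the known ratings only
  sse = sum((r - U[i,:].dot(V[:,j]) - B[i] - C[j] - mu)**2 for i, j, r in ratings)
  # squared Frobenius / L2 norms of the parameters, weighted by lambda ("reg")
  penalty = reg * ((U*U).sum() + (V*V).sum() + (B*B).sum() + (C*C).sum())
  return sse + penalty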

For each parameter, taking the derivative with respect to that parameter, setting it to 0, and solving for the optimal value yields:

$$ u_i = ( \sum_{j \in \Omega_i} v_j v_j^T + \lambda I )^{-1} \sum_{j \in \Omega_i} (A_{ij} - b_i - c_j - \mu ) v_j $$

$$ v_j = ( \sum_{i \in \Omega_j} u_i u_i^T + \lambda I )^{-1} \sum_{i \in \Omega_j} (A_{ij} - b_i - c_j - \mu ) u_i $$

$$ b_i = \frac{1}{| \Omega_i | + \lambda} \sum_{j \in \Omega_i} ( A_{ij} - u_i^T v_j - c_j - \mu ) $$

$$ c_j = \frac{1}{| \Omega_j | + \lambda} \sum_{i \in \Omega_j} ( A_{ij} - u_i^T v_j - b_i - \mu ) $$

 

Python Code

The simplest way to implement the above formulas would be to just code them directly.

Initialize your parameters as follows:

import numpy as np

# M = number of users, N = number of movies, K = latent dimension
U = np.random.randn(M, K) / K   # user factors
V = np.random.randn(K, N) / K   # movie factors
B = np.zeros(M)                 # user biases
C = np.zeros(N)                 # movie biases

Next, you want \( \Omega_i \) and \( \Omega_j \) to be easily accessible, so create dictionaries “ratings_by_i” where “i” is the key, and the value is an array of all the (j, r) pairs that user i has rated (r is the rating). Do the same for “ratings_by_j”.
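For example, if the ratings arrive as a list of (i, j, r) triples (a hypothetical input format; substitute whatever your data loader produces), you could build these lookups and the global mean like so:

from collections import defaultdict

ratings_by_i = defaultdict(list)  # user i -> list of (movie j, rating r)
ratings_by_j = defaultdict(list)  # movie j -> list of (user i, rating r)
for i, j, r in ratings:
  ratings_by_i[i].append((j, r))
  ratings_by_j[j].append((i, r))

mu = sum(r for _, _, r in ratings) / float(len(ratings))  # global mean rating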

Then, your updates would be as follows:

for t in range(T):  # T = number of full update sweeps; reg = the regularization strength lambda

  # update B (user biases)
  for i in range(M):
    if i in ratings_by_i:
      accum = 0
      for j, r in ratings_by_i[i]:
        accum += (r - U[i,:].dot(V[:,j]) - C[j] - mu)
      B[i] = accum / (len(ratings_by_i[i]) + reg)

  # update U (user factors), one row at a time
  for i in range(M):
    if i in ratings_by_i:
      matrix = np.zeros((K, K)) + reg*np.eye(K)
      vector = np.zeros(K)
      for j, r in ratings_by_i[i]:
        matrix += np.outer(V[:,j], V[:,j])
        vector += (r - B[i] - C[j] - mu)*V[:,j]
      U[i,:] = np.linalg.solve(matrix, vector)

  # update C (movie biases)
  for j in range(N):
    if j in ratings_by_j:
      accum = 0
      for i, r in ratings_by_j[j]:
        accum += (r - U[i,:].dot(V[:,j]) - B[i] - mu)
      C[j] = accum / (len(ratings_by_j[j]) + reg)

  # update V (movie factors), one column at a time
  for j in range(N):
    if j in ratings_by_j:
      matrix = np.zeros((K, K)) + reg*np.eye(K)
      vector = np.zeros(K)
      for i, r in ratings_by_j[j]:
        matrix += np.outer(U[i,:], U[i,:])
        vector += (r - B[i] - C[j] - mu)*U[i,:]
      V[:,j] = np.linalg.solve(matrix, vector)

And that’s all there is to it!
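One easy sanity check (a sketch, reusing the names above) is to compute the training error after each sweep; the regularized cost J is guaranteed not to increase, and the raw MSE should also trend downward:

def train_mse(ratings, U, V, B, C, mu):
  # mean squared error over the known ratings only
  sse = sum((r - U[i,:].dot(V[:,j]) - B[i] - C[j] - mu)**2 for i, j, r in ratings)
  return sse / len(ratings)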

For more free machine learning and data science tutorials, sign up for my newsletter.




New machine learning course! Cluster Analysis and Unsupervised Machine Learning in Python

April 20, 2016


[Scroll to the bottom if you want to jump straight to the coupon]

Cluster analysis is a staple of unsupervised machine learning and data science.

It is very useful for data mining and big data because it automatically finds patterns in the data, without the need for labels, unlike supervised machine learning.

In a real-world environment, you can imagine that a robot or an artificial intelligence won’t always have access to the optimal answer, or maybe there isn’t an optimal correct answer. You’d want that robot to be able to explore the world on its own, and learn things just by looking for patterns.

Do you ever wonder how we get the data that we use in our supervised machine learning algorithms?

We always seem to have a nice CSV or a table, complete with Xs and corresponding Ys.

If you haven’t been involved in acquiring data yourself, you might not have thought about this, but someone has to make this data!

Those “Y”s have to come from somewhere, and a lot of the time that involves manual labor.

Sometimes, you don’t have access to this kind of information or it is infeasible or costly to acquire.

But you still want to have some idea of the structure of the data. If you’re doing data analytics, automating pattern recognition in your data would be invaluable.

This is where unsupervised machine learning comes into play.

In this course we are first going to talk about clustering. This is where instead of training on labels, we try to create our own labels! We’ll do this by grouping together data that looks alike.

There are 2 methods of clustering we’ll talk about: k-means clustering and hierarchical clustering.

Next, because in machine learning we like to talk about probability distributions, we’ll go into Gaussian mixture models and kernel density estimation, where we talk about how to “learn” the probability distribution of a set of data.

One interesting fact is that under certain conditions, Gaussian mixture models and k-means clustering are exactly the same! You can think of GMMs as a “souped up” version of k-means. We’ll show why this is the case.

All the algorithms we’ll talk about in this course are staples in machine learning and data science, so if you want to know how to automatically find patterns in your data with data mining and pattern extraction, without needing someone to put in manual work to label that data, then this course is for you.

All the materials for this course are FREE. You can download and install Python, Numpy, and Scipy with simple commands on Windows, Linux, or Mac.

50% OFF COUPON: https://www.udemy.com/cluster-analysis-unsupervised-machine-learning-python/?couponCode=EARLYBIRD

#agglomerative clustering #cluster analysis #data mining #data science #expectation-maximization #gaussian mixture model #hierarchical clustering #k-means clustering #kernel density estimation #pattern recognition #udemy #unsupervised machine learning



New Deep Learning Course! Convolutional Neural Networks

April 2, 2016

I was aiming to get this course out before the end of March, and it is now April. So you know I put in some extra work to make it as awesome as possible.


Course summary (scroll down for coupons):

This is the 3rd part in my Data Science and Machine Learning series on Deep Learning in Python. At this point, you already know a lot about neural networks and deep learning, including not just the basics like backpropagation, but how to improve it using modern techniques like momentum and adaptive learning rates. You’ve already written deep neural networks in Theano and TensorFlow, and you know how to run code using the GPU.

This course is all about how to use deep learning for computer vision using convolutional neural networks. These are the state of the art when it comes to image classification and they beat vanilla deep networks at tasks like MNIST.

In this course we are going to up the ante and look at the Street View House Numbers (SVHN) dataset, which uses larger color images at various angles, so things are going to get tougher both computationally and in terms of the difficulty of the classification task. But we will show that convolutional neural networks, or CNNs, are capable of handling the challenge!

Because convolution is such a central part of this type of neural network, we are going to go in-depth on this topic. It has more applications than you might imagine, such as modeling artificial organs like the pancreas and the heart. I’m going to show you how to build convolutional filters that can be applied to audio, like the echo effect, and I’m going to show you how to build filters for image effects, like the Gaussian blur and edge detection.

We will also do some biology and talk about how convolutional neural networks have been inspired by the animal visual cortex.

After describing the architecture of a convolutional neural network, we will jump straight into code, and I will show you how to extend the deep neural networks we built last time (in part 2) with just a few new functions to turn them into CNNs. We will then test their performance and show how convolutional neural networks written in both Theano and TensorFlow can outperform the accuracy of a plain neural network on the StreetView House Number dataset.

All the materials for this course are FREE. You can download and install Python, Numpy, Scipy, Theano, and TensorFlow with simple commands shown in previous courses.

Coupons:

$5!!! https://www.udemy.com/deep-learning-convolutional-neural-networks-theano-tensorflow/?couponCode=SUPERAWESOME

If other people beat you to that one:

$15!!! https://www.udemy.com/deep-learning-convolutional-neural-networks-theano-tensorflow/?couponCode=SUPERCOOL

And if you were ultra slow:

$25!!! https://www.udemy.com/deep-learning-convolutional-neural-networks-theano-tensorflow/?couponCode=SUPERGREAT


