Lazy Programmer

Your source for the latest in deep learning, big data, data science, and artificial intelligence.

Announcing Data Science: Supervised Machine Learning in Python (Less Math, More Action!)

September 16, 2016


If you don’t want to read about the course and just want the 88% OFF coupon code, skip to the bottom.

In recent years, we’ve seen a resurgence in AI, or artificial intelligence, and machine learning.

Machine learning has led to some amazing results, like being able to analyze medical images and predict diseases on-par with human experts.

Google’s AlphaGo program was able to beat a world champion at the strategy game Go using deep reinforcement learning.

Machine learning is even being used to program self-driving cars, which is going to change the automotive industry forever. Imagine a world with drastically reduced car accidents, simply by removing the element of human error.

Google famously announced that they are now “machine learning first”, meaning that machine learning is going to get a lot more attention now, and this is what’s going to drive innovation in the coming years.

Machine learning is used in many industries, like finance, online advertising, medicine, and robotics.

It is a widely applicable tool that will benefit you no matter what industry you’re in, and it will also open up a ton of career opportunities once you get good.

Machine learning also raises some philosophical questions. Are we building a machine that can think? What does it mean to be conscious? Will computers one day take over the world?

The best part about this course is that it requires WAY less math than my usual courses; just some basic probability and geometry, no calculus!

In this course, we are first going to discuss the K-Nearest Neighbor algorithm. It’s extremely simple and intuitive, and it’s a great first classification algorithm to learn. After we discuss the concepts and implement it in code, we’ll look at some ways in which KNN can fail.

It’s important to know both the advantages and disadvantages of each algorithm we look at.

Next we’ll look at the Naive Bayes Classifier and the General Bayes Classifier. This is a very interesting algorithm to look at because it is grounded in probability.

We’ll see how we can transform the Bayes Classifier into a linear and quadratic classifier to speed up our calculations.

Next we’ll look at the famous Decision Tree algorithm. This is the most complex of the algorithms we’ll study, and most courses you’ll look at won’t implement them. We will, since I believe implementation is good practice.

The last algorithm we’ll look at is the Perceptron algorithm. Perceptrons are the ancestor of neural networks and deep learning, so they are important to study in the context of machine learning.

Once we’ve studied these algorithms, we’ll move on to more practical machine learning topics: hyperparameters, cross-validation, feature extraction, feature selection, and multiclass classification.

We’ll do a comparison with deep learning so you understand the pros and cons of each approach.

We’ll discuss the scikit-learn library, because even though implementing your own algorithms is fun and educational, you should use optimized and well-tested code in your actual work.

We’ll cap things off with a very practical, real-world example by writing a web service that runs a machine learning model and makes predictions. This is something that real companies do and make money from.

All the materials for this course are FREE. You can download and install Python, Numpy, and Scipy with simple commands on Windows, Linux, or Mac.

https://www.udemy.com/data-science-supervised-machine-learning-in-python/?couponCode=EARLYBIRDSITE

UPDATE: New coupon if the above is sold out:

https://www.udemy.com/data-science-supervised-machine-learning-in-python/?couponCode=SLOWBIRD_SITE

#data science #machine learning #matplotlib #numpy #pandas #python



Tutorial on Collaborative Filtering and Matrix Factorization in Python

April 25, 2016

This article will be of interest to you if you want to learn about recommender systems and predicting movie ratings (or book ratings, or product ratings, or any other kind of rating).

Contests like the $1 million Netflix Challenge are an example of what collaborative filtering can be used for.


Problem Setup

Let’s use the “users rating movies” example for this tutorial. After some Internet searching, we can determine that there are approximately 500,000 movies in existence. Let’s also suppose that your very popular movie website has 1 billion users (Facebook has 1.6 billion users as of 2015, so this number is plausible).

How many possible user-movie ratings can you have? That is \( 10^9 \times 5 \times 10^5 = 5 \times 10^{14} \). That’s a lot of ratings! Way too much to fit into your RAM, in fact.

But that’s just one problem.

How many movies have you seen in your life? Of those movies, what percentage of them have you rated? The number is minuscule. In fact, most users have not rated most movies.

This is why recommender systems exist in the first place – so we can recommend you movies that you haven’t seen yet, that we know you’ll like.

So if you were to create a user-movie matrix of movie ratings, most of it would just have missing values.

However, that’s not to say there isn’t a pattern to be found.

Suppose we look at a subset of movie ratings, and we find the following:

         Batman   Batman Returns   Batman Begins   The Dark Knight   Batman v. Superman
Guy A    N/A      4                5               5                 2
Guy B    4        N/A              5               5                 1

Where we’ve used N/A to show that a movie has not yet been rated by a user.

If we used the cosine similarity ( \( \frac{u^T v}{ |u||v| } \) ) on the vectors created by looking at only the common movies, we could see that Guy A and Guy B have similar tastes. We could then surmise, based on this closeness, that Guy A might rate Batman a “4”, and Guy B might rate Batman Returns a “4”. And since this is a pretty high rating, we might want to recommend these movies to these users.
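To make that concrete, here is a minimal sketch of the similarity computation, using nothing but the toy numbers from the table above (the dictionaries are just those ratings):

import numpy as np

# Hypothetical ratings from the table above; unrated movies are simply left out.
guy_a = {"Batman Returns": 4, "Batman Begins": 5, "The Dark Knight": 5, "Batman v. Superman": 2}
guy_b = {"Batman": 4, "Batman Begins": 5, "The Dark Knight": 5, "Batman v. Superman": 1}

# Compare the two users only on the movies they have both rated.
common = sorted(set(guy_a) & set(guy_b))
u = np.array([guy_a[m] for m in common], dtype=float)
v = np.array([guy_b[m] for m in common], dtype=float)

# Cosine similarity u^T v / (|u| |v|); a value close to 1 means similar tastes.
cos_sim = u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(common)   # ['Batman Begins', 'Batman v. Superman', 'The Dark Knight']
print(cos_sim)  # roughly 0.99 for these two users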

This is the idea behind collaborative filtering.

 

Enter Matrix Factorization

Matrix factorization solves the above problems by reducing the number of free parameters (so the total number of parameters is much smaller than #users times #movies), and by fitting these parameters to the data (ratings) that do exist.

What is matrix factorization?

Think of factorization in general:

15 = 3 x 5 (15 is made up of the factors 3 and 5)

\( x^2 + x = x(x + 1) \)

We can do the same thing with matrices:

$$\left( \begin{matrix}3 & 4 & 5 \\ 6 & 8 & 10 \end{matrix} \right) = \left( \begin{matrix}1 \\ 2 \end{matrix} \right) \left( \begin{matrix}3 & 4 & 5 \end{matrix} \right) $$

In fact, this is exactly what we do in matrix factorization. We “pretend” the big ratings matrix (the one that can’t fit into our RAM) is actually made up of 2 smaller matrices multiplied together.
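You can verify the small factorization above with numpy (a throwaway sketch):

import numpy as np

# The rank-1 example above: a column vector times a row vector.
u = np.array([[1], [2]])       # shape (2, 1)
v = np.array([[3, 4, 5]])      # shape (1, 3)
print(u.dot(v))
# [[ 3  4  5]
#  [ 6  8 10]]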

Remember that to do a valid matrix multiply, the inner dimensions must match. What is the size of this inner dimension? We call it “K”. It is not fixed by the data; we choose it ourselves, typically via cross-validation, so that our model generalizes well.

If we have \( M \) users and \( N \) movies, then the total number of parameters in our model is \( MK + NK \). If we set \( K = 10 \), the total number of parameters we’d have for the user-movie problem would be \( 10^{10} + 5 \times 10^6 \), which is still approximately \( 10^{10} \), a factor of roughly \( 5 \times 10^4 \) smaller than before.
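As a quick back-of-the-envelope check of those numbers (using the figures assumed above):

M, N, K = 10**9, 5 * 10**5, 10

full = M * N            # every possible user-movie rating: 5e14
factored = M*K + N*K    # parameters after factorization: ~1e10

print(full, factored, full / factored)  # reduction of roughly 5e4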

This is a big improvement!

So now we have:

$$ A \simeq \hat{ A } = UV $$

If you were to picture the matrices themselves, they would look like this:

[Image: matrix factorization diagram, showing the ratings matrix R approximated by the product of two smaller matrices with inner dimension d]

Because I am lazy and took this image from elsewhere on the Internet, the “d” here is what I am calling “K”. And their “R” is my “A”.

You know that with any machine learning algorithm we have 2 procedures – the fitting procedure and the prediction procedure.

For the fitting procedure, we want every known \( A_{ij} \) to be as close to \( \hat{A}_{ij} = u_i^Tv_j \) as possible. \( u_i \) is the ith row of \( U \). \( v_j \) is the jth column of \( V \).

For the prediction procedure, we won’t have an \( A_{ij} \), but we can use \( \hat{A}_{ij} = u_i^Tv_j \) to tell us what user i might rate movie j given the existing patterns.

 

The Cost Function

A natural cost function for this problem is the squared error. Think of it as a regression. This is just:

$$ J = \sum_{(i, j) \in \Omega} (A_{ij} - \hat{A}_{ij})^2 $$

Where \( \Omega \) is the set of all pairs \( (i, j) \) where user i has rated movie j.

Later, we will use \( \Omega_i \) to be the set of all j’s (movies) that user i has rated, and we will use \( \Omega_j \) to be the set of all i’s (users) that have rated movie j.
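In code, this cost over the observed ratings might look like the following sketch (the dict-of-ratings format and the U, V shapes match the implementation further down; the function name is mine):

def squared_error_cost(ratings, U, V):
  # ratings: dict mapping (i, j) -> A_ij for the ratings that actually exist (Omega)
  # U: M x K matrix of user vectors (u_i is row i)
  # V: K x N matrix of movie vectors (v_j is column j)
  J = 0.0
  for (i, j), a_ij in ratings.items():
    a_hat = U[i,:].dot(V[:,j])   # predicted rating u_i^T v_j
    J += (a_ij - a_hat)**2
  return J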

 

Coordinate Descent

What do you do when you want to minimize a function? Take the derivative and set it to 0, of course. There is no need for anything more complicated if the simple approach is solvable and performs well. It is also possible to use gradient descent on this problem, by taking the derivative and then taking small steps in the direction opposite the gradient.

You will notice that there are 2 derivatives to take here. The first is \( \partial{J} / \partial{u} \).

The other is \( \partial{J} / \partial{v} \). After calculating the derivatives and solving for \( u \) and \( v \), you get:

$$ u_i = ( \sum_{j \in \Omega_i} v_j v_j^T )^{-1} \sum_{j \in \Omega_i} A_{ij} v_j $$

$$ v_j = ( \sum_{i \in \Omega_j} u_i u_i^T )^{-1} \sum_{i \in \Omega_j} A_{ij} u_i $$

So you take both derivatives. You set both to 0. You solve for the optimal u and v. Now what?

The answer is: coordinate descent.

You first update \( u \) using the current setting of \( v \), then you update \( v \) using the current setting of \( u \). The order doesn’t matter, just that you alternate between the two.

There is a mathematical guarantee that J will improve on each iteration.

This technique is also known as alternating least squares. (This makes sense because we’re minimizing the squared error and updating \( u \) and \( v \) in an alternating fashion.)

 

Bias Parameters

As with other methods like linear regression and logistic regression, we can add bias parameters to our model to improve accuracy. In this case our model becomes:

$$ \hat{A}_{ij} = u_i^T v_j + b_i + c_j + \mu $$

Where \( \mu \) is the global mean (average of all known ratings).

You can interpret \( b_i \) as the bias of a user. A negative bias means this user just hates movies more than the average person. A positive bias would mean the opposite. Similarly, \( c_j \) is the bias of a movie. A positive bias would mean, “Wow, this movie is good, regardless of who is watching it!” A negative bias would be a movie like Avatar: The Last Airbender.

We can re-calculate the optimal settings for each parameter (again by taking the derivatives and setting them to 0) to get:

$$ u_i = ( \sum_{j \in \Omega_i} v_j v_j^T )^{-1} \sum_{j \in \Omega_i} (A_{ij} - b_i - c_j - \mu )v_j $$

$$ v_j = ( \sum_{i \in \Omega_j} u_i u_i^T )^{-1} \sum_{i \in \Omega_j}(A_{ij} - b_i - c_j - \mu )u_i $$

$$ b_i = \frac{1}{| \Omega_i |}\sum_{j \in \Omega_i} (A_{ij} - u_i^Tv_j - c_j - \mu) $$

$$ c_j= \frac{1}{| \Omega_j |}\sum_{i \in \Omega_j} (A_{ij} - u_i^Tv_j - b_i - \mu) $$

 

Regularization

With the above model, you may encounter what is called the “singular covariance” problem. This is what happens when you can’t invert the matrix that appears in the updates for \( u \) and \( v \).

The solution is, again, similar to what you would do in linear regression or logistic regression: add a squared-magnitude penalty on the parameters, with a weight \( \lambda \), that keeps the parameters small.

In terms of the likelihood, the previous formulation assumes that the difference between \( A_{ij} \) and \( \hat{A}_{ij} \) is normally distributed, while the cost function with regularization is like adding a normally-distributed prior on each parameter centered at 0.

i.e. \( u_i, v_j, b_i, c_j \sim N(0, 1/\lambda) \).

So the cost function becomes:

$$ J = \sum_{(i, j) \in \Omega} (A_{ij} - \hat{A}_{ij})^2 + \lambda(||U||_F^2 + ||V||_F^2 + ||b||^2 + ||c||^2) $$

Where \( ||X||_F \) is the Frobenius norm of \( X \).

For each parameter, taking the derivative with respect to that parameter, setting it to 0, and solving for the optimal value yields:

$$ u_i = ( \sum_{j \in \Omega_i} v_j v_j^T + \lambda{I})^{-1} \sum_{j \in \Omega_i} (A_{ij} - b_i - c_j - \mu )v_j $$

$$ v_j = ( \sum_{i \in \Omega_j} u_i u_i^T + \lambda{I})^{-1} \sum_{i \in \Omega_j}(A_{ij} - b_i - c_j - \mu )u_i $$

$$ b_i = \frac{1}{1 + \lambda} \frac{1}{| \Omega_i |}\sum_{j \in \Omega_i} (A_{ij} - u_i^Tv_j - c_j - \mu) $$

$$ c_j= \frac{1}{1 + \lambda} \frac{1}{| \Omega_j |}\sum_{i \in \Omega_j} (A_{ij} - u_i^Tv_j - b_i - \mu) $$

 

Python Code

The simplest way to implement the above formulas would be to just code them directly.

Initialize your parameters as follows:

import numpy as np

# M = number of users, N = number of movies, K = latent dimension
U = np.random.randn(M, K) / K   # user matrix, u_i is row i
V = np.random.randn(K, N) / K   # movie matrix, v_j is column j
B = np.zeros(M)                 # user biases b
C = np.zeros(N)                 # movie biases c

Next, you want \( \Omega_i \) and \( \Omega_j \) to be easily accessible, so create dictionaries “ratings_by_i” where “i” is the key, and the value is an array of all the (j, r) pairs that user i has rated (r is the rating). Do the same for “ratings_by_j”.
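For example, if the raw data were available as a list of (i, j, rating) triples (that format is just an assumption for this sketch), the lookups and the global mean \( \mu \) could be built like this:

from collections import defaultdict

# Hypothetical raw data: (user index, movie index, rating) triples.
triples = [(0, 2, 5.0), (0, 3, 5.0), (1, 2, 5.0), (1, 4, 1.0)]

ratings_by_i = defaultdict(list)  # user i -> list of (j, r) pairs
ratings_by_j = defaultdict(list)  # movie j -> list of (i, r) pairs
for i, j, r in triples:
  ratings_by_i[i].append((j, r))
  ratings_by_j[j].append((i, r))

# Global mean of all known ratings, used as mu in the updates below.
mu = sum(r for _, _, r in triples) / len(triples)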

Then, your updates would be as follows:

# T = number of training iterations, reg = regularization strength (lambda)
for t in range(T):

  # update B (user biases)
  for i in range(M):
    if i in ratings_by_i:
      accum = 0
      for j, r in ratings_by_i[i]:
        accum += (r - U[i,:].dot(V[:,j]) - C[j] - mu)
      B[i] = accum / (1 + reg) / len(ratings_by_i[i])

  # update U (user vectors)
  for i in range(M):
    if i in ratings_by_i:
      matrix = np.zeros((K, K)) + reg*np.eye(K)
      vector = np.zeros(K)
      for j, r in ratings_by_i[i]:
        matrix += np.outer(V[:,j], V[:,j])
        vector += (r - B[i] - C[j] - mu)*V[:,j]
      U[i,:] = np.linalg.solve(matrix, vector)

  # update C (movie biases)
  for j in range(N):
    if j in ratings_by_j:
      accum = 0
      for i, r in ratings_by_j[j]:
        accum += (r - U[i,:].dot(V[:,j]) - B[i] - mu)
      C[j] = accum / (1 + reg) / len(ratings_by_j[j])

  # update V (movie vectors)
  for j in range(N):
    if j in ratings_by_j:
      matrix = np.zeros((K, K)) + reg*np.eye(K)
      vector = np.zeros(K)
      for i, r in ratings_by_j[j]:
        matrix += np.outer(U[i,:], U[i,:])
        vector += (r - B[i] - C[j] - mu)*U[i,:]
      V[:,j] = np.linalg.solve(matrix, vector)

And that’s all there is to it!
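Once training finishes, predicting a rating for user i and movie j is just the model formula from earlier; here is a minimal sketch (the function name is mine):

def predict(i, j, U, V, B, C, mu):
  # predicted rating: u_i^T v_j + b_i + c_j + global mean
  return U[i,:].dot(V[:,j]) + B[i] + C[j] + mu

# e.g. the model's guess for how user 0 would rate movie 3:
# print(predict(0, 3, U, V, B, C, mu))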

For more free machine learning and data science tutorials, sign up for my newsletter.

 



New machine learning course! Cluster Analysis and Unsupervised Machine Learning in Python

April 20, 2016


[Scroll to the bottom if you want to jump straight to the coupon]

Cluster analysis is a staple of unsupervised machine learning and data science.

It is very useful for data mining and big data because it automatically finds patterns in the data, without the need for labels, unlike supervised machine learning.

In a real-world environment, you can imagine that a robot or an artificial intelligence won’t always have access to the optimal answer, or maybe there isn’t an optimal correct answer. You’d want that robot to be able to explore the world on its own, and learn things just by looking for patterns.

Do you ever wonder how we get the data that we use in our supervised machine learning algorithms?

We always seem to have a nice CSV or a table, complete with Xs and corresponding Ys.

If you haven’t been involved in acquiring data yourself, you might not have thought about this, but someone has to make this data!

Those “Y”s have to come from somewhere, and a lot of the time that involves manual labor.

Sometimes, you don’t have access to this kind of information or it is infeasible or costly to acquire.

But you still want to have some idea of the structure of the data. If you’re doing data analytics, automating pattern recognition in your data would be invaluable.

This is where unsupervised machine learning comes into play.

In this course we are first going to talk about clustering. This is where instead of training on labels, we try to create our own labels! We’ll do this by grouping together data that looks alike.

There are 2 methods of clustering we’ll talk about: k-means clustering and hierarchical clustering.

Next, because in machine learning we like to talk about probability distributions, we’ll go into Gaussian mixture models and kernel density estimation, where we talk about how to “learn” the probability distribution of a set of data.

One interesting fact is that under certain conditions, Gaussian mixture models and k-means clustering are exactly the same! You can think of GMMs as a “souped up” version of k-means. We’ll prove how this is the case.

All the algorithms we’ll talk about in this course are staples in machine learning and data science, so if you want to know how to automatically find patterns in your data with data mining and pattern extraction, without needing someone to put in manual work to label that data, then this course is for you.

All the materials for this course are FREE. You can download and install Python, Numpy, and Scipy with simple commands on Windows, Linux, or Mac.

50% OFF COUPON: https://www.udemy.com/cluster-analysis-unsupervised-machine-learning-python/?couponCode=EARLYBIRD

#agglomerative clustering #cluster analysis #data mining #data science #expectation-maximization #gaussian mixture model #hierarchical clustering #k-means clustering #kernel density estimation #pattern recognition #udemy #unsupervised machine learning
