Lazy Programmer

Your source for the latest in deep learning, big data, data science, and artificial intelligence.

$10 Deep Learning Courses (+more) July Special part 2!

July 24, 2017

Since I am still busy hacking away at my next course, we are going to do another HUGE sale. ALL courses on Udemy are now $10. Take this opportunity to grab as many courses as you can because you never know when the next sale is going to be!

As usual, I’m providing $10 coupons for all my courses in the links below. Please use these links and share them with your friends!

You can also just type in the coupon code “JUL456”.

The $10 promo doesn’t come around often, so make sure you pick up everything you are interested in, or could become interested in later this year. The promo goes until July 31. Don’t wait!

At the end of this post, I’m going to provide you with some additional links to get machine learning prerequisites (calculus, linear algebra, Python, etc…) for $10 too!

If you don’t know what order to take the courses in, please check here: https://deeplearningcourses.com/course_order

Here are the links for my courses:

Deep Learning Prerequisites: Linear Regression in Python
https://www.udemy.com/data-science-linear-regression-in-python/?couponCode=JUL456

Deep Learning Prerequisites: Logistic Regression in Python
https://www.udemy.com/data-science-logistic-regression-in-python/?couponCode=JUL456

Deep Learning in Python
https://www.udemy.com/data-science-deep-learning-in-python/?couponCode=JUL456

Practical Deep Learning in Theano and TensorFlow
https://www.udemy.com/data-science-deep-learning-in-theano-tensorflow/?couponCode=JUL456

Deep Learning: Convolutional Neural Networks in Python
https://www.udemy.com/deep-learning-convolutional-neural-networks-theano-tensorflow/?couponCode=JUL456

Unsupervised Deep Learning in Python
https://www.udemy.com/unsupervised-deep-learning-in-python/?couponCode=JUL456

Deep Learning: Recurrent Neural Networks in Python
https://www.udemy.com/deep-learning-recurrent-neural-networks-in-python/?couponCode=JUL456

Advanced Natural Language Processing: Deep Learning in Python
https://www.udemy.com/natural-language-processing-with-deep-learning-in-python/?couponCode=JUL456

Advanced AI: Deep Reinforcement Learning in Python
https://www.udemy.com/deep-reinforcement-learning-in-python/?couponCode=JUL456

Easy Natural Language Processing in Python
https://www.udemy.com/data-science-natural-language-processing-in-python/?couponCode=JUL456

Cluster Analysis and Unsupervised Machine Learning in Python
https://www.udemy.com/cluster-analysis-unsupervised-machine-learning-python/?couponCode=JUL456

Unsupervised Machine Learning: Hidden Markov Models in Python
https://www.udemy.com/unsupervised-machine-learning-hidden-markov-models-in-python/?couponCode=JUL456

Data Science: Supervised Machine Learning in Python
https://www.udemy.com/data-science-supervised-machine-learning-in-python/?couponCode=JUL456

Bayesian Machine Learning in Python: A/B Testing
https://www.udemy.com/bayesian-machine-learning-in-python-ab-testing/?couponCode=JUL456

Ensemble Machine Learning in Python: Random Forest and AdaBoost
https://www.udemy.com/machine-learning-in-python-random-forest-adaboost/?couponCode=JUL456

Artificial Intelligence: Reinforcement Learning in Python
https://www.udemy.com/artificial-intelligence-reinforcement-learning-in-python/?couponCode=JUL456

SQL for Newbs and Marketers
https://www.udemy.com/sql-for-marketers-data-analytics-data-science-big-data/?couponCode=JUL456

PREREQUISITE COURSE COUPONS

And last but not least, $10 coupons for some helpful prerequisite courses. You NEED to know this stuff before you study machine learning:

General (site-wide): http://bit.ly/2oCY14Z
Python http://bit.ly/2pbXxXz
Calc 1 http://bit.ly/2okPUib
Calc 2 http://bit.ly/2oXnhpX
Calc 3 http://bit.ly/2pVU0gQ
Linalg 1 http://bit.ly/2oBBir1
Linalg 2 http://bit.ly/2q5SGEE
Probability (option 1) http://bit.ly/2prFQ7o
Probability (option 2) http://bit.ly/2p8kcC0
Probability (option 3) http://bit.ly/2oXa2pb
Probability (option 4) http://bit.ly/2oXbZSK

OTHER UDEMY COURSE COUPONS

As you know, I’m the “Lazy Programmer”, not just the “Lazy Data Scientist” – I love all kinds of programming!

And I’ve got sales for everything:

iOS
Android
Ruby on Rails
Python
Big Data (Spark, Hadoop)
Javascript, ReactJS, AngularJS

Remember, these links will self-destruct on July 31 (7 days). Act NOW!



$10 Deep Learning Courses (+more) July Special!

July 11, 2017

Since I am still busy hacking away at my next course, we are going to do another HUGE sale. ALL courses on Udemy are now $10. Take this opportunity to grab as many courses as you can because you never know when the next sale is going to be!

As usual, I’m providing $10 coupons for all my courses in the links below. Please use these links and share them with your friends!

You can also just type in the coupon code “JUL123”.

The $10 promo doesn’t come around often, so make sure you pick up everything you are interested in, or could become interested in later this year. The promo goes until July 20. Don’t wait!

At the end of this post, I’m going to provide you with some additional links to get machine learning prerequisites (calculus, linear algebra, Python, etc…) for $10 too!

If you don’t know what order to take the courses in, please check here: https://deeplearningcourses.com/course_order

Here are the links for my courses:

Deep Learning Prerequisites: Linear Regression in Python
https://www.udemy.com/data-science-linear-regression-in-python/?couponCode=JUL123

Deep Learning Prerequisites: Logistic Regression in Python
https://www.udemy.com/data-science-logistic-regression-in-python/?couponCode=JUL123

Deep Learning in Python
https://www.udemy.com/data-science-deep-learning-in-python/?couponCode=JUL123

Practical Deep Learning in Theano and TensorFlow
https://www.udemy.com/data-science-deep-learning-in-theano-tensorflow/?couponCode=JUL123

Deep Learning: Convolutional Neural Networks in Python
https://www.udemy.com/deep-learning-convolutional-neural-networks-theano-tensorflow/?couponCode=JUL123

Unsupervised Deep Learning in Python
https://www.udemy.com/unsupervised-deep-learning-in-python/?couponCode=JUL123

Deep Learning: Recurrent Neural Networks in Python
https://www.udemy.com/deep-learning-recurrent-neural-networks-in-python/?couponCode=JUL123

Advanced Natural Language Processing: Deep Learning in Python
https://www.udemy.com/natural-language-processing-with-deep-learning-in-python/?couponCode=JUL123

Advanced AI: Deep Reinforcement Learning in Python
https://www.udemy.com/deep-reinforcement-learning-in-python/?couponCode=JUL123

Easy Natural Language Processing in Python
https://www.udemy.com/data-science-natural-language-processing-in-python/?couponCode=JUL123

Cluster Analysis and Unsupervised Machine Learning in Python
https://www.udemy.com/cluster-analysis-unsupervised-machine-learning-python/?couponCode=JUL123

Unsupervised Machine Learning: Hidden Markov Models in Python
https://www.udemy.com/unsupervised-machine-learning-hidden-markov-models-in-python/?couponCode=JUL123

Data Science: Supervised Machine Learning in Python
https://www.udemy.com/data-science-supervised-machine-learning-in-python/?couponCode=JUL123

Bayesian Machine Learning in Python: A/B Testing
https://www.udemy.com/bayesian-machine-learning-in-python-ab-testing/?couponCode=JUL123

Ensemble Machine Learning in Python: Random Forest and AdaBoost
https://www.udemy.com/machine-learning-in-python-random-forest-adaboost/?couponCode=JUL123

Artificial Intelligence: Reinforcement Learning in Python
https://www.udemy.com/artificial-intelligence-reinforcement-learning-in-python/?couponCode=JUL123

SQL for Newbs and Marketers
https://www.udemy.com/sql-for-marketers-data-analytics-data-science-big-data/?couponCode=JUL123

RECENT COURSE UPDATES

You guys know I’m always working hard to improve my courses with new and fresh content. Here’s what I’ve done recently:

– Hidden Markov Models: added Tensorflow implementation and improved Theano implementation

– Hidden Markov Models: the Tensorflow scan function

– Unsupervised Deep Learning: added Tensorflow version of Autoencoder, RBM, and Unsupervised Pre-Training

– Deep Learning in Python (pt1): How to interpret the weights

– Deep Learning in Theano / Tensorflow (pt2): How to improve your Theano and Tensorflow skills

– Deep Learning in Theano / Tensorflow (pt2): Some ideas about combining big data + deep learning

PREREQUISITE COURSE COUPONS

And last but not least, $10 coupons for some helpful prerequisite courses. You NEED to know this stuff before you study machine learning:

General (site-wide): http://bit.ly/2oCY14Z
Python http://bit.ly/2pbXxXz
Calc 1 http://bit.ly/2okPUib
Calc 2 http://bit.ly/2oXnhpX
Calc 3 http://bit.ly/2pVU0gQ
Linalg 1 http://bit.ly/2oBBir1
Linalg 2 http://bit.ly/2q5SGEE
Probability (option 1) http://bit.ly/2prFQ7o
Probability (option 2) http://bit.ly/2p8kcC0
Probability (option 3) http://bit.ly/2oXa2pb
Probability (option 4) http://bit.ly/2oXbZSK

Remember, these links will self-destruct on July 20 (9 days). Act NOW!



Deep Learning and Machine Learning Courses $10 Special June 2017

June 14, 2017

Today, Udemy has decided to do yet another AMAZING $10 promo.

As usual, I’m providing $10 coupons for all my courses in the links below. Please use these links and share them with your friends!

The $10 promo doesn’t come around often, so make sure you pick up everything you are interested in, or could become interested in later this year. The promo goes until June 21. Don’t wait!

At the end of this post, I’m going to provide you with some additional links to get machine learning prerequisites (calculus, linear algebra, Python, etc…) for $10 too!

If you don’t know what order to take the courses in, please check here: https://deeplearningcourses.com/course_order

Here are the links for my courses:

Deep Learning Prerequisites: Linear Regression in Python
https://www.udemy.com/data-science-linear-regression-in-python/?couponCode=JUN456

Deep Learning Prerequisites: Logistic Regression in Python
https://www.udemy.com/data-science-logistic-regression-in-python/?couponCode=JUN456

Deep Learning in Python
https://www.udemy.com/data-science-deep-learning-in-python/?couponCode=JUN456

Practical Deep Learning in Theano and TensorFlow
https://www.udemy.com/data-science-deep-learning-in-theano-tensorflow/?couponCode=JUN456

Deep Learning: Convolutional Neural Networks in Python
https://www.udemy.com/deep-learning-convolutional-neural-networks-theano-tensorflow/?couponCode=JUN456

Unsupervised Deep Learning in Python
https://www.udemy.com/unsupervised-deep-learning-in-python/?couponCode=JUN456

Deep Learning: Recurrent Neural Networks in Python
https://www.udemy.com/deep-learning-recurrent-neural-networks-in-python/?couponCode=JUN456

Advanced Natural Language Processing: Deep Learning in Python
https://www.udemy.com/natural-language-processing-with-deep-learning-in-python/?couponCode=JUN456

Advanced AI: Deep Reinforcement Learning in Python
https://www.udemy.com/deep-reinforcement-learning-in-python/?couponCode=JUN456

Easy Natural Language Processing in Python
https://www.udemy.com/data-science-natural-language-processing-in-python/?couponCode=JUN456

Cluster Analysis and Unsupervised Machine Learning in Python
https://www.udemy.com/cluster-analysis-unsupervised-machine-learning-python/?couponCode=JUN456

Unsupervised Machine Learning: Hidden Markov Models in Python
https://www.udemy.com/unsupervised-machine-learning-hidden-markov-models-in-python/?couponCode=JUN456

Data Science: Supervised Machine Learning in Python
https://www.udemy.com/data-science-supervised-machine-learning-in-python/?couponCode=JUN456

Bayesian Machine Learning in Python: A/B Testing
https://www.udemy.com/bayesian-machine-learning-in-python-ab-testing/?couponCode=JUN456

Ensemble Machine Learning in Python: Random Forest and AdaBoost
https://www.udemy.com/machine-learning-in-python-random-forest-adaboost/?couponCode=JUN456

Artificial Intelligence: Reinforcement Learning in Python
https://www.udemy.com/artificial-intelligence-reinforcement-learning-in-python/?couponCode=JUN456

SQL for Newbs and Marketers
https://www.udemy.com/sql-for-marketers-data-analytics-data-science-big-data/?couponCode=JUN456

And last but not least, $10 coupons for some helpful prerequisite courses. You NEED to know this stuff before you study machine learning:

General (site-wide): http://bit.ly/2oCY14Z
Python http://bit.ly/2pbXxXz
Calc 1 http://bit.ly/2okPUib
Calc 2 http://bit.ly/2oXnhpX
Calc 3 http://bit.ly/2pVU0gQ
Linalg 1 http://bit.ly/2oBBir1
Linalg 2 http://bit.ly/2q5SGEE
Probability (option 1) http://bit.ly/2prFQ7o
Probability (option 2) http://bit.ly/2p8kcC0
Probability (option 3) http://bit.ly/2oXa2pb
Probability (option 4) http://bit.ly/2oXbZSK

Remember, these links will self-destruct on June 21 (7 days). Act NOW!



Deep Learning and Machine Learning $10 Special!

June 5, 2017

Today, Udemy has decided to do yet another AMAZING $10 promo.

As usual, I’m providing $10 coupons for all my courses in the links below. Please use these links and share them with your friends!

The $10 promo doesn’t come around often, so make sure you pick up everything you are interested in, or could become interested in later this year. The promo goes until June 10. Don’t wait!

At the end of this post, I’m going to provide you with some additional links to get machine learning prerequisites (calculus, linear algebra, Python, etc…) for $10 too!

If you don’t know what order to take the courses in, please check here: https://deeplearningcourses.com/course_order

Here are the links for my courses:

Deep Learning Prerequisites: Linear Regression in Python
https://www.udemy.com/data-science-linear-regression-in-python/?couponCode=JUN123

Deep Learning Prerequisites: Logistic Regression in Python
https://www.udemy.com/data-science-logistic-regression-in-python/?couponCode=JUN123

Deep Learning in Python
https://www.udemy.com/data-science-deep-learning-in-python/?couponCode=JUN123

Practical Deep Learning in Theano and TensorFlow
https://www.udemy.com/data-science-deep-learning-in-theano-tensorflow/?couponCode=JUN123

Deep Learning: Convolutional Neural Networks in Python
https://www.udemy.com/deep-learning-convolutional-neural-networks-theano-tensorflow/?couponCode=JUN123

Unsupervised Deep Learning in Python
https://www.udemy.com/unsupervised-deep-learning-in-python/?couponCode=JUN123

Deep Learning: Recurrent Neural Networks in Python
https://www.udemy.com/deep-learning-recurrent-neural-networks-in-python/?couponCode=JUN123

Advanced Natural Language Processing: Deep Learning in Python
https://www.udemy.com/natural-language-processing-with-deep-learning-in-python/?couponCode=JUN123

Advanced AI: Deep Reinforcement Learning in Python
https://www.udemy.com/deep-reinforcement-learning-in-python/?couponCode=JUN123

Easy Natural Language Processing in Python
https://www.udemy.com/data-science-natural-language-processing-in-python/?couponCode=JUN123

Cluster Analysis and Unsupervised Machine Learning in Python
https://www.udemy.com/cluster-analysis-unsupervised-machine-learning-python/?couponCode=JUN123

Unsupervised Machine Learning: Hidden Markov Models in Python
https://www.udemy.com/unsupervised-machine-learning-hidden-markov-models-in-python/?couponCode=JUN123

Data Science: Supervised Machine Learning in Python
https://www.udemy.com/data-science-supervised-machine-learning-in-python/?couponCode=JUN123

Bayesian Machine Learning in Python: A/B Testing
https://www.udemy.com/bayesian-machine-learning-in-python-ab-testing/?couponCode=JUN123

Ensemble Machine Learning in Python: Random Forest and AdaBoost
https://www.udemy.com/machine-learning-in-python-random-forest-adaboost/?couponCode=JUN123

Artificial Intelligence: Reinforcement Learning in Python
https://www.udemy.com/artificial-intelligence-reinforcement-learning-in-python/?couponCode=JUN123

SQL for Newbs and Marketers
https://www.udemy.com/sql-for-marketers-data-analytics-data-science-big-data/?couponCode=JUN123

And last but not least, $10 coupons for some helpful prerequisite courses. You NEED to know this stuff before you study machine learning:

General (site-wide): http://bit.ly/2oCY14Z
Python http://bit.ly/2pbXxXz
Calc 1 http://bit.ly/2okPUib
Calc 2 http://bit.ly/2oXnhpX
Calc 3 http://bit.ly/2pVU0gQ
Linalg 1 http://bit.ly/2oBBir1
Linalg 2 http://bit.ly/2q5SGEE
Probability (option 1) http://bit.ly/2prFQ7o
Probability (option 2) http://bit.ly/2p8kcC0
Probability (option 3) http://bit.ly/2oXa2pb
Probability (option 4) http://bit.ly/2oXbZSK

Remember, these links will self-destruct on June 10 (5 days). Act NOW!



New course – Natural Language Processing: Deep Learning in Python part 6

August 9, 2016


[Scroll to the bottom for the early bird discount if you already know what this course is about]

In this course we are going to look at advanced NLP using deep learning.

Previously, you learned about some of the basics, like how many NLP problems are just regular machine learning and data science problems in disguise, and simple, practical methods like bag-of-words and term-document matrices.

These allowed us to do some pretty cool things, like detect spam emails, write poetry, spin articles, and group together similar words.

In this course I’m going to show you how to do even more awesome things. We’ll learn not just 1, but 4 new architectures in this course.

First up is word2vec.

In this course, I’m going to show you exactly how word2vec works, from theory to implementation, and you’ll see that it’s merely the application of skills you already know.

Word2vec is interesting because it magically maps words to a vector space where you can find analogies, like:

  • king – man = queen – woman
  • France – Paris = England – London
  • December – November = July – June
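If you want to play with analogies like these before class starts, here is a minimal sketch using the gensim library and Google’s pretrained news vectors; both are assumptions on my part (in the course we build word2vec from scratch):

from gensim.models import KeyedVectors

# assumes you have downloaded the pretrained GoogleNews vectors
wv = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)

# "king - man = queen - woman" rearranges to "king - man + woman = queen"
print(wv.most_similar(positive=['king', 'woman'], negative=['man'], topn=1))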

We are also going to look at the GloVe method, which finds word vectors too, but uses a technique called matrix factorization, a popular algorithm for recommender systems.

Amazingly, the word vectors produced by GloVe are just as good as the ones produced by word2vec, and it’s way easier to train.

We will also look at some classical NLP problems, like part-of-speech tagging and named entity recognition, and use recurrent neural networks to solve them. You’ll see that just about any problem can be solved using neural networks, but you’ll also learn the dangers of having too much complexity.

Lastly, you’ll learn about recursive neural networks, which finally help us solve the problem of negation in sentiment analysis. Recursive neural networks exploit the fact that sentences have a tree structure, and we can finally get away from naively using bag-of-words.

All of the materials required for this course can be downloaded and installed for FREE. We will do most of our work in Numpy, Matplotlib, and Theano. I am always available to answer your questions and help you along your data science journey.

See you in class!

https://www.udemy.com/natural-language-processing-with-deep-learning-in-python/?couponCode=EARLYBIRDSITE

UPDATE: New coupon if the above is sold out:

https://www.udemy.com/natural-language-processing-with-deep-learning-in-python/?couponCode=SLOWBIRD_SITE

#deep learning #GLoVe #natural language processing #nlp #python #recursive neural networks #tensorflow #theano #word2vec



Tutorial on Collaborative Filtering and Matrix Factorization in Python

April 25, 2016

This article will be of interest to you if you want to learn about recommender systems and predicting movie ratings (or book ratings, or product ratings, or any other kind of rating).

Contests like the $1 million Netflix Challenge are examples of what collaborative filtering can be used for.


Problem Setup

Let’s use the “users rating movies” example for this tutorial. After some Internet searching, we can determine that there are approximately 500,000 movies in existence. Let’s also suppose that your very popular movie website has 1 billion users (Facebook has 1.6 billion users as of 2015, so this number is plausible).

How many possible user-movie ratings can you have? That is \( 10^9 \times 5 \times 10^5 = 5 \times 10^{14} \). That’s a lot of ratings! Way too many to fit into your RAM, in fact.

But that’s just one problem.

How many movies have you seen in your life? Of those movies, what percentage of them have you rated? The number is minuscule. In fact, most users have not rated most movies.

This is why recommender systems exist in the first place – so we can recommend you movies that you haven’t seen yet, that we know you’ll like.

So if you were to create a user-movie matrix of movie ratings, most of it would just have missing values.

However, that’s not to say there isn’t a pattern to be found.

Suppose we look at a subset of movie ratings, and we find the following:

      | Batman | Batman Returns | Batman Begins | The Dark Knight | Batman v. Superman
Guy A | N/A    | 4              | 5             | 5               | 2
Guy B | 4      | N/A            | 5             | 5               | 1

Where we’ve used N/A to show that a movie has not yet been rated by a user.

If we used the cosine similarity ( \( \frac{u^T v}{ |u||v| } \) ) on the vectors created by looking at only the common movies, we could see that Guy A and Guy B have similar tastes. We could then surmise, based on this closeness, that Guy A might rate Batman a “4”, and Guy B might rate Batman Returns a “4”. And since this is a pretty high rating, we might want to recommend these movies to these users.
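Here is that computation in a couple of lines of Numpy, using the three movies both guys have rated (the numbers come straight from the table above):

import numpy as np

# common movies: Batman Begins, The Dark Knight, Batman v. Superman
a = np.array([5, 5, 2.0])  # Guy A
b = np.array([5, 5, 1.0])  # Guy B
print(a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b)))  # about 0.99, i.e. very similar tastes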

This is the idea behind collaborative filtering.

Enter Matrix Factorization

Matrix factorization solves the above problems by reducing the number of free parameters (so the total number of parameters is much smaller than #users times #movies), and by fitting these parameters to the data (ratings) that do exist.

What is matrix factorization?

Think of factorization in general:

15 = 3 x 5 (15 is made up of the factors 3 and 5)

\( x^2 + x = x(x + 1) \)

We can do the same thing with matrices:

$$\left( \begin{matrix}3 & 4 & 5 \\ 6 & 8 & 10 \end{matrix} \right) = \left( \begin{matrix}1 \\ 2 \end{matrix} \right) \left( \begin{matrix}3 & 4 & 5 \end{matrix} \right) $$
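You can check this particular factorization in a couple of lines of Numpy; the 2 x 3 matrix on the left is rank 1, so a single column times a single row reproduces it exactly:

import numpy as np

A = np.array([[3, 4, 5],
              [6, 8, 10]])
u = np.array([[1], [2]])   # 2 x 1 column
v = np.array([[3, 4, 5]])  # 1 x 3 row
print(np.array_equal(u.dot(v), A))  # True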

In fact, this is exactly what we do in matrix factorization. We “pretend” the big ratings matrix (the one that can’t fit into our RAM) is actually made up of 2 smaller matrices multiplied together.

Remember that to do a valid matrix multiply, the inner dimensions must match. What is the size of this inner dimension? We call it “K”. It is unknown, but we can choose it, for example via cross-validation, so that our model generalizes well.

If we have \( M \) users and \( N \) movies, then the total number of parameters in our model is \( MK + NK \). If we set \( K = 10 \), the total number of parameters we’d have for the user-movie problem would be \( 10^{10} + 5 \times 10^6 \), which is still approximately \( 10^{10} \), a factor of about \( 5 \times 10^4 \) smaller than before.

This is a big improvement!

So now we have:

$$ A \simeq \hat{ A } = UV $$

If you were to picture the matrices themselves, they would look like this:

[Figure: matrix factorization diagram, showing the ratings matrix R approximated as the product of two skinny matrices with inner dimension d]

Because I am lazy and took this image from elsewhere on the Internet, the “d” here is what I am calling “K”. And their “R” is my “A”.

You know that with any machine learning algorithm we have 2 procedures – the fitting procedure and the prediction procedure.

For the fitting procedure, we want every known \( A_{ij} \) to be as close to \( \hat{A}_{ij} = u_i^Tv_j \) as possible. \( u_i \) is the ith row of \( U \). \( v_j \) is the jth column of \( V \).

For the prediction procedure, we won’t have an \( A_{ij} \), but we can use \( \hat{A}_{ij} = u_i^Tv_j \) to predict what user i would rate movie j, given the existing patterns.

The Cost Function

A natural cost function for this problem is the squared error. Think of it as a regression. This is just:

$$ J = \sum_{(i, j) \in \Omega} (A_{ij} - \hat{A}_{ij})^2 $$

Where \( \Omega \) is the set of all pairs \( (i, j) \) where user i has rated movie j.

Later, we will use \( \Omega_i \) to be the set of all j’s (movies) that user i has rated, and we will use \( \Omega_j \) to be the set of all i’s (users) that have rated movie j.

Coordinate Descent

What do you do when you want to minimize a function? Take the derivative and set it to 0, of course. No need to use anything more complicated if the simple approach is solvable and performs well. It is also possible to use gradient descent on this problem by taking the derivative and then taking small steps in that direction.

You will notice that there are 2 derivatives to take here. The first is \( \partial{J} / \partial{u} \).

The other is \( \partial{J} / \partial{v} \). After calculating the derivatives and solving for \( u \) and \( v \), you get:

$$ u_i = ( \sum_{j \in \Omega_i} v_j v_j^T )^{-1} \sum_{j \in \Omega_i} A_{ij} v_j $$

$$ v_j = ( \sum_{i \in \Omega_j} u_i u_i^T )^{-1} \sum_{i \in \Omega_j} A_{ij} u_i $$

So you take both derivatives. You set both to 0. You solve for the optimal u and v. Now what?

The answer is: coordinate descent.

You first update \( u \) using the current setting of \( v \), then you update \( v \) using the current setting of \( u \). The order doesn’t matter, just that you alternate between the two.

There is a mathematical guarantee that J will improve on each iteration.

This technique is also known as alternating least squares. (This makes sense because we’re minimizing the squared error and updating \( u \) and \( v \) in an alternating fashion.)

Bias Parameters

As with other methods like linear regression and logistic regression, we can add bias parameters to our model to improve accuracy. In this case our model becomes:

$$ \hat{A}_{ij} = u_i^T v_j + b_i + c_j + \mu $$

Where \( \mu \) is the global mean (average of all known ratings).

You can interpret \( b_i \) as the bias of a user. A negative bias means this user just hates movies more than the average person. A positive bias would mean the opposite. Similarly, \( c_j \) is the bias of a movie. A positive bias would mean, “Wow, this movie is good, regardless of who is watching it!” A negative bias would be a movie like Avatar: The Last Airbender.
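To make the prediction side of this concrete, here is the single-pair prediction as a function; the array names (U, V, B, C, mu) match the ones used in the code at the bottom of this post:

def predict(i, j):
  # u_i^T v_j + user bias + movie bias + global mean
  return U[i,:].dot(V[:,j]) + B[i] + C[j] + mu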

We can re-calculate the optimal settings for each parameter (again by taking the derivatives and setting them to 0) to get:

$$ u_i = ( \sum_{j \in \Omega_i} v_j v_j^T )^{-1} \sum_{j \in \Omega_i} (A_{ij} - b_i - c_j - \mu )v_j $$

$$ v_j = ( \sum_{i \in \Omega_j} u_i u_i^T )^{-1} \sum_{i \in \Omega_j}(A_{ij} - b_i - c_j - \mu )u_i $$

$$ b_i = \frac{1}{| \Omega_i |}\sum_{j \in \Omega_i} ( A_{ij} - u_i^Tv_j - c_j - \mu ) $$

$$ c_j = \frac{1}{| \Omega_j |}\sum_{i \in \Omega_j} ( A_{ij} - u_i^Tv_j - b_i - \mu ) $$

Regularization

With the above model, you may encounter what is called the “singular covariance” problem. This is what happens when you can’t invert the matrix that appears in the updates for \( u \) and \( v \).

The solution is, again, similar to what you would do in linear regression or logistic regression: add a penalty term, with weight \( \lambda \), on the squared magnitude of the parameters to keep them small.

In terms of the likelihood, the previous formulation assumes that the difference between \( A_{ij} \) and \( \hat{A}_{ij} \) is normally distributed, while the cost function with regularization is like adding a normally-distributed prior on each parameter centered at 0.

i.e. \( u_i, v_j, b_i, c_j \sim N(0, 1/\lambda) \).

So the cost function becomes:

$$ J = \sum_{(i, j) \in \Omega} (A_{ij} - \hat{A}_{ij})^2 + \lambda(||U||_F^2 + ||V||_F^2 + ||b||^2 + ||c||^2) $$

Where \( ||X||_F \) is the Frobenius norm of \( X \).
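In case the Frobenius norm is new to you: its square is just the sum of all the squared entries of the matrix, which you can sanity-check in one line of Numpy:

import numpy as np

X = np.random.randn(3, 4)
print(np.allclose(np.linalg.norm(X, 'fro')**2, (X**2).sum()))  # True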

For each parameter, taking the derivative of \( J \) with respect to that parameter, setting it to 0, and solving for the optimal value yields:

$$ u_i = ( \sum_{j \in \Omega_i} v_j v_j^T + \lambda{I})^{-1} \sum_{j \in \Omega_i} (A_{ij} - b_i - c_j - \mu )v_j $$

$$ v_j = ( \sum_{i \in \Omega_j} u_i u_i^T + \lambda{I})^{-1} \sum_{i \in \Omega_j}(A_{ij} - b_i - c_j - \mu )u_i $$

$$ b_i = \frac{1}{| \Omega_i | + \lambda}\sum_{j \in \Omega_i} ( A_{ij} - u_i^Tv_j - c_j - \mu ) $$

$$ c_j = \frac{1}{| \Omega_j | + \lambda}\sum_{i \in \Omega_j} ( A_{ij} - u_i^Tv_j - b_i - \mu ) $$

Python Code

The simplest way to implement the above formulas would be to just code them directly.

Initialize your parameters as follows:

import numpy as np

# M = number of users, N = number of movies, K = latent dimension
U = np.random.randn(M, K) / K  # user factors
V = np.random.randn(K, N) / K  # movie factors
B = np.zeros(M)                # user biases
C = np.zeros(N)                # movie biases

Next, you want \( \Omega_i \) and \( \Omega_j \) to be easily accessible, so create dictionaries “ratings_by_i” where “i” is the key, and the value is an array of all the (j, r) pairs that user i has rated (r is the rating). Do the same for “ratings_by_j”.
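For example, if your ratings are sitting in a list of (i, j, r) triples (the variable name “ratings” below is my own placeholder, not from any particular dataset loader), you could build both dictionaries, plus the global mean “mu”, like this:

from collections import defaultdict

ratings_by_i = defaultdict(list)  # user i  -> list of (j, r) pairs
ratings_by_j = defaultdict(list)  # movie j -> list of (i, r) pairs
for i, j, r in ratings:
  ratings_by_i[i].append((j, r))
  ratings_by_j[j].append((i, r))
mu = sum(r for _, _, r in ratings) / len(ratings)  # global mean rating

(Membership tests like “i in ratings_by_i” still behave correctly with a defaultdict, since “in” does not insert missing keys.)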

Then, your updates would be as follows:

for t in range(T):  # T = number of coordinate descent iterations

  # update B (user biases)
  for i in range(M):
    if i in ratings_by_i:
      accum = 0
      for j, r in ratings_by_i[i]:
        accum += (r - U[i,:].dot(V[:,j]) - C[j] - mu)
      B[i] = accum / (len(ratings_by_i[i]) + reg)

  # update U (user factors) via the closed-form solution, one row at a time
  for i in range(M):
    if i in ratings_by_i:
      matrix = np.zeros((K, K)) + reg*np.eye(K)
      vector = np.zeros(K)
      for j, r in ratings_by_i[i]:
        matrix += np.outer(V[:,j], V[:,j])
        vector += (r - B[i] - C[j] - mu)*V[:,j]
      U[i,:] = np.linalg.solve(matrix, vector)

  # update C (movie biases)
  for j in range(N):
    if j in ratings_by_j:
      accum = 0
      for i, r in ratings_by_j[j]:
        accum += (r - U[i,:].dot(V[:,j]) - B[i] - mu)
      C[j] = accum / (len(ratings_by_j[j]) + reg)

  # update V (movie factors)
  for j in range(N):
    if j in ratings_by_j:
      matrix = np.zeros((K, K)) + reg*np.eye(K)
      vector = np.zeros(K)
      for i, r in ratings_by_j[j]:
        matrix += np.outer(U[i,:], U[i,:])
        vector += (r - B[i] - C[j] - mu)*U[i,:]
      V[:,j] = np.linalg.solve(matrix, vector)

And that’s all there is to it!
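If you want to watch the coordinate descent guarantee in action, you can compute the squared error over the observed ratings after each iteration and confirm it goes down; here is a minimal sketch reusing the same variables (add the \( \lambda \) penalty terms if you want the exact objective \( J \)):

def squared_error():
  # sum of squared errors over the observed ratings only
  sse = 0
  for i, pairs in ratings_by_i.items():
    for j, r in pairs:
      prediction = U[i,:].dot(V[:,j]) + B[i] + C[j] + mu
      sse += (r - prediction)**2
  return sse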

For more free machine learning and data science tutorials, sign up for my newsletter.