# May 2018

### Grab these courses before these sales go away

I’m hard at work on my next course, so guess what that means? Everything’s on sale!

For the next 7 days, ALL courses on Udemy (not just mine) are available for just $9.99! This is the lowest price possible on Udemy, so make sure you grab these courses while you have the chance. For my courses, please use the coupons below (included in the links), or if you want, enter the coupon code: MAY2018. For prerequisite courses (math, stats, Python programming) and all other courses, follow the links at the bottom. Since ALL courses on Udemy are on sale, for any course not listed here, just click the general (site-wide) link and search for courses from that page.

https://www.udemy.com/deep-learning-gans-and-variational-autoencoders/?couponCode=MAY2018
https://www.udemy.com/deep-reinforcement-learning-in-python/?couponCode=MAY2018
https://www.udemy.com/artificial-intelligence-reinforcement-learning-in-python/?couponCode=MAY2018
https://www.udemy.com/data-science-linear-regression-in-python/?couponCode=MAY2018
https://www.udemy.com/data-science-logistic-regression-in-python/?couponCode=MAY2018
https://www.udemy.com/data-science-deep-learning-in-python/?couponCode=MAY2018
https://www.udemy.com/data-science-natural-language-processing-in-python/?couponCode=MAY2018
https://www.udemy.com/sql-for-marketers-data-analytics-data-science-big-data/?couponCode=MAY2018
https://www.udemy.com/deep-learning-convolutional-neural-networks-theano-tensorflow/?couponCode=MAY2018
https://www.udemy.com/cluster-analysis-unsupervised-machine-learning-python/?couponCode=MAY2018
https://www.udemy.com/unsupervised-deep-learning-in-python/?couponCode=MAY2018
https://www.udemy.com/unsupervised-machine-learning-hidden-markov-models-in-python/?couponCode=MAY2018
https://www.udemy.com/deep-learning-recurrent-neural-networks-in-python/?couponCode=MAY2018
https://www.udemy.com/natural-language-processing-with-deep-learning-in-python/?couponCode=MAY2018
https://www.udemy.com/data-science-supervised-machine-learning-in-python/?couponCode=MAY2018
https://www.udemy.com/bayesian-machine-learning-in-python-ab-testing/?couponCode=MAY2018
https://www.udemy.com/machine-learning-in-python-random-forest-adaboost/?couponCode=MAY2018

### PREREQUISITE COURSE COUPONS

And just as important, $10.99 coupons for some helpful prerequisite courses. You NEED to know this stuff to understand machine learning in-depth:

General (site-wide): http://bit.ly/2oCY14Z
Python http://bit.ly/2pbXxXz
Calc 1 http://bit.ly/2okPUib
Calc 2 http://bit.ly/2oXnhpX
Calc 3 http://bit.ly/2pVU0gQ
Linalg 1 http://bit.ly/2oBBir1
Linalg 2 http://bit.ly/2q5SGEE
Probability (option 1) http://bit.ly/2prFQ7o
Probability (option 2) http://bit.ly/2p8kcC0
Probability (option 3) http://bit.ly/2oXa2pb
Probability (option 4) http://bit.ly/2oXbZSK

### OTHER UDEMY COURSE COUPONS

As you know, I’m the “Lazy Programmer”, not just the “Lazy Data Scientist” – I love all kinds of programming!

iOS courses:
https://lazyprogrammer.me/ios

Android courses:
https://lazyprogrammer.me/android

Ruby on Rails courses:
https://lazyprogrammer.me/ruby-on-rails

Python courses:
https://lazyprogrammer.me/python

Big Data (Spark + Hadoop) courses:

Javascript, ReactJS, AngularJS courses:
https://lazyprogrammer.me/js

### EVEN MORE COOL STUFF

Into Yoga in your spare time? Photography? Painting? There are courses, and I’ve got coupons! If you find a course on Udemy that you’d like a coupon for, just let me know and I’ll hook you up!

# Linear Regression in the Wild – AV1: Next Generation Video Codec

April 10, 2018

A lot of students come up to me and ask about when they’re going to learn the latest and greatest new deep learning algorithm.

Sometimes, it’s easy to forget how applicable even the most basic of tools are.

As you know, I consider Linear Regression to be the best starting point for deep learning and machine learning in general.

And wouldn’t you know, here it is being used in the most advanced, state-of-the-art video codec we have today:

next generation video: Introducing AV1

Check it out!

This new state-of-the-art video codec is based on research done by multiple big companies, such as Google, Cisco, and Mozilla.

As you can see, the final equation is just a line ($$y = mx + b$$).

$$CfL(\alpha) = \alpha L^{AC} + DC$$
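To make the connection to linear regression concrete, here is a minimal sketch of fitting the slope of a model like CfL by least squares. The sample values and variable names are hypothetical, for illustration only; this is not code from the AV1 codebase:

```python
import numpy as np

# Hypothetical zero-mean luma AC values and corresponding chroma samples
luma_ac = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
chroma = 0.5 * luma_ac + 3.0

# Least-squares slope (alpha); since luma_ac is zero-mean,
# the DC term is just the average chroma value
alpha = luma_ac.dot(chroma) / luma_ac.dot(luma_ac)  # 0.5
dc = chroma.mean()                                   # 3.0
```

This is exactly the "fit a line y = mx + b" idea from the post, applied to predicting chroma from luma.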

# Deep Learning and Machine Learning Courses $10 Special!

May 15, 2017

This month, Udemy is having a special event called the “Udemy Learn Fest”, and you know I watch these things like a hawk so that when Udemy has their best deals I can bring the news to you as soon as they happen. As usual, I’m providing $10 coupons for all my courses in the links below. Please use these links and share them with your friends!

The $10 promo doesn’t come around often, so make sure you pick up everything you are interested in, or could become interested in later this year. The promo goes until May 25. Don’t wait!

At the end of this post, I’m going to provide you with some additional links to get machine learning prerequisites (calculus, linear algebra, Python, etc…) for $10 too!

If you don’t know what order to take the courses in, please check here: https://deeplearningcourses.com/course_order

Here are the links for my courses:

Deep Learning Prerequisites: Linear Regression in Python
https://www.udemy.com/data-science-linear-regression-in-python/?couponCode=MAY123

Deep Learning Prerequisites: Logistic Regression in Python
https://www.udemy.com/data-science-logistic-regression-in-python/?couponCode=MAY123

Deep Learning in Python
https://www.udemy.com/data-science-deep-learning-in-python/?couponCode=MAY123

Practical Deep Learning in Theano and TensorFlow
https://www.udemy.com/data-science-deep-learning-in-theano-tensorflow/?couponCode=MAY123

Deep Learning: Convolutional Neural Networks in Python
https://www.udemy.com/deep-learning-convolutional-neural-networks-theano-tensorflow/?couponCode=MAY123

Unsupervised Deep Learning in Python
https://www.udemy.com/unsupervised-deep-learning-in-python/?couponCode=MAY123

Deep Learning: Recurrent Neural Networks in Python
https://www.udemy.com/deep-learning-recurrent-neural-networks-in-python/?couponCode=MAY123

Advanced Natural Language Processing: Deep Learning in Python
https://www.udemy.com/natural-language-processing-with-deep-learning-in-python/?couponCode=MAY123

Advanced AI: Deep Reinforcement Learning in Python
https://www.udemy.com/deep-reinforcement-learning-in-python/?couponCode=MAY123

Easy Natural Language Processing in Python
https://www.udemy.com/data-science-natural-language-processing-in-python/?couponCode=MAY123

Cluster Analysis and Unsupervised Machine Learning in Python
https://www.udemy.com/cluster-analysis-unsupervised-machine-learning-python/?couponCode=MAY123

Unsupervised Machine Learning: Hidden Markov Models in Python
https://www.udemy.com/unsupervised-machine-learning-hidden-markov-models-in-python/?couponCode=MAY123

Data Science: Supervised Machine Learning in Python
https://www.udemy.com/data-science-supervised-machine-learning-in-python/?couponCode=MAY123

Bayesian Machine Learning in Python: A/B Testing
https://www.udemy.com/bayesian-machine-learning-in-python-ab-testing/?couponCode=MAY123

Ensemble Machine Learning in Python: Random Forest and AdaBoost

Artificial Intelligence: Reinforcement Learning in Python
https://www.udemy.com/artificial-intelligence-reinforcement-learning-in-python/?couponCode=MAY123

SQL for Newbs and Marketers
https://www.udemy.com/sql-for-marketers-data-analytics-data-science-big-data/?couponCode=MAY123

And last but not least, $10 coupons for some helpful prerequisite courses. You NEED to know this stuff before you study machine learning:

General (site-wide): http://bit.ly/2oCY14Z
Python http://bit.ly/2pbXxXz
Calc 1 http://bit.ly/2okPUib
Calc 2 http://bit.ly/2oXnhpX
Calc 3 http://bit.ly/2pVU0gQ
Linalg 1 http://bit.ly/2oBBir1
Linalg 2 http://bit.ly/2q5SGEE
Probability (option 1) http://bit.ly/2prFQ7o
Probability (option 2) http://bit.ly/2p8kcC0
Probability (option 3) http://bit.ly/2oXa2pb
Probability (option 4) http://bit.ly/2oXbZSK

Remember, these links will self-destruct on May 25 (10 days). Act NOW!

# How to get ANY course on Udemy for $10 for the next week

August 25, 2016

For some reason Udemy announced a promotion but when you go to the site it doesn’t appear.

Just use this link to get ANY course on Udemy for $10: http://bit.ly/2byIkWW

# Tutorial on Collaborative Filtering and Matrix Factorization in Python

April 25, 2016

This article will be of interest to you if you want to learn about recommender systems and predicting movie ratings (or book ratings, or product ratings, or any other kind of rating). Contests like the $1 million Netflix Challenge are an example of what collaborative filtering can be used for.

## Problem Setup

Let’s use the “users rating movies” example for this tutorial. After some Internet searching, we can determine that there are approximately 500,000 movies in existence. Let’s also suppose that your very popular movie website has 1 billion users (Facebook has 1.6 billion users as of 2015, so this number is plausible).

How many possible user-movie ratings can you have? That is $$10^9 \times 5 \times 10^5 = 5 \times 10^{14}$$. That’s a lot of ratings! Way too much to fit into your RAM, in fact.

But that’s just one problem.

How many movies have you seen in your life? Of those movies, what percentage of them have you rated? The number is minuscule. In fact, most users have not rated most movies.

This is why recommender systems exist in the first place – so we can recommend you movies that you haven’t seen yet, that we know you’ll like.

So if you were to create a user-movie matrix of movie ratings, most of it would just have missing values.

However, that’s not to say there isn’t a pattern to be found.

Suppose we look at a subset of movie ratings, and we find the following:

| | Batman | Batman Returns | Batman Begins | The Dark Knight | Batman v. Superman |
|---|---|---|---|---|---|
| Guy A | N/A | 4 | 5 | 5 | 2 |
| Guy B | 4 | N/A | 5 | 5 | 1 |

Where we’ve used N/A to show that a movie has not yet been rated by a user.

If we used the “cosine distance” ( $$\frac{u^T v}{ |u||v| }$$ ) on the vectors created by looking at only the common movies, we could see that Guy A and Guy B have similar tastes. We could then surmise, based on this closeness, that Guy A might rate the Batman movie a “4”, and Guy B might rate Batman Returns a “4”. And since this is a pretty high rating, we might want to recommend these movies to these users.
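As a quick sketch, here is that cosine calculation on the commonly-rated movies from the table above (Batman Begins, The Dark Knight, Batman v. Superman):

```python
import numpy as np

# Ratings on only the movies both users have rated
guy_a = np.array([5, 5, 2])
guy_b = np.array([5, 5, 1])

# Cosine similarity: u^T v / (|u| |v|)
cos_sim = guy_a.dot(guy_b) / (np.linalg.norm(guy_a) * np.linalg.norm(guy_b))
# ≈ 0.99, so these two users have very similar tastes
```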

This is the idea behind collaborative filtering.

## Enter Matrix Factorization

Matrix factorization solves the above problems by reducing the number of free parameters (so the total number of parameters is much smaller than #users times #movies), and by fitting these parameters to the data (ratings) that do exist.

What is matrix factorization?

Think of factorization in general:

15 = 3 x 5 (15 is made up of the factors 3 and 5)

$$x^2 + x = x(x + 1)$$

We can do the same thing with matrices:

$$\left( \begin{matrix}3 & 4 & 5 \\ 6 & 8 & 10 \end{matrix} \right) = \left( \begin{matrix}1 \\ 2 \end{matrix} \right) \left( \begin{matrix}3 & 4 & 5 \end{matrix} \right)$$

In fact, this is exactly what we do in matrix factorization. We “pretend” the big ratings matrix (the one that can’t fit into our RAM) is actually made up of 2 smaller matrices multiplied together.
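You can verify the small factorization above numerically:

```python
import numpy as np

# The rank-1 factorization from the example above
U = np.array([[1], [2]])       # shape (2, 1)
V = np.array([[3, 4, 5]])      # shape (1, 3)
A = U @ V                      # shape (2, 3): [[3, 4, 5], [6, 8, 10]]
```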

Remember that to do a valid matrix multiply, the inner dimensions must match. What is the size of this dimension? We call it “K”. It is unknown, but we can choose it (possibly via cross-validation) so that our model generalizes well.

If we have $$M$$ users and $$N$$ movies, then the total number of parameters in our model is $$MK + NK$$. If we set $$K = 10$$, the total number of parameters we’d have for the user-movie problem would be $$10^{10} + 5 \times 10^6$$, which is still approximately $$10^{10}$$, a factor of about $$5 \times 10^4$$ smaller than before.
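A quick back-of-the-envelope check of those numbers:

```python
M = 10**9       # users
N = 5 * 10**5   # movies
K = 10          # latent dimension

full = M * N           # one parameter per possible rating: 5 * 10**14
factored = M*K + N*K   # parameters in the factorized model: ~10**10
ratio = full / factored  # roughly 5 * 10**4 times fewer parameters
```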

This is a big improvement!

So now we have:

$$A \simeq \hat{ A } = UV$$

If you were to picture the matrices themselves, they would look like this:

Because I am lazy and took this image from elsewhere on the Internet, the “d” here is what I am calling “K”. And their “R” is my “A”.

You know that with any machine learning algorithm we have 2 procedures – the fitting procedure and the prediction procedure.

For the fitting procedure, we want every known $$A_{ij}$$ to be as close to $$\hat{A}_{ij} = u_i^Tv_j$$ as possible. $$u_i$$ is the ith row of $$U$$. $$v_j$$ is the jth column of $$V$$.

For the prediction procedure, we won’t have an $$A_{ij}$$, but we can use $$\hat{A}_{ij} = u_i^Tv_j$$ to tell us what user i might rate movie j given the existing patterns.

## The Cost Function

A natural cost function for this problem is the squared error. Think of it as a regression. This is just:

$$J = \sum_{(i, j) \in \Omega} (A_{ij} - \hat{A}_{ij})^2$$

Where $$\Omega$$ is the set of all pairs $$(i, j)$$ where user i has rated movie j.

Later, we will use $$\Omega_i$$ to be the set of all j’s (movies) that user i has rated, and we will use $$\Omega_j$$ to be the set of all i’s (users) that have rated movie j.

## Coordinate Descent

What do you do when you want to minimize a function? Take the derivative and set it to 0, of course. No need to use anything more complicated if the simple approach is solvable and performs well. It is also possible to use gradient descent on this problem by taking the derivative and then taking small steps in that direction.

You will notice that there are 2 derivatives to take here. The first is $$\partial{J} / \partial{u}$$.

The other is $$\partial{J} / \partial{v}$$. After calculating the derivatives and solving for $$u$$ and $$v$$, you get:

$$u_i = ( \sum_{j \in \Omega_i} v_j v_j^T )^{-1} \sum_{j \in \Omega_i} A_{ij} v_j$$

$$v_j = ( \sum_{i \in \Omega_j} u_i u_i^T )^{-1} \sum_{i \in \Omega_j} A_{ij} u_i$$

So you take both derivatives. You set both to 0. You solve for the optimal u and v. Now what?

You first update $$u$$ using the current setting of $$v$$, then you update $$v$$ using the current setting of $$u$$. The order doesn’t matter, just that you alternate between the two.

There is a mathematical guarantee that J will not increase on each iteration.

This technique is also known as alternating least squares. (This makes sense because we’re minimizing the squared error and updating $$u$$ and $$v$$ in an alternating fashion.)

## Bias Parameters

As with other methods like linear regression and logistic regression, we can add bias parameters to our model to improve accuracy. In this case our model becomes:

$$\hat{A}_{ij} = u_i^T v_j + b_i + c_j + \mu$$

Where $$\mu$$ is the global mean (average of all known ratings).

You can interpret $$b_i$$ as the bias of a user. A negative bias means this user just hates movies more than the average person. A positive bias would mean the opposite. Similarly, $$c_j$$ is the bias of a movie. A positive bias would mean, “Wow, this movie is good, regardless of who is watching it!” A negative bias would be a movie like Avatar: The Last Airbender.

We can re-calculate the optimal settings for each parameter (again by taking the derivatives and setting them to 0) to get:

$$u_i = ( \sum_{j \in \Omega_i} v_j v_j^T )^{-1} \sum_{j \in \Omega_i} (A_{ij} - b_i - c_j - \mu )v_j$$

$$v_j = ( \sum_{i \in \Omega_j} u_i u_i^T )^{-1} \sum_{i \in \Omega_j}(A_{ij} - b_i - c_j - \mu )u_i$$

$$b_i = \frac{1}{| \Omega_i |}\sum_{j \in \Omega_i} A_{ij} - u_i^Tv_j - c_j - \mu$$

$$c_j= \frac{1}{| \Omega_j |}\sum_{i \in \Omega_j} A_{ij} - u_i^Tv_j - b_i - \mu$$

## Regularization

With the above model, you may encounter what is called the “singular covariance” problem. This is what happens when you can’t invert the matrix that appears in the updates for $$u$$ and $$v$$.

The solution is, again, similar to what you would do in linear regression or logistic regression: add a squared-magnitude penalty term with a weight $$\lambda$$ that keeps the parameters small.

In terms of the likelihood, the previous formulation assumes that the difference between $$A_{ij}$$ and $$\hat{A}_{ij}$$ is normally distributed, while the cost function with regularization is like adding a normally-distributed prior on each parameter centered at 0.

i.e. $$u_i, v_j, b_i, c_j \sim N(0, 1/\lambda)$$.

So the cost function becomes:

$$J = \sum_{(i, j) \in \Omega} (A_{ij} - \hat{A}_{ij})^2 + \lambda(||U||_F^2 + ||V||_F^2 + ||b||^2 + ||c||^2)$$

Where $$||X||_F$$ is the Frobenius norm of $$X$$.

For each parameter, taking the derivative with respect to that parameter, setting it to 0, and solving for the optimal value yields:

$$u_i = ( \sum_{j \in \Omega_i} v_j v_j^T + \lambda{I})^{-1} \sum_{j \in \Omega_i} (A_{ij} - b_i - c_j - \mu )v_j$$

$$v_j = ( \sum_{i \in \Omega_j} u_i u_i^T + \lambda{I})^{-1} \sum_{i \in \Omega_j}(A_{ij} - b_i - c_j - \mu )u_i$$

$$b_i = \frac{1}{1 + \lambda} \frac{1}{| \Omega_i |}\sum_{j \in \Omega_i} A_{ij} - u_i^Tv_j - c_j - \mu$$

$$c_j= \frac{1}{1 + \lambda} \frac{1}{| \Omega_j |}\sum_{i \in \Omega_j} A_{ij} - u_i^Tv_j - b_i - \mu$$

## Python Code

The simplest way to implement the above formulas would be to just code them directly.

import numpy as np

# M = number of users, N = number of movies, K = latent dimension
U = np.random.randn(M, K) / K
V = np.random.randn(K, N) / K
B = np.zeros(M)
C = np.zeros(N)


Next, you want $$\Omega_i$$ and $$\Omega_j$$ to be easily accessible, so create dictionaries “ratings_by_i” where “i” is the key, and the value is an array of all the (j, r) pairs that user i has rated (r is the rating). Do the same for “ratings_by_j”.
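One way to build those dictionaries (the rating triples here are tiny hypothetical sample data):

```python
from collections import defaultdict

# Hypothetical (user i, movie j, rating r) triples
ratings = [(0, 1, 4.0), (0, 2, 5.0), (1, 2, 3.0)]

ratings_by_i = defaultdict(list)  # user -> list of (movie, rating)
ratings_by_j = defaultdict(list)  # movie -> list of (user, rating)
for i, j, r in ratings:
    ratings_by_i[i].append((j, r))
    ratings_by_j[j].append((i, r))
```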

for t in range(T):

    # update B
    for i in range(M):
        if i in ratings_by_i:
            accum = 0
            for j, r in ratings_by_i[i]:
                accum += (r - U[i,:].dot(V[:,j]) - C[j] - mu)
            B[i] = accum / (1 + reg) / len(ratings_by_i[i])

    # update U
    for i in range(M):
        if i in ratings_by_i:
            matrix = np.zeros((K, K)) + reg*np.eye(K)
            vector = np.zeros(K)
            for j, r in ratings_by_i[i]:
                matrix += np.outer(V[:,j], V[:,j])
                vector += (r - B[i] - C[j] - mu)*V[:,j]
            U[i,:] = np.linalg.solve(matrix, vector)

    # update C
    for j in range(N):
        if j in ratings_by_j:
            accum = 0
            for i, r in ratings_by_j[j]:
                accum += (r - U[i,:].dot(V[:,j]) - B[i] - mu)
            C[j] = accum / (1 + reg) / len(ratings_by_j[j])

    # update V
    for j in range(N):
        if j in ratings_by_j:
            matrix = np.zeros((K, K)) + reg*np.eye(K)
            vector = np.zeros(K)
            for i, r in ratings_by_j[j]:
                matrix += np.outer(U[i,:], U[i,:])
                vector += (r - B[i] - C[j] - mu)*U[i,:]
            V[:,j] = np.linalg.solve(matrix, vector)


And that’s all there is to it!

# New Deep Learning Course! Convolutional Neural Networks

April 2, 2016

I was aiming to get this course out before the end of March, and it is now April. So you know I put in some extra work to make it as awesome as possible.

Course summary (scroll down for coupons):

This is the 3rd part in my Data Science and Machine Learning series on Deep Learning in Python. At this point, you already know a lot about neural networks and deep learning, including not just the basics like backpropagation, but how to improve it using modern techniques like momentum and adaptive learning rates. You’ve already written deep neural networks in Theano and TensorFlow, and you know how to run code using the GPU.

This course is all about how to use deep learning for computer vision using convolutional neural networks. These are the state of the art when it comes to image classification and they beat vanilla deep networks at tasks like MNIST.

In this course we are going to up the ante and look at the StreetView House Number (SVHN) dataset – which uses larger color images at various angles – so things are going to get tougher both computationally and in terms of the difficulty of the classification task. But we will show that convolutional neural networks, or CNNs, are capable of handling the challenge!

Because convolution is such a central part of this type of neural network, we are going to go in-depth on this topic. It has more applications than you might imagine, such as modeling artificial organs like the pancreas and the heart. I’m going to show you how to build convolutional filters that can be applied to audio, like the echo effect, and I’m going to show you how to build filters for image effects, like the Gaussian blur and edge detection.
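As a tiny illustration of the echo idea (my own toy example, not code from the course): convolving a signal with an impulse response that contains a delayed, attenuated spike produces an echo of the original.

```python
import numpy as np

# A short mono "signal" (a single spike) and an impulse response
# containing a delayed, attenuated copy of the input
signal = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
impulse_response = np.array([1.0, 0.0, 0.0, 0.5])  # echo after 3 samples, half volume

echoed = np.convolve(signal, impulse_response)
# echoed begins [1.0, 0.0, 0.0, 0.5, ...]: the original spike plus its echo
```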

We will also do some biology and talk about how convolutional neural networks have been inspired by the animal visual cortex.

After describing the architecture of a convolutional neural network, we will jump straight into code, and I will show you how to extend the deep neural networks we built last time (in part 2) with just a few new functions to turn them into CNNs. We will then test their performance and show how convolutional neural networks written in both Theano and TensorFlow can outperform the accuracy of a plain neural network on the StreetView House Number dataset.

All the materials for this course are FREE. You can download and install Python, Numpy, Scipy, Theano, and TensorFlow with simple commands shown in previous courses.

Coupons:

If other people beat you to that one:

And if you were ultra slow:

# 2 FREE books this week: Deep Learning Prerequisites and SQL for Newbs and Marketers

March 30, 2016

Quick note to announce 2 free books out this week (3/28 – 4/1):

Do you find deep learning difficult?

So you want to learn about deep learning and neural networks, but you don’t have a clue what machine learning even is. This book is for you.

Perhaps you’ve already tried to read some tutorials about deep learning, and were just left scratching your head because you did not understand any of it. This book is for you.

This book was designed to contain all the prerequisite information you need for my next book, Deep Learning in Python: Master Data Science and Machine Learning with Modern Neural Networks written in Python, Theano, and TensorFlow.

There are many techniques that you should be comfortable with before diving into deep learning. For example, the “backpropagation” algorithm is just gradient descent, which is the same technique that is used to solve logistic regression.
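For instance, here is a minimal logistic regression fit with plain gradient descent on toy, linearly separable data (my own sketch, not from the book). Backpropagation applies this same update idea, layer by layer:

```python
import numpy as np

np.random.seed(0)
X = np.random.randn(200, 2)
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # labels from a known linear rule

w = np.zeros(2)
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X.dot(w)))   # sigmoid predictions
    w -= lr * X.T.dot(p - y) / len(y)     # gradient descent on cross-entropy

accuracy = ((X.dot(w) > 0) == (y == 1)).mean()
```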

Do you want to know how to optimize your sales funnel using SQL, look at the seasonal trends in your industry, and run a SQL query on Hadoop? Then join me now in my new book, SQL for marketers: Dominate data analytics, data science, and big data.

# Principal Components Analysis in Theano

February 21, 2016

This is a follow-up post to my original PCA tutorial. It is of interest to you if you:

• Are interested in deep learning (this tutorial uses gradient descent)
• Are interested in learning more about Theano (it is not like regular Python, and it is very popular for implementing deep learning algorithms)
• Want to know how you can write your own PCA solver (in the previous post we used a library to get eigenvalues and eigenvectors)
• Work with big data (this technique can be used to process data where the dimensionality is very large – where the covariance matrix wouldn’t even fit into memory)

First, you should be familiar with creating variables and functions in Theano. Here is a simple example of how you would do matrix multiplication:

import numpy as np
import theano
import theano.tensor as T

X = T.matrix('X')
Q = T.matrix('Q')
Z = T.dot(X, Q)

transform = theano.function(inputs=[X,Q], outputs=Z)

X_val = np.random.randn(100,10)
Q_val = np.random.randn(10,10)

Z_val = transform(X_val, Q_val)


I think of Theano variables as “containers” for real numbers. They actually represent nodes in a graph. You will see the term “graph” a lot when you read about Theano, and you may think to yourself: what does matrix multiplication or machine learning have to do with graphs? (Not graphs as in plots; graphs as in nodes and edges.)

You can think of any “equation” or “formula” as a graph: draw the variables and functions as nodes, then connect them with lines/edges to form the equation. It’s just like drawing a “system” in control systems, or a visual representation of a neural network (which is also a graph).

If you have ever done linear programming or integer programming in PuLP, you are probably familiar with the idea of creating “variable” objects and passing them into a “solver” after creating some “expressions” that represent the constraints and objective of the linear / integer program.

Anyway, onto principal components analysis.

Let’s consider how you would find the leading eigenvalue and eigenvector (the one corresponding to the largest eigenvalue) of a square matrix.

The loss function / objective for PCA is:

$$J = \sum_{n=1}^{N} |x_n - \hat{x}_n|^2$$

Where $$\hat{X}$$ is the reconstruction of $$X$$. If there is only one eigenvector, let’s call this $$v$$, then this becomes:

$$J = \sum_{n=1}^{N} |x_n - x_nvv^T|^2$$

This is just the squared Frobenius norm, so we can write:

$$J = |X - Xvv^T|_F^2$$

One identity of the Frobenius norm is:

$$|A|_F = \sqrt{ \sum_{i} \sum_{j} a_{ij}^2 } = \sqrt{ Tr(A^T A ) }$$

Which means we can rewrite the loss function as:

$$J = Tr( (X - Xvv^T)^T(X - Xvv^T) )$$

Keeping in mind that with the trace function you can re-order matrix multiplications that you wouldn’t normally be able to (matrix multiplication isn’t commutative), and dropping any terms that don’t depend on $$v$$, you can use matrix algebra to rearrange this to get:

$$v^* = argmin\{-Tr(X^TXvv^T) \}$$

Which again using reordering would be equivalent to maximizing:

$$v^* = argmax\{ v^TX^TXv \}$$

The corresponding eigenvalue would then be:

$$\lambda = v^TX^TXv$$

Now that we have a function to maximize, we can simply use gradient descent to do it, similar to how you would do it in logistic regression or in a deep belief network.

$$v \leftarrow v + \eta \nabla_v(v^TX^TXv)$$

Next, let’s extend this algorithm for finding the other eigenvalues and eigenvectors. You essentially subtract the contributions of the eigenvalues you already found.

$$v_i \leftarrow v_i + \eta \nabla_{v_i}(v_i^T( X^TX - \sum_{j=1}^{i-1} \lambda_j v_j v_j^T )v_i )$$

Next, note that to implement this algorithm you never need to actually calculate the covariance $$X^T X$$. If your dimensionality is, say, 1 million, then your covariance matrix will have 1 trillion entries!

Instead, you can multiply by your eigenvector first to get $$Xv$$, which is only of size $$N \times 1$$. You can then “dot” this with itself to get a scalar, which is only an $$O(N)$$ operation.
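A quick numpy check of that claim, with arbitrary (small) sizes:

```python
import numpy as np

np.random.seed(0)
N, D = 500, 1000
X = np.random.randn(N, D)
v = np.random.randn(D)
v /= np.linalg.norm(v)

# Naive: form the D x D covariance first (O(D^2) memory)
naive = v @ (X.T @ X) @ v

# Memory-friendly: compute Xv first (size N), then dot it with itself
Xv = X @ v
cheap = Xv @ Xv  # same quantity, never materializes X^T X
```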

So how do you write this code in Theano? If you’ve never used Theano for gradient descent there will be some new concepts here.

First, you don’t actually need to know how to differentiate your cost function. You use Theano’s T.grad(cost_function, differentiation_variable).

v = theano.shared(init_v, name="v")
Xv = T.dot(X, v)
cost = T.dot(Xv.T, Xv) - sum(evals[j]*T.dot(evecs[j], v)*T.dot(evecs[j], v) for j in range(i))
gv = T.grad(cost, v)



Note that we re-normalize the eigenvector on each step, so that $$v^T v = 1$$.

Next, you define your “weight update rule” as an expression, and pass this into the “updates” argument of Theano’s function creator.

y = v + learning_rate*gv
update_expression = y / y.norm(2)

train = theano.function(
    inputs=[X],
    outputs=[cost],
    updates=[(v, update_expression)],
)


Note that the update variable must be a “shared variable”. With this knowledge in hand, you are ready to implement the gradient descent version of PCA in Theano:

for i in range(number_of_eigenvalues_to_find):
    # ... initialize variables and expressions ...
    # ... initialize theano train function ...
    while t < max_iterations and change_in_v > tol:
        outputs = train(data)

# ... return eigenvalues and eigenvectors ...


This is not really trivial but at the same time it's a great exercise in both (a) linear algebra and (b) Theano coding.

If you are interested in learning more about PCA, dimensionality reduction, gradient descent, deep learning, or Theano, then check out my course on Udemy "Data Science: Deep Learning in Python" and let me know what you think in the comments.

#aws #data science #deep learning #gpu #machine learning #nvidia #pca #principal components analysis #statistics #theano

# The Simpsons: Tapped Out and Prisoner’s Dilemma

December 12, 2015

I have to admit that, despite all this science-y stuff I do, I still have my vices.

You know when you’re just sitting there waiting for the train to take you home, and you wish you had a game to play on your iPhone? Then you go home and download that game, fully intending to only play it when you’re bored or waiting? Then you end up playing that game instead of doing the important things you should have been doing?

That’s The Simpsons: Tapped Out for me (along with new contender Marvel Contest of Champions, but that’s another story).

Every holiday, and even sometimes when it’s not a holiday, The Simpsons has a special holiday update. This year’s Christmas special lets you play a classic game originally known as the Prisoner’s Dilemma.

It is the canonical example of a “game” in Game Theory. John Nash, the subject of the movie A Beautiful Mind, made fundamental contributions to the theory. It has applications in military strategy, politics, and even sports.

## So how does it work?

In Tapped Out, every player has their own town with their own characters and houses, etc. You can visit other towns to get points.

In the Christmas update, when you visit a friend’s town, you drop a present off, and you are asked if you want to make the present “nice” (Lisa), or “naughty” (Bart). Naturally.

When you receive a present, you also have to choose nice/Lisa, or naughty/Bart.

So there are 4 possibilities:

• You choose Bart, other player chooses Lisa
• You choose Bart, other player chooses Bart
• You choose Lisa, other player chooses Lisa
• You choose Lisa, other player chooses Bart

This is a picture of the game that explains how it is played.

So as you can see, the point system is as follows:

• You choose Bart, other player chooses Lisa – You get 25, other player gets 1
• You choose Bart, other player chooses Bart – You get 5, other player gets 5
• You choose Lisa, other player chooses Lisa – You get 15, other player gets 15
• You choose Lisa, other player chooses Bart – You get 1, other player gets 25

In the original prisoner’s dilemma, the problem is posed as follows:

2 prisoners are given a choice of whether to cooperate with or betray the other. If they both cooperate, they both serve 1 year. If they both betray each other, they both serve 2 years. But if one betrays and one cooperates, then the one who betrays goes free, while the one who cooperated gets 3 years.

The problem can be summarized in a table like this:

| | Other player cooperates | Other player betrays |
|---|---|---|
| You cooperate | R, R | S, T |
| You betray | T, S | P, P |

Where P, R, S, and T are called “payoffs” (your payoff is listed first). To be a prisoner’s dilemma game, we must have T > R > P > S, which is true for The Simpsons: Tapped Out version also.

Once you understand the problem, the question becomes – how do I make a choice that will maximize my winnings?

## How do I maximize my winnings in prisoner’s dilemma?

The Prisoner’s Dilemma is a dilemma because it seems obvious that to maximize both our winnings (altruism), we should cooperate.

But if I know you’re rational and will cooperate with me, and I am selfish, then I will defect, so I win more and you get nothing.

The problem is, if we both think that way, then we will both defect and we will both end up with less.

I would also be hesitant to cooperate because if you are selfish, then you will defect and I will end up with nothing.

The only “rational” solution is to be selfish and betray. Why? Because no matter what the other player does, betraying yields a better outcome than cooperating would have.

Regardless of what you do – if I choose to betray, it is better than if I had chosen to cooperate.

If you choose cooperate: betraying gets me 25 points, while cooperating gets me only 15.

If you choose betray: betraying gets me 5 points, while cooperating gets me only 1.

So no matter what you choose, my choice of betray is better than my choice of cooperate.
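You can verify that dominance directly from the point system above:

```python
# My points in the Tapped Out present exchange, keyed by (my move, their move)
points = {
    ('betray', 'cooperate'): 25,
    ('betray', 'betray'): 5,
    ('cooperate', 'cooperate'): 15,
    ('cooperate', 'betray'): 1,
}

# Whatever the other player does, betraying scores more than cooperating
dominant = all(points[('betray', other)] > points[('cooperate', other)]
               for other in ('cooperate', 'betray'))
```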

Of course, since I can drop off multiple presents to my neighbors’ towns, there isn’t only one “game” to play. In fact, multiple or iterated games make the prisoner’s dilemma even more interesting.

## Repeated Games

There is a bigger social element to repeated games.

If you betray me, then now I know you are a betrayer, so I will not cooperate with you.

If you cooperate with me, then perhaps I can trust you, and we can cooperate with each other until the end of the holiday special. That will result in more points for both of us.

But not as many points for me as I’d get if I continually betrayed you while you kept cooperating.

Then again, if I kept betraying you, you would stop cooperating with me.

You see how this gets complex?

## What is the best strategy?

Research by Robert Axelrod, described in his 1984 book The Evolution of Cooperation, helps shed light on this issue.

He studied competing strategies in a tournament and discovered that purely selfish strategies performed very poorly in the long run, and that more altruistic strategies performed better, even when measured from a selfish perspective (how many points did I win?).

In terms of natural selection, this may explain why we have evolved over time to be both altruistic and self-interested.

The winning strategy was called “tit for tat”, which in short means you just play the move your opponent previously played. The tit for tat strategy can be improved upon slightly by adding a small random chance of forgiving a betrayal and cooperating anyway.
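Here is a minimal sketch of tit for tat with forgiveness, plus a repeated-game loop, using the gift-game payoffs from the post. The function names, the 10% default forgiveness probability, and the `always_betray` opponent are my own choices for illustration:

```python
import random

# Payoff matrix from the gift game: payoffs[(my_move, their_move)] = my points.
payoffs = {
    ("betray", "betray"): 5,
    ("cooperate", "cooperate"): 15,
    ("cooperate", "betray"): 1,
    ("betray", "cooperate"): 25,
}

def tit_for_tat(my_history, their_history, forgiveness=0.1):
    """Cooperate first, then mirror the opponent's last move,
    occasionally forgiving a betrayal."""
    if not their_history:
        return "cooperate"                # be nice: start by cooperating
    if their_history[-1] == "betray" and random.random() < forgiveness:
        return "cooperate"                # forgive: break betrayal cycles
    return their_history[-1]              # have a spine: mirror their move

def always_betray(my_history, their_history):
    """A purely selfish strategy, for comparison."""
    return "betray"

def play_match(strategy_a, strategy_b, rounds=20):
    """Play repeated rounds and return (score_a, score_b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
        score_a += payoffs[(move_a, move_b)]
        score_b += payoffs[(move_b, move_a)]
    return score_a, score_b
```

Two tit-for-tat players cooperate every round and each collect 15 points per round, while a pure betrayer wins only the first round against tit for tat and then gets locked into the 5-point mutual-betrayal payoff.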

Interestingly, Axelrod’s analysis of the top strategies has a very “human” component. The conditions for a successful strategy can be summarized as:

• Be nice. Cooperate unless your opponent betrays you.
• Have a spine. If your opponent betrays you, don’t keep cooperating with them.
• Forgive. Don’t fall into a cycle of repeated betrayals.
• Don’t be jealous. Don’t strive to get more points than the other player and don’t be outcome dependent.

## Limited number of games

In The Simpsons: Tapped Out, you don’t have an infinite number of games to learn from or an infinite number of gifts to give. You are limited to 5 per day. Thus your ultimate strategy will probably be adaptive as well – choose only the 5 players you want to mutually benefit from your gift giving and choose the ones who have a high probability of cooperating.

## Conclusion

Let’s be real though. I don’t think the people who are playing this game are really considering this stuff. For the first few hours I just chose Bart, because that was the naughty one.

So yeah.

#a beautiful mind #game theory #john nash #prisoner's dilemma #tapped out #the simpsons #tsto

# What the hell is a “z-score”?

December 4, 2015

I was having a conversation recently about what a z-score is.

What is a z-score? Why is it mentioned so often in statistics? You’ve probably heard of it in the context of the Student t-test.

If you are a machine learning guy more than a statistics guy, like me, you’ve probably heard you should “standardize” or “normalize” your variables before putting them into a machine learning model.

For example, if you’re doing linear regression and $$X_1$$ varies from $$0..1$$ and $$X_2$$ varies from $$0..1000$$, the weights $$\beta_1$$ and $$\beta_2$$ will give an inaccurate picture as to their importance.
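This effect is easy to demonstrate. In the sketch below (the data is synthetic, invented purely for this example), both features have the same real impact on $$y$$, but the raw regression weights differ by a factor of ~1000 until the features are standardized:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x1 = rng.uniform(0, 1, n)      # X1 varies over [0, 1]
x2 = rng.uniform(0, 1000, n)   # X2 varies over [0, 1000]

# Both features matter equally: y changes by about 2 over the
# full range of each feature.
y = 2.0 * x1 + 0.002 * x2 + rng.normal(0, 0.1, n)

# Ordinary least squares on the raw features (column of ones = intercept).
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[1] is ~1000x larger than beta[2], despite equal importance

# Standardize (z-score) each feature, then refit.
z1 = (x1 - x1.mean()) / x1.std()
z2 = (x2 - x2.mean()) / x2.std()
Xz = np.column_stack([np.ones(n), z1, z2])
beta_z, *_ = np.linalg.lstsq(Xz, y, rcond=None)
# beta_z[1] and beta_z[2] are now comparable in magnitude
```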

Well, the z-score actually is the standardized variable.

I would avoid this terminology if possible, because in machine learning we usually think of a “score” as the output of a model, i.e. I’m getting the “score” of a set of links because I want to know in what order to rank them.

$$z = \frac{x - \mu}{\sigma}$$

So instead of thinking of “z-scores”, think of “I need to standardize my variables before using them in my machine learning model so that the effect of any one variable is on the same scale as the others”.
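In code, standardizing a variable is one line. Here is a quick sketch with numpy (the data values are made up for illustration); the key property is that the result always has mean 0 and standard deviation 1:

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])

# z-score: subtract the mean, divide by the standard deviation
z = (x - x.mean()) / x.std()

# The standardized variable has mean 0 and standard deviation 1,
# so its effect in a model is on the same scale as other
# standardized variables.
```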

Do you find statistics terminology confusing? “Z score”? “F statistic”? Share your thoughts in the comments!

#frequentist statistics #machine learning #statistics #z score