Lazy Programmer

Your source for the latest in deep learning, big data, data science, and artificial intelligence.

NEW Deep Learning Course: GANs and Variational Autoencoders

August 1, 2017


You asked for it, and it’s here!

I am pleased to announce my latest course:

Deep Learning: Variational Autoencoders and GANs

If you don’t want to read all my crazy writing and you just want to grab your coupon, click here:

GANs have been called one of the most interesting developments in deep learning in 2016.

This is coming from Yann LeCun – one of the grandmasters of deep learning.

Most of you already know why GANs are cool – that’s why you’ve been asking me for this course.

But just in case you don’t:

GANs are notable for being able to produce extremely high-quality, high-resolution, sharp samples.

We’ve had neural networks (and non-deep learning ML algorithms) that can generate samples for decades… but none come close to the quality of images generated by GANs.

This is some Jason Bourne-level stuff… you know how they “enhance” a tiny / blurry image from some government spy camera?

Guess what can actually do that? GANs.

Fun fact: a group of Harvard PhDs JUST did an AMA on Reddit this week. Check out what one had to say about Unsupervised Deep Learning:


There are a ton more cool applications that we'll discuss in the course, like reinforcement learning. This stuff is basically the latest-and-greatest in deep learning.

Why variational autoencoders and not just GANs? Both of these neural networks fall into the category of “deep neural network samplers” – they both attempt to learn the structure of data in some way, which you can then use to generate new data that mimics what was learned.

I think variational autoencoders are super cool because they combine 2 of my favorite subjects: deep learning and Bayesian machine learning.

They were also invented at approximately the same time and are always mentioned in the context of one another, so in some sense they belong in the same “family” of algorithms.

Another cool thing about this course: a surprising lack of prerequisites! Technically, this will be Deep Learning part 8 and Unsupervised Deep Learning part 2. But, you won’t need to know anything from Deep Learning part 5, 6, or 7, nor Unsupervised Deep Learning part 1 (which is also Deep Learning part 4). You will need to know how to build convolutional neural networks and have a working understanding of Bayes classifiers, but that’s pretty much it! Not that learning how to build convolutional neural networks was an easy place to get to, but now that you’re there, you can breathe easy.

Now, this course is going to be available to all students on Udemy, but if you’re receiving this email, that means you’ve already taken a course of mine, which is why I am offering you something exclusive: The VIP version of this course.

It will ONLY be available to students who use this link, which applies a special coupon so that I can verify you used it:

What’s in it?

Here’s what you get if you sign up for the VIP version of the course:

Because I am such a geek, I decided to use LaTeX to create short, concise tutorials for both the GAN and variational autoencoder.

These will be great to help you review the material and ingest it in a different format, no doubt increasing your understanding of what you learn in the course.

You can take it with you on the train and read it at your leisure!

Students are constantly asking me for PDFs just like this. Ask and you shall receive!

But that’s not all…

One of the COOLEST applications of neural networks that just “learn the structure of data” (as opposed to trying to assign labels to it) is STYLE TRANSFER.

Ever wanted to know what the New York City skyline would look like if it were painted by Picasso?

Now you can find out!

Style transfer networks are neural networks that learn the “essence” or “style” of one image, and then have the ability to apply that same style to new images.

I find this to be one of the most FASCINATING applications of using deep learning to learn the structure of data.

Of course… for most of us, such a neural network would take around 4 months to train…

So here’s what you get for signing up for the VIP special:

A SUPER SIMPLE script you can just run, which automatically downloads pre-trained neural network weights for 3 different styles (Dora Maar, Rain Princess, and Starry Night), which you can then use to apply those styles to ANY input image within SECONDS.

The neural network accepts any size input image because all the weights are convolutional filters!

How cool is that?

And it’s in TensorFlow, so all you Windows users out there don’t feel left out. =)

And remember: these VIP specials are only available IF you use the VIP COUPON (IAMAVIP) – so make sure you use the coupons / links in this email, otherwise, you will not get the VIP bonuses!

Quick note: If you don’t receive the VIP extras right away, don’t worry. I will be going through the list myself; you WILL get them.

#deep learning #gans #unsupervised learning #variational autoencoders


A Tutorial on Autoencoders for Deep Learning

December 31, 2015

Despite its initially cryptic-sounding name, the autoencoder is a fairly basic machine learning model (and the name is not cryptic at all once you know what it does).

Autoencoders belong to the neural network family, but they are also closely related to PCA (principal components analysis).

Some facts about the autoencoder:

  • It is an unsupervised learning algorithm (like PCA)
  • It minimizes the same objective function as PCA
  • It is a neural network
  • The neural network’s target output is its input

The last point is key here. This is the architecture of an autoencoder:


So the dimensionality of the input is the same as the dimensionality of the output, and essentially what we want is x’ = x.

It can be shown that the objective function for PCA is:

$$ J = \sum_{n=1}^{N} |x(n) - \hat{x}(n)|^2 $$

Where the prediction \( \hat{x}(n) = Q^T Q x(n) \).

Q is a matrix whose rows are (orthonormal) eigenvectors of the data covariance, so \( Q^T \) acts as its inverse. Q can contain all the eigenvectors (in which case \( Q^T Q = I \) and we get exactly the old x back), or it can be a “rank k” matrix (i.e. keeping only the k most relevant eigenvectors), which then results in only an approximation of x.

So the objective function can be written as:

$$ J = \sum_{n=1}^{N} |x(n) - Q^T Q x(n)|^2 $$
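Here is a minimal NumPy sketch of this objective (variable names are my own, not from the post). Since the rows of Q are orthonormal eigenvectors, the transpose plays the role of the inverse; with a rank-k Q we only get an approximate reconstruction, while the full set of eigenvectors reconstructs x exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X -= X.mean(axis=0)                      # PCA assumes centered data

# Eigenvectors of the covariance matrix (eigh returns them in ascending order)
cov = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(cov)
Q = eigvecs[:, ::-1][:, :2].T            # rank k=2: top-2 eigenvectors as rows

X_hat = X @ Q.T @ Q                      # reconstruction Q^T Q x(n), row-wise
J = np.sum((X - X_hat) ** 2)             # the PCA objective

# With all eigenvectors, Q^T Q = I and the reconstruction is exact
Q_full = eigvecs.T
J_full = np.sum((X - X @ Q_full.T @ Q_full) ** 2)
print(J, J_full)
```

Note that the rank-2 objective is strictly positive (we threw information away), while the full-rank objective is zero up to floating-point error.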

Now let’s return to autoencoders.

Recall that to get the value at the hidden layer, we simply multiply the input->hidden weights by the input.

Like so:

$$ z = f(Wx) $$

And to get the value at the output, we multiply the hidden->output weights by the hidden layer values, like so:

$$ y = g(Vz) $$

The choice of \( f \) and \( g \) is up to us; we just have to know how to take their derivatives for backpropagation.
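The two equations above can be sketched in a few lines of NumPy (the dimensions and the choice of tanh here are just illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
D, M = 4, 2                      # input dimensionality, hidden dimensionality
W = rng.normal(size=(M, D))      # input -> hidden weights
V = rng.normal(size=(D, M))      # hidden -> output weights
x = rng.normal(size=D)

f = np.tanh                      # hidden activation (our choice)
g = lambda a: a                  # identity activation at the output

z = f(W @ x)                     # z = f(Wx), the hidden representation
y = g(V @ z)                     # y = g(Vz), same dimensionality as x
print(z.shape, y.shape)
```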

We are of course free to make them “identity” functions, such that:

$$ y = g(V f(Wx)) = VWx $$

This gives us the objective:

$$ J = \sum_{n=1}^{N} |x(n) – VWx(n)|^2 $$

Which is the same as PCA!


If autoencoders are similar to PCA, why do we need autoencoders?

Autoencoders are much more flexible than PCA.

Recall that with neural networks we have an activation function – this can be a “ReLU” (a.k.a. rectifier), “tanh” (hyperbolic tangent), or sigmoid.

This introduces nonlinearities in our encoding, whereas PCA can only represent linear transformations.

The network representation also means you can stack autoencoders to form a deep network.


Cool theory bro, but what can autoencoders actually do for me?

Good question!

Similar to PCA – autoencoders can be used for finding a low-dimensional representation of your input data. Why is this useful?

Some of your features may be redundant or correlated, resulting in wasted processing time and overfitting in your model (too many parameters).

It is thus ideal to only include the features we need.

If your “reconstruction” of x is very accurate, that means your low-dimensional representation is good.

You can then use this transformation as input into another model.


Training an autoencoder

Since autoencoders are really just neural networks where the target output is the input, you actually don’t need any new code.

Suppose we’re working with a scikit-learn-like interface.

Instead of:

model.fit(X, Y)

You would just have:

model.fit(X, X)
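To make the “target output is the input” idea concrete, here is a minimal training sketch in plain NumPy – a linear autoencoder fit by gradient descent on the reconstruction error. This is my own illustrative code, not from any particular package:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
X -= X.mean(axis=0)

D, M = 5, 2                          # input dim, hidden (bottleneck) dim
W = rng.normal(size=(D, M)) * 0.1    # encoder weights
V = rng.normal(size=(M, D)) * 0.1    # decoder weights
lr = 0.01

def loss(W, V):
    X_hat = X @ W @ V                # linear reconstruction of the input
    return np.sum((X - X_hat) ** 2) / len(X)

before = loss(W, V)
for _ in range(500):
    E = X @ W @ V - X                # reconstruction error, shape (N, D)
    gradV = (X @ W).T @ E * 2 / len(X)
    gradW = X.T @ E @ V.T * 2 / len(X)
    W -= lr * gradW
    V -= lr * gradV
after = loss(W, V)
print(before, after)
```

The target in the loss is X itself – exactly the fit(X, X) pattern above – and the reconstruction error drops as training proceeds.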

Pretty simple, huh?

All the usual neural network training strategies work with autoencoders too:

  • backpropagation
  • regularization
  • dropout
  • RBM pre-training

If you want to get good with autoencoders – I would recommend taking some data and an existing neural network package you’re comfortable with – and seeing what low-dimensional representation you can come up with. How few dimensions can you get away with while still reconstructing the input accurately?


Where can I learn more?

Autoencoders are part of a family of unsupervised deep learning methods, which I cover in-depth in my course, Unsupervised Deep Learning in Python. We discuss how to stack autoencoders to build deep belief networks, and compare them to RBMs which can be used for the same purpose. We derive all the equations and write all the code from scratch – no shortcuts. Ask me for a coupon so I can give you a discount!

P.S. “Autoencoder” means “encodes itself”. Not so cryptic now, right?

Leave a comment!

#autoencoders #deep learning #machine learning #pca #principal components analysis #unsupervised learning
