Path to mastering Reinforcement Learning with Deep Learning

Reinforcement learning (RL) studies how an agent learns to interact with its environment. Computer vision and natural language processing are nice, but they are limited. Is a program “intelligent” if all it learns to do is recognize that pictures of dogs look different from pictures of cats? Or that emails about “Viagra” are spam and emails about a business meeting are not?

Reinforcement learning has been used to conquer the highest levels of human strategy. It was used to beat world champions at Go, a strategy game so difficult that experts believed it would take decades before machines achieved superhuman performance. Today, RL has progressed so rapidly that it can teach itself to play Go purely through self-play, without learning from a single human game. It then comes out of this training process able to beat ALL human players.

RL has also been used to master video games. This is significant because video games have physics: rules that govern how the environment behaves. If RL can learn video-game physics, then there is a real possibility it can learn real-world physics too.

Here is Google DeepMind’s AI learning to do parkour:

[Video: DeepMind’s AI learning to do parkour]

It’s important to recognize that simulated physics is always simpler than real-world physics, but we are making progress there too. Here’s an example of OpenAI’s robot hand learning to manipulate a block in the real, physical world:

[Video: OpenAI’s robot hand learning to manipulate a block]

So where can you learn about all these cool topics? Great question!

My course Cutting-Edge AI: Deep Reinforcement Learning in Python covers topics such as:

  • A2C (Advantage Actor Critic)
  • OpenAI Baselines
  • DDPG (Deep Deterministic Policy Gradient)
  • ES (Evolution Strategies)
  • How to beat Atari video games such as Breakout, Pong, and Space Invaders
  • How to beat Flappy Bird
  • How to train agents that can learn physics tasks such as walking (in the MuJoCo simulator)

You might read this and think, “this guy is speaking another language!”

And, you’d be pretty much right.

The above course depends on more basic knowledge, such as Deep Q-Learning and Policy Gradient Methods. Let’s be honest: that probably sounds like Greek to you too. It’s a process; be patient.

In order to learn this stuff, you have to learn the basics first.

My course Advanced AI: Deep Reinforcement Learning in Python covers topics such as:

  • Deep Q-Networks (DQN)
  • Policy Gradient Methods (Actor-Critic Methods)
  • TD-Lambda and N-Step Methods
  • How to use OpenAI Gym to train your agents (see the sketch after this list)
  • How to beat Atari video games such as Breakout, Pong, and Space Invaders
  • How to solve classic control problems such as CartPole (inverted pendulum) and Mountain Car
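
Speaking of OpenAI Gym, here is a minimal sketch of the agent-environment loop it exposes, using the classic (pre-0.26) Gym API; newer Gym/Gymnasium releases return extra values from reset() and step(). A random policy stands in for a trained agent, and the episode count is an arbitrary choice.

```python
# A minimal sketch of the agent-environment loop in OpenAI Gym (classic,
# pre-0.26 API; newer Gym/Gymnasium versions return extra values from
# reset() and step()). A random policy stands in for a trained agent.
import gym

env = gym.make("CartPole-v1")

for episode in range(5):
    state = env.reset()      # initial observation: cart position/velocity, pole angle/velocity
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()             # random action: push the cart left or right
        state, reward, done, info = env.step(action)   # the environment returns the next state and reward
        total_reward += reward
    print(f"episode {episode}: total reward = {total_reward}")

env.close()
```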

[Image: the cart-pole system]

But wait!

What the heck is “Q-Learning” and “Policy” and “TD”? What do these things mean?

Well, there’s a reason I call that my “advanced” AI course.

Luckily, I teach you the “basics” in another course (I put “basics” in quotes because, while it is basic compared to the advanced course, it is still extremely challenging compared to other areas of machine learning).

My course Artificial Intelligence: Reinforcement Learning in Python covers the fundamental algorithms of reinforcement learning, starting from scratch (basic probability and statistics).

You’ll learn about concepts such as:

  • The explore-exploit dilemma (see the sketch after this list)
  • The multi-armed bandit (and Bayesian bandit) methods
  • Defining basic terms such as “agent”, “reward”, “policy”, “state”, and “environment”
  • Programming an agent to solve tic-tac-toe
  • Programming an agent to solve the canonical Gridworld problem
  • Markov Decision Processes (MDP)
  • Value functions
  • Dynamic programming methods (Bellman equation)
  • Monte Carlo methods
  • Temporal Difference (TD) learning
  • Q-learning
  • Approximation methods (e.g. how to use supervised machine learning in reinforcement learning)
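
To give a taste of the explore-exploit dilemma, here is a rough sketch (not course code) of an epsilon-greedy agent playing a three-armed bandit in NumPy. The arm win probabilities, the value of epsilon, and the number of plays are made-up numbers for illustration.

```python
# The explore-exploit dilemma in miniature: an epsilon-greedy agent playing
# a 3-armed bandit. The win probabilities and epsilon are made-up values.
import numpy as np

np.random.seed(0)
true_means = [0.2, 0.5, 0.75]   # hidden expected reward of each arm
eps = 0.1                       # probability of exploring a random arm
estimates = np.zeros(3)         # running estimate of each arm's value
counts = np.zeros(3)

for t in range(10000):
    if np.random.random() < eps:
        arm = np.random.randint(3)          # explore
    else:
        arm = int(np.argmax(estimates))     # exploit the current best estimate
    reward = np.random.random() < true_means[arm]   # Bernoulli reward (0 or 1)
    counts[arm] += 1
    # incremental sample-mean update of the chosen arm's estimate
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("estimated values:", estimates.round(3))
print("plays per arm:", counts)
```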

Although Reinforcement Learning theory is formulated using little more than basic probability, be warned that it is advanced in comparison to other types of machine learning.

I always recommend students get the hang of Supervised Machine Learning and Unsupervised Machine Learning before tackling Reinforcement Learning.

One may also wish to study Markov Models, such as those I teach in my course Unsupervised Machine Learning: Hidden Markov Models in Python, since Markov Decision Processes are a kind of Markov Model.

One of the key technologies that has helped to accelerate the performance of Reinforcement Learning is Deep Learning (a.k.a. neural networks).

My Deep Reinforcement Learning course (the first course I mentioned above) is essentially about how to combine Deep Learning with the concepts introduced in my first Reinforcement Learning course.

It is clear that in order to apply Deep Learning to Reinforcement Learning, one must know what Deep Learning is in the first place.

Luckily, I have many courses to help you acquire the skills you need in this area.

Deep Learning: Convolutional Neural Networks in Python will teach you how to build neural networks that can “see”. This is important, since video game frames (such as what you get from an Atari game) are images.

Convolutional Neural Networks (CNNs) are specifically designed to process images.

As you can tell by the name, CNNs involve 2 things:

  1. Convolution
  2. Neural Networks

This course is all about (1) what convolution is, and (2) how to add convolution to neural networks.
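
To give a rough idea of point (1), here is a sketch of the “valid”-mode operation a convolutional layer computes, written in plain NumPy. (Strictly speaking, deep learning libraries compute cross-correlation, i.e. they do not flip the filter, but the idea is the same.) The random “image” and the vertical-edge filter are made up purely for illustration.

```python
# A rough sketch of what a convolutional layer computes: slide a small
# filter over an image and take a weighted sum at every position.
import numpy as np

def conv2d_valid(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.random.rand(28, 28)                 # a fake grayscale "frame"
kernel = np.array([[1., 0., -1.],              # a simple vertical-edge filter,
                   [1., 0., -1.],              # chosen just for illustration
                   [1., 0., -1.]])
print(conv2d_valid(image, kernel).shape)       # (26, 26)
```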

This, of course, necessitates knowing how to build a neural network (without convolution).

These days, to build neural networks, we use modern deep learning libraries such as Theano, TensorFlow, and PyTorch.

So where can you learn how to build a neural network using these modern libraries?

Well, I’m glad you asked!

I just so happen to have a course on that too.

Modern Deep Learning in Python

This course covers (as mentioned above) how to build neural networks in modern deep learning libraries such as Theano, TensorFlow, and PyTorch.

It also covers modern theoretical advancements: adaptive learning rate methods such as RMSprop and Adam, Nesterov momentum, and modern regularization techniques such as Dropout and Batch Normalization.

These can all be thought of as “add-ons” to the vanilla backpropagation training algorithm.

Modern libraries like Theano, TensorFlow, and PyTorch do “automatic differentiation” and make use of the GPU to greatly speed up training time.
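
As a rough illustration (a sketch, not code from any of the courses), here is how those pieces might fit together in PyTorch: Dropout and Batch Normalization inside the model, Adam supplying adaptive learning rates, loss.backward() doing the automatic differentiation, and the GPU used if one is available. The layer sizes, learning rate, and random data are placeholders.

```python
# A sketch of the "add-ons" mentioned above: Dropout and BatchNorm inside
# the model, Adam as the optimizer, autograd doing the differentiation,
# and the GPU used if available. Layer sizes and data are placeholders.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.BatchNorm1d(64),     # batch normalization
    nn.ReLU(),
    nn.Dropout(0.5),        # dropout regularization
    nn.Linear(64, 2),
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # adaptive learning rates
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(128, 20, device=device)           # fake data
y = torch.randint(0, 2, (128,), device=device)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()          # automatic differentiation
    optimizer.step()

print("final loss:", loss.item())
```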

But wait!

What the heck is backpropagation? And how is a neural network “trained” in the first place?

Aha!

This is where Data Science: Deep Learning in Python enters the picture.

This course goes over, in painstaking detail, how to train a neural network from basic first principles.

You’ll see how basic mathematics (matrices, vectors, and partial derivatives) forms the basis of neural networks.

You’ll learn about what it means for a neural network to “make a prediction”, and also what it means to “train a neural network”.

You’ll learn how to visualize what a neural network does, and how to interpret what a neural network has learned.
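
To hint at what “first principles” means here, the following is a sketch (again, not the course’s code) of a one-hidden-layer network where the forward pass, the gradients, and the gradient descent updates are all written out by hand in NumPy. The toy dataset, hidden layer size, and learning rate are arbitrary choices.

```python
# Training a tiny neural network from scratch in NumPy: forward pass,
# hand-derived gradients (backpropagation), gradient descent updates.
import numpy as np

np.random.seed(1)
N, D, M = 100, 2, 8                     # samples, input dim, hidden units
X = np.random.randn(N, D)
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(N, 1)   # a nonlinear toy target

W1 = np.random.randn(D, M) * 0.5; b1 = np.zeros(M)
W2 = np.random.randn(M, 1) * 0.5; b2 = np.zeros(1)
lr = 0.1

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

for epoch in range(2000):
    # forward pass
    Z = np.tanh(X @ W1 + b1)            # hidden layer
    p = sigmoid(Z @ W2 + b2)            # predicted probability
    # backward pass: gradients of the binary cross-entropy loss
    dA2 = (p - y) / N
    dW2 = Z.T @ dA2;  db2 = dA2.sum(axis=0)
    dZ = dA2 @ W2.T * (1 - Z ** 2)      # tanh derivative
    dW1 = X.T @ dZ;   db1 = dZ.sum(axis=0)
    # gradient descent updates
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final training accuracy:", ((p > 0.5) == y).mean())
```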

A “neural network” implies a network of neurons.

At this point, you might be wondering, “what is a neuron anyway?”

You guessed it – I’ve covered this too!

Deep Learning Prerequisites: Logistic Regression in Python

A “neuron” with a sigmoid activation is actually a classic machine learning model, also known as Logistic Regression.

In this course, you’ll learn the ins and outs of linear classification and how to train a neuron using an algorithm known as gradient descent (like a baby version of backpropagation, in some sense).
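
For a flavor of what that looks like, here is a sketch of a single “neuron” (logistic regression) trained with plain gradient descent in NumPy; the data and the “true” linear rule behind it are made up for illustration.

```python
# A single "neuron" (logistic regression) trained with plain gradient
# descent in NumPy. The data is made up and linearly separable.
import numpy as np

np.random.seed(2)
N, D = 200, 2
X = np.random.randn(N, D)
y = (X @ np.array([2.0, -1.0]) + 0.5 > 0).astype(float)   # made-up "true" linear rule

w = np.zeros(D); b = 0.0
lr = 0.1

for epoch in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # the neuron's prediction: sigmoid of a linear function
    # gradient of the cross-entropy loss with respect to w and b
    grad_w = X.T @ (p - y) / N
    grad_b = (p - y).mean()
    w -= lr * grad_w                          # gradient descent step
    b -= lr * grad_b

print("learned weights:", w.round(2), "bias:", round(b, 2))
print("training accuracy:", ((p > 0.5) == y).mean())
```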

What does it mean for a model to be “linear”?

Since you asked, I’ve got this covered too.

Deep Learning Prerequisites: Linear Regression in Python

You may have noticed that all of these courses have a heavy reliance on writing code.

A huge part (maybe the most important part) of learning how these models work is learning how to implement them in Python code.

In particular, we make heavy use of libraries such as Numpy, Scipy, Matplotlib, and Pandas.

You can, of course, learn how to use these libraries in my always-free course:

Deep Learning Prerequisites: The Numpy Stack in Python

Since a lot of people requested it, I also added a special section to the course that covers Machine Learning Basics: it answers questions such as “what is classification?” and “what is regression?”, and gives you a very rudimentary understanding of machine learning using Scikit-Learn.
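
As a taste of what that section is about (this is a generic sketch, not material from the course), here is roughly the smallest possible “what is classification?” example using Scikit-Learn and its bundled iris dataset:

```python
# A bare-bones "what is classification?" example with scikit-learn:
# fit a model on labeled examples, then evaluate it on unseen ones.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                             # learn from labeled examples
print("test accuracy:", model.score(X_test, y_test))    # evaluate on held-out data
```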

I consider my free Numpy course the basic starting point to deep learning and machine learning, no matter what field you want to end up specializing in, whether that be computer vision, natural language processing, or reinforcement learning.

These libraries are the basic tools (like the screwdriver, hammer, ruler, …) that you will use to build bigger and more complicated systems.

Keep in mind, there are many more topics in deep learning and artificial intelligence than what I listed here. For a full list of topics, and a guide for what order to learn them in, please see my handy visual guide: “What order should I take your courses in?”