May 1, 2018
Over the past year, many of you have been asking for a follow-up to my RNN and Deep NLP courses. I'm glad to announce that, as of today, that course is here.
I decided to combine both NLP (natural language processing) and RNNs (recurrent neural networks) because these topics are so intertwined it’s almost impossible to talk about one without the other.
In recent years, a few ideas have started to bubble up and have shown themselves to be truly useful, and in this course, I bring those ideas to you.
Let’s start with the applications:
1. I’ve been asked quite a few times about how to do classification when each input can have multiple labels assigned to it. We will do a text classification problem that has data exactly like this.
2. Neural machine translation. One of the most popular applications of Deep NLP. We can’t not do this.
3. Question answering. You can think of this as “reading comprehension”. Can an AI read a story and answer a question about it? Facebook Research made this popular with their bAbI dataset.
4. Speech recognition (see below).
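To make the first application concrete: in multi-label classification, each label gets its own independent sigmoid output rather than a single softmax, so any subset of labels can be predicted at once. Here's a minimal numpy sketch of that final prediction step (the logits and the 0.5 threshold are just illustrative assumptions, not values from the course):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical scores from a classifier's final layer for 3 labels.
# Each label is thresholded independently, so a sample can carry
# zero, one, or several labels at the same time.
logits = np.array([2.0, -1.0, 0.5])
probs = sigmoid(logits)
predicted = probs > 0.5  # -> [True, False, True]
```

Contrast this with single-label classification, where a softmax forces exactly one label to win.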
As you know I like to take an abstract view of machine learning. We know that all of the techniques for these applications can be used for yet more applications without any change in code because the “data is the same”. For example, a spam detection dataset looks no different than a sentiment analysis dataset.
In the same vein, neural machine translation is no different from simple versions of question answering and chatbots. So you are really learning how to do all of these things at the same time.
We will of course get a chance to review basics such as LSTMs, GRUs, language modeling, word embeddings, and so forth.
What techniques will we cover? These are the techniques that have helped RNNs work really well for NLP in recent years:
1. Bidirectional RNNs
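The idea behind a bidirectional RNN is simple: run one RNN forward in time and a second one backward, then concatenate their hidden states so that each time step sees both past and future context. A toy numpy forward pass (sizes and random weights are purely illustrative):

```python
import numpy as np

def rnn_pass(x, Wx, Wh, b):
    """Simple tanh RNN over a sequence x of shape (T, D)."""
    H = Wh.shape[0]
    h = np.zeros(H)
    states = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ Wx + h @ Wh + b)
        states.append(h)
    return np.array(states)  # (T, H)

def birnn(x, params_fwd, params_bwd):
    """One RNN forward in time, one backward, hidden states concatenated."""
    h_fwd = rnn_pass(x, *params_fwd)
    h_bwd = rnn_pass(x[::-1], *params_bwd)[::-1]  # reverse back to align in time
    return np.concatenate([h_fwd, h_bwd], axis=1)  # (T, 2H)

rng = np.random.default_rng(0)
T, D, H = 5, 3, 4  # sequence length, input dim, hidden dim (toy values)
x = rng.normal(size=(T, D))
make = lambda: (rng.normal(size=(D, H)), rng.normal(size=(H, H)), np.zeros(H))
out = birnn(x, make(), make())  # shape (5, 8)
```

In practice you'd use a framework's built-in bidirectional wrapper around an LSTM or GRU cell; this sketch just shows where the "bidirectional" part comes from.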
So, if you’ve already heard about these and you wanted to learn about them – I hope you are excited!
This course is NOT just about RNNs, but about CNNs (convolutional neural networks) as well. This is an advanced course – ALL deep learning is fair game.
Early in the course, you’ll see how we can apply CNNs to text.
You will see that we get results on-par with LSTMs and GRUs.
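The trick is to treat a sentence as a sequence of word embeddings and convolve over the time axis: each filter spans the full embedding dimension and slides across the words, followed by max-over-time pooling. A bare-bones numpy sketch (all dimensions and random values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
T, D, F, width = 10, 6, 8, 3  # seq length, embed dim, num filters, filter width
x = rng.normal(size=(T, D))            # a sentence as word embeddings
filters = rng.normal(size=(F, width, D))

# 1D convolution over time: each filter covers `width` consecutive words
# across the full embedding dimension.
conv = np.array([
    [np.sum(x[t:t + width] * filters[f]) for t in range(T - width + 1)]
    for f in range(F)
])                               # (F, T - width + 1)
features = conv.max(axis=1)      # max-over-time pooling -> one value per filter
```

The pooled feature vector then feeds a dense classification layer, exactly as the final hidden state of an LSTM or GRU would.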
That’s already pretty neat.
But there’s still more.
If you’re reading this, you automatically get access to the VIP version of the course, which contains EVEN MORE material.
For the first time, I’m releasing a course exclusively on https://deeplearningcourses.com
This course will appear on other sites in the future but you will NOT get the VIP version from those sites.
What’s in the VIP bonus?
It’s basically like an entirely new section of the course.
We will be looking at a topic I’ve wanted to cover for a long time: speech recognition.
Unlike the usual NLP tasks, which focus on text, speech recognition focuses on audio.
Text is neat and formatted. When you type the word “the” it’s the same as if I type the word “the”.
The same cannot be said for audio. When you say “the” it sounds different from when I say “the”.
Audio is a real-world, physical signal like images are.
In that sense, speech recognition is more like computer vision.
In fact, you’ll see how we can apply CNNs to this task as well.
I love this section of the course because we get to dive into some very cool, never-before-seen material in order to do speech processing – namely time-series techniques such as the Fourier transform.
You’ll even get a brief glimpse into how the Fourier transform is related to quantum mechanics and Heisenberg’s uncertainty principle!
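As a taste of what the Fourier transform does for audio: it converts a raw waveform from the time domain into the frequency domain, which is the first step toward the spectrogram features speech systems feed to a network. A small numpy sketch with a synthetic two-tone "audio" signal (the sample rate and tone frequencies are made-up values for illustration):

```python
import numpy as np

fs = 8000                          # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)      # one second of samples
# Toy audio: a 440 Hz tone plus a quieter 1000 Hz tone.
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

spectrum = np.abs(np.fft.rfft(signal))         # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)  # frequency of each bin

# The two largest peaks land exactly at the tone frequencies.
peaks = freqs[np.argsort(spectrum)[-2:]]        # {440.0, 1000.0}
```

A real speech pipeline applies this transform to short overlapping windows of audio (a spectrogram) rather than to the whole signal at once.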
Enough talk. Get the course here:
Deep Learning: Advanced NLP and RNNs
1. As usual, if you purchase the course on deeplearningcourses.com and you’d like access on Udemy as well, I will do that for you once the course is released there.
2. I’ve made a lot of updates to deeplearningcourses.com recently, so hopefully you find them useful! Always happy to consider feature requests.
3. I recently moved deeplearningcourses.com to a shiny new server, so if you have any problems, please let me know. Everything seems to be running smoothly so far!