This is a great video that explains a lot of what I’ve observed from students trying to learn machine learning, but put more eloquently than I could have said myself. =)
I’m always having to contend with students who have taken a super easy-peasy course, actually learned nothing, but believe they know everything. Then, when they come up against the real content, they believe it’s because the instructor is trying to make the course really “elite” or trying to make them feel “dumb” by including lots of math and/or programming that they can’t understand.
I (or any other instructor) did not invent these subjects
If the subject requires math, that’s because it does
If the subject requires programming, that’s because it does
We didn’t put math in there just to torture you. If you’re taking a math course, it’s probably going to have math in it.
A student gets frustrated because they don’t understand the real subject, when really they should be frustrated with the instructor who gave them the empty course that provided no skills and too much confidence.
This video is about software developers, but if you view it from the perspective of machine learning, everything still applies. Watch the video!
I decided to combine both NLP (natural language processing) and RNNs (recurrent neural networks) because these topics are so intertwined it’s almost impossible to talk about one without the other.
In recent years, a few ideas have started to bubble up and have shown themselves to be truly useful, and in this course, I bring those ideas to you.
Let’s start with the applications:
1. I’ve been asked quite a few times about how to do classification when each input can have multiple labels assigned to it. We will do a text classification problem that has data exactly like this.
2. Neural machine translation. One of the most popular applications of Deep NLP. We can’t not do this.
3. Question answering. You can think of this as “reading comprehension”. Can an AI read a story and answer a question about it? Facebook Research made this popular with their bAbI dataset.
4. Speech recognition (see below).
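To make item 1 concrete, here is a minimal NumPy sketch of what multi-label data looks like: the texts, tags, and label set are made up for illustration, but the key idea is that each target becomes a multi-hot vector (several 1s allowed) rather than a one-hot vector.

```python
import numpy as np

# Hypothetical examples: each text can carry several tags at once.
labels = ["politics", "sports", "tech"]
examples = [
    ("election results are in", {"politics"}),
    ("the team's new AI coach", {"sports", "tech"}),
]

# Targets become multi-hot vectors rather than one-hot vectors.
def multi_hot(tags, label_list):
    return np.array([1.0 if l in tags else 0.0 for l in label_list])

targets = np.stack([multi_hot(tags, labels) for _, tags in examples])
print(targets)
# row 0 -> [1, 0, 0]; row 1 -> [0, 1, 1]
```

With targets like these, the usual recipe is a sigmoid output per label with binary cross-entropy, instead of a softmax over mutually exclusive classes.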
As you know, I like to take an abstract view of machine learning. All of the techniques for these applications can be used for yet more applications without any change in code, because the “data is the same”. For example, a spam detection dataset looks no different than a sentiment analysis dataset.
In the same vein, neural machine translation is no different from simple versions of question answering and chatbots. So you are really learning how to do all of these things at the same time.
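The “data is the same” point can be shown in a couple of lines (the rows below are made-up toy examples, not a real dataset): both tasks are just (text, binary label) pairs, so the exact same classifier code applies to either.

```python
# Hypothetical rows: spam detection and sentiment analysis share
# the same (text, label) shape, so one classifier serves both.
spam_data = [
    ("win a free prize now", 1),   # 1 = spam
    ("meeting moved to 3pm", 0),   # 0 = not spam
]
sentiment_data = [
    ("what a great movie", 1),     # 1 = positive
    ("total waste of time", 0),    # 0 = negative
]

def dataset_shape(rows):
    # The structure (types of each field) is identical across tasks.
    return [(type(text), type(label)) for text, label in rows]

print(dataset_shape(spam_data) == dataset_shape(sentiment_data))  # True
```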
We will of course get a chance to review basics such as LSTMs, GRUs, language modeling, word embeddings, and so forth.
What techniques will we cover? These are the techniques that have made RNNs work so well for NLP in recent years:
1. Bidirectional RNNs
2. Sequence-to-sequence models (seq2seq)
3. Memory networks
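The first of these, the bidirectional RNN, is easy to sketch in plain NumPy (random weights and dimensions here are purely illustrative): run one RNN left-to-right, a second RNN right-to-left over the reversed input, re-align the backward states, and concatenate, so each time step sees both past and future context.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_pass(x, Wx, Wh, b):
    """Simple tanh RNN over a sequence x of shape (T, D); returns (T, H) states."""
    h = np.zeros(Wh.shape[0])
    states = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ Wx + h @ Wh + b)
        states.append(h)
    return np.stack(states)

T, D, H = 5, 4, 3                       # toy sequence length and sizes
x = rng.normal(size=(T, D))
# Each direction gets its own (random, illustrative) weights.
params_fwd = (rng.normal(size=(D, H)), rng.normal(size=(H, H)), np.zeros(H))
params_bwd = (rng.normal(size=(D, H)), rng.normal(size=(H, H)), np.zeros(H))

h_forward = rnn_pass(x, *params_fwd)                 # left-to-right
h_backward = rnn_pass(x[::-1], *params_bwd)[::-1]    # right-to-left, re-aligned
h_bi = np.concatenate([h_forward, h_backward], axis=1)
print(h_bi.shape)  # (5, 6): each step carries both directions' states
```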
So, if you’ve already heard about these and you wanted to learn about them – I hope you are excited!
This course is NOT just about RNNs but also CNNs (convolutional neural networks). This is an advanced course – ALL deep learning is fair game.
Early in the course, you’ll see how we can apply CNNs to text.
You will see that we get results on par with LSTMs and GRUs.
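The basic idea of a CNN over text can be sketched in NumPy (the dimensions and random values below are made up for illustration): slide each filter over windows of consecutive word embeddings, then max-pool over time to get one feature per filter.

```python
import numpy as np

rng = np.random.default_rng(1)

T, D, F, K = 7, 8, 4, 3                  # seq length, embed dim, filters, width
embedded = rng.normal(size=(T, D))       # a sequence of word embeddings
filters = rng.normal(size=(F, K, D))     # each filter spans K consecutive words

# Slide each filter over the sequence (1D convolution over time):
conv = np.stack([
    [np.sum(embedded[t:t + K] * f) for t in range(T - K + 1)]
    for f in filters
])                            # (F, T-K+1) feature maps
features = conv.max(axis=1)   # max-pool over time: one number per filter
print(features.shape)  # (4,) -- a fixed-size vector, ready for a classifier
```

Because the pooling step collapses the time axis, the same architecture handles variable-length texts, which is part of why it competes with recurrent models.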
That’s already pretty neat.
But there’s still more.
If you’re reading this, you automatically get access to the VIP version of the course, which contains EVEN MORE material.