New course out today – Recurrent Neural Networks in Python: Deep Learning part 5.
If you already know what the course is about (recurrent units, GRU, LSTM), grab your 50% OFF coupon and go:
Like the course I just released on Hidden Markov Models, Recurrent Neural Networks are all about learning sequences – but whereas Markov Models are limited by the Markov assumption, Recurrent Neural Networks are not. As a result, they are more expressive and more powerful, and they've driven progress on tasks that had been stuck for decades.
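To make that contrast concrete, here's a minimal toy sketch (my own illustration, not code from the course): a first-order Markov model predicts the next step from only the previous step, while an RNN folds the entire history into a hidden state.

```python
import numpy as np

# A first-order Markov model predicts the next state from ONLY the
# previous state: p(x_t | x_1, ..., x_{t-1}) = p(x_t | x_{t-1}).
# transition[i, j] = p(next state = j | current state = i)
transition = np.array([[0.9, 0.1],
                       [0.5, 0.5]])

def markov_next_dist(prev_state):
    # Everything before prev_state is ignored – the Markov assumption.
    return transition[prev_state]

# An RNN, by contrast, folds the WHOLE history into a hidden state h,
# so earlier inputs can still influence later predictions.
# Toy dimensions and randomly initialized weights, just for illustration.
D, M = 3, 4  # input size, hidden size
rng = np.random.default_rng(0)
Wx = rng.normal(scale=0.5, size=(D, M))
Wh = rng.normal(scale=0.5, size=(M, M))

def rnn_hidden_state(inputs):
    h = np.zeros(M)
    for x in inputs:  # h_t = tanh(x_t Wx + h_{t-1} Wh)
        h = np.tanh(x @ Wx + h @ Wh)
    return h  # depends on every input in the sequence, not just the last

sequence = rng.normal(size=(10, D))
print(markov_next_dist(0))       # uses only the last state
print(rnn_hidden_state(sequence))  # summarizes the full sequence
```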
Sequences appear everywhere – stock prices, language, credit scoring, and webpage visits.
Recurrent neural networks have a history of being very hard to train. Only recently have we found ways around what is called the vanishing gradient problem, and since then, recurrent neural networks have become one of the most popular methods in deep learning.
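If you're curious why training was so hard, here's a rough numerical illustration (my own toy example, not material from the course): backpropagation through time multiplies the gradient by roughly the same recurrent factor at every step, so the signal can shrink exponentially with sequence length.

```python
import numpy as np

# Toy illustration of the vanishing gradient problem: in backpropagation
# through time, the gradient at step t is scaled by a product of Jacobians,
# one per time step. When that per-step factor is smaller than 1 (small
# recurrent weights combined with tanh's derivative <= 1), the product
# shrinks exponentially as we go further back in time.
rng = np.random.default_rng(42)
M = 20
Wh = rng.normal(scale=0.1, size=(M, M))  # small recurrent weights

grad = np.ones(M)  # pretend gradient arriving at the final time step
for t in range(1, 51):
    # Simplified backward step: multiply by Wh^T and a tanh-derivative
    # factor (fixed at 0.5 here just for illustration).
    grad = 0.5 * (Wh.T @ grad)
    if t % 10 == 0:
        print(f"after {t:2d} steps back, gradient norm = {np.linalg.norm(grad):.2e}")
# The norm collapses toward zero – early time steps receive almost no
# learning signal, which is exactly what GRU and LSTM gates help fix.
```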
If you took my course on Hidden Markov Models, we are going to go through a lot of the same examples in this class, except that our results are going to be a lot better.
Our classification accuracies will increase, and we’ll be able to create vectors of words, or word embeddings, that allow us to visualize how words are related on a graph.
We’ll see some pretty interesting results – for instance, our neural network seems to have learned that religions, languages, and numbers each form related groups, and that cities and countries have hierarchical relationships.
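To give you a feel for that visualization step, here's a generic sketch under my own assumptions (placeholder vocabulary and a random stand-in for the learned embedding matrix, not the course's code): project the embeddings down to 2-D and label each point with its word.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Stand-ins for what training would actually produce: a vocabulary and
# a learned embedding matrix We of shape (V, D). Here We is random,
# purely so the script runs end to end.
vocab = ["one", "two", "three", "paris", "france", "tokyo", "japan"]
rng = np.random.default_rng(0)
We = rng.normal(size=(len(vocab), 50))

# Project the D-dimensional embeddings down to 2-D so that related
# words (ideally) land near each other on the plot.
points = PCA(n_components=2).fit_transform(We)

plt.scatter(points[:, 0], points[:, 1])
for word, (x, y) in zip(vocab, points):
    plt.annotate(word, (x, y))
plt.title("Word embeddings projected to 2-D")
plt.show()
```

With real trained embeddings, clusters like the number words or the city–country pairs are what you'd hope to see in this kind of plot.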
If you’re interested in discovering how modern deep learning has propelled machine learning and data science to new heights, this course is for you.
I’ll see you in class.
Click here for 50% OFF: