UPDATE: The opportunity to get the VIP version on Udemy has expired. However, the main part of the course (without the VIP parts) is now available at a new low price. Click here to automatically get the current lowest price: https://bit.ly/3nT5fTX
UPDATE 2: Some of you may see the full price of $199 USD without any discount. This is because promotions going forward will now be decided by Udemy, so you will only get what they give. Such is the downside of not getting the VIP version. From what I hear, promotions happen quite often, so you should not have to wait too long.
UPDATE 3: I’ve updated the above with an actual coupon code, so ALL students should see a discount.
UPDATE 4: For those of you waiting for me to finish the rest of the course (e.g. the deep learning sections), that has now been done. I’ve also added a big handful of advanced notebooks to the VIP content! (see “part 6” below)
IMPORTANT INFO: If you missed the VIP discount but still want access to the VIP content, or if you got the VIP version on Udemy and want to access the VIP content for free at its new permanent home, scroll to the bottom of this post.
“Wait a minute… don’t you already have like, 3 courses on NLP?”
My first NLP course was released over 5 years ago. While there have been updates to it over the years, it has turned into a Frankenstein monster of sorts.
Therefore, the logical action was to simply start anew.
This course is another MASSIVE one – I say it’s basically 4 courses in 1 (not including the VIP section).
One of those “courses” (the ML part) is a revamp of my original 2016 NLP course. And therefore, this new course is actually a superset of NLP V1. The TL;DR: way more content, better organization.
Let’s get to the details:
Part 1: Vector models and text-preprocessing
Tokenization, stemming, lemmatization, stopwords, etc.
CountVectorizer and TF-IDF
Basic intro to word2vec and GloVe
Build a text classifier
Build a recommendation engine
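To give a flavor of the Part 1 workflow (tokenization, TF-IDF, then a classifier on top), here is a minimal sketch of my own – a toy corpus and labels, not the course’s actual notebook code:

```python
# Vectorize a tiny toy corpus with TF-IDF, then train a simple classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

corpus = [
    "free money win a prize now",
    "meeting scheduled for tomorrow morning",
    "claim your free prize today",
    "please review the attached report",
]
labels = [1, 0, 1, 0]  # 1 = spam-like, 0 = normal

vectorizer = TfidfVectorizer()         # tokenizes and computes TF-IDF weights
X = vectorizer.fit_transform(corpus)   # sparse document-term matrix
model = LogisticRegression().fit(X, labels)

test_doc = vectorizer.transform(["win a free prize"])
print(model.predict(test_doc))
```

The same vectorizer output can also feed a recommender (e.g. via cosine similarity between document vectors), which is the other project listed above.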
Part 2: Probability models
Markov models and language models
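The core idea behind the Markov/language models in Part 2 can be sketched in a few lines of plain Python – this is my own toy illustration, not course code: count bigram transitions, then normalize them into conditional probabilities.

```python
# Estimate bigram transition probabilities p(next word | previous word)
# from a tiny corpus.
from collections import defaultdict

corpus = "the cat sat on the mat the cat ran".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus[:-1], corpus[1:]):
    counts[prev][nxt] += 1

# Normalize counts into conditional probabilities
probs = {
    prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
    for prev, nxts in counts.items()
}

print(probs["the"])  # {'cat': 0.666..., 'mat': 0.333...}
```

Sampling from these conditional distributions word by word is exactly how a simple Markov language model generates text.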
Part 3: Machine learning
Spam detection with Naive Bayes
Sentiment analysis with Logistic Regression
Text summarization with TF-IDF and TextRank
Topic modeling with Latent Dirichlet Allocation and Non-negative Matrix Factorization*
Latent semantic indexing (LSI / LSA) with PCA / SVD*
VIP only: Applying LSI to text summarization, topic modeling, classification, and recommendations*
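As a flavor of the LSI/LSA items above, here is a minimal sketch of my own (a toy example, not course material): TF-IDF followed by truncated SVD produces low-dimensional “latent topic” vectors for each document.

```python
# Latent semantic analysis: TF-IDF + truncated SVD.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "dogs and cats are pets",
    "cats chase mice",
    "stocks and bonds are investments",
    "bond markets moved today",
]
X = TfidfVectorizer().fit_transform(docs)   # sparse document-term matrix
svd = TruncatedSVD(n_components=2, random_state=0)  # 2 latent "topics"
Z = svd.fit_transform(X)                    # dense (4 docs, 2 components)
print(Z.shape)
```

These dense vectors `Z` are what get reused downstream for classification, recommendations, and summarization in the VIP applications listed above.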
Part 4: Deep learning*
RNNs / LSTMs
Part 5: Beginner’s Corner on Transformers with Hugging Face (VIP only)
Sentiment analysis revisit
Text generation revisit
Article spinning revisit
Part 6: Even MORE bonus VIP notebooks (VIP only)
Stock Movement Prediction Using News
LSA / LSI for Recommendations
LSA / LSI for Classification (Feature Engineering)
LSA / LSI for Topic Modeling
LSA / LSI for Text Summarization (2 methods)
LSTM for Text Generation Notebook (i.e. the “decoder” part of an encoder-decoder network)
Masked language model with LSTM Notebook (revisiting the article spinner)
I’m sure many of you are most excited about the Transformers VIP section. Please note that this is not a full course on Transformers. As you know, I like to go very in-depth, and this is a topic that deserves its own course. This VIP section is a “beginner’s corner”-style set of lectures that outlines the tasks Transformers can do (listed above), with code examples for each task. The Transformer-specific code is very simple – basically just 1 or 2 lines – which is great for practical purposes. Don’t worry, the actual notebooks are much longer than that, and demonstrate real, meaningful use-cases. This section does not show you how to train or fine-tune a Transformer, only how to use existing pretrained models. If you just want to apply these state-of-the-art models and don’t care about the nitty-gritty details, this is perfect for you.
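To show what that “1 or 2 lines” usage pattern typically looks like, here is a generic Hugging Face example (my own sketch, not the course’s notebook; the `pipeline` call downloads a default pretrained model on first run):

```python
# Sentiment analysis with a pretrained Transformer in two lines.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("This course is fantastic!")
print(result)  # list with a label ('POSITIVE'/'NEGATIVE') and a score
```

Text generation, summarization, and the other tasks listed above follow the same pattern – just a different task string passed to `pipeline`.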
Is the VIP section only ideal for beginners? NO! Despite the name, this section will be useful for everyone, especially those who are interested in Transformers. This is quite a complex topic, and getting “good” with Transformers really requires a step-by-step approach. Think of this as the first step.
What is the “VIP version”? As usual, the VIP version of the course contains extra VIP content available only to those who purchase the course during the VIP period (i.e. now). This content will be removed when it becomes a regular, non-VIP course, at which point I will make an announcement. All who sign up for the VIP version will retain access to the VIP content forever via my website – simply let me know by email that you’d like access (you only need to email if I announce the VIP period is ending).
NOTE: If you are interested in Transformers, a lot of this course contains important prerequisites. The language models and article spinner from part 2 (“probability models”) are very important for understanding pre-training methods. The deep learning sections are very important for learning about embeddings and how neural networks deal with sequences.
NOTE: As per the last few releases, I’ve wanted to get the course into your hands as early as possible. Some sections are still in progress, specifically, those denoted with an asterisk (*) above. UPDATE: All post-release sections have been uploaded!
So what are you waiting for? Get the VIP version of Natural Language Processing (V2) NOW:
For those who missed the VIP version but still want it:
Does this course replace “Natural Language Processing with Deep Learning in Python”, or “Deep Learning: Advanced NLP and RNNs”?
In short, this course replaces neither of these more advanced NLP courses.
Let’s first consider “Natural Language Processing with Deep Learning in Python”.
Generally speaking, that course covers more advanced topics than the new one.
For instance, both variants of word2vec (skip-gram and CBOW) are discussed in detail and implemented from scratch. In the current course, only the very basic ideas are discussed.
Another word embedding algorithm called GloVe is taught in detail, along with a from-scratch implementation. In the current course, again it is only mentioned very briefly.
This course reviews RNNs, but goes into great detail on a completely new architecture, the “Recursive Neural Tensor Network”.
Essentially, this is a neural network structured like a tree, which is very useful for tasks such as sentiment analysis where negation of whole phrases may be desired (and easily accomplished with a tree structure).
How about “Deep Learning: Advanced NLP and RNNs”?
Again, there is essentially no overlap.
As the title suggests, this course covers more advanced topics. Like the previously mentioned course, it can be thought of as another sequel to the current course.
This course covers topics such as: bidirectional RNNs, seq2seq (for many-to-many tasks where the input length is not equal to the target length), attention (the central mechanism in transformers), and memory networks.
These are what I’ve been calling “mini-courses” during their development and that’s what they are in spirit. They are:
Shorter in duration
Free of appendices, which most of you have already seen and which are mainly for beginners
The point of these courses is to have a faster turn-around time on course development. Sometimes, there are topics I want to cover really quickly that won’t ever become a full-sized course. They will also be used to cover more advanced topics.
Unfortunately, a lot of students on other platforms (e.g. Udemy) are complete beginners who have no desire to advance and gain actual skill. They take “marketer-taught” courses, which leads to what I call “confidence without ability”. Dealing with such students is draining.
These mini-courses will bring us back to the old days (many of you have been around since then!) where the material was more concise, straight-to-the-point, and didn’t need “beginner reminders” all over the place.
Given that these mini-courses are much simpler for me to make, I expect there to be many more in the future.
This first exclusive mini-course is on Linear Programming for Linear Regression.
Many students in my Linear Regression course often ask, “What if I want to use absolute error instead of squared error?” This course answers exactly that question and more.
The solution is based on Linear Programming (LP).
We will also cover 2 other common problems: maximum absolute deviation and positive-only (or negative-only) error.
These kinds of problems are often found in professional fields such as quantitative finance, operations research, and engineering.
Each of these problems can be solved using Linear Programming with the SciPy library.
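As a hedged sketch of the idea (my own toy example, not the course’s notebook): absolute-error regression can be written as an LP by introducing one slack variable t_i per data point with |y_i − x_i·w| ≤ t_i, then minimizing the sum of the t_i with `scipy.optimize.linprog`.

```python
# Least-absolute-deviations regression as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(-1, 1, 50)])  # intercept + slope
y = 2.0 + 3.0 * X[:, 1] + rng.normal(0, 0.1, 50)            # true w = [2, 3]

n, d = X.shape
# Decision variables: [w (d of them), t (n of them)]; objective: sum of t
c = np.concatenate([np.zeros(d), np.ones(n)])
# Constraints: Xw - y <= t  and  y - Xw <= t  (i.e. |residual_i| <= t_i)
A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * d + [(0, None)] * n  # w free, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
w = res.x[:d]
print(w)  # close to [2, 3]
```

Maximum absolute deviation (minimize the single largest |residual|) uses the same trick with one shared slack variable instead of one per point.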
BONUS FACT: I have a new pen and tablet set up so most of the derivations in this course are done by hand – really truly old-school like the Linear/Logistic Regression days!
MATLAB for Students, Engineers, and Professionals in STEM
Another exclusive course which has already been on deeplearningcourses.com for some time is my original MATLAB course. This was the first course I ever made and is basically a collector’s item. The quality isn’t that great compared to what I am creating now, but obviously you will still learn a lot.
I’m including it in this newsletter to announce that I was able to dig up an extra section on probability that didn’t exist before. So the course now has 3 major sections:
This is a MASSIVE (over 24 hours) Deep Learning course covering EVERYTHING from scratch. That includes:
Machine learning basics (linear neurons)
ANNs, CNNs, and RNNs for images and sequence data
Time series forecasting and stock predictions (+ why all those fake data scientists are doing it wrong)
NLP (natural language processing)
Transfer learning for computer vision
GANs (generative adversarial networks)
Deep reinforcement learning and applying it by building a stock trading bot
IN ADDITION, you will get some unique and never-before-seen VIP projects:
Estimating prediction uncertainty
Drawing the standard deviation of the prediction along with the prediction itself. This is useful for heteroskedastic data (that means the variance changes as a function of the input). The most popular application is modeling stock prices and stock returns – which I know a lot of you are interested in.
It allows you to draw your model predictions like this:
Sometimes, the data is simply such that a spot-on prediction can’t be made. But we can do better by letting the model tell us how certain it is in its predictions.
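To make the idea concrete, here is a minimal sketch of my own (not the VIP project itself): a network that outputs both a mean and a log-variance per input, trained with the Gaussian negative log-likelihood so the predicted variance can depend on x.

```python
# Heteroskedastic regression: predict mean AND uncertainty per input.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.rand(256, 1) * 4                     # inputs in [0, 4]
y = x + torch.randn(256, 1) * (0.1 + 0.5 * x)  # noise grows with x

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=0.01)

for _ in range(500):
    out = model(x)
    mean, log_var = out[:, :1], out[:, 1:]
    # Gaussian NLL (up to a constant): 0.5 * (log var + squared error / var)
    loss = 0.5 * (log_var + (y - mean) ** 2 / log_var.exp()).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    out = model(torch.tensor([[0.5], [3.5]]))
    sigma = (out[:, 1] / 2).exp()  # std = exp(log_var / 2)
print(sigma)  # larger predicted std at x=3.5 than at x=0.5
```

Plotting `mean ± 2 * sigma` across the input range gives exactly the kind of uncertainty band described above.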
Facial recognition with siamese networks
This one is cool. I mean, I don’t have to tell you how big facial recognition has become, right? It’s the single most controversial technology to come out of deep learning. In the past, we looked at simple ways of doing this with classification, but in this section I will teach you about an architecture built specifically for facial recognition.
You will learn how this can work even on small datasets – so you can build a network that recognizes your friends or can even identify all of your coworkers!
You can really impress your boss with this one. Surprise them one day with an app that calls out your coworkers by name every time they walk by your desk. 😉
Please note: The VIP coupon will work only for the next month (ending May 1, 2020). It’s unknown whether the VIP period will renew after that time.
After that, although the VIP content will be removed from Udemy, all who purchased the VIP course will get permanent free access on deeplearningcourses.com.
This course is designed to be a beginner to advanced course. All that is required is that you take my free Numpy prerequisites to learn some basic scientific programming in Python. And it’s free, so why wouldn’t you!?
You will learn things that took me years to learn on my own. For many people, that is worth tens of thousands of dollars by itself.
There is no heavy math, no backpropagation, etc. Why? Because I already have courses on those things, so there’s no need to repeat them here – and PyTorch computes the gradients for you automatically anyway. So you can relax and have fun. =)
All of my deep learning courses until now have been in Tensorflow (and prior to that Theano).
So why learn PyTorch?
Does this mean my future deep learning courses will use PyTorch?
In fact, if you have traveled in machine learning circles recently, you will have noticed that there has been a strong shift to PyTorch.
Case in point: OpenAI switched to PyTorch earlier this year (2020).
Major AI shops such as Apple, JPMorgan Chase, and Qualcomm have adopted PyTorch.
PyTorch is primarily maintained by Facebook (Facebook AI Research, to be specific) – the “other” Internet giant which, alongside Google, has a strong vested interest in developing state-of-the-art AI.
But why PyTorch for you and me? (aside from the fact that you might want to work for one of the above companies)
As you know, Tensorflow has adopted the super simple Keras API. This makes common things easy, but it makes uncommon things hard.
With PyTorch, common things take a tiny bit of extra effort, but the upside is that uncommon things are still very easy.
Creating your own custom models and inventing your own ideas is seamless. We will see many examples of that in this course.
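For instance, a custom model in PyTorch is just a Python class, so nonstandard ideas slot in naturally – here is a generic sketch of my own (a residual/skip connection, not code from the course):

```python
# Custom PyTorch module: arbitrary logic lives directly in forward().
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)

    def forward(self, x):
        # Residual connection: add the input back to the transformed output
        return x + self.fc2(torch.relu(self.fc1(x)))

model = nn.Sequential(SkipBlock(8), SkipBlock(8), nn.Linear(8, 1))
out = model(torch.randn(4, 8))
print(out.shape)  # torch.Size([4, 1])
```

Because `forward()` is plain Python, you can branch, loop, or combine modules however you like – this is the flexibility referred to above.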
For this reason, it is very possible that future deep learning courses will use PyTorch, especially for those advanced topics that many of you have been asking for.
Because of the ease at which you can do advanced things, PyTorch is the main library used by deep learning researchers around the world. If that’s your goal, then PyTorch is for you.
In terms of growth rate, PyTorch dominates Tensorflow. PyTorch papers now outnumber Tensorflow papers by 2:1, and even 3:1, at major machine learning conferences. Researchers hold that PyTorch is superior to Tensorflow in the simplicity of its API, and even in speed and performance!
You may recognize this course as one that has already existed in my catalog – however, the course I am announcing today contains ALL-NEW material. The entire course has been gutted, and none of the lectures it now contains existed in the original version.
One of the most common questions I get from students in my PyTorch, Tensorflow 2, and Financial Engineering courses is: “How can I learn reinforcement learning?”
While I do cover RL in those courses, it’s very brief. I’ve essentially summarized 12 hours of material into 2. So by necessity, you will be missing some things.
While that serves as a good way to scratch the surface of RL, it doesn’t give you a true, in-depth understanding that you will get by actually learning each component of RL step-by-step, and most importantly, getting a chance to put everything into code!
This course covers:
The explore-exploit dilemma and the Bayesian bandit method
MDPs (Markov Decision Processes)
Dynamic Programming solution for MDPs
Monte Carlo Method
Temporal Difference Method (including Q-Learning)
Approximation Methods using Radial Basis Functions
Applying your code to OpenAI Gym with zero effort / code changes
Building a stock trading bot (different approach in each course!)
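To give a flavor of the tabular methods in the list above, here is a toy sketch of my own (not course code): epsilon-greedy Q-learning on a 5-state chain where reaching the rightmost state pays +1.

```python
# Tabular Q-learning on a tiny chain MDP (actions: 0 = left, 1 = right).
import random

random.seed(0)
n_states, n_actions = 5, 2
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, eps = 0.1, 0.9, 0.1

def greedy(s):
    m = max(Q[s])  # argmax with random tie-breaking
    return random.choice([a for a in range(n_actions) if Q[s][a] == m])

for _ in range(500):
    s = 0
    while s < n_states - 1:  # episode ends at the rightmost state
        a = random.randrange(n_actions) if random.random() < eps else greedy(s)
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: bootstrap off the best next-state action
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([greedy(s) for s in range(n_states - 1)])  # learned policy: move right
```

The approximation methods in the course replace the table `Q` with a function approximator (e.g. radial basis functions), but the update rule keeps the same shape.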
When you get the DeepLearningCourses.com version, note that you will get both versions (new and old) of the course – totalling nearly 20 hours of material.
If you want access to the tic-tac-toe project, this is the version you should get.
Otherwise, if you prefer to use Udemy, that’s fine too. If you purchase on Udemy but would like access to DeepLearningCourses.com, I will allow this since they are the same price. Just send me an email and show me your proof of purchase.
Note that I’m not able to offer the reverse (I can’t give you access to Udemy if you purchase on DeepLearningCourses.com, due to operational reasons).
As we all know, the near future is somewhat uncertain. With an invisible virus spreading around the world at an alarming rate, some experts have suggested that it may reach a significant portion of the population.
Schools may close, you may be ordered to work from home, or you may want to avoid going outside altogether. This is not fiction – it’s already happening.
There will be little warning, and as students of science and technology, we should know how rapidly things can change when we have exponential growth (just look at AI itself).
Have you decided how you will spend your time?
I find moments of quiet self-isolation to be excellent for learning advanced or difficult concepts – particularly those in machine learning and artificial intelligence.
To that end, I’ll be releasing several coupons today – hopefully that helps you out and you’re able to study along with me.
Despite the fact that I just released a huge course on Tensorflow 2, this course is more relevant than ever. You might take a course that uses batch norm, adam optimization, dropout, batch gradient descent, etc. without any clue how they work. Perhaps, like me, you find doing “batch norm in 1 line of code” to be unsatisfactory. What’s really going on?
And yes, although it was originally designed for Tensorflow 1 and Theano, everything has been done in Tensorflow 2 as well (you’ll see what I mean).
Cutting-Edge AI: Deep Reinforcement Learning in Python
Well, I don’t need to tell you how popular GANs are. They sparked a mini-revolution in deep learning with the ability to generate photo-realistic images, create music, and enhance low-resolution photos.
Variational autoencoders are a great (but often overlooked in beginner courses) tool for understanding and generating data (much like GANs) from a principled, probabilistic viewpoint.
Ever seen those cool illustrations where they can change a picture of a person from smiling to frowning on a continuum? That’s VAEs in action!
This is one of my favorite courses. Every beginner ML course these days teaches you how to plug into scikit-learn.
This is trivial. Everyone can do this. Nobody will give you a job just because you can write 3 lines of code when there are 1000s of others lining up beside you who know just as much.
It’s so trivial I teach it for FREE.
That’s why, in this course (a real ML course), I teach you how to not just use, but implement each of the algorithms (the fundamental supervised models).
At the same time, I haven’t forgotten about the “practical” aspect of ML, so I also teach you how to build a web API to serve your trained model.
This is the eventual place where many of your machine learning models will end up. What? Did you think you would just write a script that prints your accuracy and then call it a day? Who’s going to use your model?
The answer is, you’re probably going to serve it (over a network, duh) using a web server framework such as Django, Flask, or Tornado.
Never written your own backend web server application before? I’ll show you how.
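As a taste of what that looks like, here is a minimal sketch of my own (not the course’s code) of serving predictions behind a web API with Flask – the “model” is a stand-in function; in practice you’d load your trained model from disk:

```python
# A tiny Flask API that serves "predictions" over HTTP.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    # Stand-in for model.predict(); returns the mean of the inputs
    return sum(features) / len(features)

@app.route("/predict", methods=["POST"])
def predict_endpoint():
    features = request.get_json()["features"]
    return jsonify({"prediction": predict(features)})

# Run with: flask --app <this module> run
# then POST JSON like {"features": [1, 2, 3]} to /predict
```

A client would then hit the endpoint with a POST request and get back a JSON prediction – which is exactly the serving pattern described above.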
Alright, that’s all from me. Stay safe out there folks!
Note: these coupons will last 31 days – don’t wait!
Yearly Black Friday sale is HERE! As I always tell my students – you never know when Udemy’s next “sale drought” is going to be – so if you are on the fence about getting a course, NOW is the time.
NOTE: If you are looking for the Tensorflow 2.0 VIP materials, as of now they can only be purchased here: https://deeplearningcourses.com/c/deep-learning-tensorflow-2 (coupon code automatically applied). The site contains only the VIP materials, and the main part of the course can be purchased on Udemy as per the link below. Therefore, if you want the “full” version of the course, each part now must be purchased separately.
What you’ll learn: Support Vector Machines (SVMs) in-depth starting from linear classification theory to the maximum margin method, kernel trick, quadratic programming, and the SMO (sequential minimal optimization) algorithm
BIG DISCOUNTS for everyone! If you’re in the USA you should see $10 coupons. If you’re in another country you’ll see the corresponding amount in your own currency.
But before we get to that, I want to mention that the VIP bonus for my latest Deep Learning course on GANs and Variational Autoencoders is CLOSING TODAY.
So if you want to get the VIP bonus and you haven’t gotten it yet, NOW is the time!
Just a reminder of what you get:
1) PDF cheatsheet / tutorial on Variational Autoencoders for your reading convenience
2) PDF cheatsheet / tutorial on GANs for your reading convenience (with exercises)
3) Pre-trained style transfer network! No need to train for 4 months on your slow CPU, or pay hundreds of dollars to use a GPU, or download 100s of MBs of Tensorflow checkpoint data! I’ve condensed the neural network weights to a few MBs so you can get going right away.
If you don’t know what “style transfer” is – that’s where I train a neural network to learn the “style” of Picasso or Da Vinci, and then apply it to a completely unrelated image like the Chicago skyline.
Very cool application of neural networks!
Remember: these VIP bonuses are ONLY available if you use the VIP coupon, which is automatically applied when you click this link: