Data Science: Transformers for Natural Language Processing
The complete Transformers course has arrived
Welcome to my latest course, Transformers for Natural Language Processing (NLP).
Don’t want to read my little spiel? Just click here to get the VIP discount:
https://www.udemy.com/course/data-science-transformers-nlp/?couponCode=TRANSFORMERSVIP (expires Mar 1, 2023)
Transformers have changed deep learning immensely.
They’ve massively improved the state of the art across NLP tasks, like sentiment analysis, machine translation, question answering, and more.
They’re even expanding their influence into other fields, such as computational biology and computer vision. DeepMind’s AlphaFold 2 has been said to “solve” a longstanding problem in molecular biology, known as protein structure prediction. Recently, DALL-E 2 demonstrated the ability to generate amazing art and photo-realistic images based only on simple text prompts. Imagine that – creating a realistic image out of just an idea!
Just within the past week, DeepMind introduced “Gato”, which is what they call a “generalist agent”: an AI that can do multiple things, like chat (i.e. do NLP!), play Atari games, caption images (i.e. computer vision!), manipulate a real, physical robot arm to stack blocks, and more!
Gato does all this by converting the usual inputs from other domains into sequences of tokens, so that they can be processed just as we do in NLP. This is a great example of my oft-repeated rule, “all data is the same” (and another great reason to learn NLP, since it’s a prerequisite to understanding work like this).
The course is split into 3 major parts:
- Using Transformers (Beginner)
- Fine-Tuning Transformers (Intermediate)
- Transformers In-Depth (Expert – VIP only)
In part 1, you will learn how to use transformers that were pretrained for you. Training these models costs millions of dollars, so it’s not something you want to do yourself!
We’ll see how these prebuilt models can already be used for a wide array of tasks, including:
- text classification (e.g. spam detection, sentiment analysis, document categorization)
- named entity recognition
- text summarization
- machine translation
- generating (believable) text
- masked language modeling (article spinning)
- zero-shot classification
This is already very practical.
If you need to do sentiment analysis, document categorization, entity recognition, translation, summarization, etc. on documents at your workplace or for your clients – you already have the most powerful state-of-the-art models at your fingertips with very few lines of code.
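As a taste of how few lines of code this takes, here is a minimal sketch using the Hugging Face `transformers` library (the model name below is the library’s default sentiment checkpoint; the weights are downloaded from the Hugging Face Hub on first use):

```python
# Sentiment analysis in a few lines with the Hugging Face pipeline API.
# Note: model weights are downloaded on first use.
from transformers import pipeline

# Explicitly naming the default sentiment checkpoint for reproducibility
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

results = classifier(["I love this course!", "This lecture was confusing."])
for r in results:
    print(r["label"], round(r["score"], 3))
```

Swapping the task string (e.g. `"summarization"`, `"translation_en_to_fr"`, `"ner"`) gives you the other tasks in the list above with essentially the same three lines.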
One of the most amazing applications is “zero-shot classification”, where you will observe that a pretrained model can categorize your documents, even without any training at all.
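For instance, here is a minimal zero-shot sketch, again with the Hugging Face pipeline. The BART-MNLI checkpoint is one commonly used zero-shot model, and the candidate labels here are made up purely for illustration:

```python
# Zero-shot classification: categorize text into labels the model was never
# trained on, by framing classification as natural language inference.
from transformers import pipeline

# facebook/bart-large-mnli is a common zero-shot checkpoint (downloaded on first use)
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The quarterly revenue grew 20% compared to last year.",
    candidate_labels=["business", "sports", "politics"],  # illustrative labels
)
print(result["labels"][0])  # highest-scoring label
```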
In part 2, you will learn how to improve the performance of transformers on your own custom datasets. By using “transfer learning”, you can leverage the millions of dollars of training that have already gone into making transformers work very well.
You’ll see that you can fine-tune a transformer for many of the above tasks with relatively little work (and little cost).
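To give a flavor of what fine-tuning looks like, here is a minimal training-loop sketch in PyTorch. To keep it self-contained (no downloads), it instantiates a tiny randomly initialized BERT on dummy data; in real transfer learning you would instead load pretrained weights (e.g. with `AutoModelForSequenceClassification.from_pretrained(...)`), and the loop itself would look the same:

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

# Tiny random config so this sketch runs offline; real fine-tuning would
# load pretrained weights instead (that's the "transfer" in transfer learning).
config = BertConfig(
    vocab_size=100, hidden_size=32, num_hidden_layers=2,
    num_attention_heads=2, intermediate_size=64, num_labels=2,
)
model = BertForSequenceClassification(config)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)

# Dummy batch: 8 "sentences" of 16 token ids each, with fake binary labels
input_ids = torch.randint(0, 100, (8, 16))
labels = torch.randint(0, 2, (8,))

model.train()
for step in range(3):  # a few gradient steps, just to show the loop
    outputs = model(input_ids=input_ids, labels=labels)
    loss = outputs.loss  # cross-entropy computed internally from the labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(round(loss.item(), 4))  # training loss after the last step
```

In practice you would also use a tokenizer matched to the pretrained checkpoint and a real labeled dataset, but the pattern (forward pass, loss, backward pass, optimizer step) is exactly this.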
In part 3 (the VIP sections), you will learn how transformers really work. The previous sections are nice, but a little too nice. Libraries are fine for people who just want to get the job done, but they fall short when you want to do anything new or interesting.
Let’s be clear: this is very practical.
How practical, you might ask?
Well, this is where the big bucks are.
Those who have a deep understanding of these models and can do things no one has ever done before are in a position to command higher salaries and prestigious titles. Machine learning is a competitive field, and a deep understanding of how things work can be the edge you need to come out on top.
We’ll also look at how to implement transformers from scratch.
As the great Richard Feynman once said, “what I cannot create, I do not understand”.
- As usual, I wanted to get this course into your hands as early as possible! A few sections and lectures were still in the works at launch, including (but not limited to): fine-tuning for question-answering, more theory about transformers, and implementing transformers from scratch. I will update this post as new lectures are released. Update: all planned content has now been released.
- Everyone makes mistakes (including me)! Because this is such a large course, if I forgot anything (e.g. a Github link), just email me and let me know.
- Due to the way Udemy now works, if you purchase the course on deeplearningcourses.com, I cannot give you access to the Udemy version. It hasn’t always been this way, and Udemy has tended to make changes over the years that negatively impact both me and you, unfortunately.
- If you don’t know how “VIP courses” work, check out my post on that here. Short version: deeplearningcourses.com always houses all the content (both VIP and non-VIP). Udemy will house all the content initially, but the VIP content is removed later on.
So what are you waiting for? Get the VIP version of Transformers for Natural Language Processing NOW:
https://www.udemy.com/course/data-science-transformers-nlp/?couponCode=TRANSFORMERSVIP