UPDATE: The opportunity to get the VIP version on Udemy has expired. However, the main part of the course (without the VIP parts) is now available at a new low price. Click here to automatically get the current lowest price: https://bit.ly/3nT5fTX
UPDATE 2: Some of you may see the full price of $199 USD without any discount. This is because promotions going forward will be decided by Udemy, so you will only get whatever discounts they choose to offer. Such is the downside of not getting the VIP version. From what I hear, promotions happen quite often, so you should not have to wait too long.
UPDATE 3: I’ve updated the above with an actual coupon code, so ALL students should see a discount.
UPDATE 4: For those of you waiting for me to finish the rest of the course (e.g. the deep learning sections): that has now been done. I’ve also added a big handful of advanced notebooks to the VIP content! (See “Part 6” below.)
IMPORTANT INFO: For those of you who missed the VIP discount but still want access to the VIP content, scroll to the bottom of this post. For those who got the VIP version on Udemy and want to access the VIP content for free at its new permanent home, scroll to the bottom of this post.
“Wait a minute… don’t you already have like, 3 courses on NLP?”
Yes!
My first NLP course was released over 5 years ago. While there have been updates to it over the years, it has turned into a Frankenstein monster of sorts.
Therefore, the logical action was to simply start anew.
This course is another MASSIVE one – I say it’s basically 4 courses in 1 (not including the VIP section).
One of those “courses” (the ML part) is a revamp of my original 2016 NLP course. And therefore, this new course is actually a superset of NLP V1. The TL;DR: way more content, better organization.
Let’s get to the details:
Part 1: Vector models and text-preprocessing
Tokenization, stemming, lemmatization, stopwords, etc.
CountVectorizer and TF-IDF
Basic intro to word2vec and GloVe
Build a text classifier (see the sketch after this list)
Build a recommendation engine
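To make that concrete, here is a minimal sketch of the Part 1 workflow – TF-IDF features feeding a simple classifier – assuming scikit-learn. The tiny dataset and the choice of Logistic Regression are mine for illustration, not necessarily what the course uses.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up toy data: 1 = spam, 0 = not spam
texts = ["free money now", "meeting at noon", "win a free prize", "lunch tomorrow?"]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a simple linear classifier
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["claim your free prize now"]))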
Part 2: Probability models
Markov models and language models
Article spinner
Cipher decryption
Part 3: Machine learning
Spam detection with Naive Bayes
Sentiment analysis with Logistic Regression
Text summarization with TF-IDF and TextRank
Topic modeling with Latent Dirichlet Allocation and Non-negative Matrix Factorization*
Latent semantic indexing (LSI / LSA) with PCA / SVD*
VIP only: Applying LSI to text summarization, topic modeling, classification, and recommendations*
Part 4: Deep learning*
Embeddings
Feedforward ANNs
CNNs
RNNs / LSTMs
Part 5: Beginner’s Corner on Transformers with Hugging Face (VIP only)
Sentiment analysis revisit
Text generation revisit
Article spinning revisit
Question-answering
Zero-shot classification
Part 6: Even MORE bonus VIP notebooks (VIP only)
Stock Movement Prediction Using News
LSA / LSI for Recommendations
LSA / LSI for Classification (Feature Engineering)
LSA / LSI for Topic Modeling
LSA / LSI for Text Summarization (2 methods)
LSTM for Text Generation Notebook (i.e. the “decoder” part of an encoder-decoder network)
Masked language model with LSTM Notebook (revisiting the article spinner)
I’m sure many of you are most excited about the Transformers VIP section. Please note that this is not a full course on Transformers. As you know, I like to go very in-depth, and this is a topic which deserves its own course. This VIP section is a “beginner’s corner”-style set of lectures, which outlines the tasks that Transformers can do (listed above), along with code examples for each task. The Transformer-specific code is very simple – basically just 1 or 2 lines – which is great for practical purposes. Don’t worry, the actual notebooks are much longer than that, and demonstrate real, meaningful use-cases. This section does not show you how to train or fine-tune a Transformer, only how to use existing models. If you just want to apply these state-of-the-art models and don’t care about the nitty-gritty details, this is perfect for you.
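To give a sense of what “1 or 2 lines” looks like, here is a minimal sketch using the Hugging Face pipeline API; the example sentence is made up, and the course notebooks go well beyond this.

from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default pretrained model
print(classifier("This course is a great introduction to NLP!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]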
Is the VIP section only ideal for beginners? NO! Despite the name, this section will be useful for everyone, especially those who are interested in Transformers. This is quite a complex topic, and getting “good” with Transformers really requires a step-by-step approach. Think of this as the first step.
What is the “VIP version”? As usual, the VIP version of the course contains extra VIP content only available to those who purchase the course during the VIP period (i.e. now). This content will be removed when it becomes a regular, non-VIP course, at which point I will make an announcement. All who sign up for the VIP version will retain access to the VIP content forever via my website, simply by letting me know via email that you’d like access (you only need to email if I announce the VIP period is ending).
NOTE: If you are interested in Transformers, a lot of this course contains important prerequisites. The language models and article spinner from part 2 (“probability models”) are very important for understanding pre-training methods. The deep learning sections are very important for learning about embeddings and how neural networks deal with sequences.
NOTE: As with my last few releases, I wanted to get the course into your hands as early as possible, so some sections (those denoted with an asterisk (*) above) were still in progress at launch. UPDATE: All post-release sections have now been uploaded!
So what are you waiting for? Get the VIP version of Natural Language Processing (V2) NOW:
For those who missed the VIP version but still want it:
Yes, you can still get the VIP contents! They can now be purchased separately on deeplearningcourses.com.
Does this course replace “Natural Language Processing with Deep Learning in Python”, or “Deep Learning: Advanced NLP and RNNs”?
No – this new course replaces neither of those more advanced NLP courses.
Let’s first consider “Natural Language Processing with Deep Learning in Python”.
That course generally covers more advanced topics.
For instance, both variants of word2vec (skip-gram and CBOW) are discussed in detail and implemented from scratch. In the current course, only the very basic ideas are discussed.
Another word embedding algorithm called GloVe is taught in detail, along with a from-scratch implementation. In the current course, again it is only mentioned very briefly.
That course reviews RNNs, but goes into great detail on a completely different architecture, the “Recursive Neural Tensor Network”.
Essentially, this is a neural network structured like a tree, which is very useful for tasks such as sentiment analysis where negation of whole phrases may be desired (and easily accomplished with a tree structure).
How about “Deep Learning: Advanced NLP and RNNs”?
Again, there is essentially no overlap.
As the title suggests, that course covers more advanced topics. Like the previously mentioned course, it can be thought of as another sequel to the current course.
It covers topics such as: bidirectional RNNs, seq2seq (for many-to-many tasks where the input length is not equal to the target length), attention (the central mechanism in transformers), and memory networks.
The Black Friday 2021 sale is on! I’m sending you links now which will give you the maximum possible discount during the Black Friday / Cyber Monday season (see below for specific dates).
For those students who are new (welcome!), you may not know that I have a whole catalog of machine learning and AI courses built up and continuously updated over the past 6 years, with separate in-depth courses covering nearly every topic in the field, including:
This is a MASSIVE (20 hours) Financial Engineering course covering the core fundamentals of financial engineering and financial analysis from scratch. We will go in-depth into all the classic topics, such as:
Exploratory data analysis, significance testing, correlations, alpha and beta
Time series analysis, simple moving average, exponentially-weighted moving average
Holt-Winters exponential smoothing model
ARIMA and SARIMA
Efficient Market Hypothesis
Random Walk Hypothesis
Time series forecasting (“stock price prediction”)
Modern portfolio theory
Efficient frontier / Markowitz bullet
Mean-variance optimization
Maximizing the Sharpe ratio
Convex optimization with Linear Programming and Quadratic Programming
Capital Asset Pricing Model (CAPM)
Algorithmic trading
In addition, we will look at various non-traditional techniques which stem purely from the field of machine learning and artificial intelligence, such as:
Regression models
Classification models
Unsupervised learning
Reinforcement learning and Q-learning
List of VIP content:
Classic Algorithmic Trading – Trend Following Strategy
Reinforcement Learning is the most general form of AI we know of so far – some speculate it is the way forward to mimic animal intelligence and attain “AGI” (artificial general intelligence).
This course covers:
The explore-exploit dilemma and the Bayesian bandit method
MDPs (Markov Decision Processes)
Dynamic Programming solution for MDPs
Monte Carlo Method
Temporal Difference Method (including Q-Learning; see the sketch after this list)
Approximation Methods using RBF Neural Networks
Applying your code to OpenAI Gym with zero effort / code changes
Building a stock trading bot (different approach in each course!)
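As a taste of the Temporal Difference material, here is a minimal, generic sketch of the tabular Q-Learning update. The toy two-state environment is made up purely for illustration – it is not the course’s trading environment.

import numpy as np

n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.1   # learning rate, discount factor, exploration rate

def step(state, action):
    # Hypothetical toy environment: action 1 in state 0 earns a reward and moves to state 1
    if state == 0 and action == 1:
        return 1, 1.0
    return 0, 0.0

state = 0
for _ in range(1000):
    # epsilon-greedy explore/exploit
    if np.random.rand() < eps:
        action = np.random.randint(n_actions)
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # the Q-Learning (temporal difference) update
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q)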
Tensorflow 2: Deep Learning and Artificial Intelligence VIP
Exclusive to deeplearningcourses.com only
===The complete Tensorflow 2 course has arrived===
Looking for the LOWEST PRICE POSSIBLE Udemy Coupons?
Please enjoy the below Black Friday coupons for the rest of my courses on Udemy.
The best part is, you don’t have to enter any coupon code at all. Simply clicking on the links below will automatically get you the best possible price.
*Note: a few of the courses below, marked with an asterisk (*), are not part of the Black Friday sale. However, if you purchase these courses at the current price, you will receive, upon request, complimentary access to the full VIP version of the course on deeplearningcourses.com. Just email me at info [at] lazyprogrammer [dot] me with proof of purchase for free access.
Support Vector Machines (SVMs) in-depth starting from linear classification theory to the maximum margin method, kernel trick, quadratic programming, and the SMO (sequential minimal optimization) algorithm
Learn how we went from the fundamental ANNs to many of the key technologies we use today, such as:
Batch / stochastic gradient descent instead of full gradient descent
(Nesterov) momentum, RMSprop, Adam, and other adaptive learning rate techniques
Dropout regularization
Batch normalization
Learn how deep learning is accelerated by GPUs (and how to set one up yourself)
Learn how deep learning libraries improve the development process with GPUs (faster training) and automatic differentiation (so you don’t have to write the code or derive the math yourself)
Apply deep learning to natural language processing (NLP)
Covers the famous word2vec and GloVe algorithms
See how RNNs apply to text problems
Learn about a neural network structured like a “tree” which we call recursive neural networks and a more powerful version: recursive neural tensor networks (RNTNs)
Learn how combining multiple machine learning models is better than just one
Covers fundamental ensemble approaches such as Random Forest and AdaBoost
Learn/derive the famous “bias-variance tradeoff” (most people can only discuss it at a high level; you will learn what it really means)
Learn about the difference between the “bagging” and “boosting” approaches
Remember, this is a very rare sale (only once per year!). If there’s anything you want or if you are on the fence and think you might be interested, get it NOW so that you don’t miss out!
Ever come across a machine learning / data science blog demonstrating how to predict stock prices using an autoregressive model, with past stock prices as input?
It’s been a while, but I am finally continuing the YouTube mini-series I started some time ago, which goes over common mistakes in popular blogs on predicting stock prices with machine learning. This is the 2nd installment.
It is about why you shouldn’t use prices as inputs.
Time series analysis is becoming an increasingly important analytical tool.
With inflation on the rise, many are turning to the stock market and cryptocurrencies in order to ensure their savings do not lose their value.
COVID-19 has shown us how forecasting is an essential tool for driving public health decisions.
Businesses are becoming increasingly efficient, forecasting inventory and operational needs ahead of time.
Let me cut to the chase. This is not your average Time Series Analysis course. This course covers modern developments such as deep learning, time series classification (which can derive user insights from smartphone data, or read your thoughts from electrical activity in the brain), and more.
We will cover techniques such as:
ETS and Exponential Smoothing
Holt’s Linear Trend Model
Holt-Winters Model (see the sketch after these lists)
ARIMA, SARIMA, SARIMAX, and Auto ARIMA
ACF and PACF
Vector Autoregression and Moving Average Models (VAR, VMA, VARMA)
Machine Learning Models (including Logistic Regression, Support Vector Machines, and Random Forests)
Deep Learning Models (Artificial Neural Networks, Convolutional Neural Networks, and Recurrent Neural Networks)
GRUs and LSTMs for Time Series Forecasting
We will cover applications such as:
Time series forecasting of sales data
Time series forecasting of stock prices and stock returns
Time series classification of smartphone data to predict user behavior
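To make one of the techniques above concrete, here is a minimal Holt-Winters sketch assuming statsmodels; the monthly series is simulated and is not one of the course’s datasets.

import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Simulated monthly data with a trend and yearly seasonality
idx = pd.date_range("2015-01-01", periods=60, freq="M")
y = pd.Series(100 + np.arange(60) + 10 * np.sin(2 * np.pi * np.arange(60) / 12), index=idx)

model = ExponentialSmoothing(y, trend="add", seasonal="add", seasonal_periods=12).fit()
print(model.forecast(12))   # forecast the next 12 months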
The VIP version of the course (obtained by purchasing the course NOW during the VIP period) will cover even more exciting topics.
As always, please note that the VIP period may not last forever, and if / when the course becomes “non-VIP”, the VIP contents will be removed. If you purchased the VIP version, you will retain permanent access to the VIP content via my website, simply by letting me know via email you’d like access (you only need to email if I announce the VIP period is ending).
So what are you waiting for? Get the VIP version of Time Series Analysis NOW:
This is a MASSIVE (over 24 hours) Deep Learning course covering EVERYTHING from scratch. That includes:
Machine learning basics (linear neurons)
ANNs, CNNs, and RNNs for images and sequence data
Time series forecasting and stock predictions (+ why all those fake data scientists are doing it wrong)
NLP (natural language processing)
Recommender systems
Transfer learning for computer vision
GANs (generative adversarial networks)
Deep reinforcement learning and applying it by building a stock trading bot
IN ADDITION, you will get some unique and never-before-seen VIP projects:
Estimating prediction uncertainty
Drawing the standard deviation of the prediction along with the prediction itself. This is useful for heteroskedastic data (that means the variance changes as a function of the input). The most popular applications where heteroskedasticity appears are stock prices and stock returns – which I know a lot of you are interested in.
It allows you to plot a standard-deviation band around your model predictions.
Sometimes, the data is simply such that a spot-on prediction can’t be made. But we can do better by letting the model tell us how certain it is in its predictions.
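To illustrate the general idea (this is a generic sketch, not the VIP project’s actual code), one common approach is to have the network output both a mean and a log-variance, and train it with a Gaussian negative log-likelihood – here assuming Tensorflow / Keras and a made-up toy dataset.

import numpy as np
import tensorflow as tf

# Toy heteroskedastic data: the noise grows with |x|
x = np.random.uniform(-1, 1, size=(1000, 1)).astype("float32")
y = 2 * x + (0.1 + 0.5 * np.abs(x)) * np.random.randn(1000, 1).astype("float32")

inputs = tf.keras.Input(shape=(1,))
h = tf.keras.layers.Dense(64, activation="relu")(inputs)
mean = tf.keras.layers.Dense(1)(h)
log_var = tf.keras.layers.Dense(1)(h)   # predict log-variance for numerical stability
model = tf.keras.Model(inputs, tf.keras.layers.Concatenate()([mean, log_var]))

def gaussian_nll(y_true, y_pred):
    mu, lv = y_pred[:, :1], y_pred[:, 1:]
    return tf.reduce_mean(0.5 * (lv + tf.square(y_true - mu) / tf.exp(lv)))

model.compile(optimizer="adam", loss=gaussian_nll)
model.fit(x, y, epochs=20, verbose=0)

pred = model.predict(x[:5])
mu, sigma = pred[:, 0], np.exp(0.5 * pred[:, 1])   # std = exp(log_var / 2)
print(mu, sigma)

You can then plot mu with a band of, say, mu ± 2 * sigma around it.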
Facial recognition with siamese networks
This one is cool. I mean, I don’t have to tell you how big facial recognition has become, right? It’s the single most controversial technology to come out of deep learning. In the past, we looked at simple ways of doing this with classification, but in this section I will teach you about an architecture built specifically for facial recognition.
You will learn how this can work even on small datasets – so you can build a network that recognizes your friends or can even identify all of your coworkers!
You can really impress your boss with this one. Surprise them one day with an app that calls out your coworkers by name every time they walk by your desk. 😉
Please note: The VIP coupon will work only for the next month (ending May 1, 2020). It’s unknown whether the VIP period will renew after that time.
After that, although the VIP content will be removed from Udemy, all who purchased the VIP course will get permanent free access on deeplearningcourses.com.
Minimal Prerequisites
This course is designed to be a beginner-to-advanced course. All that is required is that you take my free Numpy prerequisites course to learn some basic scientific programming in Python. And it’s free, so why wouldn’t you!?
You will learn things that took me years to learn on my own. For many people, that is worth tens of thousands of dollars by itself.
There is no heavy math, no backpropagation, etc. Why? Because I already have courses on those things, so there’s no need to repeat them here – and PyTorch handles the differentiation for you anyway. So you can relax and have fun. =)
Why PyTorch?
All of my deep learning courses until now have been in Tensorflow (and prior to that Theano).
So why learn PyTorch?
Does this mean my future deep learning courses will use PyTorch?
In fact, if you have traveled in machine learning circles recently, you will have noticed that there has been a strong shift to PyTorch.
Case in point: OpenAI switched to PyTorch earlier this year (2020).
Major AI shops such as Apple, JPMorgan Chase, and Qualcomm have adopted PyTorch.
PyTorch is primarily maintained by Facebook (Facebook AI Research, to be specific) – the “other” Internet giant which, alongside Google, has a strong vested interest in developing state-of-the-art AI.
But why PyTorch for you and me? (aside from the fact that you might want to work for one of the above companies)
As you know, Tensorflow has adopted the super simple Keras API. This makes common things easy, but it makes uncommon things hard.
With PyTorch, common things take a tiny bit of extra effort, but the upside is that uncommon things are still very easy.
Creating your own custom models and inventing your own ideas is seamless. We will see many examples of that in this course.
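For example, here is a minimal custom PyTorch model – the architecture is arbitrary and just meant to show how little boilerplate is involved.

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, d_out),
        )

    def forward(self, x):
        return self.net(x)

model = MyModel(10, 32, 1)
x = torch.randn(8, 10)
loss = ((model(x) - torch.zeros(8, 1)) ** 2).mean()
loss.backward()   # gradients flow through your custom forward() automatically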
For this reason, it is very possible that future deep learning courses will use PyTorch, especially for those advanced topics that many of you have been asking for.
Because of the ease at which you can do advanced things, PyTorch is the main library used by deep learning researchers around the world. If that’s your goal, then PyTorch is for you.
In terms of growth rate, PyTorch dominates Tensorflow. PyTorch now outnumbers Tensorflow by 2:1 and even 3:1 at major machine learning conferences. Many researchers hold that PyTorch is superior to Tensorflow in terms of the simplicity of its API, and even in speed / performance!
As we all know, the near future is somewhat uncertain. With an invisible virus spreading around the world at an alarming rate, some experts have suggested that it may reach a significant portion of the population.
Schools may close, you may be ordered to work from home, or you may want to avoid going outside altogether. This is not fiction – it’s already happening.
There will be little warning, and as students of science and technology, we should know how rapidly things can change when we have exponential growth (just look at AI itself).
Have you decided how you will spend your time?
I find moments of quiet self-isolation to be excellent for learning advanced or difficult concepts – particularly those in machine learning and artificial intelligence.
To that end, I’ll be releasing several coupons today – hopefully that helps you out and you’re able to study along with me.
Despite the fact that I just released a huge course on Tensorflow 2, this course is more relevant than ever. You might take a course that uses batch norm, Adam optimization, dropout, batch gradient descent, etc. without any clue how they work. Perhaps, like me, you find doing “batch norm in 1 line of code” to be unsatisfactory. What’s really going on?
And yes, although it was originally designed for Tensorflow 1 and Theano, everything has been done in Tensorflow 2 as well (you’ll see what I mean).
Cutting-Edge AI: Deep Reinforcement Learning in Python
A lot of people think SVMs are obsolete. Wrong! A lot of you students want a nice “plug-and-play” model that works well out of the box. Guess what one of the best models is for that? SVM!
Many of the concepts from SVMs are extremely useful today – like quadratic programming (used for portfolio optimization) and constrained optimization.
Constrained optimization appears in modern Reinforcement Learning, for you non-believers (see: TRPO, PPO).
Well, I don’t need to tell you how popular GANs are. They sparked a mini-revolution in deep learning with the ability to generate photo-realistic images, create music, and enhance low-resolution photos.
Variational autoencoders are a great (but often forgotten by those beginner courses) tool for understanding and generating data (much like GANs) from a principled, probabilistic viewpoint.
Ever seen those cool illustrations where they can change a picture of a person from smiling to frowning on a continuum? That’s VAEs in action!
This is one of my favorite courses. Every beginner ML course these days teaches you how to plug into scikit-learn.
This is trivial. Everyone can do this. Nobody will give you a job just because you can write 3 lines of code when there are 1000s of others lining up beside you who know just as much.
It’s so trivial I teach it for FREE.
That’s why, in this course (a real ML course), I teach you how to not just use, but implement each of the algorithms (the fundamental supervised models).
At the same time, I haven’t forgotten about the “practical” aspect of ML, so I also teach you how to build a web API to serve your trained model.
This is the eventual place where many of your machine learning models will end up. What? Did you think you would just write a script that prints your accuracy and then call it a day? Who’s going to use your model?
The answer is, you’re probably going to serve it (over a server, duh) using a web server framework, such as Django, Flask, Tornado, etc.
Never written your own backend web server application before? I’ll show you how.
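Here is a minimal sketch using Flask (one of the frameworks mentioned above); the model file name and the input format are hypothetical.

import pickle
from flask import Flask, jsonify, request

app = Flask(__name__)

# "model.pkl" is a hypothetical pre-trained scikit-learn model saved with pickle
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]   # e.g. [[5.1, 3.5, 1.4, 0.2]]
    return jsonify({"prediction": model.predict(features).tolist()})

if __name__ == "__main__":
    app.run(port=5000)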
Alright, that’s all from me. Stay safe out there folks!
Note: these coupons will last 31 days – don’t wait!
In this article, I will teach you how to set up your NVIDIA GPU laptop (or desktop!) for deep learning with NVIDIA’s CUDA and CuDNN libraries.
The main thing to remember before we start is that these steps are constantly in flux – things change, and they change quickly, in the field of deep learning. Therefore I remind you of my slogan: “Learn the principles, not the syntax”. We are not doing any coding here so there’s no “syntax” per se, but the general idea is the same: learn the principles at a high level, and don’t try to memorize details that may change on you and confuse you if you lose sight of the principles.
This article is more like a personal story rather than a strict tutorial. It’s meant to help you understand the many obstacles you may encounter along the way, and what practical strategies you can take to get around them.
There are about 10 different ways to install the things we need. Some will work; some won’t. That’s just how cutting-edge software is. If that makes you uncomfortable, well, stop being a baby. Yes, it’s going to be frustrating. No, I didn’t invent this stuff, it is not within my control. Learn the principles, not the syntax!
This article will be organized into the following sections:
If you’ve never set up your laptop for GPU-enabled deep learning before, then you might assume that there’s nothing you need to do beyond buying a laptop with a GPU. WRONG!
You need to have a specific kind of laptop with specific software and drivers installed. Everything must work together.
You can think of all the software on your computer as a “stack” of layers.
At the lowest layer, you have the kernel (very low-level software that interacts with the hardware) and at higher levels you have runtimes and libraries such as SQLite, SSL, etc.
When you write an application, you need to make use of lower-level runtimes and libraries – your code doesn’t just run all by itself.
So, when you install Tensorflow (as an example), that depends on lower-level libraries (such as CUDA and CuDNN) which interact with the GPU (hardware).
If any of the layers in your stack are missing (all the way from the hardware up to high-level libraries), your code will not work.
Low-Level = Hardware
High-Level = Libraries and Frameworks
Choosing your laptop
Not all GPUs are created equal. If you buy a MacBook Pro these days, you’ll get a Radeon Pro Vega GPU. If you buy a Dell laptop, it might come with an Intel UHD GPU.
These are no good for machine learning or deep learning.
You will need a laptop with an NVIDIA GPU.
Some laptops come with a “mobile” NVIDIA GPU, such as the GTX 950m. These are OK, but ideally you want a GPU that doesn’t end with “m”. As always, check performance benchmarks if you want the full story.
I would also recommend at least 4GB of RAM (otherwise, you won’t be able to use larger batch sizes, which will affect training).
In fact, some of the newer neural networks won’t even fit in memory for prediction, never mind training!
One thing you have to consider is if you actually want to do deep learning on your laptop vs. just provisioning a GPU-enabled machine on a service such as AWS (Amazon Web Services).
These will cost you a few cents to a dollar per hour (depending on the machine type), so if you just have a one-off job to run, you may want to consider this option.
I already have a walkthrough tutorial in my course Modern Deep Learning in Python about that, so I assume if you are reading this article, you are rather interested in purchasing your own GPU-enabled computer and installing everything yourself.
Personally, I would recommend Lenovo laptops. The main reason is they always play nice with Linux (we’ll go over why that’s important in the next section). Lenovo is known for their high-quality and sturdy laptops and most professionals who use PCs for work use Thinkpads. They have a long history (decades) of serving the professional community so it’s nearly impossible to go wrong. Other brands generally have lots of issues (e.g. sound not working, WiFi not working, etc.) with Linux.
This one only has an i5 processor and 8GB of RAM, but on the plus side it’s cost-effective. Note that the prices were taken when I wrote this article; they might change.
This is the best option in my opinion. Better or equal specs compared to the previous two. i7 processor, 24GB of RAM (32GB would be ideal!), lots of space (1TB HD + 512GB SSD), and the same GPU. Bonus: it’s nearly the same price as the above (currently).
If you really want to splurge, consider one of these big boys. Thinkpads are classic professional laptops. These come with real beast GPUs – NVIDIA Quadro RTX 5000 with 16GB of VRAM.
You’ve still got the i7 processor, 16GB of RAM, and a 512GB NVMe SSD (basically a faster version of already-super-fast SSDs). Personally, I think if you’re going to splurge, you should opt for 32GB of RAM and a 1TB SSD.
If you’ve watched my videos, you might be wondering: what about a Mac? (I use a Mac for screen recording).
Macs are great in general for development, and they used to come with NVIDIA GPUs (although those GPUs are not as powerful as the ones currently available for PCs). Support for Mac has dropped off in the past few years, so you won’t be able to install, say, the latest versions of Tensorflow, CUDA, and CuDNN without a significant amount of effort (I spent probably a day and just gave up). And on top of that, the GPU won’t even be that great. Overall, not recommended.
Choosing your Operating System
As I mentioned earlier, you probably want to be running Linux (Ubuntu is my favorite).
Why, you might ask?
“Tensorflow works on Windows, so what’s the problem?”
Remember my motto: “Learn the principles, not the syntax“.
What’s the principle here? Many of you probably haven’t been around long enough to know this, but the problem is, many machine learning and deep learning libraries didn’t work with Windows when they first came out.
So, unless you want to wait a year or more for new inventions and software to become usable on Windows, try to avoid it.
Don’t take my word for it – look at the examples:
Early on, even installing Numpy, Matplotlib, Pandas, etc. was very difficult on Windows. I’ve spent hours with clients on this. Nowadays you can just use Anaconda, but that hasn’t always been the case – things only started to shape up a few years ago.
Theano (the original GPU-enabled deep learning library) initially did not work on Windows for many years.
Tensorflow, Google’s deep learning library and the most popular today, initially did not work on Windows.
PyTorch, a deep learning library popular with the academic community, initially did not work on Windows.
OpenAI Gym, the most popular reinforcement learning library, only partially works on Windows. Some environments, such as MuJoCo and Atari, still have no support for Windows.
There are more examples, but these are the major historical “lessons” I point to for why I normally choose Linux over Windows.
One benefit of using Windows is that installing CUDA is very easy, and it’s very likely that your Windows OS (on your Lenovo laptop) will come with it pre-installed. The original use-case for GPUs was gaming, so it’s pretty user-friendly.
If you purchase one of the above laptops and you choose to stick with Windows, then you will not have to worry about installing CUDA – it’s already there. There is a nice user interface so whenever you need to update the CUDA drivers you can do so with just a few clicks.
Aside from the Python libraries below (such as Tensorflow / PyTorch) you need to install 2 things from NVIDIA first:
CUDA (already comes with Windows if you purchase one of the above laptops, Ubuntu instructions below)
CuDNN (you have to install it yourself, following the instructions on NVIDIA’s website)
DUAL-BOOTING:
I always find it useful to have both Windows and Ubuntu on-hand, and if you get the laptop above that has 2 drives (1TB HD and 512GB SSD) dual-booting is a natural choice.
These days, dual booting is not too difficult. Usually, one starts with Windows. Then, you insert your Ubuntu installer (USB stick), and choose the option to install Ubuntu alongside the existing OS. There are many tutorials online you can follow.
Hint: Upon entering the BIOS, you may have to disable the Secure Boot / Fast Boot options.
INSTALLING PYTHON:
I already have lectures on how to install Python with and without Anaconda. These days, Anaconda works well on Linux, Mac, and Windows, so I recommend it for easy management of your virtual environments.
Installing CUDA and CuDNN on Ubuntu and similar Linux OSes (Debian, Pop!_OS, Xubuntu, Lubuntu, etc.)
Ok, now we get to the hard stuff. You have your laptop and your Ubuntu/Debian OS.
Can you just install Tensorflow and magically start making use of your super powerful GPU? NO!
Now you need to install the “low-level” software that Tensorflow/Theano/PyTorch/etc. make use of – which are CUDA and CuDNN.
This is where things get tricky, because there are many ways to install CUDA and CuDNN, and some of these ways don’t always work (from my experience).
Examples of how things can “randomly go wrong”:
I installed CUDA on Linux Mint. After this, I was unable to boot the machine and get into the OS.
Pop!_OS (System76) has its own versions of CUDA and CuDNN that you can install with simple apt commands. They didn’t work – I had to install them the “regular way”.
Updating CUDA and CuDNN sucks. You may find the nuclear option easier (reinstalling the OS and drivers from scratch).
Here is a method that consistently works for me:
Go to https://developer.nvidia.com/cuda-downloads and choose the options appropriate for your system. (Linux / x86_64 (64-bit) / Ubuntu / etc.). Note that Pop!_OS is a derivative of Ubuntu, as is Linux Mint.
You’ll download a .deb file. Do the usual “dpkg -i <filename>.deb” to run the installer. CUDA is installed!
Those instructions are subject to change, but basically you can just copy and paste the commands the download page gives you (don’t copy them from an old tutorial – check the site to get the latest version).
If you decided you hate reinforcement learning and you’re okay with not being able to use new software until it becomes mainstream, then you may have decided you want to stick with Windows.
Luckily, there’s still lots you can do in deep learning.
As mentioned previously, installing CUDA and CuDNN on Windows is easy.
If you did not get a laptop which has CUDA preinstalled, then you’ll have to install it yourself. Go to https://developer.nvidia.com/cuda-downloads, choose the options appropriate for your system (Windows 10 / x86_64 (64-bit) / etc.)
This will give you a .exe file to download. Simply click on it and follow the onscreen prompts.
Unlike the other libraries we’ll discuss, Tensorflow has separate packages for the CPU and GPU versions.
The Tensorflow website will give you the exact command to run to install Tensorflow (it’s the same whether you are in Anaconda or not).
So you would install it using either:
pip install tensorflow
pip install tensorflow-gpu
Since this article is about GPU-enabled deep learning, you’ll want to install tensorflow-gpu.
UPDATE: Starting with version 2.1, installing “tensorflow” will automatically give you GPU capabilities, so there’s no need to install a GPU-specific version (although the syntax still works).
After installing Tensorflow, you can verify that it is using the GPU:
import tensorflow as tf
tf.test.is_gpu_available()
This will return True if Tensorflow is using the GPU.
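Note that in recent TF 2.x releases, tf.test.is_gpu_available() has been deprecated; to the best of my knowledge, the equivalent check is to list the visible GPU devices:

import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))   # a non-empty list means Tensorflow can see your GPU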
Installing GPU-enabled PyTorch
Nothing special nowadays! Just do:
pip install torch
as usual.
To check whether PyTorch is using the GPU, you can use the following commands:
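For example (these are the standard torch.cuda utilities; the exact output depends on your setup):

import torch

print(torch.cuda.is_available())        # True if PyTorch can see a CUDA GPU
print(torch.cuda.device_count())        # number of GPUs available
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # name of the first GPU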
Luckily, Keras is just a wrapper around other libraries such as Tensorflow and Theano. Therefore, there is nothing special you have to do, as long as you already have the GPU-enabled version of the base library.
Therefore, just install Keras as you normally would:
pip install keras
As long as Keras is using Tensorflow as a backend, you can use the same method as above to check whether or not the GPU is being used.
Installing GPU-enabled Theano
For both Ubuntu and Windows, as always I recommend using Anaconda. In this case, Theano with GPU support can be installed with a single conda command (check the Theano documentation for the current one).
SIDE NOTE: Unfortunately, I will not provide technical support for your environment setup. You are welcome to schedule a 1-on-1 but availability is limited.
Disclaimer: this post contains Amazon affiliate links.
Yearly Black Friday sale is HERE! As I always tell my students – you never know when Udemy’s next “sale drought” is going to be – so if you are on the fence about getting a course, NOW is the time.
NOTE: If you are looking for the Tensorflow 2.0 VIP materials, as of now they can only be purchased here: https://deeplearningcourses.com/c/deep-learning-tensorflow-2 (coupon code automatically applied). The site contains only the VIP materials, and the main part of the course can be purchased on Udemy as per the link below. Therefore, if you want the “full” version of the course, each part now must be purchased separately.
What you’ll learn: Support Vector Machines (SVMs) in-depth starting from linear classification theory to the maximum margin method, kernel trick, quadratic programming, and the SMO (sequential minimal optimization) algorithm
Learn how we went from the fundamental ANNs to many of the key technologies we use today, such as:
Batch / stochastic gradient descent instead of full gradient descent
(Nesterov) momentum, RMSprop, Adam, and other adaptive learning rate techniques
Dropout regularization
Batch normalization
Learn how deep learning is accelerated by GPUs (and how to set one up yourself)
Learn how deep learning libraries improve the development process with GPUs (faster training) and automatic differentiation (so you don’t have to write the code or derive the math yourself)
Learn about classic clustering methods such as K-Means, Hierarchical Clustering, and Gaussian Mixture Models (a probabilistic approach to Cluster Analysis)
Apply clustering to real-world datasets such as organizing books, clustering Hillary Clinton and Donald Trump tweets, and DNA
Apply deep learning to natural language processing (NLP)
Covers the famous word2vec and GloVe algorithms
See how RNNs apply to text problems
Learn about a neural network structured like a “tree” which we call recursive neural networks and a more powerful version: recursive neural tensor networks (RNTNs)
PLEASE NOTE: VIP material will be removed from Udemy on November 27. If you signed up for the VIP version (using the VIP coupon) and want access beyond that point, you must email me at info [at] lazyprogrammer [dot] me.
If you want the VIP (full) version of the course beyond that date, you now need to purchase the “main” part and the “VIP” part separately. The “main” part can be purchased on Udemy and the “VIP” part can be purchased from: https://deeplearningcourses.com/c/deep-learning-tensorflow-2
—–
I am happy to announce my latest and most massive course yet – Tensorflow 2.0: Deep Learning and Artificial Intelligence.
Guys I am not joking – this really is my most massive course yet – check out the curriculum.
Many of you will be interested in the stock prediction example, because you’ve been tricked by marketers posing as data scientists in the past – I will demonstrate why their results are seriously flawed.
This is technically Deep Learning in Python part 12, but importantly this need not be the 12th deep learning course of mine that you take!
There are quite a few important points to cover in this announcement, so let me outline what I will discuss:
A) What’s covered in this course
B) Why there are almost zero prerequisites for this course
C) The VIP content and near-term additions
D) The story behind this course (if you’ve been following my courses for some time you will be interested in this)
What’s covered in this course
As mentioned – this course is massive. It’s going to take you from basic linear models (the neuron) to ANNs, CNNs, and RNNs.
Thanks to the new standardized Tensorflow 2.0 API – we can move quickly.
The theme of this course is breadth, not depth. If you’re looking for heavy theory (e.g. backpropagation), well, I already have courses for those. So there’s no point in repeating that.
We will however go pretty in-depth to ensure that convolution (the main component of CNNs) and recurrent units (the main component of RNNs) are explained intuitively and from multiple perspectives.
These will include explanations and intuitions you have likely not seen before in my courses, so even if you’ve taken my CNN and RNN courses before, you will still want to see this.
There are many applications in this course. Here are a few:
– we will prove Moore’s Law using a neuron
– image classification with modern CNN design and data augmentation
– time series analysis and forecasting with RNNs
Anyone who is interested in stock prediction should check out the RNN section. Most RNN resources out there only look at NLP (natural language processing), including my old RNN course, but very few look at time series and forecasting.
And out of the ones that do, many do forecasting totally wrong!
There is one stock forecasting example I see everywhere, but its methodology is flawed. I will demonstrate why it’s flawed, and why stock prediction is not as simple as you have been led to believe.
There’s also a ton of Tensorflow-specific content, such as:
– Tensorflow serving (i.e. how to build a web service API from a Tensorflow model)
– Distributed training for faster training times (what Tensorflow calls “distribution strategies”)
– Low-level Tensorflow – this has changed completely from Tensorflow 1.x
– How to build your own models using the new Tensorflow 2.0 API
– Tensorflow Lite (how to export your models for mobile devices – iOS and Android) (coming soon)
– Tensorflow.js (how to export your models for the browser) (coming soon)
Why there are almost zero prerequisites for this course
Due to the new standardized Tensorflow 2.0 API, writing neural networks is easier than ever before.
This means that we’ll be able to blast through each section with very little theory (no backpropagation).
All you will need is a basic understanding of Python, Numpy, and Machine Learning, which are all taught in my free Numpy course.
As I always say, it’s free, so you have no excuses!
Tensorflow 2.0, however, does not invalidate or replace my other courses. If you haven’t taken them yet, you should take this course first for breadth, and then take the other courses, which focus on individual models (CNNs, RNNs), for depth.
The VIP content and near-term additions
I had so much content in mind for this course, but I wanted to get this into your hands as soon as possible. With Tensorflow 2.0 due to be released any day now, I wanted to give you all a head start.
This field is moving so fast that things were changing while I was making the course. Insane!
I’ll be adding more content in the coming weeks, possibly including but not limited to:
– Transfer Learning
– Natural Language Processing
– GANs
– Recommender Systems
– Reinforcement Learning
For this release, only the VIP version will be available for some time. That is why you do not see the usual Udemy discount.
You may be wondering: Which parts of the content are VIP content, and which are not?
This time, I wanted to do something interesting: it’s a surprise!
The VIP content will be added to a special section called the “VIP Section”, and this will be removed once the course becomes “Non-VIP”.
I will make an announcement well before that happens, so you will have the chance to download the VIP content before then, as well as get access to the VIP content permanently from deeplearningcourses.com.
The story behind this course
Originally, this course was going to be an RNN course only (hence why the RNN sections have so much more content – both time series and NLP).
The reason for this was, my original RNN course was tied to Theano and building RNNs from scratch.
In Tensorflow, building RNNs is completely different. This is unlike ANNs and CNNs, which are relatively similar in both libraries.
Thus, I could never reconcile the differences between the Theano approach and the Tensorflow approach in my original RNN course. So, I decided that simply making a new course for RNNs in Tensorflow would be best.
But lo and behold – Tensorflow was evolving so fast that a new version was about to be released – so I thought, it’s probably best to just cover everything in Tensorflow 2.0!
For the next week, all my Deep Learning and AI courses are available for just $9.99! (In addition to other courses on the site for the next few days)
For those of you who have been around for some time, you know that this sale doesn’t come around very often – just a few times per year. If you’ve been on the fence about getting a course, NOW is the time to do so. Get it now – save it for later.
For my courses, please use the coupons below (included in the links), or if you want, enter the coupon code: JUN2019.
As usual, if you want to know what order to take my courses in, check out the lecture “What order should I take your courses in?” in the Appendix of any of my courses (including the free Numpy course).
For prerequisite courses (math, stats, Python programming) and all other courses, follow the links at the bottom for sales of up to 90% off!
Since ALL courses on Udemy are on sale, if you want any course not listed here, just click the general (site-wide) link and search for courses from that page.
Into Yoga in your spare time? Photography? Painting? There are courses, and I’ve got coupons! If you find a course on Udemy that you’d like a coupon for, just let me know and I’ll hook you up!