NEW COURSE: Financial Engineering and Artificial Intelligence in Python

September 8, 2020

Financial Engineering and Artificial Intelligence in Python

VIP Promotion


The complete Financial Engineering course has arrived

Hello once again friends!

Today, I am announcing the VIP version of my latest course: Financial Engineering and Artificial Intelligence in Python.

If you don’t want to read my little spiel just click here to get your VIP coupon:

(as usual, this coupon lasts only 30 days, so don’t wait!)


This is a MASSIVE (18 hours) Financial Engineering course covering the core fundamentals of financial engineering and financial analysis from scratch. We will go in-depth into all the classic topics, such as:

  • Exploratory data analysis, significance testing, correlations, alpha and beta
  • Time series analysis, simple moving average, exponentially-weighted moving average
  • Holt-Winters exponential smoothing model
  • Efficient Market Hypothesis
  • Random Walk Hypothesis
  • Time series forecasting (“stock price prediction”)
  • Modern portfolio theory
  • Efficient frontier / Markowitz bullet
  • Mean-variance optimization
  • Maximizing the Sharpe ratio
  • Convex optimization with Linear Programming and Quadratic Programming
  • Capital Asset Pricing Model (CAPM)
  • Algorithmic trading

In addition, we will look at various non-traditional techniques which stem purely from the field of machine learning and artificial intelligence, such as:

  • Regression models
  • Classification models
  • Unsupervised learning
  • Reinforcement learning and Q-learning

We will learn about the greatest flub made in the past decade by marketers posing as “machine learning experts” who promise to teach unsuspecting students how to “predict stock prices with LSTMs”. You will learn exactly why their methodology is fundamentally flawed and why their results are complete nonsense. It is a lesson in how not to apply AI in finance.


As with my Tensorflow 2 release, some of the VIP content will be a surprise and will be released in stages. Currently, all of the Algorithmic Trading sections are VIP sections. These include:


Classic Algorithmic Trading – Trend Following Strategy

You will learn how moving averages can be applied to algorithmic trading.
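To give you a taste of the idea, here is a minimal sketch of a moving-average crossover signal in Python. This is not the course's actual implementation; the window lengths, the synthetic price series, and the function names are all just illustrative assumptions.

```python
import numpy as np

def sma(prices, window):
    """Simple moving average via a sliding window (valid region only)."""
    kernel = np.ones(window) / window
    return np.convolve(prices, kernel, mode="valid")

def crossover_signal(prices, fast=10, slow=30):
    """Return 1 (long) where the fast SMA is above the slow SMA, else 0 (flat).
    Both averages are aligned so signals start once the slow window fills."""
    fast_sma = sma(prices, fast)[slow - fast:]  # trim to match the slow SMA
    slow_sma = sma(prices, slow)
    return np.where(fast_sma > slow_sma, 1, 0)

# Illustrative synthetic price series: an uptrend with noise
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0.1, 1.0, size=500)) + 100
signal = crossover_signal(prices)
print(signal.shape)  # one signal per day after the slow window fills
```

The classic trend-following intuition: go long while the short-term average sits above the long-term average, and step aside otherwise.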


Machine Learning-Based Trading Strategy

Forecast returns in order to determine when to buy and sell.


Reinforcement Learning-Based (Q-Learning) Trading Strategy

I give you a full introduction to Reinforcement Learning from scratch, and then we apply it to build a Q-Learning trader. Note that this is *not* the same as the example I used in my Tensorflow 2, PyTorch, and Reinforcement Learning courses. I think the example included in this course is much more principled and robust.


Please note: The VIP coupon will work only for the next month (ending Oct 8, 2020). It’s unknown whether the VIP period will renew after that time.

After that, although the VIP content will be removed from Udemy, all who purchased the VIP course will get permanent free access to these VIP contents on


Benefits of taking this course

  • Learn the knowledge you need to work at top tier investment firms
  • Gain practical, real-world quantitative skills that can be applied within and outside of finance
  • Make better decisions regarding your own finances


Personally, I think this is the most interesting and action-packed course I have created yet. My last few courses were cool, but they were all about topics I had already covered in the past: GANs, NLP, transfer learning, recommender systems, and so on, all machine learning topics I have covered several times in different libraries. This course contains new, fresh content and concepts I have never covered in any of my courses, ever.

This is the first course I’ve created that extends into a niche area of AI application. It goes outside of AI and into domain expertise. An in-depth topic such as finance deserves its own course. This is that course. These are topics you will never learn in a generic data science or machine learning course. However, as a student of AI, you will recognize many of our tools and methods being applied, such as statistical inference, supervised and unsupervised learning, convex optimization, and optimal control. This allows us to go deeper than your run-of-the-mill financial engineering course, and it becomes more than just the sum of its parts.

So what are you waiting for?

Go to comments

The complete PyTorch course for AI and Deep Learning has arrived

April 1, 2020

PyTorch: Deep Learning and Artificial Intelligence

VIP Promotion

The complete PyTorch course has arrived

Hello friends!

I hope you are all staying safe. Well, I’m sure you’ve heard enough about that so how about some different news?

Today, I am announcing the VIP version of my latest course: PyTorch: Deep Learning and Artificial Intelligence

[If you don’t want to read my little spiel just click here to get your VIP coupon:]

[The NEW VIP coupon for May 2 – June 2 2020 is:]

[The NEW VIP coupon for June 2 – July 3 2020 is:]

[The NEW VIP coupon for July 6 – August 6 2020 is:]

[The NEW VIP coupon for August 7 – September 7 2020 is:]

[The NEW VIP coupon for September 8 – October 8 2020 is:]

This is a MASSIVE (over 22 hours) Deep Learning course covering EVERYTHING from scratch. That includes:

  • Machine learning basics (linear neurons)
  • ANNs, CNNs, and RNNs for images and sequence data
  • Time series forecasting and stock predictions (+ why all those fake data scientists are doing it wrong)
  • NLP (natural language processing)
  • Recommender systems
  • Transfer learning for computer vision
  • GANs (generative adversarial networks)
  • Deep reinforcement learning and applying it by building a stock trading bot

IN ADDITION, you will get some unique and never-before-seen VIP projects:


Estimating prediction uncertainty

Drawing the standard deviation of the prediction along with the prediction itself. This is useful for heteroskedastic data (that means the variance changes as a function of the input). The best-known examples of heteroskedastic data are stock prices and stock returns – which I know a lot of you are interested in.

It allows you to draw your model predictions like this:

Sometimes, the data is simply such that a spot-on prediction can’t be made. But we can do better by letting the model tell us how certain it is in its predictions.
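As a library-agnostic sketch of the concept (the VIP project itself uses a neural network; the two-stage approach below and the synthetic data are purely illustrative assumptions): fit a model for the mean first, then fit a second model to the log of the squared residuals to get an input-dependent standard deviation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Heteroskedastic synthetic data: the noise grows with x
x = np.linspace(0, 10, 1000)
noise_std = 0.1 + 0.3 * x                  # true input-dependent std
y = 2.0 * x + 1.0 + rng.normal(0, noise_std)

X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept

# Stage 1: least-squares fit of the mean
w_mean, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ w_mean

# Stage 2: model the log squared residuals to estimate log-variance(x)
log_sq_resid = np.log((y - y_hat) ** 2 + 1e-12)
w_var, *_ = np.linalg.lstsq(X, log_sq_resid, rcond=None)
sigma_hat = np.exp(X @ w_var / 2)          # std = exp(log-variance / 2)

# A prediction band can then be drawn as y_hat +/- 2 * sigma_hat
print(sigma_hat[:3], sigma_hat[-3:])       # the std estimate grows with x
```

The estimated band widens where the data is noisier, which is exactly the "model tells us how certain it is" behavior described above.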


Facial recognition with siamese networks

This one is cool. I mean, I don’t have to tell you how big facial recognition has become, right? It’s the single most controversial technology to come out of deep learning. In the past, we looked at simple ways of doing this with classification, but in this section I will teach you about an architecture built specifically for facial recognition.

You will learn how this can work even on small datasets – so you can build a network that recognizes your friends or can even identify all of your coworkers!

You can really impress your boss with this one. Surprise them one day with an app that calls out your coworkers by name every time they walk by your desk. 😉
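To make the architecture's key trick concrete: a siamese network maps each face to an embedding vector with a shared network, and verification is just a distance threshold, with no per-person classifier needed (which is why it works on small datasets). The sketch below shows only that final comparison step; the embeddings and threshold are made-up stand-ins for real network outputs.

```python
import numpy as np

def verify(embedding_a, embedding_b, threshold=1.0):
    """Siamese-style verification: same identity iff the embedding
    distance falls below a threshold."""
    distance = np.linalg.norm(embedding_a - embedding_b)
    return distance < threshold

# Illustrative embeddings (in practice, outputs of the shared network)
rng = np.random.default_rng(1)
anchor = rng.normal(size=128)
same_person = anchor + rng.normal(scale=0.05, size=128)   # small perturbation
different_person = rng.normal(size=128)

print(verify(anchor, same_person))       # True
print(verify(anchor, different_person))  # False
```

Adding a new person requires only storing one embedding, not retraining the network.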


Please note: The VIP coupon will work only for the next month (ending May 1, 2020). It’s unknown whether the VIP period will renew after that time.

After that, although the VIP content will be removed from Udemy, all who purchased the VIP course will get permanent free access on


Minimal Prerequisites

This course is designed to be a beginner to advanced course. All that is required is that you take my free Numpy prerequisites course to learn some basic scientific programming in Python. And it’s free, so why wouldn’t you!?

You will learn things that took me years to learn on my own. For many people, that is worth tens of thousands of dollars by itself.

There is no heavy math and no manual backpropagation. Why? Because I already have courses on those things, so there’s no need to repeat them here, and PyTorch computes the gradients for you automatically anyway. So you can relax and have fun. =)


Why PyTorch?

All of my deep learning courses until now have been in Tensorflow (and prior to that Theano).

So why learn PyTorch?

Does this mean my future deep learning courses will use PyTorch?

In fact, if you have moved in machine learning circles recently, you will have noticed that there has been a strong shift to PyTorch.

Case in point: OpenAI switched to PyTorch earlier this year (2020).

Major AI shops such as Apple, JPMorgan Chase, and Qualcomm have adopted PyTorch.

PyTorch is primarily maintained by Facebook (Facebook AI Research, to be specific) – the “other” Internet giant that, alongside Google, has a strong vested interest in developing state-of-the-art AI.

But why PyTorch for you and me? (aside from the fact that you might want to work for one of the above companies)

As you know, Tensorflow has adopted the super simple Keras API. This makes common things easy, but it makes uncommon things hard.

With PyTorch, common things take a tiny bit of extra effort, but the upside is that uncommon things are still very easy.

Creating your own custom models and inventing your own ideas is seamless. We will see many examples of that in this course.
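Here is a minimal sketch of what "seamless" means in practice, assuming PyTorch is installed. The two-headed architecture below is made up for illustration, not taken from the course, but it is exactly the kind of non-standard model that takes real effort in plain Keras and is just ordinary Python in PyTorch.

```python
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    """A custom model with two output heads sharing one trunk,
    e.g. predicting a mean and a log-std from the same features."""
    def __init__(self, in_features=10, hidden=32):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_features, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.log_std_head = nn.Linear(hidden, 1)

    def forward(self, x):
        # forward() is plain Python: branch, loop, or split however you like
        h = self.trunk(x)
        return self.mean_head(h), self.log_std_head(h)

model = TwoHeadNet()
x = torch.randn(4, 10)          # a batch of 4 illustrative inputs
mean, log_std = model(x)
print(mean.shape, log_std.shape)
```

Returning a tuple from `forward`, as done here, requires no special API; that flexibility is the point.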

For this reason, it is very possible that future deep learning courses will use PyTorch, especially for those advanced topics that many of you have been asking for.

Because of the ease with which you can do advanced things, PyTorch is the main library used by deep learning researchers around the world. If that’s your goal, then PyTorch is for you.

In terms of growth rate, PyTorch dominates Tensorflow. At major machine learning conferences, papers using PyTorch now outnumber those using Tensorflow by 2:1, and even 3:1. Many researchers hold that PyTorch is superior to Tensorflow in the simplicity of its API, and even in speed and performance!

Do you need more convincing?

Go to comments

How to Build Your Own Computer Science Degree

August 29, 2020

Note: You can find the video lecture for this article at



The following books can be used to study core computer science topics at the college / university level, to prepare yourself for machine learning, deep learning, artificial intelligence, and data science.

These are the books I recommend for building your own computer science degree. Remember! The goal is to do as many exercises as you can. It’s not to just watch 5 minute YouTube videos and then conclude “I understand everything! There’s no need for exercises!”

This quote from the video sums it up nicely: if you don’t find the problems, the problems will find you.


Calculus: Early Transcendentals by James Stewart

Introduction to Linear Algebra by Gilbert Strang

Introduction to Probability by Bertsekas and Tsitsiklis

Big Java by Cay Horstmann

Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein

Go to comments

New Exclusive Course: Linear Programming for Linear Regression in Python

July 14, 2020

If you’ve been to recently, you will have noticed that there is now a section for exclusive courses. These are courses that will *not* be on any other platforms, only

These are what I’ve been calling “mini-courses” during their development and that’s what they are in spirit. They are:

  • Lower cost
  • Shorter in duration

There won’t be any time spent on stuff like appendices which most of you have already seen and are mainly for beginners.

The point of these courses is to have a faster turn-around time on course development. Sometimes, there are topics I want to cover really quickly that won’t ever become a full-sized course. They will also be used to cover more advanced topics.

Unfortunately, a lot of students on other platforms (e.g. Udemy) are complete beginners who have no desire to advance and gain actual skill. They take “marketer-taught” courses, which leads to a condition I call “confidence without ability”. Dealing with such students is draining.

These mini-courses will bring us back to the old days (many of you have been around since then!) where the material was more concise, straight-to-the-point, and didn’t need “beginner reminders” all over the place.

Given that these mini-courses are much simpler for me to make, I expect there to be many more in the future.

This first exclusive mini-course is on Linear Programming for Linear Regression.

Students in my Linear Regression course often ask, “What if I want to use absolute error instead of squared error?” This course answers exactly that question and more.

The solution is based on Linear Programming (LP).

We will also cover two other common problems: maximum absolute deviation and positive-only (or negative-only) error.

These kinds of problems are often found in professional fields such as quantitative finance, operations research, and engineering.

Each of these problems can be solved using Linear Programming with the Scipy library.
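As a taste of the approach (this is a standard textbook formulation, not necessarily the course's exact code), absolute-error regression can be written as a linear program by introducing one auxiliary variable per residual:

```python
import numpy as np
from scipy.optimize import linprog

def lad_regression(X, y):
    """Least-absolute-deviations regression as a linear program.
    Variables z = [w (d coefficients), t (n residual bounds)];
    minimize sum(t) subject to  Xw - t <= y  and  -Xw - t <= -y,
    which together force t_i >= |y_i - x_i . w|."""
    n, d = X.shape
    c = np.concatenate([np.zeros(d), np.ones(n)])
    A_ub = np.block([[X, -np.eye(n)],
                     [-X, -np.eye(n)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * d + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d]

# Illustrative data with one gross outlier; absolute error is robust to it
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 3.0 * x + 1.0
y[0] += 100.0                               # the outlier
X = np.column_stack([np.ones_like(x), x])
w = lad_regression(X, y)
print(w)  # close to [1, 3] despite the outlier
```

Notice that squared error would be dragged noticeably toward the outlier here; the absolute-error fit is not, which is a big part of why people ask for it.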

BONUS FACT: I have a new pen and tablet set up so most of the derivations in this course are done by hand – really truly old-school like the Linear/Logistic Regression days!

Get the course here:


MATLAB for Students, Engineers, and Professionals in STEM

Another exclusive course which has already been on for some time is my original MATLAB course. This was the first course I ever made and is basically a collector’s item. The quality isn’t that great compared to what I am creating now, but obviously you will still learn a lot.

I’m including it in this newsletter to announce that I was able to dig up an extra section on probability that didn’t exist before. So the course now has 3 major sections:

  1. MATLAB basic operations and variables
  2. Signal processing with sound and images
  3. Probability and statistics

Get the course here:

Go to comments

Beginners: How to get an infinite amount of practice and exercise in machine learning

July 9, 2020

One of the most common questions I get from beginners in machine learning is, “how do I practice what I’ve learned?”

There are several ways to answer this.

First, let’s make an important distinction.

There’s a difference between putting in the work to understand an algorithm, and using that algorithm on data. We’ll call these the “learning phase” and the “application phase”.


Learning phase = Putting in the work to understand an algorithm
Application phase = Using that algorithm on data


Let’s take a simple example: linear regression.

In the learning phase, your tasks will include:

  • Being able to derive the algorithm from first principles (that’s calculus, linear algebra, and probability)
  • Implementing the algorithm in the language of your choice (it need not be Python)
  • Testing your algorithm on data to verify that it works
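For linear regression, the three tasks above fit in a few lines. A minimal sketch (the function name and synthetic data are just for illustration):

```python
import numpy as np

# 1. Derivation result: minimizing ||Xw - y||^2 and setting the gradient
#    to zero gives the normal equations  X^T X w = X^T y.

# 2. Implementation from first principles (no ML library)
def fit_linear_regression(X, y):
    return np.linalg.solve(X.T @ X, X.T @ y)

# 3. Test on data where the answer is known
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 4.0 * x - 2.0 + rng.normal(0, 0.1, 200)     # truth: slope 4, intercept -2
X = np.column_stack([np.ones_like(x), x])       # intercept column + feature
w = fit_linear_regression(X, y)
print(w)  # approximately [-2, 4]
```

If your implementation recovers parameters you planted in synthetic data, you have real evidence that you understand the algorithm.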

These are essential tasks in ensuring that you really understand an algorithm.

These tasks are “exercises” that improve your general aptitude in machine learning, and will strengthen your ability to learn other algorithms in the future, such as logistic regression, neural networks, etc.

As my famous motto goes: “if you can’t implement it, then you don’t understand it”.

Interestingly, 5 years after I coined this motto, I discovered that the famous physicist Richard Feynman said a very similar thing!


In order to get an infinite amount of practice in this area, you should learn about various extensions on this algorithm, such as L1 and L2 regularization, using gradient descent instead of the closed-form solution, 2nd order methods, etc.
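To show what one such extension looks like, here is a sketch combining two of them: L2 regularization fit by gradient descent instead of the closed-form solution. The learning rate, step count, and data are illustrative assumptions.

```python
import numpy as np

def ridge_gradient_descent(X, y, lam=0.1, lr=0.01, steps=2000):
    """Minimize ||Xw - y||^2 / n + lam * ||w||^2 by gradient descent.
    Gradient: 2 X^T (Xw - y) / n + 2 * lam * w."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / n + 2 * lam * w
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 300)
y = 4.0 * x - 2.0 + rng.normal(0, 0.1, 300)     # truth: slope 4, intercept -2
X = np.column_stack([np.ones_like(x), x])
w = ridge_gradient_descent(X, y)
print(w)  # shrunk slightly toward zero relative to [-2, 4]
```

Comparing this result to the unregularized closed-form solution, and watching how `lam` controls the shrinkage, is itself a worthwhile exercise.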

You might want to try implementing it in a different language. And finally, you can spend a lifetime exercising your ability to understand machine learning algorithms by learning about more machine learning algorithms in much the same way.

Believe me, 10 years down the line you may discover something new and interesting about even the simplest models like Linear Regression.


The second phase is the “application phase”.

Here is where your ability to exercise and practice is really infinite.

Let’s first remember that I don’t know you personally. I don’t know what you care about, what field you are in, or what your motivations for learning this subject are.

Therefore, I cannot tell you where to apply what you’ve learned: only you know that.

For example, if you are a computational biologist, then you can use this algorithm on problems specific to computational biology.

If you are a financial engineer, then you can use this algorithm on problems specific to financial engineering.

Of course, because I am not a computational biologist, I don’t know what that data looks like, what the relevant features are, etc.

I can’t help you with that.

The “interface” where I end and you begin is the algorithm.

After I teach you how and why the algorithm works and how to implement it in code, using it to further scientific knowledge in your own field of study becomes your responsibility.

One can’t expect me to be an expert computational biologist and an expert financial engineer and whatever else it is that you are an expert in.

Therefore, you can’t rely on me to tell you what datasets you might be interested in, what kinds of problems you’re trying to solve, etc.

Presumably, since you’re the expert, you should know that yourself!

If you don’t, then you are probably not the expert you think you are.

But therein lies the key.

Once you’ve decided what you care about, you can start applying what you’ve learned to those datasets.

This will give you an infinite amount of practice, assuming you don’t run out of things to care about.

If you don’t care about anything, well then, why are you doing this in the first place? Lol.

This also ties nicely into another motto of mine: “all data is the same”.

What does this mean?

Let’s recall the basic “pattern” we see when we use scikit-learn:

model = LinearRegression()
model.fit(X_train, Y_train)

Does this code change whether (X, Y) represent a biology dataset or a finance dataset?

The answer is no!

Otherwise, no such library as Scikit-Learn could even exist!

“All data is the same” means that the same Linear Regression algorithm applies, no matter what field or what industry your data happens to come from.

There’s no such thing as “Linear Regression for biology” and “Linear Regression for finance”.

There’s only one linear regression that is the same linear regression no matter the dataset.

Thus, you learn the algorithm once, and you can apply it infinitely to any number of datasets!
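To make that concrete, here is a sketch assuming scikit-learn is installed. The two synthetic arrays are stand-ins for, say, a biology dataset and a finance dataset; the fitting code is character-for-character identical for both.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Stand-ins for two unrelated domains: the arrays could hold gene
# expression levels or daily returns; the API does not care.
X_bio, Y_bio = rng.normal(size=(100, 5)), rng.normal(size=100)
X_fin, Y_fin = rng.normal(size=(250, 3)), rng.normal(size=250)

for X_train, Y_train in [(X_bio, Y_bio), (X_fin, Y_fin)]:
    model = LinearRegression()
    model.fit(X_train, Y_train)      # identical code for either dataset
    print(model.coef_.shape)         # one coefficient per feature
```

Only the shapes of the inputs differ; the algorithm, and the code, are the same.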

Pretty cool huh?

But look, if you really have zero idea of what you care about, or your answer is “I care about machine learning”, then there are plenty of stock datasets that you can look up on your own.

These include Kaggle, the UCI repository, etc. There’s so much data out there, you will still have to pick and choose what to focus on first.

Again, you have to choose what you care about. Nobody else can possibly tell you that with any accuracy.



The “learning phase” above does not apply to situations where you’re learning an API (for example, Tensorflow 2, PyTorch, or even Scikit-Learn).


Why not? Well, firstly, there’s nothing really to derive.

Secondly, it would be impossible for you to implement anything yourself without me showing you how first (at which point anything you type would amount to simply copying what I did).


Well, how would you know what to type if I didn’t show you?

Are you going to magically come up with the correct syntax for a library that you simply haven’t learned?

Obviously not. That would be a ludicrous idea.

In this case, the “learning phase” amounts to:

  • Understanding the syntax I’ve shown you
  • Being able to replicate that syntax on your own

This most closely represents what you will do in the “real-world”. In the real-world, you want to be able to write code fast and efficiently, rather than trying to remember which course and which lecture covered that exact syntax you’re thinking of.

Being able to write code on-the-fly makes you efficient and fast. Obviously, I don’t remember everything, but I know where to look when I need something. You have to find the right balance between memorizing and looking up, just as computer programs use caches and RAM for fast data retrieval instead of the hard drive.

Obviously, this will get better over time as you practice more and more.

It sounds overly simplistic, but it’s nothing more than repetition and muscle memory. I usually don’t explicitly commit to memorizing anything. I just write code and let it come naturally. The key is: I write code.

Obviously, watching me play tennis or reading books about tennis will not make you a better tennis player.

You must get on the court, pick up the tennis racket, and play actual tennis matches!

I can’t force you to do this. It’s the kind of thing which must be done of your own volition.

I mean, if you want to pay me consulting hours to call you and remind you to practice, I’d be very happy to oblige. =)

At this point, once you have completed the “learning phase” of learning an API, then the “application phase” described above still applies.

Go to comments

Deep Learning and Artificial Intelligence Newsletter

Get discount coupons, free machine learning material, and new course announcements