September 8, 2020
Financial Engineering and Artificial Intelligence in Python
The complete Financial Engineering course has arrived
Hello once again friends!
Today, I am announcing the VIP version of my latest course: Financial Engineering and Artificial Intelligence in Python.
If you don’t want to read my little spiel, just click here to get your VIP coupon:
(as usual, this coupon lasts only 30 days, so don’t wait!)
This is a MASSIVE (18 hours) Financial Engineering course covering the core fundamentals of financial engineering and financial analysis from scratch. We will go in-depth into all the classic topics, such as:
- Exploratory data analysis, significance testing, correlations, alpha and beta
- Time series analysis, simple moving average, exponentially-weighted moving average
- Holt-Winters exponential smoothing model
- ARIMA and SARIMA
- Efficient Market Hypothesis
- Random Walk Hypothesis
- Time series forecasting (“stock price prediction”)
- Modern portfolio theory
- Efficient frontier / Markowitz bullet
- Mean-variance optimization
- Maximizing the Sharpe ratio
- Convex optimization with Linear Programming and Quadratic Programming
- Capital Asset Pricing Model (CAPM)
- Algorithmic trading
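To give a flavor of the time-series topics above, here is a minimal sketch of a simple moving average next to an exponentially-weighted moving average using pandas. The price series is synthetic (a random walk), purely for illustration:

```python
import numpy as np
import pandas as pd

# Synthetic "price" series (a random walk), purely for illustration
rng = np.random.default_rng(0)
prices = pd.Series(100 + np.cumsum(rng.normal(0, 1, 250)))

# Simple moving average: equal weight over the last 20 observations
sma = prices.rolling(window=20).mean()

# Exponentially-weighted moving average: recent observations weighted more
ewma = prices.ewm(span=20, adjust=False).mean()
```

Note the qualitative difference: the SMA is undefined until it has a full window of data, while the EWMA starts immediately and reacts faster to recent moves.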
In addition, we will look at various non-traditional techniques which stem purely from the field of machine learning and artificial intelligence, such as:
- Regression models
- Classification models
- Unsupervised learning
- Reinforcement learning and Q-learning
We will learn about the greatest flub made in the past decade by marketers posing as “machine learning experts” who promise to teach unsuspecting students how to “predict stock prices with LSTMs”. You will learn exactly why their methodology is fundamentally flawed and why their results are complete nonsense. It is a lesson in how not to apply AI in finance.
As with my Tensorflow 2 release, some of the VIP content will be a surprise and will be released in stages. Currently, all of the Algorithmic Trading sections are VIP sections. These include:
Classic Algorithmic Trading – Trend Following Strategy
You will learn how moving averages can be applied to do algorithmic trading.
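As an illustration only (not necessarily the course's exact strategy), a bare-bones moving-average crossover rule might be sketched like this, with synthetic prices standing in for real market data:

```python
import numpy as np
import pandas as pd

# Synthetic prices stand in for real market data in this sketch
rng = np.random.default_rng(1)
prices = pd.Series(100 + np.cumsum(rng.normal(0, 1, 300)))

fast = prices.rolling(20).mean()  # fast moving average
slow = prices.rolling(50).mean()  # slow moving average

# Trend-following rule: long (1) when the fast average is above the
# slow average, flat (0) otherwise. Shift by one bar to avoid lookahead.
signal = (fast > slow).astype(int).shift(1).fillna(0)

# Strategy return = position held during the bar times the bar's return
strategy_returns = signal * prices.pct_change().fillna(0)
```

The one-bar shift is the important detail: today's signal can only be acted on tomorrow, otherwise the backtest peeks into the future.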
Machine Learning-Based Trading Strategy
Forecast returns in order to determine when to buy and sell.
Reinforcement Learning-Based (Q-Learning) Trading Strategy
I give you a full introduction to Reinforcement Learning from scratch, and then we apply it to build a Q-Learning trader. Note that this is *not* the same as the example I used in my Tensorflow 2, PyTorch, and Reinforcement Learning courses. I think the example included in this course is much more principled and robust.
Please note: The VIP coupon will work only for the next month (ending Oct 8, 2020). It’s unknown whether the VIP period will renew after that time.
After that, although the VIP content will be removed from Udemy, all who purchased the VIP course will get permanent free access to these VIP contents on deeplearningcourses.com.
Benefits of taking this course
- Learn the knowledge you need to work at top tier investment firms
- Gain practical, real-world quantitative skills that can be applied within and outside of finance
- Make better decisions regarding your own finances
Personally, I think this is the most interesting and action-packed course I have created yet. My last few courses were cool, but they all covered topics I had already taught in the past: GANs, NLP, transfer learning, recommender systems, and so on, all machine learning topics I have covered several times in different libraries. This course contains new, fresh content and concepts I have never covered in any of my courses, ever.
This is the first course I’ve created that extends into a niche area of AI application. It goes beyond AI itself and into domain expertise. An in-depth topic such as finance deserves its own course. This is that course. These are topics you will never learn in a generic data science or machine learning course. However, as a student of AI, you will recognize many of our tools and methods being applied, such as statistical inference, supervised and unsupervised learning, convex optimization, and optimal control. This allows us to go deeper than your run-of-the-mill financial engineering course, and it becomes more than just the sum of its parts.
So what are you waiting for?
July 14, 2020
If you’ve been to deeplearningcourses.com recently, you will have noticed that there is now a section for exclusive courses. These are courses that will *not* be on any other platforms, only deeplearningcourses.com.
These are what I’ve been calling “mini-courses” during their development and that’s what they are in spirit. They are:
- Lower cost
- Shorter in duration
No time will be spent on material like the appendices, which most of you have already seen and which are mainly for beginners.
The point of these courses is to have a faster turn-around time on course development. Sometimes, there are topics I want to cover really quickly that won’t ever become a full-sized course. They will also be used to cover more advanced topics.
Unfortunately, a lot of students on other platforms (e.g. Udemy) are complete beginners who have no desire to advance and gain actual skill. They take “marketer-taught” courses, which leads to a complex I call “confidence without ability”. Dealing with such students is draining.
These mini-courses will bring us back to the old days (many of you have been around since then!) where the material was more concise, straight-to-the-point, and didn’t need “beginner reminders” all over the place.
Given that these mini-courses are much simpler for me to make, I expect there to be many more in the future.
This first exclusive mini-course is on Linear Programming for Linear Regression.
Students in my Linear Regression course often ask, “What if I want to use absolute error instead of squared error?” This course answers exactly that question and more.
The solution is based on Linear Programming (LP).
We will also cover 2 other common problems: maximum absolute deviation and positive-only (or negative-only) error.
These kinds of problems are often found in professional fields such as quantitative finance, operations research, and engineering.
Each of these problems can be solved using Linear Programming with the Scipy library.
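As a hedged sketch of the idea (not necessarily the course's exact formulation), least-absolute-deviations regression can be posed as an LP by introducing one auxiliary variable per residual and solved with scipy.optimize.linprog. The data here is a made-up toy example with one large outlier:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: y = 2x + 1 exactly, plus one large outlier that would
# badly distort a squared-error fit
x = np.linspace(0, 1, 20)
X = np.column_stack([x, np.ones_like(x)])  # design matrix with intercept
y = 2 * x + 1
y[5] += 10  # the outlier

n, d = X.shape
# Variables: [w (d coefficients), t (n absolute-residual bounds)]
# Minimize sum(t) subject to |X w - y| <= t, written as two inequalities:
#   X w - t <= y   and   -X w - t <= -y
c = np.concatenate([np.zeros(d), np.ones(n)])
A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * d + [(0, None)] * n  # w free, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
w = res.x[:d]  # fitted slope and intercept
```

Because absolute error is robust to outliers, the recovered line should be very close to the true slope 2 and intercept 1 despite the corrupted point.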
BONUS FACT: I have a new pen and tablet set up so most of the derivations in this course are done by hand – really truly old-school like the Linear/Logistic Regression days!
Get the course here: https://deeplearningcourses.com/c/linear-programming-python
MATLAB for Students, Engineers, and Professionals in STEM
Another exclusive course which has already been on deeplearningcourses.com for some time is my original MATLAB course. This was the first course I ever made and is basically a collector’s item. The quality isn’t that great compared to what I am creating now, but obviously you will still learn a lot.
I’m including it in this newsletter to announce that I was able to dig up an extra section on probability that didn’t exist before. So the course now has 3 major sections:
- MATLAB basic operations and variables
- Signal processing with sound and images
- Probability and statistics
Get the course here: https://deeplearningcourses.com/c/matlab
July 9, 2020
One of the most common questions I get from beginners in machine learning is, “how do I practice what I’ve learned?”
There are several ways to answer this.
First, let’s make an important distinction.
There’s a difference between putting in the work to understand an algorithm, and using that algorithm on data. We’ll call these the “learning phase” and the “application phase”.
Learning phase = Putting in the work to understand an algorithm
Application phase = Using that algorithm on data
Let’s take a simple example: linear regression.
In the learning phase, your tasks will include:
- Being able to derive the algorithm from first principles (that’s calculus, linear algebra, and probability)
- Implementing the algorithm in the language of your choice (it need not be Python)
- Testing your algorithm on data to verify that it works
These are essential tasks in ensuring that you really understand an algorithm.
These tasks are “exercises” which improve your general aptitude in machine learning, and will strengthen your ability to learn other algorithms in the future, such as logistic regression and neural networks.
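For instance, the "implement it yourself" step for linear regression might look like this minimal sketch, using the closed-form (normal-equation) solution on synthetic data where the true weights are known:

```python
import numpy as np

def fit_linear_regression(X, y):
    """From-scratch linear regression via the closed-form solution."""
    # Append a column of ones for the intercept
    Xb = np.column_stack([X, np.ones(len(X))])
    # Solve the normal equations (lstsq is more stable than an explicit inverse)
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

# Verify on synthetic data where the true weights are known (task 3 in the list)
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 2))
y = X @ np.array([3.0, -1.0]) + 0.5  # true weights [3, -1], intercept 0.5
w = fit_linear_regression(X, y)
```

Testing against data with known weights is exactly the third task above: if your implementation can't recover weights you planted yourself, it certainly can't be trusted on real data.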
As my famous motto goes: “if you can’t implement it, then you don’t understand it”.
Interestingly, 5 years after I invented this motto, I discovered that the famous physicist Richard Feynman said a very similar thing!
To get an effectively infinite amount of practice in this area, you should learn about various extensions of this algorithm, such as L1 and L2 regularization, using gradient descent instead of the closed-form solution, 2nd-order methods, etc.
You might want to try implementing it in a different language. And finally, you can spend a lifetime exercising your ability to understand machine learning algorithms by learning about more machine learning algorithms in much the same way.
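One such extension, batch gradient descent in place of the closed-form solution, can be sketched in a few lines (the learning rate and iteration count here are arbitrary choices for this example):

```python
import numpy as np

# Fit linear regression by batch gradient descent on mean squared error,
# instead of solving the normal equations in closed form.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.5, -2.0]) + 0.3  # noiseless, so the answer is known

Xb = np.column_stack([X, np.ones(len(X))])  # add an intercept column
w = np.zeros(Xb.shape[1])
lr = 0.1  # learning rate: an arbitrary choice for this sketch
for _ in range(2000):
    grad = (2.0 / len(y)) * Xb.T @ (Xb @ w - y)  # gradient of the MSE
    w -= lr * grad
# w should now be close to [1.5, -2.0, 0.3]
```

Deriving that gradient by hand, and convincing yourself why the iteration converges, is precisely the kind of "learning phase" exercise described above.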
Believe me, 10 years down the line you may discover something new and interesting about even the simplest models like Linear Regression.
The second phase is the “application phase”.
Here is where your ability to exercise and practice is really infinite.
Let’s first remember that I don’t know you personally. I don’t know what you care about, what field you are in, or what your motivations for learning this subject are.
Therefore, I cannot tell you where to apply what you’ve learned: only you know that.
For example, if you are a computational biologist, then you can use this algorithm on problems specific to computational biology.
If you are a financial engineer, then you can use this algorithm on problems specific to financial engineering.
Of course, because I am not a computational biologist, I don’t know what that data looks like, what the relevant features are, etc.
I can’t help you with that.
The “interface” where I end and you begin is the algorithm.
After I teach you how and why the algorithm works and how to implement it in code, using it to further scientific knowledge in your own field of study becomes your responsibility.
One can’t expect me to be an expert computational biologist and an expert financial engineer and whatever else it is that you are an expert in.
Therefore, you can’t rely on me to tell you what datasets you might be interested in, what kinds of problems you’re trying to solve, etc.
Presumably, since you’re the expert, you should know that yourself!
If you don’t, then you are probably not the expert you think you are.
But therein lies the key.
Once you’ve decided what you care about, you can start applying what you’ve learned to those datasets.
This will give you an infinite amount of practice, assuming you don’t run out of things to care about.
If you don’t care about anything, well then, why are you doing this in the first place? Lol.
This also ties nicely into another motto of mine: “all data is the same”.
What does this mean?
Let’s recall the basic “pattern” we see when we use scikit-learn:
model = LinearRegression()
model.fit(X, Y)
Does this code change whether (X, Y) represent a biology dataset or a finance dataset?
The answer is no!
Otherwise, no such library as Scikit-Learn could even exist!
“All data is the same” means that the same Linear Regression algorithm applies, no matter what field or what industry your data happens to come from.
There’s no such thing as “Linear Regression for biology” and “Linear Regression for finance”.
There’s only one linear regression that is the same linear regression no matter the dataset.
Thus, you learn the algorithm once, and you can apply it infinitely to any number of datasets!
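As a sketch, with made-up random arrays standing in for a "biology" and a "finance" dataset, the fitting code is literally identical:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_and_score(X, Y):
    # The exact same lines, no matter which field (X, Y) came from
    model = LinearRegression()
    model.fit(X, Y)
    return model.score(X, Y)

rng = np.random.default_rng(0)
# Made-up arrays standing in for datasets from two different fields
X_bio, Y_bio = rng.normal(size=(50, 3)), rng.normal(size=50)
X_fin, Y_fin = rng.normal(size=(80, 5)), rng.normal(size=80)

r2_bio = fit_and_score(X_bio, Y_bio)
r2_fin = fit_and_score(X_fin, Y_fin)
```

Only the data loading and feature engineering differ between fields; the algorithm, and the code that runs it, does not.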
Pretty cool huh?
But look, if you really have zero idea of what you care about, or your answer is “I care about machine learning”, then there are plenty of standard datasets that you can look up on your own.
These include Kaggle, the UCI repository, etc. There’s so much data out there, you will still have to pick and choose what to focus on first.
Again, you have to choose what you care about. Nobody else can possibly tell you that with any accuracy.
The “learning phase” above does not apply to situations where you’re learning an API (for example, Tensorflow 2, PyTorch, or even Scikit-Learn).
Well firstly, there’s nothing really to derive.
Secondly, it would be impossible for you to implement anything yourself without me showing you how first (at which point anything you type would amount to simply copying what I did).
Well, how would you know what to type if I didn’t show you?
Are you going to magically come up with the correct syntax for a library that you simply haven’t learned?
Obviously not. That would be a ludicrous idea.
In this case, the “learning phase” amounts to:
- Understanding the syntax I’ve shown you
- Being able to replicate that syntax on your own
This most closely represents what you will do in the real world. In the real world, you want to be able to write code quickly and efficiently, rather than trying to remember which course and which lecture covered the exact syntax you’re thinking of.
Being able to write code on-the-fly makes you efficient and fast. Obviously, I don’t remember everything, but I know where to look when I need something. You have to find the right balance between memorizing and looking up. Just like how computer programs use caches and RAM for fast data retrieval instead of the hard drive.
Obviously, this will get better over time as you practice more and more.
It sounds overly simplistic, but it’s nothing more than repetition and muscle memory. I usually don’t explicitly commit to memorizing anything. I just write code and let it come naturally. The key is: I write code.
Watching me play tennis or reading books about tennis will not make you a better tennis player.
You must get on the court, pick up the tennis racket, and play actual tennis matches!
I can’t force you to do this. It’s the kind of thing which must be done of your own volition.
I mean, if you want to pay me consulting hours to call you and remind you to practice, I’d be very happy to oblige. =)
At this point, once you have completed the “learning phase” of learning an API, then the “application phase” described above still applies.