Lazy Programmer

Your source for the latest in deep learning, big data, data science, and artificial intelligence.

Path to Mastering Artificial Intelligence for Business Applications

Artificial intelligence in business can take on various forms, depending on what your business does. Business applications can be separated into several subcategories.

I’ve decided to use the following subcategories in this article:

  • Recommender Systems
  • A/B Testing
  • Supervised Machine Learning
  • Unsupervised Machine Learning


Recommender Systems

Recommender Systems are all about recommending to your users the items (movies, advertisements, search results, products, etc.) that maximize some downstream metric (e.g. your revenue).

For example, Netflix would like to recommend movies to users that they are likely to watch and rate highly.


Google would like to provide search results to their users that they are likely to click on and find useful (according to their query).


Amazon would like to display products to their users that they are likely to purchase.


You can learn about Recommender Systems in my course, Recommender Systems and Deep Learning in Python.

Be advised: Recommender Systems are advanced. It would be extremely helpful, maybe even necessary, to study the other subcategories (A/B testing, supervised and unsupervised learning) first.

In my Recommender Systems course, we make use of concepts from all 3 of these areas, as well as deep learning (more on that below).
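To give you a taste of the idea, here is a tiny sketch of one classic approach: user-based collaborative filtering. Everything here (the users, the movies, the ratings) is made up for illustration; real systems operate on huge sparse matrices and use the matrix factorization and deep learning techniques covered in the course.

```python
import math

# Hypothetical user -> {movie: rating} data (invented for illustration)
ratings = {
    "alice": {"Matrix": 5, "Titanic": 1, "Inception": 4},
    "bob":   {"Matrix": 4, "Titanic": 2, "Inception": 5, "Up": 4},
    "carol": {"Matrix": 1, "Titanic": 5, "Up": 2},
}

def cosine(u, v):
    """Cosine similarity over the movies both users rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[m] * v[m] for m in common)
    den = (math.sqrt(sum(u[m] ** 2 for m in common))
           * math.sqrt(sum(v[m] ** 2 for m in common)))
    return num / den

def recommend(user):
    """Score unseen movies by similarity-weighted ratings of other users."""
    seen = ratings[user]
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(seen, their)
        for movie, r in their.items():
            if movie not in seen:
                scores[movie] = scores.get(movie, 0.0) + sim * r
    return max(scores, key=scores.get)

print(recommend("alice"))  # -> Up (the only movie alice hasn't seen)
```

Alice's tastes match Bob's far more than Carol's, so Bob's high rating for "Up" dominates the score.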


A/B Testing

A/B testing is similar in spirit: it’s all about finding quantitative ways to measure how much better one choice is than another. Examples include:


  • Comparing different website designs
  • Comparing different logos
  • Comparing 2 different button designs (e.g. the “buy” button)
  • Comparing 2 different button texts (e.g. is “buy now” better than “add to cart”?)
  • Comparing 2 different landing pages
  • Comparing different banner advertisements
  • Calculating which news article headline leads to more clicks
  • And so on…

Any company these days is going to have a website. How do you make your site the best possible version of itself?

You do it through a process called A/B testing, in which you use data to prove which option works best and leads to the most revenue.

You can learn about A/B testing in my course, Bayesian Machine Learning in Python: A/B Testing.

The “Bayesian” part comes from the fact that traditional statistical tests are rigid and nonadaptive. You define your experiment beforehand and run it to completion. Using a Bayesian paradigm allows you to adapt your algorithm’s behavior in real-time. As more data is collected, the algorithm’s model gets better and better automatically.
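To make the adaptive idea concrete, here is a minimal Thompson sampling simulation for two hypothetical button variants (the variant names and click-through rates are made up). Each variant keeps a Beta posterior over its click-through rate; on every impression we sample from each posterior and show the winner, so traffic automatically shifts toward the better variant as evidence accumulates:

```python
import random

# Hypothetical true click-through rates, unknown to the algorithm
TRUE_CTR = {"buy now": 0.04, "add to cart": 0.08}

# One Beta(1, 1) prior per variant, stored as [clicks + 1, no-clicks + 1]
counts = {name: [1, 1] for name in TRUE_CTR}

random.seed(42)
for _ in range(10_000):
    # Thompson sampling: draw a CTR from each posterior, show the arg-max
    samples = {name: random.betavariate(a, b) for name, (a, b) in counts.items()}
    chosen = max(samples, key=samples.get)

    clicked = random.random() < TRUE_CTR[chosen]  # simulate the user's response
    counts[chosen][0 if clicked else 1] += 1      # posterior update

shown = {name: a + b - 2 for name, (a, b) in counts.items()}
print(shown)  # the better variant receives the vast majority of impressions
```

Contrast this with a fixed-horizon test, which would keep splitting traffic 50/50 until the experiment ends, even after one variant is clearly losing.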

Before diving into A/B testing, where you get to implement each of the algorithms you learn about in Python code, I strongly recommend my always-free course:

Deep Learning Prerequisites: The Numpy Stack in Python

Here, you learn about the basic tools which you will apply when you’re doing “actual” machine learning.


Supervised Machine Learning

Staying with the theme of online advertising: one of the most common tasks for a business is predicting what action a user will take when they see your website, landing page, or advertisement.

Predicting what a user will do (classification) or perhaps how long they will do it for (regression) are examples of Supervised Machine Learning.

State-of-the-art methods for these types of tasks usually involve ensembles of decision trees – for example: Random Forest, AdaBoost, and XGBoost.

My course, Ensemble Machine Learning: Random Forest and AdaBoost, covers the fundamentals of ensemble methods.

But what are ensemble methods ensembles of?

Typically, decision trees are used, although it is not unheard of to use linear models too.
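Here is the core mechanism behind the random forest, sketched as bagging of decision stumps on a toy one-dimensional dataset (the data and the noisy label are invented for illustration): each stump is trained on a bootstrap resample, and predictions are made by majority vote.

```python
import random
import statistics

# Toy 1-D dataset: feature x, label 1 if x > 5 (with one noisy label)
data = [(x, int(x > 5)) for x in range(11)]
data[3] = (3, 1)  # the noisy point

def fit_stump(sample):
    """Find the threshold t minimizing errors for the rule 'predict 1 if x > t'."""
    best_t, best_err = 0, len(sample) + 1
    for t in range(11):
        err = sum(int(x > t) != y for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

random.seed(0)
# Bagging: fit each stump on a bootstrap sample of the data
stumps = [fit_stump(random.choices(data, k=len(data))) for _ in range(25)]

def predict(x):
    # Majority vote across all 25 stumps
    return statistics.mode(int(x > t) for t in stumps)

print([predict(x) for x in (2, 7)])  # -> [0, 1]
```

Individual stumps fit on noisy resamples can disagree, but the vote averages that noise away — which is exactly the point of an ensemble.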


My course, Data Science: Supervised Machine Learning in Python, covers the standard classic machine learning models such as the Decision Tree, Naive Bayes, Perceptron, and K-Nearest Neighbor (KNN).

In order to understand the basic principles behind ensemble methods like the random forest, you’ll want a solid grounding in these classic algorithms first. Can’t build a forest if you don’t have trees!
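As a taste of how simple the classics can be, here is K-Nearest Neighbors in a few lines of plain Python. The feature names and data points are invented for illustration:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points."""
    # train is a list of (features, label) pairs; distance is Euclidean
    by_dist = sorted(train, key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Hypothetical (minutes_on_page, pages_viewed) -> did the user buy?
train = [
    ((1.0, 1.0), "no"), ((1.5, 2.0), "no"), ((2.0, 1.5), "no"),
    ((8.0, 9.0), "yes"), ((9.0, 8.5), "yes"), ((8.5, 7.0), "yes"),
]
print(knn_predict(train, (8.0, 8.0)))  # -> yes
print(knn_predict(train, (1.2, 1.1)))  # -> no
```

No training step at all — KNN just memorizes the data and votes at prediction time, which is why it's such a good first algorithm to study.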

In Supervised Machine Learning, I also definitively answer one of the most common questions I get, which is, “How do you use a model to make predictions after training it?” In this course, we build an end-to-end web service API that can accept an image and return its classification result.

As with A/B testing, before diving into Supervised Machine Learning — where you get to implement each of the algorithms you learn about in Python code — I strongly recommend my always-free course, Deep Learning Prerequisites: The Numpy Stack in Python.


Unsupervised Machine Learning

Supervised Machine Learning requires that your data is labeled. As a business, you may have the opportunity to collect massive amounts of unlabeled data (server logs, user demographics, images, documents, etc.).

In the case where there is no label attached to the data, you may opt to perform exploratory data analysis. This falls under Unsupervised Machine Learning.

Examples of unsupervised learning include:

  • Clustering your data (e.g. separating your users into cohorts)
  • Building a probability model of your data (determining which observations are likely and which are unlikely)
  • Building a sequential probability model of your data – what sequence of actions is a user likely to take after landing on your site?

My course, Cluster Analysis: Unsupervised Machine Learning in Python, covers classic clustering algorithms such as K-Means Clustering, Gaussian Mixture Models (GMMs), and Hierarchical Clustering.
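Here is a minimal sketch of the K-Means idea (Lloyd’s algorithm) on one-dimensional toy data — the “minutes on site” numbers are made up, and real use needs multiple dimensions, better initialization, and a way to choose k, all covered in the course:

```python
import statistics

def kmeans_1d(points, k=2, iters=20):
    """Lloyd's algorithm on 1-D data: assign to nearest centroid, recompute means."""
    centroids = points[:k]  # naive init: the first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [statistics.mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Hypothetical "minutes on site" for two cohorts of users
sessions = [1.0, 1.5, 2.0, 1.2, 9.0, 10.0, 9.5, 11.0]
print(kmeans_1d(sessions))  # two cluster centers, one per cohort
```

The two centers it finds correspond to the "quick bounce" cohort and the "engaged" cohort — exactly the kind of user segmentation mentioned above.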

GMMs are much more general than just a clustering algorithm in that they build a probabilistic model of your data. They introduce the important concept of “hidden variables” or “latent variables”, along with one of the fundamental algorithms of probabilistic machine learning known as expectation-maximization.
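The expectation-maximization loop itself is surprisingly compact. Below is a sketch for a two-component, one-dimensional Gaussian mixture on synthetic data — a production implementation would work in log-space and initialize more carefully:

```python
import math
import random

random.seed(1)
# Synthetic 1-D data drawn from two well-separated groups
data = ([random.gauss(0, 1) for _ in range(200)]
        + [random.gauss(6, 1) for _ in range(200)])

def pdf(x, m, v):
    """Gaussian density with mean m and variance v."""
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

# Initialize mixture weights, means, and variances
pi, mu, var = [0.5, 0.5], [min(data), max(data)], [1.0, 1.0]
for _ in range(30):
    # E-step: responsibility of each component for each point (the latent variable)
    r = [[pi[k] * pdf(x, mu[k], var[k]) for k in (0, 1)] for x in data]
    r = [[a / (a + b), b / (a + b)] for a, b in r]
    # M-step: re-estimate the parameters from the responsibilities
    for k in (0, 1):
        nk = sum(row[k] for row in r)
        pi[k] = nk / len(data)
        mu[k] = sum(row[k] * x for row, x in zip(r, data)) / nk
        var[k] = sum(row[k] * (x - mu[k]) ** 2 for row, x in zip(r, data)) / nk

print(sorted(mu))  # the two estimated means, close to the true values 0 and 6
```

The responsibilities are exactly the "hidden variables": we never observe which component generated each point, so EM alternates between guessing them (E-step) and refitting the parameters as if the guesses were true (M-step).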

My course, Unsupervised Machine Learning: Hidden Markov Models in Python, takes the idea of hidden variables and models sequences. Many real-world datasets (even ones you haven’t thought of!) can be modeled as a sequence:

  • User actions on a website
  • Language (NLP, or Natural Language Processing)
  • Financial data / stock returns
  • DNA / genes
  • Speech recognition

Before learning about HMMs (which are advanced), I always recommend learning about GMMs first (which are a little simpler).
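To see why sequence models matter, here is the simplest possible version: a fully observed (non-hidden) Markov chain estimated from made-up user sessions by counting transitions. An HMM extends this by making the underlying state hidden:

```python
from collections import Counter, defaultdict

# Hypothetical sequences of user actions on a site
sessions = [
    ["home", "product", "cart", "buy"],
    ["home", "product", "home", "product", "cart"],
    ["home", "search", "product", "buy"],
]

# Estimate first-order Markov transition probabilities by counting bigrams
counts = defaultdict(Counter)
for s in sessions:
    for prev, nxt in zip(s, s[1:]):
        counts[prev][nxt] += 1

trans = {prev: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
         for prev, ctr in counts.items()}

print(trans["product"])  # -> {'cart': 0.5, 'home': 0.25, 'buy': 0.25}
```

From just this table you can already answer questions like "what is a user most likely to do after viewing a product?" — the HMM course builds the machinery for when the interesting state isn't directly observable.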

Once again, before diving into Cluster Analysis — where you get to implement each of the algorithms you learn about in Python code — I strongly recommend my always-free course, Deep Learning Prerequisites: The Numpy Stack in Python.


The Rabbit Hole of Recommender Systems

At this point, I want to get back to Recommender Systems. It is one of the most multi-disciplinary courses I’ve ever created.

Not only does it draw on principles from Markov models, Bayesian A/B testing, and Supervised Machine Learning, it also has a strong dependence on another aspect of Unsupervised Machine Learning known as “dimensionality reduction”.

In particular, it has a large overlap with the theoretical principles taught in my course, Unsupervised Deep Learning in Python (PCA, SVD, Autoencoders, and Restricted Boltzmann Machines).

In other words, Recommender Systems are a major application of all of those techniques.

But… what is Deep Learning? Where can you learn about the basics of Deep Learning before jumping into an advanced course like the one I just mentioned?

Deep Learning is Machine Learning using neural networks.

These days, to build neural networks, we use modern deep learning libraries such as Theano, TensorFlow, and PyTorch.

So where can you learn how to build a neural network using these modern libraries?

Well, I’m glad you asked!

I just so happen to have a course on that too.

Modern Deep Learning in Python

This course covers (as mentioned above) how to build neural networks in modern deep learning libraries such as Theano, TensorFlow, and PyTorch.

It also covers modern theoretical advancements, such as Nesterov momentum and adaptive learning rate methods (RMSprop and Adam), as well as modern regularization techniques such as Dropout and Batch Normalization.

These can all be thought of as “add-ons” to the vanilla backpropagation training algorithm.
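You can see the flavor of these add-ons on a toy one-dimensional loss, L(w) = w². Below, vanilla gradient descent, momentum, and RMSprop all minimize the same function; the hyperparameter values are arbitrary illustrations, not recommendations:

```python
def grad(w):                 # dL/dw for the toy loss L(w) = w**2
    return 2 * w

w_sgd = w_mom = w_rms = 5.0  # all three start at the same point
v, cache = 0.0, 0.0
lr, beta, decay, eps = 0.1, 0.9, 0.99, 1e-8

for _ in range(400):
    # Vanilla gradient descent: step against the gradient
    w_sgd -= lr * grad(w_sgd)
    # Momentum: accumulate a velocity that smooths the steps
    v = beta * v - lr * grad(w_mom)
    w_mom += v
    # RMSprop: divide by a running average of squared gradients
    cache = decay * cache + (1 - decay) * grad(w_rms) ** 2
    w_rms -= lr * grad(w_rms) / (cache ** 0.5 + eps)

# w_sgd and w_mom end essentially at the minimum; RMSprop hovers near it
print(w_sgd, w_mom, w_rms)
```

Each update rule is only a few lines bolted onto the same gradient computation — which is why it's fair to call them "add-ons" to vanilla backpropagation.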

Modern libraries like Theano, TensorFlow, and PyTorch do “automatic differentiation” and make use of the GPU to greatly speed up training time.

But wait!

What the heck is backpropagation? And how is a neural network “trained” in the first place?


This is where Data Science: Deep Learning in Python enters the picture.

This course goes over, in painstaking detail, how to train a neural network from basic first principles.

You’ll see how basic mathematics – matrices, vectors, and partial derivatives – form the basis of neural networks.

You’ll learn about what it means for a neural network to “make a prediction”, and also what it means to “train a neural network”.

You’ll learn how to visualize what a neural network does, and how to interpret what a neural network has learned.

A “neural network” implies a network of neurons.

At this point, you might be wondering, “what is a neuron anyway?”

You guessed it – I’ve covered this too!

Deep Learning Prerequisites: Logistic Regression in Python

A “neuron” is actually a classic machine learning model also known as Logistic Regression.

In this course, you’ll learn the ins and outs of linear classification and how to train a neuron using an algorithm known as gradient descent (like a baby version of backpropagation, in some sense).
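A single neuron and its training loop fit in a few lines. Here is a sketch on a made-up one-dimensional dataset: logistic regression trained by batch gradient descent on the cross-entropy loss:

```python
import math

# Toy linearly separable data: x -> label (1 if x > 0)
data = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]

w, b = 0.0, 0.0  # the neuron's weight and bias
lr = 0.5         # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Batch gradient descent on the cross-entropy loss
for _ in range(1000):
    dw = db = 0.0
    for x, y in data:
        p = sigmoid(w * x + b)
        dw += (p - y) * x   # dL/dw for sigmoid + cross-entropy
        db += (p - y)       # dL/db
    w -= lr * dw / len(data)
    b -= lr * db / len(data)

print(round(sigmoid(w * 1.5 + b)))   # -> 1
print(round(sigmoid(w * -1.5 + b)))  # -> 0
```

The gradient expressions `(p - y) * x` and `(p - y)` are exactly what backpropagation computes for the last layer of a neural network — stacking many such neurons is all that changes.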

What does it mean for a model to be “linear”?

Since you asked, I’ve got this covered too.

Deep Learning Prerequisites: Linear Regression in Python
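“Linear” means the prediction is just a weighted sum of the inputs. For a single input, the best-fit line even has a closed-form least-squares solution (the numbers below are toy data, roughly following y = 2x):

```python
# Closed-form least squares for the line y = a*x + b
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.1, 5.9, 8.2, 9.9]  # toy measurements, roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Slope: covariance of x and y divided by variance of x
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x  # intercept: line passes through the means
print(round(a, 1), round(b, 1))  # -> 2.0 0.1
```

For multiple inputs the same idea becomes a matrix equation — one of the reasons the course leans on the linear algebra from the Numpy prerequisites.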

You may have noticed that all of these courses have a heavy reliance on writing code.

A huge part (maybe the most important part) of learning how these models work is learning how to implement them in Python code.

In particular, we make heavy use of libraries such as Numpy, Scipy, Matplotlib, and Pandas.

You can, of course, learn how to use these libraries in my always-free course:

Deep Learning Prerequisites: The Numpy Stack in Python

Since a lot of people requested it, I also added a special section to the course that covers Machine Learning Basics, to answer questions such as “what is classification?” and “what is regression?”, as well as to gain a very rudimentary understanding of machine learning by using Scikit-Learn.

I consider my free Numpy course the basic starting point to deep learning and machine learning, no matter what field you want to end up specializing in, whether that be computer vision, natural language processing, or reinforcement learning.

These libraries are the basic tools (like the screwdriver, hammer, ruler, …) that you will use to build bigger and more complicated systems.

Keep in mind, there are many more topics in deep learning and artificial intelligence than what I listed here. For a full list of topics, and a guide for what order to learn them in, please see my handy visual guide: “What order should I take your courses in?”