Lazy Programmer

Your source for the latest in deep learning, big data, data science, and artificial intelligence.

SQL for Marketers: Dominate Data Analytics, Data Science, and Big Data

March 19, 2016

This is an announcement, along with free and discount coupons, for my new course, SQL for Marketers: Dominate Data Analytics, Data Science, and Big Data.


More and more companies these days are learning that they need to make DATA-DRIVEN decisions.

With big data and data science on the rise, we have more data than we know what to do with.

One of the basic languages of data analytics is SQL, which is used for many popular databases including MySQL, Postgres, Microsoft SQL Server, Oracle, and even big data solutions like Hive and Cassandra.

I’m going to let you in on a little secret. Most high-level marketers and product managers at big tech companies know how to manipulate data to gain important insights. No longer do you have to wait around the entire day for some software engineer to answer your questions – now you can find the answers directly, by yourself, using SQL.

Do you want to know how to optimize your sales funnel using SQL, look at the seasonal trends in your industry, and run a SQL query on Hadoop? Then join me now in my new class, SQL for Marketers: Dominate Data Analytics, Data Science, and Big Data!

P.S. If you haven’t yet signed up for my newsletter at lazyprogrammer [dot] me, you’ll want to do so before Monday, especially if you want to learn more about deep learning, because I have a special announcement coming up that will NOT be announced on Udemy.

Here are the coupons:

FREE coupon for early early birds:

EARLYBIRD (Sold out)

If the first coupon has run out, you may still use the 2nd coupon, which gives you 70% off:

EARLYBIRD2

#aws #big data #cassandra #Data Analytics #ec2 #hadoop #Hive #Microsoft SQL Server #MySQL #Oracle #Postgres #S3 #spark #sql #sqlite



How to run distributed machine learning jobs using Apache Spark and EC2 (and Python)

April 5, 2015

This is the age of big data.

Sometimes scikit-learn doesn’t cut it.

To make your operations and data-driven decisions scalable, you need to distribute the processing of your data.

Two popular libraries for distributed machine learning are Mahout (which uses MapReduce) and MLlib (which uses Spark, sometimes considered a successor to MapReduce).

What I want to do with this tutorial is to show you how easy it is to do distributed machine learning using Spark and EC2.

When I started a recent project of mine, I was distraught at how complicated a Mahout setup could be.

I am not an ops person. I hate installing and configuring things. For something like running distributed k-means clustering, 90% of the work could go into just setting up a Hadoop cluster, installing all the libraries your code needs to run, making sure they are the right versions, etc…

The Hadoop ecosystem is very sensitive to these things, and sometimes MapReduce jobs can be very hard to debug.

With Spark, everything is super easy. Installing Spark and Hadoop is tedious but do-able. Spinning up a cluster is very easy. Running a job is very easy. We will use Python, but you can also use Scala or Java.

Outline of this tutorial:

  1. Install Spark on a driver machine.
  2. Create a cluster.
  3. Run a job.

 

1. Install Spark

I used an Ubuntu instance on EC2. I’m assuming you already know how to set up a security group, get your PEM, and SSH into the machine.


Once you’ve spun up your instance, we can begin installing all the stuff we’ll need.

To make this even easier, you could probably do all of this on your local machine instead; but if you’re on Windows, or you just don’t want to mess up your local machine, a separate EC2 instance like this is the way to go.

First, set your AWS ID and secret environment variables.

export AWS_ACCESS_KEY_ID=…
export AWS_SECRET_ACCESS_KEY=…

Now install Java:

sudo apt-get update
sudo apt-get install default-jdk maven
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"

The last line matters because the Spark build needs that much memory available, if I remember correctly.

Now download and build Spark:

wget http://mirror.cc.columbia.edu/pub/software/apache/spark/spark-1.3.0/spark-1.3.0.tgz
tar -xf spark-1.3.0.tgz
cd spark-1.3.0
mvn -DskipTests clean package

By the time you read this a new version of Spark may be available. You should check.

 

2. Create a Cluster

Assuming you are in the Spark folder now, it is very easy to create a cluster to run your jobs:

./ec2/spark-ec2 -k "Your Key Pair Name" -i /path/to/key.pem -s <number of slaves> launch <cluster name> --copy-aws-credentials -z us-east-1b

I set my zone as “us-east-1b” but you can set it to a zone of your choice.

When you’re finished, don’t forget to tear down your cluster! On-demand machines are expensive.

./ec2/spark-ec2 destroy <cluster name>

For some reason, numpy isn’t installed when you create a cluster, and the default Python distribution on the m1.large machines is 2.6, while Spark installs its own 2.7. So, even if you easy_install numpy on each of the machines in the cluster, it won’t work for Spark.

You can instead copy the library over to each cluster machine from your driver machine:

scp -i /path/to/key.pem /usr/lib/python2.7/dist-packages/numpy* root@<cluster-machine>:/usr/lib/python2.7/dist-packages/
scp -r -i /path/to/key.pem /usr/lib/python2.7/dist-packages/numpy root@<cluster-machine>:/usr/lib/python2.7/dist-packages/

You can easily write a script to automatically copy all this stuff over (get the machine URLs from the EC2 console).
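
As a rough sketch (the hostnames below are placeholders you would copy from the EC2 console, and the paths assume the same dist-packages layout as above), a small Python script can do the looping for you:

# copy_numpy.py -- minimal sketch; key path and hostnames are placeholders
import glob
import subprocess

KEY = "/path/to/key.pem"
PKG_DIR = "/usr/lib/python2.7/dist-packages"

# public DNS names of the cluster machines, copied from the EC2 console
hosts = [
    "ec2-XX-XX-XX-XX.compute-1.amazonaws.com",
    "ec2-YY-YY-YY-YY.compute-1.amazonaws.com",
]

# everything matching numpy* (the package directory plus its metadata), same as the scp commands above
payload = glob.glob(PKG_DIR + "/numpy*")

for host in hosts:
    subprocess.check_call(["scp", "-r", "-i", KEY] + payload + ["root@%s:%s/" % (host, PKG_DIR)])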

 

3. Run a Job

Spark gives you a Python shell.

First, go to your EC2 console and find the URL for your cluster master. SSH into that machine (username is root).

cd spark
MASTER=spark://<cluster-master-ip>:7077 ./pyspark

Import libraries:

from pyspark.mllib.clustering import KMeans
from numpy import array

Load your data:

data = sc.textFile("s3://<my-bucket>/<path>/*.csv")

Note 1: You can use a wildcard to grab multiple files into one variable, called an RDD (resilient distributed dataset).

Note 2: Spark gives you a variable called ‘sc’, which is an object of type SparkContext. It specifies the master node, among other things.
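
If you want to double-check that the shell is actually talking to your cluster, you can inspect a couple of standard SparkContext attributes:

print sc.master                # should show spark://<cluster-master-ip>:7077
print sc.defaultParallelism    # roughly the number of cores available across the cluster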

Maybe filter out some bad lines:

data = data.filter(lambda line: 'ERROR' not in line)

Turn each row into an array / vector observation:

data = data.map(lambda line: array([float(x) for x in line.split()]))
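
It doesn’t hurt to peek at a few parsed rows before training (take is a standard RDD action):

print data.take(3)    # should print a few small numpy arrays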

Train your model:

clusters = KMeans.train(data, 2, maxIterations=20, runs=1, initializationMode="k-means||")

Save some output:

sc.parallelize(clusters.centers).saveAsTextFile("s3://…./output.csv")
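
If you also want the per-point cluster assignments (not just the centers), one way is to map each observation through the model’s predict method and save those as well; the output path below is just a placeholder:

labels = data.map(lambda point: clusters.predict(point))
labels.saveAsTextFile("s3://<my-bucket>/<path>/cluster-assignments")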

You can also run a standalone Python script using spark-submit instead of the shell.

./bin/spark-submit --master spark://<master-ip>:7077 myscript.py

Remember you’ll have to instantiate your own SparkContext in this case.
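
For reference, here is a minimal sketch of what such a standalone script could look like for the same k-means job (the name myscript.py and the S3 paths are placeholders; with spark-submit the master comes from the --master flag, so a bare SparkContext is enough):

from pyspark import SparkContext
from pyspark.mllib.clustering import KMeans
from numpy import array

# spark-submit supplies the master via --master, so we don't hard-code it here
sc = SparkContext(appName="kmeans-example")

data = sc.textFile("s3://<my-bucket>/<path>/*.csv")
data = data.filter(lambda line: 'ERROR' not in line)
data = data.map(lambda line: array([float(x) for x in line.split()]))

clusters = KMeans.train(data, 2, maxIterations=20, runs=1, initializationMode="k-means||")

sc.parallelize(clusters.clusterCenters).saveAsTextFile("s3://<my-bucket>/<path>/centers")

sc.stop()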

 

Future Improvements

The goal of this tutorial is to make things easy.

There are many areas for improvement. For instance, on-demand machines on Amazon are the most expensive option.

Spark still spins up “m1.large” instances, even though EC2's current documentation recommends using the better, faster, AND cheaper “m3.large” instance instead.

At the same time, that kind of custom configuration might mean we can’t use the spark-ec2 script to spin up the cluster automatically. There might be an option for choosing the instance type; I didn’t really look.

One major reason I wrote this tutorial is that all the information in it is out there in some form, but it is scattered, and some of it is hard to find unless you know what to search for.

So that’s it. The easiest possible way to run distributed machine learning.

How do you do distributed machine learning?

#apache #aws #big data #data science #ec2 #emr #machine learning #python #spark



Automation: For loops in bash (for loops on the command line)

September 23, 2013

If you have to run a script that processes data for a particular day, i.e. your files are on Hadoop with the date in the path, like this:

/path/to/my/data/2013/09/23/*

And you have multiple days to process, don’t run them manually. Use a for loop instead:

for day in {19..22}
do
  ./my_script.sh 2013/09/$day
done
#command line



Pig Error -> ERROR 1070: Could not resolve count using imports

April 11, 2013

Or:

ERROR 1070: Could not resolve sum using imports

COUNT() and SUM() are case-sensitive; you need to capitalize them.

#hadoop #pig



How to kill a hadoop job

April 10, 2013

hadoop job -list

This lists the running Hadoop jobs and their job IDs.

hadoop job -kill <JobID>

#hadoop
