Jupyter Notebooks with PySpark on AWS EMR

One of the biggest, most time-consuming parts of data science is analysis and experimentation. One of the most popular tools for doing this in a graphical, interactive environment is Jupyter.

Combining Jupyter with Apache Spark (through PySpark) merges two extremely powerful tools. AWS EMR lets you set up all of these tools with just a few clicks. In this tutorial I’ll walk through creating a cluster of machines running Spark with a Jupyter notebook sitting on top of it all.

Summary

  1. Create an EMR cluster with Spark 2.0 or later, using this file as a bootstrap action: https://github.com/mikestaszel/spark-emr-jupyter/blob/master/emr_bootstrap.sh
  2. Add this file as a step: https://github.com/mikestaszel/spark-emr-jupyter/blob/master/jupyter_step.sh
  3. SSH into the master node and run pyspark with whatever options you need.
  4. Open port 8888 of the master node in a web browser (make sure the security group allows it) and you’re in Jupyter!

Step by Step Screenshots

Find them here: https://github.com/mikestaszel/spark-emr-jupyter/tree/master/screenshots

Creating an EMR Cluster

When creating your EMR cluster, all you need to do is add a bootstrap action that installs Anaconda and the Jupyter Spark extension, which makes job progress visible directly in the notebook. Add this file as a bootstrap action: https://github.com/mikestaszel/spark-emr-jupyter/blob/master/emr_bootstrap.sh
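The exact contents are in the linked emr_bootstrap.sh, but roughly speaking a bootstrap script like this downloads the Anaconda installer and enables the jupyter-spark extension on each node. Here is a minimal sketch of that idea; the installer URL, version, and paths are placeholders rather than the values from the repo:

    #!/bin/bash
    # Rough sketch of an EMR bootstrap action (see emr_bootstrap.sh for the real one).
    set -e

    # Install Anaconda for the hadoop user (placeholder installer version/URL).
    wget -q https://repo.anaconda.com/archive/Anaconda3-[version]-Linux-x86_64.sh -O ~/anaconda.sh
    bash ~/anaconda.sh -b -p ~/anaconda
    echo 'export PATH=~/anaconda/bin:$PATH' >> ~/.bashrc

    # Install the jupyter-spark extension so job progress shows up inside the notebook.
    ~/anaconda/bin/pip install jupyter-spark
    ~/anaconda/bin/jupyter serverextension enable --py jupyter_spark
    ~/anaconda/bin/jupyter nbextension install --py jupyter_spark --user
    ~/anaconda/bin/jupyter nbextension enable --py jupyter_spark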

Running Jupyter through PySpark

When your cluster is ready, run a step that configures PySpark to launch Jupyter instead of the default interactive shell. You’ll need to copy this file into an S3 bucket and reference it in the step: https://github.com/mikestaszel/spark-emr-jupyter/blob/master/jupyter_step.sh
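The step script itself only needs to point PySpark’s driver Python at Jupyter. A minimal sketch of that (assumed here; check the linked jupyter_step.sh for the actual contents, and note the Anaconda path is a guess) looks like this:

    #!/bin/bash
    # Sketch: make "pyspark" launch a Jupyter notebook server instead of the usual shell.
    echo 'export PYSPARK_DRIVER_PYTHON=/home/hadoop/anaconda/bin/jupyter' >> /home/hadoop/.bashrc
    echo 'export PYSPARK_DRIVER_PYTHON_OPTS="notebook --no-browser --ip=0.0.0.0 --port=8888"' >> /home/hadoop/.bashrc

PYSPARK_DRIVER_PYTHON and PYSPARK_DRIVER_PYTHON_OPTS are standard Spark environment variables; setting them in the hadoop user’s .bashrc means every later pyspark invocation picks them up.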

To do this, add a step to the cluster with the following parameters:

JAR location: s3://[region].elasticmapreduce/libs/script-runner/script-runner.jar

Arguments: s3://[your-bucket]/jupyter_step.sh
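If you prefer the AWS CLI to the console, the same step can be added with something along these lines (the cluster ID is a placeholder, and [region] and [your-bucket] are as above):

    aws emr add-steps \
      --cluster-id j-XXXXXXXXXXXXX \
      --steps 'Type=CUSTOM_JAR,Name=Jupyter,ActionOnFailure=CONTINUE,Jar=s3://[region].elasticmapreduce/libs/script-runner/script-runner.jar,Args=[s3://[your-bucket]/jupyter_step.sh]'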

Run PySpark

You’re ready to run PySpark! Go ahead and run something like “pyspark --master yarn” with any options you need (for example, in a tmux session on your master node). You should see the Jupyter notebook server start and print out an address and authentication token.
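Concretely, that might look like the following; the memory settings are just illustrative, not a recommendation:

    # On the master node, keep the notebook server alive in a tmux session.
    tmux new -s jupyter

    # Launch PySpark on YARN; with the step above, this starts Jupyter instead of a shell.
    pyspark --master yarn --driver-memory 4g --executor-memory 2g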

In your browser, open port 8888 of the master node and paste in the authentication token, and you’re all set! You can create a notebook or upload one. You don’t need to initialize a SparkSession; one is created for you automatically, named “spark”. Make sure your security group firewall rules allow access to port 8888!

One last thing to keep in mind is that your notebooks will be deleted when you terminate the cluster, so make sure to download anything you need! There are some Jupyter plugins you can try if you want to store notebooks in S3, but that’s another blog post.
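In the meantime, a quick way to keep copies is to push them to S3 from the master node before shutting the cluster down (the bucket and notebook names here are placeholders):

    # Copy notebooks off the cluster before terminating it.
    aws s3 cp /home/hadoop/my-notebook.ipynb s3://[your-bucket]/notebooks/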

 

Apache Spark Cluster with Vagrant

New tiny GitHub project: https://github.com/mikestaszel/spark_cluster_vagrant

Over the past few weeks I’ve been working on benchmarking Spark as well as learning more about setting up clusters of Spark machines both locally and on cloud providers.

I decided to work on a simple Vagrantfile that spins up a Spark cluster with a master node and as many worker nodes as desired. I’ve seen a few similar projects, but they either used a third-party box, shipped an older version of Spark, or only spun up a single node.

By running only one command I can have a fully configured Spark cluster ready to use and test. Vagrant also extends easily beyond simple VirtualBox machines to many providers, including AWS EC2 and DigitalOcean, and this Vagrantfile can be adapted to provision clusters on those providers as well.
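Typical usage looks something like this; the node names and any worker-count settings depend on the Vagrantfile itself, so treat it as a sketch:

    # Clone the project and bring the cluster up.
    git clone https://github.com/mikestaszel/spark_cluster_vagrant
    cd spark_cluster_vagrant
    vagrant up

    # SSH into a node (names depend on the Vagrantfile), and tear it all down when done.
    vagrant ssh [node-name]
    vagrant destroy -f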

Check it out here: https://github.com/mikestaszel/spark_cluster_vagrant