Installing Spark on Ubuntu in 3 Minutes

One thing I hear often from people starting out with Spark is that it’s too difficult to install. Some guides are for Spark 1.x and others are for 2.x. Some guides get really detailed with Hadoop versions, JAR files, and environment variables.

So here’s yet another guide to installing Apache Spark, condensed and simplified to get you up and running with Spark 2.3.1 in 3 minutes or less.

All you need is a machine (or instance, server, VPS, etc.) that you can install packages on (e.g. “sudo apt” works). If you need one of those, check out DigitalOcean. It’s much simpler than AWS for small projects.

First, log in to the machine via SSH.

Now, install OpenJDK 8 (Java):

sudo apt update && sudo apt install -y openjdk-8-jdk-headless python3

Next, download and extract Apache Spark:

wget http://www-us.apache.org/dist/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.7.tgz && tar xf spark-2.3.1-bin-hadoop2.7.tgz

Set up environment variables to configure Spark:

echo 'export SPARK_HOME=$HOME/spark-2.3.1-bin-hadoop2.7' >> ~/.bashrc
echo 'export PATH=$PATH:$SPARK_HOME/bin' >> ~/.bashrc
echo 'export PYSPARK_PYTHON=python3' >> ~/.bashrc
source ~/.bashrc

That’s it – you’re all set! You’ve installed Spark and it’s ready to go. Try out “pyspark”, “spark-submit” or “spark-shell”.

Try running this inside “pyspark” to validate that it worked:

spark.createDataFrame([{"hello": x} for x in range(1000)]).count() # hopefully this equals 1000
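
If you want to try “spark-submit” as well, here’s a minimal standalone script you could submit. The file name count_app.py is just a placeholder for this example, not something that ships with Spark:

# count_app.py -- a tiny PySpark job; run it with: spark-submit count_app.py
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("count-check").getOrCreate()
# Build the same 1000-row DataFrame as the pyspark example above and count it.
df = spark.createDataFrame([(x,) for x in range(1000)], ["hello"])
print(df.count())  # should print 1000
spark.stop()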
 

Apache Spark on Google Colaboratory

Google recently launched a preview of Colaboratory, a new service that lets you edit and run IPython notebooks right from Google Drive – free! It’s similar to Databricks – give Databricks a try if you’re looking for a better-supported way to run Spark in the cloud, launch clusters, and much more.

Google has published some tutorials showing how to use TensorFlow and various other Google APIs and tools on Colaboratory, but I wanted to try installing Apache Spark. It turned out to be much easier than I expected. Download the notebook and import it into Colaboratory or read on…
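
The full walkthrough is in the notebook, but to give you an idea, one possible set of Colab cells looks roughly like the following. It simply mirrors the Ubuntu steps above (the “!” prefix runs a shell command inside a notebook cell); the notebook’s exact steps may differ:

# Install Java 8, download Spark, and install findspark in the Colab VM.
!apt-get install -y -qq openjdk-8-jdk-headless > /dev/null
!wget -q http://www-us.apache.org/dist/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.7.tgz
!tar xf spark-2.3.1-bin-hadoop2.7.tgz
!pip install -q findspark
# Point PySpark at the Java and Spark installs, then start a local session.
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.3.1-bin-hadoop2.7"
import findspark
findspark.init()  # puts Spark's Python libraries on sys.path
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").getOrCreate()
print(spark.range(1000).count())  # should print 1000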

… 

 

Jupyter Notebooks with PySpark on AWS EMR

One of the biggest, most time-consuming parts of data science is analysis and experimentation. One of the most popular tools for doing that work in a graphical, interactive environment is Jupyter.

Combining Jupyter with Apache Spark (through PySpark) merges two extremely powerful tools. AWS EMR lets you set up all of these tools with just a few clicks. In this tutorial I’ll walk through creating a cluster of machines running Spark with a Jupyter notebook sitting on top of it all.
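
The tutorial itself goes through the EMR console, but if you’d rather script the cluster launch, a rough boto3 sketch looks something like the one below. The cluster name, release label, instance types, and key pair name are placeholders, and the tutorial’s actual configuration (including how Jupyter gets installed) may differ:

# Rough sketch: launch an EMR cluster running Spark, plus JupyterHub for notebooks.
import boto3

emr = boto3.client("emr", region_name="us-east-1")
response = emr.run_job_flow(
    Name="spark-jupyter-demo",
    ReleaseLabel="emr-5.16.0",
    Applications=[{"Name": "Spark"}, {"Name": "JupyterHub"}],
    Instances={
        "MasterInstanceType": "m4.large",
        "SlaveInstanceType": "m4.large",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,
        "Ec2KeyName": "my-key-pair",  # placeholder EC2 key pair name
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    VisibleToAllUsers=True,
)
print(response["JobFlowId"])  # the ID of the new cluster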

… 

 

Fluent Python – Microreview

If you really want to get into the details of Python and learn about how the language was built and how some of its internals are implemented, Fluent Python is the book for you.

It’s a great book to refresh your knowledge of coroutines, asyncio, and other Python goodies.

 

Flask Web Development – Miguel Grinberg Microreview

If you’re just getting started with Flask or you want to learn about the innards of Django (yep, that’s right), “Flask Web Development” is the perfect place to start. The book dives right in with building a full web application, covering Jinja templates, authentication, a REST API, forms, databases, security, and deployment to Heroku using Git. It gets you up and running with Flask quickly and then goes into detail on building out a complete application.

In my opinion, though, Flask is best suited to small applications, yet this book goes into full detail about building what amounts to a half-Django for a full web application.

With that in mind, this book is great for learning about Django – how would you implement CSRF token checks? How would you set up database migrations from scratch? How would you handle forms? Django does all of that, but hides it all from developers. This book goes into full detail reimplementing a lot of what Django gives you out-of-the-box, which is great.

Overall I highly recommend “Flask Web Development” whether you’re learning Flask, Django, or just backend web development in general. Don’t just use what Django gives you out of the box and ignore how it’s implemented. This book will answer questions like “Why does my Django app need a SECRET_KEY? What is this CSRF error I keep seeing? How do database migrations work? How do I write my own mail handler?”, making you a better Django developer.

Get it here: http://a.co/73ERCK9

 

Flask Quick Startup Project

I like to start my projects with Flask and Python because they’re quick to get going with for most things, yet lightweight.

By default, Flask doesn’t give you much in terms of test frameworks, application settings, deployment, or running the application in production. I always end up making a skeleton that does some of these things, so I decided to put together a GitHub repository with a skeleton Flask project that does it for me.
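
For a rough idea of the kind of thing such a skeleton covers (an app factory plus per-environment settings), here’s an illustrative sketch; the actual repository is organized differently and does more:

# Illustrative sketch only -- see the repository linked below for the real skeleton.
from flask import Flask

class Config:
    DEBUG = False
    TESTING = False

class DevelopmentConfig(Config):
    DEBUG = True

class TestingConfig(Config):
    TESTING = True

def create_app(config_object=Config):
    # App factory: tests can build an app with TestingConfig,
    # and production can pass in its own settings object.
    app = Flask(__name__)
    app.config.from_object(config_object)

    @app.route("/")
    def index():
        return "Hello, world!"

    return app

if __name__ == "__main__":
    create_app(DevelopmentConfig).run()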

Have a look here: https://github.com/mikestaszel/flask_startup