Flask Web Development – Miguel Grinberg Microreview

If you’re just getting started with Flask, or you want to learn about the innards of Django (yep, that’s right), “Flask Web Development” is the perfect place to start. It dives right into building a full web application, covering Jinja templates, forms, databases, authentication, security, building a REST API, and deployment to Heroku using Git. The book gets you up and running with Flask quickly and then goes into detail on everything needed for a complete application.

In my opinion, Flask is best suited to small applications, yet this book goes into full detail about building what amounts to half of Django in order to support a full web application.

With that in mind, this book is also great for learning about Django: how would you implement CSRF token checks? How would you set up database migrations from scratch? How would you handle forms? Django does all of that but hides it from developers. This book walks through reimplementing much of what Django gives you out of the box, which is a great way to understand it.

Overall, I highly recommend “Flask Web Development” whether you’re learning Flask, Django, or backend web development in general. Don’t just use what Django gives you out of the box and ignore how it’s implemented. This book answers questions like “Why does my Django app need a SECRET_KEY? What is this CSRF error I keep seeing? How do database migrations work? How do I write my own mail handler?”, and that makes you a better Django developer.
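As a taste of what the book demystifies, here’s a minimal Flask sketch (not from the book; the route and values are illustrative) showing what a SECRET_KEY is actually for: signing the session cookie, which is the same job it does in Django.

    from flask import Flask, session

    app = Flask(__name__)
    # Without a secret key, Flask cannot sign the session cookie and
    # sessions are disabled -- the same role SECRET_KEY plays in Django.
    app.config["SECRET_KEY"] = "change-me-in-production"  # illustrative value

    @app.route("/visit")
    def visit():
        # The counter lives in a cookie signed (not encrypted) with the
        # secret key, so clients can read it but cannot tamper with it.
        session["visits"] = session.get("visits", 0) + 1
        return f"You have visited {session['visits']} times."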

Get it here: http://a.co/73ERCK9

Apache Spark Cluster with Vagrant

New tiny GitHub project: https://github.com/mikestaszel/spark_cluster_vagrant

Over the past few weeks I’ve been benchmarking Spark and learning more about setting up Spark clusters, both locally and on cloud providers.

I decided to work on a simple Vagrantfile that spins up a Spark cluster with a head node and as many worker nodes as desired. I’ve seen a few projects like this, but they either used a third-party box, shipped an older version of Spark, or only spun up a single node.

With a single command, I can have a fully configured Spark cluster ready to use and test. Vagrant also extends easily beyond simple VirtualBox machines to many providers, including AWS EC2 and DigitalOcean, so this Vagrantfile can be adapted to provision clusters on those providers as well.
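Once the VMs are up, a quick way to confirm the cluster is accepting work is a tiny PySpark smoke test along these lines (the master URL below is a placeholder for whatever private IP the Vagrantfile assigns to the head node):

    from pyspark.sql import SparkSession

    # Placeholder head-node address; substitute the IP the Vagrantfile
    # assigns to the Spark master.
    spark = (
        SparkSession.builder
        .master("spark://192.168.50.10:7077")
        .appName("vagrant-cluster-smoke-test")
        .getOrCreate()
    )

    # A trivial job: if this returns, the workers are registered and running.
    print(spark.sparkContext.parallelize(range(1000)).sum())
    spark.stop()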

Check it out here: https://github.com/mikestaszel/spark_cluster_vagrant

Hello, Startup – Microreview

I just finished reading “Hello, Startup” by Yevgeniy Brikman, a book written for programmers about starting a startup. It covers all the basics, including hiring, teamwork, startup culture, and how development practices change as a startup scales. It’s a nice, quick read (I skimmed the more technical chapters on development, programming, and databases), and I found the rest of the content to be a great starting point for learning what it takes to build a startup.

Check it out here (also available on Safari Books): http://www.hello-startup.net

Flask Quick Startup Project

I like to start my projects with Flask and Python because the combination is quick to get going with for most things, yet lightweight.

By default, Flask doesn’t give you much in terms of test frameworks, application settings, deployment, or running the application in production. I always end up building a skeleton that covers some of these things, so I decided to put together a GitHub repository with a skeleton Flask project that handles them for me.
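The repository has the real thing; as a rough sketch of the idea (names here are illustrative, not necessarily what the repo uses), the skeleton boils down to an application factory plus per-environment config classes, so tests and production can load different settings:

    import os
    from flask import Flask

    class Config:
        # Shared defaults; override per environment.
        SECRET_KEY = os.environ.get("SECRET_KEY", "dev-only-key")
        TESTING = False

    class TestConfig(Config):
        TESTING = True

    def create_app(config_class=Config):
        # Application factory: tests build an app with TestConfig, while a
        # production WSGI server (e.g. gunicorn) uses the default config.
        app = Flask(__name__)
        app.config.from_object(config_class)

        @app.route("/")
        def index():
            return "Hello from the skeleton!"

        return app

    if __name__ == "__main__":
        create_app().run(debug=True)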

Have a look here: https://github.com/mikestaszel/flask_startup

DiskDict – Python dictionaries stored on disk

This weekend, while running a rather large Python job, I ran into a memory error. It turned out that a dictionary I was populating could grow too big to fit into RAM. This is where DiskDict saved me some time.

https://github.com/AWNystrom/DiskDict/

It’s definitely not the ideal way to solve the problem, but in this case I was working with a constrained system where rewriting the surrounding code would have been intrusive. Plus, the job didn’t have time constraints, so DiskDict was a decent workaround.
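I won’t reproduce DiskDict’s exact API from memory here, but the general dict-on-disk pattern it enables looks roughly like this, shown with Python’s standard shelve module instead:

    import shelve

    # A disk-backed, dict-like object: string keys, values pickled to a
    # file on disk instead of held in RAM.
    with shelve.open("big_mapping.db") as d:
        for i in range(1_000_000):
            d[str(i)] = {"value": i * i}  # spills to disk, not memory

        # Lookups read back from disk transparently.
        print(d["123456"]["value"])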

Wanted to share because it proved useful to me!

EC2 + Route53 for Dynamic DNS

Recently I ran into a problem while working with Amazon EC2 servers: servers without dedicated Elastic IP addresses get a different IP address every time they start up! This proved to be a challenge when trying to SSH into them.

How can I have a dynamic domain name that always points to my EC2 server?

Amazon’s Route53 came to mind. Route53, however, does not have a simple way to point a subdomain directly to an EC2 instance. You can set up load balancers between Route53 and your instance, but that’s a hassle. You can also set up an elaborate private network with port forwarding – yuck.

I wanted a simple way to set a Route53 subdomain’s A record to point to an EC2 instance’s public IP address, on startup.

Enter go-route53-dyn-dns, a simple Go project that solves this problem. It’s a small binary that reads a JSON configuration file and updates Route53 with the EC2 instance’s public IP address.
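The real project is written in Go; purely as a sketch of the same idea in Python with boto3 (the hosted zone ID and record name below are placeholders, and in the actual tool these values come from its JSON config):

    import boto3
    import urllib.request

    # Placeholder values -- the real tool reads these from a JSON config file.
    HOSTED_ZONE_ID = "Z1234567890ABC"
    RECORD_NAME = "myserver.example.com."

    def public_ip():
        # The EC2 instance metadata service exposes the public IPv4 address.
        # (Instances enforcing IMDSv2 would also need a session token.)
        url = "http://169.254.169.254/latest/meta-data/public-ipv4"
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.read().decode()

    def update_record(ip):
        # UPSERT creates or updates the A record for the subdomain.
        boto3.client("route53").change_resource_record_sets(
            HostedZoneId=HOSTED_ZONE_ID,
            ChangeBatch={
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": RECORD_NAME,
                        "Type": "A",
                        "TTL": 60,
                        "ResourceRecords": [{"Value": ip}],
                    },
                }]
            },
        )

    if __name__ == "__main__":
        update_record(public_ip())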

The GitHub README.md explains how to set everything up.

The project is here: go-route53-dyn-dns.

Containers – The Future?

Docker has been around for years now, LXC has been around even longer, and chroots, jails, and zones are even older than that.

I never really understood the benefit of Docker over something like LXC until a few minutes ago.

I installed Fedora 23 Server on my DigitalOcean droplet to try out Cockpit, and one of the features included is the ability to manage Docker containers.

I decided to download, install, and start the Ghost container, just to see how it works. I was surprised when everything just worked right the first time, in two clicks!

In those two clicks, Docker downloaded an image, extracted it, started a process in the container, and forwarded the correct ports. Inside the container were Node.js, npm, and probably 50 dependencies for Ghost. I didn’t have to install any of that – all I had to do was click, and it was instant.

I’ve gone through various Docker tutorials, and my general impression up to this point had been “wow, that’s a whole lot of commands just to echo ‘Hello World’ to the console.” Configuring port forwarding, networking, and Internet access, and learning how to create Dockerfiles and commit them, seemed like a LOT of work for very little benefit.

Now I’m convinced that even though it takes some extra work to package up and manage a container, it’s beneficial in the long run when you can tell people that installing your software takes one or two clicks.