Hello world! I’ve moved from WordPress to Ghost to Jekyll, and now back to WordPress.
Recently I ran into a problem while working with Amazon EC2 servers: servers without dedicated Elastic IP addresses get a different public IP address every time they start up! This proved to be a challenge when trying to SSH into them.
How can I have a dynamic domain name that always points to my EC2 server?
Amazon’s Route53 came to mind. Route53, however, does not have a simple way to point a subdomain directly to an EC2 instance. You can set up load balancers between Route53 and your instance, but that’s a hassle. You can also set up an elaborate private network with port forwarding – yuck.
I wanted a simple way to set a Route53 subdomain’s A record to point to an EC2 instance’s public IP address on startup.
Enter go-route53-dyn-dns. This is a simple Go project that solves this problem. It is a small binary that reads a JSON configuration file and updates Route53 with an EC2 instance’s public IP address.
The GitHub README.md file explains how to set everything up.
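Purely as an illustration of the idea (the actual keys are defined in the project’s README.md; every field name below is hypothetical), the JSON configuration might look something like this:

```json
{
  "aws_access_key_id": "AKIA_EXAMPLE_KEY_ID",
  "aws_secret_access_key": "YOUR_SECRET_KEY",
  "hosted_zone_id": "Z0000000EXAMPLE",
  "domain": "dev.example.com",
  "ttl": 60
}
```

On boot, the binary would read a file like this, look up the instance’s current public IP, and upsert the A record for the configured domain.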
The project is here: go-route53-dyn-dns.
This one is a note to self that I ran into on Ubuntu 14.04 recently. Logrotate seems to kill ProFTPD every so often.
I found the solution here: http://stackoverflow.com/questions/23666697/proftpd-killed-signal-15-error-how-to-fix-logrotate-restart-error
Docker has been around for years now, LXC has been around even longer, and chroots, jails, and zones are even older than that.
I never really understood the benefit of Docker as opposed to something like LXC up until a few minutes ago.
I installed Fedora 23 Server on my DigitalOcean droplet to try out Cockpit, and one of the features included is the ability to manage Docker containers.
I decided to download, install, and start the Ghost container, just to see how it works. I was surprised when everything just worked right the first time, in two clicks!
In these two clicks, Docker downloaded an image, extracted it, started a process in the container, and forwarded the correct ports. Inside this container was NodeJS, NPM, and probably 50 dependencies for Ghost. I didn’t have to install any of that – all I had to do was click – it was instant.
I’ve gone through various Docker tutorials, and my general impression up until this point had been “wow, that’s a whole lot of commands to echo ‘Hello World’ to the console.” Configuring port forwarding and networking, getting Internet access working, and learning how to create Dockerfiles and commit them just seemed like a LOT of work for very little benefit.
Now I’m convinced that even though it may take some extra work to package up a container and manage it, it’s beneficial in the long run when your project can tell people that installing your software takes one or two clicks.
Sooner or later, every developer stumbles upon an API that uses OAuth. I’ve recently added Fitbit integration to an application I’m working on.
Fitbit’s API uses OAuth v1 for authentication, and using OAuth with Django was really straightforward. Here’s what I did:
You’ll need the requests-oauthlib package (pip install requests-oauthlib), in addition to Django itself.
Before I dive into the code, I’ll give an overview. My application has urls.py entries under /fitbit/ for requesting the request token and for storing the OAuth credentials. I store the credentials in a FitBitAPI model (a ForeignKey to a Django User plus CharFields for the OAuth key and OAuth secret). Whenever I need to make authenticated API calls, I can just pull the key and secret for each user right from the database.
urls.py

You just need two entries for OAuth v1 to work:
```python
from django.conf.urls import patterns, url

from fitbit_api import views

urlpatterns = patterns('',
    url(r'^request_request_token', views.request_request_token,
        name='fitbit_api_request_request_token'),
    url(r'^store_credentials', views.store_credentials,
        name='fitbit_api_store_credentials'),
)
```
models.py

Again, really simple:
```python
from django.db import models
from django.contrib.auth.models import User


class FitBitAPI(models.Model):
    user = models.ForeignKey(User)
    access_token = models.CharField(max_length=128, default='')
    access_token_secret = models.CharField(max_length=128, default='')

    def __unicode__(self):
        return self.user.email
```
views.py

This is where the action happens.
```python
from django.shortcuts import redirect
from django.conf import settings

from fitbit_api.models import FitBitAPI
from requests_oauthlib import OAuth1Session


def request_request_token(request):
    oauth = OAuth1Session(settings.FITBIT_KEY,
                          client_secret=settings.FITBIT_SECRET)
    fetch_response = oauth.fetch_request_token(
        'https://api.fitbit.com/oauth/request_token')
    resource_owner_key = fetch_response.get('oauth_token')
    resource_owner_secret = fetch_response.get('oauth_token_secret')
    # Store the temporary request token; it gets replaced with the real
    # access token in store_credentials().
    FitBitAPI.objects.create(user=request.user,
                             access_token=resource_owner_key,
                             access_token_secret=resource_owner_secret)
    return redirect('https://www.fitbit.com/oauth/authorize?oauth_token=%s'
                    % resource_owner_key)


def store_credentials(request):
    # Look up the request token we saved in request_request_token().
    credentials = FitBitAPI.objects.get(user=request.user)
    oauth = OAuth1Session(settings.FITBIT_KEY,
                          client_secret=settings.FITBIT_SECRET)
    oauth_response = oauth.parse_authorization_response(
        request.build_absolute_uri())
    verifier = oauth_response.get('oauth_verifier')
    oauth = OAuth1Session(settings.FITBIT_KEY,
                          client_secret=settings.FITBIT_SECRET,
                          resource_owner_key=credentials.access_token,
                          resource_owner_secret=credentials.access_token_secret,
                          verifier=verifier)
    oauth_tokens = oauth.fetch_access_token(
        'https://api.fitbit.com/oauth/access_token')
    credentials.access_token = oauth_tokens.get('oauth_token')
    credentials.access_token_secret = oauth_tokens.get('oauth_token_secret')
    credentials.save()
    return redirect('/')  # all done!
```
That’s all there is to it! Just make sure that when you register your application, you set the callback URL to the one that runs store_credentials(), in this case /fitbit/store_credentials.
LXC (Linux Containers) is a relatively new technology available on Linux. LXC is similar to virtualization (VMware, KVM, Parallels…), but it is much closer in concept to BSD “jails”. There are some advantages to using LXC over virtualization:
- No overhead. LXC is just a container that isolates users, processes, and files; it does not emulate a processor, network cards, sound hardware, and so on. The end result is essentially no overhead at all.
- Instant-on, instant-off, instant setup. Starting a container takes less than a second, as does shutting one down. Once you download the initial OS image (see below), setting up new containers takes seconds. No installation procedures to go through!
- Extremely easy to set up, use, and expand on. On Ubuntu 12.04 (and later), installation is one command. Setting up your first container is also one command. Starting that container: also one command. No installers, no setting up users, no restarting your machine, no kernel modules, no downloading ISOs.
Docker uses LXC under the hood. I haven’t used Docker much, but it’s becoming really popular.
I use LXC all the time for development work: whenever I need a clean Ubuntu installation to run tests on (great for making sure your setup process actually works!), to try things out (different databases, ideas), or to install things I know I won’t need for long (as soon as I’m done with school, TeXLive is going to be removed with one command!).
Here’s how to get started on Ubuntu 12.04 and later.
sudo apt-get install lxc lxc-templates debootstrap
Setting up an Ubuntu Container
sudo lxc-create -t ubuntu -n my-first-container -- -r precise
You probably guessed – “my-first-container” is the name of the container, running Ubuntu Precise (12.04).
This command will download the latest Ubuntu 12.04 packages and install them. It will also cache the image for instant creation later (just use the same command for more containers). It will also set up simple NAT networking.
Using the Container
sudo lxc-start -n my-first-container
That’s all! You’ll be asked to log in. The Ubuntu container has a default username “ubuntu” and password “ubuntu”. To check the container’s IP address, run ip addr (or ifconfig) from inside the container.
Stopping the Container
sudo shutdown -h now
Maybe “halt” or “poweroff” work, but I’ve aliased this and it’s muscle-memory for me.
Deleting (Destroying) a Container
sudo lxc-destroy -n my-first-container
That’s all there is to it! The image for creation of future containers won’t be erased, but this container will be.
I spent a bit of time in my Introduction to Parallel Programming class a while ago working on a “cache-conscious” version of Radix sort. “Cache-conscious” algorithms are ones that take into account the size of the CPU’s cache.
The algorithm is an implementation of a paper presented in class – available here.
Radix sort works by partitioning the input into “buckets” based on the individual digits at each significant position.
The “cache-conscious” implementation of the algorithm performs a traditional radix sort if all of the numbers in the current bucket fit into the CPU’s cache – otherwise, the numbers are further partitioned.
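The project itself is in C, but the idea can be sketched in a few lines of Python. This is only an illustration of the technique, not the paper’s implementation: the cache size and element size below are assumed numbers, and the in-cache fallback is just Python’s built-in sort standing in for a traditional radix sort.

```python
CACHE_BYTES = 32 * 1024       # assumed L1 data cache size
THRESHOLD = CACHE_BYTES // 8  # max elements (8 bytes each) sorted "in cache"

def radix_sort(nums, shift=24):
    """Sort non-negative integers below 2**32, MSD-first.

    Partition on the most significant byte until a bucket is small
    enough to plausibly fit in cache, then finish it with a plain sort.
    """
    if len(nums) <= THRESHOLD or shift < 0:
        return sorted(nums)  # bucket fits in cache: stop partitioning
    buckets = [[] for _ in range(256)]
    for n in nums:
        buckets[(n >> shift) & 0xFF].append(n)
    out = []
    for b in buckets:  # bucket order preserves the overall ordering
        out.extend(radix_sort(b, shift - 8))
    return out
```

Large inputs get split byte by byte until each bucket is under the threshold; small inputs skip the partitioning entirely, which is exactly the “fit in cache, sort directly” behavior described above.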
The code is written in C, and compiles using GCC, Clang, or GCC with OpenMP. See the project here.