This is a highly opinionated guide to using containers. My goal is to walk you through what seems to be the best way to deploy network applications for personal projects using containers. The only problem is that I don't have a clue how to do any of that myself, but since the best way to learn something really well is to teach it to someone else, that's exactly what I'm going to do. I'll be breaking this down into a series of fairly small posts so that I can hopefully start and finish each one in the same day, or at least in the same weekend.
My motivation is to end up with a way to deploy a couple of side projects I'm working on, hosting them cheaply while keeping it fairly easy to scale them up if any of them actually gain traction.
The apps I'll be deploying will likely all be written in Rust, since I've fallen in love with the language and that's what most of my projects use. However, 95% of what's in this series should apply to deploying applications written in any language; that's one of the biggest benefits of deploying apps using containers.
A container is a lightweight alternative to a traditional virtual machine. Like VMs, containers let you run several seemingly distinct instances of an operating system on a single host, but in a much more lightweight fashion. Containers lack some of the benefits of traditional VMs, mostly in regard to security and the level of isolation they can provide between different containers/VMs running on the same host.
Those downsides, however, are often acceptable given the many upsides containers offer, especially when used to deploy several services that are part of the same system. Containers take far fewer resources to run than a full VM: they use the host's kernel directly, meaning you're not wasting CPU time or space in RAM running multiple kernels or low-level system processes. Containers also let you separate multiple applications, including their operating-system-level dependencies, from each other and from the host operating system. This allows you to run a standard operating system across all the hosts you manage, or even an OS managed and automatically updated by someone else, without needing to worry about what system dependencies are installed, because when you define a container image you define all the dependencies that need to be included in it.
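To make that concrete, here's a sketch of what "defining all the dependencies in the image" looks like in a Dockerfile. The package names, binary path, and app name are all made up for illustration:

```dockerfile
# Start from a small base OS image.
FROM debian:stable-slim

# OS-level dependencies live inside the image, not on the host.
# (ca-certificates here is just an example package.)
RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates && \
    rm -rf /var/lib/apt/lists/*

# Copy in a (hypothetical) pre-built application binary.
COPY target/release/my-app /usr/local/bin/my-app

# The command to run when the container starts.
CMD ["my-app"]
```

The host never needs any of these packages installed; every host only needs something capable of running the image.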
I'll admit I don't yet understand how those system dependencies then get updated. I assume you need to update the actual image since updating those dependencies without testing them sounds like a recipe for downtime. But, having the rest of the OS automatically update still seems like a big win to me.
Okay, so how do we container?
"Docker is the world's leading software containerization platform", as Docker's website immediately informs you. Docker isn't really a single tool, but rather a whole suite of interrelated tools that you can pick and choose between depending on your needs. As far as I can tell, when people talk about "docker" they are primarily referring to "Docker Engine". Docker Engine is the piece of software that lets you build Docker images and then run them, setting up the proper connections for mounting parts of your file system and hooking into the host's networking, presumably among other things.
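As a rough sketch of what that looks like in practice (the image name and host paths here are made up, and these commands assume a running Docker daemon):

```shell
# Build an image from the Dockerfile in the current directory,
# tagging it "my-app".
docker build -t my-app .

# Run the image: mount a host directory into the container and
# map host port 8080 to port 80 inside the container.
docker run \
  --volume /srv/my-app/data:/data \
  --publish 8080:80 \
  my-app
```

The mount and port-mapping flags are the "proper connections" mentioned above: the container sees only what you explicitly hand it.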
Although Docker is definitely the biggest name in containers right now, it's not the only game in town.
CoreOS is another amalgamation of tools for working with containers. (Confusingly, CoreOS isn't itself an OS, though it does make one, called "Container Linux".) CoreOS's tool for directly running containers is rkt (pronounced "rocket"). From what I can tell (based almost solely on CoreOS's marketing and docs), rkt is a lighter-weight alternative to Docker (i.e. Docker Engine): it takes advantage of more of what the OS already does rather than trying to take care of everything itself. Unlike Docker, rkt isn't a daemon that runs the containers; it's simply a program that sets them up, allowing the existing init system to then manage the running container.
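That "let the init system manage it" point is easier to see with an example. On a systemd-based host you'd write an ordinary unit file, something like this sketch (the unit name and image are placeholders):

```ini
# /etc/systemd/system/my-app.service (hypothetical unit file)
# rkt just sets the container up; systemd supervises it like any
# other process, so there's no long-running container daemon.
[Unit]
Description=my-app container

[Service]
# --insecure-options=image skips image signature verification,
# which plain Docker-format images need.
ExecStart=/usr/bin/rkt run --insecure-options=image docker://nginx
KillMode=mixed
Restart=always

[Install]
WantedBy=multi-user.target
```

Then `systemctl start my-app` runs the container, and systemd handles restarts, logging, and shutdown like it would for any service.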
Now, I'm going to make some, hopefully reasonable, but probably mostly arbitrary choices about which technologies we're going to use.
First, we're going to run [Container Linux] as the base OS. I'll be deploying on Digital Ocean since they offer it as an off-the-shelf image (which, interestingly, they just call CoreOS) and have low, straightforward pricing.
I'm using version 1185.5.0 (stable).
I'm going to run container images using rkt instead of Docker. rkt can run its own native image format as well as automatically converting Docker-format images, and it's already working on native support for the new OCI image format, so we'll have flexibility down the line. I've also seen some comments about rkt being more reliable and stable than Docker; I have no idea if that's true, but I'm apparently listening to them anyway.
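The Docker-format conversion is transparent in practice: you just use a `docker://` image reference. A quick sketch (these commands need rkt installed and root privileges):

```shell
# Fetch a Docker-format image from a Docker registry; rkt converts
# it to its native format as it downloads. Signature verification
# must be skipped because Docker images aren't signed in a way rkt
# can check.
sudo rkt fetch --insecure-options=image docker://alpine

# Or fetch and run in one step.
sudo rkt run --insecure-options=image docker://alpine
```

So choosing rkt doesn't lock us out of the Docker image ecosystem at all.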
I'm using version 1.14.0.
Even though I'm using rkt, I'm going to use Docker images. This is mostly because that lets me use GitLab's free private container registry. Oh, by the way...
Everything is going into GitLab.com so I can use the free CI/CD and the container registry. After all, this is about doing things as cheaply as possible for projects that I'm hoping to make money off of, so I don't want them sitting on a public registry. (And as far as I can tell I do actually need one.)
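For reference, pushing an image to GitLab's registry looks roughly like this. The project path `my-user/my-app` is made up; images are namespaced under your GitLab project's path, and these commands assume a Docker daemon and a GitLab.com account:

```shell
# Log in to GitLab's registry with your GitLab credentials.
docker login registry.gitlab.com

# Tag the image with the registry hostname and project path,
# then push it.
docker build -t registry.gitlab.com/my-user/my-app .
docker push registry.gitlab.com/my-user/my-app
```

We'll actually do this (and wire it into GitLab CI) in a later post.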
That's as much as I know so far. In the next post we'll set up a trivial web app for testing and get it pushed into GitLab.