Think inside the box

4 Ops things that I am excited about today.

It used to be hard to actually run code in production. I needed to rent a rack server from a company, log into the machine, set it up, then periodically deploy new application versions. Deploying often meant overwriting the existing running application and taking the server offline. Long, cumbersome, and error prone. No wonder we did deploys once a month!

Here are the 4 things that completely changed the way I, a developer, view deploying code and running production servers.

1 - Docker

Docker images and containers are doing to software development what the intermodal shipping container did for the global shipping industry. Read the excellent The Box by Marc Levinson to learn the historical context and draw the parallels between the software and shipping industries.

(Image: shipping containers waiting to be transported)

With Docker, each piece of software can be placed into a standard "box", isolated from its environment, and "shipped" to the production server. There the code, still in its container, runs in a lightweight isolated environment, sharing the host's kernel rather than booting a full virtual machine. The same container can be run locally by the developer, removing the classic "it works on my machine" excuse.

Additional services an application might require, like a specific version of a database, are also containerized. Place the database into its own container and link the two containers together. Now you have a pair that just works together, unaffected by whatever else is installed or running on the host machine.
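As a minimal sketch of what that looks like (the image name my-app, the database version, and the port are hypothetical), you could build your application image, start a database in its own container, and link the two:

# build the application image from its Dockerfile (hypothetical name "my-app")
docker build -t my-app .
# start a database in its own container
docker run -d --name db postgres:9.5
# run the application container, linked to the database
docker run -d --name web --link db:db -p 8080:8080 my-app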

No wonder the Docker logo is a whale carrying a bunch of shipping containers!

2 - Kubernetes

Managing a few Docker containers on my machine is simple. But what about actually shipping them off to the "cloud" to be executed? What about creating "stacks" for testing and production? A "stack" encompasses all the services necessary to run an application container and whatever other containers it needs. Even a small application, once it reaches the cloud, might need certain common utilities: a load balancer, multiple "application worker" instances, a proxy, etc.

Well, all the major cloud-hosting providers (AWS, Google, Microsoft) allow you to run your Docker containers on their clouds. From your standpoint the offerings are almost identical; the difference is only in the specific additional APIs and services each provider offers. If your application, running inside a Docker container, does not use any provider-specific API, then all clouds look the same.

Which means you can shop around; moving from one cloud to another or running in several clouds simultaneously for redundancy has never been easier. A cloud provider cannot even raise prices easily, because you can just take your containers and leave.

But you still need an easy way to deploy multiple containers at once. This is where Kubernetes comes into play. It is a tool that simplifies setting up and working with an entire "stack" of Docker containers.

Do you want a new installation of your entire application at the push of a button (or better, a single command from CI)? With a few Kubernetes commands you can control tens of thousands of individual application containers at once!
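As a minimal sketch (the image name gleb/my-app and the ports are hypothetical), you could start a few replicas of an application, put a load balancer in front of them, and later scale up:

# run 3 replicas of the application image
kubectl run my-app --image=gleb/my-app --replicas=3 --port=8080
# expose the deployment to the world behind a load balancer
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080
# scale up when demand grows
kubectl scale deployment my-app --replicas=10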

Below is a great video showing a simple Node server running inside a single Docker container first, and then deployed into a cluster on Google cloud platform using Kubernetes.

Or if you prefer to read, here is an alternative tutorial.

3 - Dokku

I used to build apps to run on the Heroku platform. I liked it because:

  • it is simple - just push the code to the remote "heroku" Git server
  • it is powerful - Heroku has hundreds of additional services you can add to your application with a push of a button
  • it has a small free tier - which is great if you do not mind the application taking a while to wake up when you need it

The last point is important - having a Heroku server (a "dyno" as they call it) available online 24x7 is going to cost you. A lot. Like $50 per month PER APP a lot. Which is expensive!

Today, I am running tens of applications online 24x7 - and all together it costs me $10 per month. All thanks to a tiny open source project called Dokku. Dokku takes a plain Ubuntu machine, like one you rent from AWS or DigitalOcean, and turns it into your own private "Heroku". Then you can push code (just like with Heroku), run multiple applications on the same box (just like Heroku does under the hood), connect services (via plugins), etc, etc.
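The day-to-day workflow is a minimal sketch away (the app name my-app and the server address my-server.com are hypothetical):

# on the server, one time: create the application
dokku apps:create my-app
# on your laptop: add the Dokku remote and push, just like Heroku
git remote add dokku dokku@my-server.com:my-app
git push dokku master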

Under the hood Dokku uses Docker - the "D" in "Dokku" - to isolate each application. Thus any project that wraps itself into a Docker container can be executed there. Proxies, servers, databases - everything under the sun can be configured and kept running on a single server, costing you very little.

I have described how to set up a Dokku server with running applications from scratch in this blog post. Do not be intimidated by its length; the post goes into detail on many additional services you might not need right away.

4 - Zeit Now

The last tool I am excited about gives a slightly different advantage, yet again it relies on Docker containers running in the cloud. Zeit's Now hides the actual servers from you. If you have a Node.js project or a project with a valid Dockerfile, just call the now command from the terminal - and in a couple of seconds that application will be running in the cloud.

▲ ~/my-proj $ ls
package.json index.js lib static
▲ ~/my-proj $ now
> Ready! https://my-proj-hj1v2m.now.sh (copied to clipboard) [440ms]
> Upload [====================] 100% 5.7s
> Sync complete (1.38MB) [5702ms]
▲ ~/my-proj $

The now deployment has a very cool property - you get a fresh instance of your app running in the cloud every time you deploy! This allows you to test each deploy separately using the unique URL that appears during the deploy.

Once you confirm that a particular deploy URL like https://my-proj-hj1v2m.now.sh works, you can point your DNS at that URL to roll out your application. Pretty cool - no messing with the live application to do testing and upgrades - just get a brand new deployed instance, test it, and if it works, update the DNS and take down the old instance.
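If your domain is managed through Zeit, the alias command can do that switch for you. A minimal sketch, assuming a hypothetical domain my-proj.com:

# point the custom domain at the freshly tested deploy
now alias https://my-proj-hj1v2m.now.sh my-proj.com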

Bonus

There is another deploy model that I like that skips the entire server deployment and hosting step. For very lightweight code I can skip running a server continuously and instead provide a simple function that will be executed only when there is an HTTP request for it.

There are several choices (AWS Lambda, webtask.io), but I am interested in a new tool that just appeared: stdlib.com. You can namespace your functions and call them from anywhere - either via an HTTP request or using a Node.js wrapper package.

A typical lightweight function written on stdlib.com to add two numbers would look like this:

// function name (user/namespace/name) = gleb/math/add
module.exports = (params, callback) => {
  // params has keys: {args, flags, vflags, remoteAddress}
  let a = parseInt(params.args[0]) || 0;
  let b = parseInt(params.args[1]) || 0;
  return callback(null, `${a} + ${b} = ${a + b}`);
};

Calling this function via HTTP is just a POST request (shown here using the httpie command-line client):

http POST https://f.stdlib.com/gleb/math/add?args0=2&args1=3

Using the same function from Node would be just:

const stdlib = require('stdlib.com');
stdlib.f('gleb/math/add')(
  {
    args: [2, 3],
  },
  (err, response) => {
    // response should be "2 + 3 = 5"
  }
);

The best thing - your function can use any public NPM module, so there is almost nothing these functions cannot do - yet I do not have to allocate any servers, maintain them, or scale them up when demand grows.

Pretty cool, right?