Moving from Docker to rkt

Adriaan de Jonge
10 min read · Jul 11, 2016

Even the coolest products and services come with vendor lock-in. And no matter how enthusiastic I have been about Docker in the last three years, at some point this vendor lock-in starts to hurt. The good news is that the competition is well on its way to becoming a viable alternative. Perhaps even a better alternative in some regards. This article takes a look at CoreOS’s rkt (pronounced: “rock-it”) and explains why you should start investigating rkt now.

Update (20 October 2017): Please take note of all the updates throughout this article. A number of flaws have been resolved over the last year. I chose to include updates rather than changing the original text.

What is wrong with Docker?

If you are anything like me, you may still be so enthusiastic about Docker that you’re blind to some of its flaws. Don’t get me wrong, I am not saying Docker is bad. But it is not perfect either. So let’s take a look at some of these flaws.

Docker starts behaving like The Old Microsoft

Once companies realize they have a monopoly, they start behaving like monopolists. Just like Microsoft once practically killed Netscape by including Internet Explorer with Windows, now Docker is trying to defeat Kubernetes, Mesos/Marathon and Nomad by including Swarm into the Docker Core.

Docker deserves credit for simplifying Swarm and making it easily accessible to the average Docker user. Just as much as Microsoft deserves credit for the improvements they introduced in Internet Explorer 6.0 (true story!). The biggest issue with MSIE 6 was that it made Microsoft so dominant that they stopped innovating for five years.

Please realize the consequence of including Swarm in the core of Docker… Imagine your next assignment for a client that wants to move to Docker: When it comes to selecting the best scheduler, you first have to argue why Swarm is insufficient, before you can even start talking about Kubernetes, Mesos/Marathon or Nomad. Swarm is already there anyway so why would you need to install another scheduler on top of it? Guilty until proven innocent — a reversed burden of proof.

I hope Docker will do a better job improving Swarm than Microsoft did with MSIE. But even now, Kubernetes has a head start on Swarm in many ways. And as Kubernetes keeps developing, Swarm is falling further behind. It is good Docker created an easy to use scheduler for Docker but in my opinion, they should have kept it separate from the Docker core.

Update (4 September 2017): To be fair, it must be noted that Docker appears to have taken this feedback seriously. They have separated the core of Docker into the Moby Project (see also this introduction blog) which can be re-packaged/re-assembled by the community in any way they see fit.

Update (20 October 2017): Moreover, Docker now also packages Kubernetes with their product. See this blog for more detail.

“Docker’s architecture is fundamentally flawed”

At the heart of Docker is a daemon process that is the starting point of everything Docker does. The docker executable is merely a REST client that requests the Docker daemon to do its work. Critics of Docker say this is not very Linux-like.

Where it starts hurting is if you use an init system like systemd. Since systemd was not designed for Docker specifically, when you try to start a Docker process, you actually start a Docker client process that in turn requests the Docker daemon to start the actual Docker container. There is a risk that the Docker client fails while the actual Docker container keeps running. In such a situation, systemd concludes that the process has stopped and restarts the Docker client — in turn possibly creating a second container (yes, you can work around this but that is beside the point).
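To make the failure mode concrete, here is a sketch of a naive unit file (the service and container names are illustrative). systemd supervises the `docker run` client process, while the container itself is a child of the Docker daemon, outside this unit’s control:

```ini
# myapp.service — naive Docker unit (illustrative sketch)
[Unit]
Description=My app via the Docker client
After=docker.service
Requires=docker.service

[Service]
# systemd watches this client process only; the actual container
# is spawned by the Docker daemon in a separate process tree.
ExecStart=/usr/bin/docker run --rm --name myapp nginx
ExecStop=/usr/bin/docker stop myapp
Restart=always
```

If the client process dies while the container survives, `Restart=always` respawns the client, and the new `docker run --name myapp` either collides with the still-running container’s name or, without `--name`, silently starts a duplicate.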

About a year ago, I was investigating systemd as a simple Docker scheduler. I ran into the same risks of combining Docker with systemd. Back then, I considered these as downsides of systemd rather than downsides of Docker. In my view back then, systemd was not designed to be “Docker-native”.

Now, I have learned that perhaps Docker is the issue rather than systemd. As mentioned, Docker is not very Linux-like, and because of this, generic Linux tools do not play nice with Docker. Who is at fault, relative newcomer Docker or the Linux tools that have been around for years? Some say that “Docker’s architecture is fundamentally flawed” — a statement by Alex Polvi, CEO of CoreOS.

The Docker build process is stuck in second gear

One of the nice features of Docker was the introduction of Dockerfiles that you can maintain in version control to reproduce Docker images. The only issue here is that the Dockerfile syntax has been frozen in Docker’s roadmap for a long time now. This means that the Dockerfile format has not evolved with the insights in the Docker community for at least a year and a half.

Of course, at some point we need a stable format. But only when the format has fully matured. Just to give you an example where Dockerfiles are lacking, consider Kelsey Hightower’s statement in his blog 12 Fractured Apps:

“Remember, ship artifacts not build environments.”

The fact is that the Dockerfile format does not support this separation very well. Consider for example building a Golang executable: the build environment is several hundred megabytes, while the resulting executable may be only a couple of megabytes. Sure, you can work around this by building the image on your local system. But you cannot elegantly build a small Golang image from a single Dockerfile using an Automated Build on Docker Hub. Issues proposing to extend the Dockerfile format to better support this principle have been on hold for years because of the freeze.

Update (11 May 2017): The above is FIXED with Multi-Stage Builds. See my new blog “Simplify the Smallest Possible Docker Image”.
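With multi-stage builds, the ship-artifacts principle fits in a single Dockerfile: compile in a full Golang image, then copy only the binary into a minimal final stage. A sketch (the image tag and paths are illustrative):

```dockerfile
# Build stage: several hundred MB of Go toolchain.
FROM golang:1.8 AS build
WORKDIR /go/src/app
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: only the few-megabyte static binary ships.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The resulting image contains nothing but the executable, so the shipped artifact and the build environment stay cleanly separated.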

How does rkt improve the situation?

The short answer is that rkt now provides a viable alternative to Docker. It has a more Linux-like architecture. And a strong competitor will keep the monopolist sharp.

The long answer:

Late 2014, CoreOS announced Rocket — later abbreviated and renamed to rkt — as a competing container platform. While rkt got a lot of attention in the first weeks after the announcement, things then went quiet for a while. CoreOS continued developing rkt into a viable alternative to Docker and only came back into the spotlight with the release of rkt 1.0 in February 2016.

Now that Docker announced its inclusion of Swarm into Docker Engine 1.12, it is time to start looking seriously at rkt as an alternative to Docker. Can it replace Docker now or in future? Is it difficult to switch between Docker and rkt?

Let’s take a look at rkt and find out…

rkt can run Docker images

Suppose you want to replace your staging and production systems with rkt while keeping all your development systems as they are… In this case, you can replace Docker with rkt on your runtime systems only. It is not precisely a drop-in replacement. After all, the architecture is different — but we’ll get to that. On the other hand, it is not too difficult to learn the new command line instructions for running a Docker container on rkt.

Open a CoreOS instance on your favorite cloud provider and type:

rkt run --insecure-options=image --port=80-tcp:80 docker://nginx

or replace nginx with your favorite Docker image. Under the hood, rkt converts the Docker image to Application Container (appc) format.

This means that you don’t need Docker to run Docker images! And it means that you can reuse anything you created with Docker without the least bit of migration pain — other than learning rkt’s command line syntax.

Let’s take a look at the individual parts of this command:

--insecure-options=image

If you leave this part out, rkt will refuse to start your image because it cannot find a signature (.asc file) to check the integrity of the Docker image. rkt is built secure by default; if you supply properly signed images, you can drop this flag.
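For signed appc images, the verification workflow is a one-time trust step followed by ordinary runs. A sketch (the image name is illustrative; note that Docker Hub images carry no .asc signatures, which is why the insecure flag is needed for them):

```shell
# Trust the publisher's signing key for images under this prefix
# (rkt prompts you to confirm the key fingerprint).
rkt trust --prefix=coreos.com/etcd

# Subsequent fetches and runs verify the .asc signature
# automatically — no --insecure-options needed.
rkt run coreos.com/etcd:v2.3.7
```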

--port=80-tcp:80

Just like in Docker, you need to explicitly expose a port to the outside world. Unlike Docker, rkt ports are named rather than numbered. If this were a native rkt image (or appc image to be precise) it would have probably read:

--port=http:80

However, since Docker does not have a concept of named ports, the exposed ports are automatically named as <number>-<protocol>. This also means that you can only expose ports that have been explicitly specified using the EXPOSE keyword in the Dockerfile.
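For example, a minimal Dockerfile that exposes port 80 yields the automatic port name 80-tcp when the image runs under rkt:

```dockerfile
FROM nginx
# Only ports declared with EXPOSE can be published via rkt's
# --port flag; this one gets the auto-generated name "80-tcp".
EXPOSE 80
```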

docker://nginx

The image is expected in the default Docker repository (Docker Hub). Instead of this example, there can be longer URLs pointing to either alternative Docker repositories (docker://…) or to generic websites containing appc images (https://…) but then you are not running Docker images anymore.

rkt has a simpler architecture

In rkt, there is no daemon process. The rkt command line tool does all the work. See the below picture borrowed from the CoreOS website:

Process model for rkt vs Docker

This means that, unlike with Docker, if you use systemd to start rkt containers, you are actually monitoring the container process itself, rather than a client process that connects to a daemon that in turn (directly or indirectly, depending on your Docker version) starts the container.

On the flip side of this, you cannot type

rkt run -d    ...etc

and daemonize the process like the Docker client. Instead, you’d have to run an init system to daemonize the process. For example, you can run this:

systemd-run --slice=machine rkt run   ...etc

Also, you cannot run the rkt command against a remote machine the way you can with the Docker client. On the plus side, you can consider this a security feature as well.

rkt follows an open standard for images

Right now, rkt allows you to use Application Container (appc) or Docker images. In the near future, the Open Container Initiative (OCI) format will be added to this, but we’ll come to that.

The advantage of having an open standard for container images is that it allows the open source community to provide multiple ways of building images. So you are not tied to the Dockerfile format only.

The default way to build an appc image is using a command line tool called acbuild. Honestly, it is a matter of taste whether you’d prefer acbuild over the Dockerfile format. The advantage is that it stays closer to Linux principles by nature.
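As a taste of acbuild’s imperative, shell-friendly style, here is a sketch that assembles a small web-server ACI (the image name, dependency, and paths are illustrative):

```shell
# Each acbuild command mutates a build context on disk,
# so an image build is just an ordinary shell script.
acbuild begin
acbuild set-name example.com/mywebserver
acbuild dependency add quay.io/coreos/alpine-sh
acbuild copy ./site /usr/share/www
acbuild port add http tcp 80          # a named port, unlike Docker
acbuild set-exec -- /bin/httpd -f -p 80 -h /usr/share/www
acbuild write mywebserver-0.0.1-linux-amd64.aci
acbuild end
```

Because it is plain shell, you can interleave acbuild calls with any other tooling — loops, conditionals, make targets — instead of being limited to a fixed file format.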

The bigger advantage is that the open format allows for alternative build mechanisms. For example, consider the all-round build tool dgr or the Golang specific build tool goaci.

Once the OCI format becomes available, there will be even more possibilities, but at the time of writing this article, we’ll have to wait for it.

rkt: Are we there yet?

If you want to start using rkt for real right now, there are still a few bumps in the road. Although you can navigate around these, it must be said that at the time of writing this article it may still be early to go all-in with rkt. But that should be a temporary problem.

The OCI image format is not ready yet

As mentioned, rkt supports the Docker image format and can interact with Docker repositories. If you are okay to stick with the Docker image format, it’ll work fine.

However, if you are a bit of a purist — like me — you may not appreciate the fact that you cannot use named ports yet. So you’ll want to use appc containers. And you can use appc containers.

But how do you upload your appc container to the CoreOS alternative to Docker Hub — quay.io? Personally I haven’t found a way to do so…

The nice thing about the appc format is that you are not tied to a particular repository as you are with Docker. Instead, you can host the image on a regular HTTP server or on the local file system, with the metadata provided through meta tags in plain HTML files.
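Image discovery in appc works through such meta tags: when you fetch an image by name, rkt retrieves the page at that name over HTTPS and reads an ac-discovery tag mapping the name to a download template. A sketch with illustrative names:

```html
<!-- Served at https://example.com/myapp -->
<meta name="ac-discovery"
      content="example.com/myapp https://example.com/images/{name}-{version}-{os}-{arch}.{ext}">
<meta name="ac-discovery-pubkeys"
      content="example.com/myapp https://example.com/pubkeys.gpg">
```

Any static web server can therefore act as an image repository; no dedicated registry software is required.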

Still, rkt can only really take off once the OCI image format becomes sufficiently mature and well-adopted to be accepted as standard by everyone — including Docker.

Nomad & K8S support not fully mature

In order to run containers in production, at some point you need a scheduler to control what runs where. Both Kubernetes and Nomad have support for rkt already. The catch for now is that the support is not yet as mature as you might hope.

The Kubernetes support for rkt is labeled under active development. It has minimal documentation and a list of known issues. The Nomad support is labeled experimental and does not support dynamic ports.

Update (4 September 2017): If you want to run Kubernetes without Docker, you may also like to learn about the cri-o project. This project is part of an official Kubernetes incubator and based on runc. At the time of writing, it appears to have more traction than rktnetes.

A bit less portable to other platforms

The good news is that rkt is not only for CoreOS. You can install rkt on multiple well-known Linux distributions, including Debian, Ubuntu and Fedora.

If you want to run rkt on your Apple/Windows development machine, you can of course do so on top of a virtualization layer like VirtualBox. However, the facilities are not yet as nice as Docker’s Toolbox, let alone the latest beta versions of Docker for Mac/Windows, which give you a native-like feel. Admittedly, this may be a slight downside of rkt’s architecture, where the rkt executable does all the work rather than being a thin REST client.

If you want to do your development on non-Linux machines, the best way is still to work with Docker images locally and convert these to appc images when you go to staging and production.

Conclusion

Although it is still early, rkt has now become a viable alternative to Docker. If you don’t need all the dynamic features of Kubernetes and Nomad, and more static options like systemd and fleet sufficiently meet your scheduling requirements, then you can already move your staging and production servers to rkt right now.

Give it a little more time and there will be true interoperability between Docker and other container platforms in the form of OCI images. At the same time, allow Kubernetes and Nomad support for rkt to evolve a bit and then the Docker and rkt container platforms will be as good as interchangeable.

Is there a strong need to abandon Docker as fast as we can? No, not really. Despite the flaws mentioned in this article, Docker is still an innovative platform with a large ecosystem of development tools, schedulers and orchestrators. The important thing is to have more than one option to choose from. And now Docker has a serious competitor to keep them sharp!
