Secure, Efficient Docker-in-Docker with Nestybox

Cesar Talledo
9 min read · Sep 14, 2019


Docker containers are great at running application micro-services. But can you run Docker itself inside a Docker container? And can you do so securely?

This article describes Docker-in-Docker, the use cases for it, pros & cons of existing solutions, and how Nestybox has developed a new solution that allows you to run Docker-in-Docker securely and efficiently, without using privileged containers.

Docker users (e.g., app developers, QA engineers, and DevOps) will find this article useful.

TL;DR

If you want to see how easy it is to deploy Docker-in-Docker securely using a Nestybox system container, check this screencast (best viewed on a big screen):

In the rest of the article, we explain what Docker-in-Docker is, when it’s useful, some current problems with it, and how Nestybox has developed a solution that solves these problems.

If you want a quick summary, go to the end of this article.

What is Docker-in-Docker?

Docker-in-Docker is just what it says: running Docker inside a Docker container. That is, the Docker instance inside the container can itself build and run containers.

Use Cases

So when would running Docker-in-Docker be useful? It turns out there are several valid scenarios.

Docker-in-Docker (DinD) in CI pipelines is the most common use case. It shows up whenever a Docker container is tasked with building or running Docker containers. For example, in a Jenkins pipeline, the Jenkins agent may be a Docker container tasked with building and running other Docker containers. This requires Docker-in-Docker.
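
To make this concrete, here is a rough sketch of what such a containerized CI job typically needs to do (the image name and test script are hypothetical):

$ docker build -t myapp:ci .
$ docker run --rm myapp:ci ./run-tests.sh

These commands only succeed if a Docker daemon is reachable from inside the CI agent container, which is precisely what Docker-in-Docker (or one of the alternatives described below) must provide.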

But CI is not the only use case. Another common one is software developers who want to play around with Docker containers in a sandbox environment, isolated from the host environment where they do their real work.

Yet another use case is a system admin on a shared host that wants to allow users on the host to deploy Docker containers. Currently, this requires giving users the equivalent of “root” privileges on the system (e.g., by adding users to the “docker” group), which is not acceptable from a security perspective. In this case, giving each user an isolated environment inside of which they can deploy their own Docker containers in total isolation from the rest of the host would be ideal.
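
To see why membership in the "docker" group is effectively root, note that any user who can talk to the Docker daemon can mount the host's root filesystem into a container and modify it at will. A minimal illustration (using the stock alpine image):

$ docker run --rm -it -v /:/host alpine chroot /host

This drops you into a root shell on the host's filesystem, from which you can read or modify anything.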

For all of the above, Docker-in-Docker is a great solution as it provides a lighter-weight, easier-to-use alternative to a virtual machine (VM).

DinD and DooD

Currently, there are two well-known options to run Docker inside a container:

  • Running the Docker daemon inside a container (DinD).
  • Running only the Docker CLI in a container, and connecting it to the Docker daemon on the host. This approach has been nicknamed Docker-out-of-Docker (DooD).

I’ll briefly describe each of these approaches and their respective benefits and drawbacks.

I will then describe how Nestybox offers a solution that overcomes the current shortcomings of both of these.

DinD

In the DinD approach, the Docker daemon runs inside a container and any containers it creates exist inside said container (i.e., inner containers are “nested” inside the outer container). The figure below illustrates this.

DinD has gotten a bad rap in the past, not because the use cases for it are invalid but rather due to technical problems in getting it to work. This blog article by Jérôme Petazzoni (until recently a developer at Docker) describes some of these problems and even recommends that Docker-in-Docker be avoided.

But things have improved since that blog was written (back in 2015). In fact, Docker (the company) officially supports DinD and maintains a DinD container image.

But there’s a catch, however: running Docker’s DinD image requires that the outer container be configured as a “privileged” container, as shown in the figure above.

Running a privileged container is risky at best. It's equivalent to giving the container root access to your machine (i.e., it has full privileges, access to all host devices, access to all kernel settings, etc.). For example, from within a privileged container, you can easily reboot the host (!) with:

$ echo 1 > /proc/sys/kernel/sysrq && echo b > /proc/sysrq-trigger

Because of this, running privileged containers should be avoided in general (for the same reason you wouldn’t log in as root in your host for your daily work). It’s a non-starter in systems where the workloads running inside the container are untrusted.

Another problem with this solution is that it leads to Docker "volume sprawl": each time a DinD container is created, Docker implicitly creates a volume on the host to store the inner Docker's images. When the container is destroyed, the volume remains, wasting storage on the host.
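
You can see this for yourself: the docker:dind image declares a volume for /var/lib/docker, and that anonymous volume survives the container unless you explicitly remove it. A quick sketch:

$ docker run --privileged --name dind-test -d docker:dind
$ docker rm -f dind-test
$ docker volume ls

The anonymous volume created for dind-test is still listed. Cleaning it up requires remembering to pass -v to docker rm, or running a periodic docker volume prune (which removes all unused volumes, so use it with care).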

There is plenty of pain out there with Docker’s DinD solution, especially in CI/CD use cases. The need for privileged containers is causing heartburn.

As explained later, however, Nestybox has now developed a solution that runs DinD efficiently and without privileged containers, and that also overcomes the volume sprawl problem with the inner Docker's image storage, among others.

DooD

Due to the problems with DinD, an alternative approach is commonly used. It’s called “Docker-out-of-Docker” (DooD).

In the DooD approach, only the Docker CLI runs in a container and connects to the Docker daemon on the host. The connection is done by mounting the host’s Docker daemon socket into the container that runs the Docker CLI. For example:

$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock docker

In this approach, containers created from within the Docker CLI container are actually sibling containers (spawned by the Docker daemon in the host). There is no Docker daemon inside a container and thus no container nesting. The figure below illustrates this.

This approach has some benefits but also important drawbacks.

One key benefit is that it bypasses the complexities of running the Docker daemon inside a container and does not require a privileged container.

It also avoids having multiple Docker image caches in the system (since there is only one Docker daemon on the host), which may be good if your system is constrained on storage space.

But it has important drawbacks too.

The main drawback is that it results in poor context isolation because the Docker CLI runs within a different context than the Docker daemon: the former runs within the container's context, the latter within the host's context. This leads to problems such as:

  • Permission problems: the user in the Docker CLI container may not have sufficient permissions to access the Docker daemon on the host via the socket. This is a common source of headaches, particularly in CI/CD scenarios such as Jenkins + Docker (see the sketch after this list).
  • Container naming collisions: if the container running the Docker CLI creates a container named some_cont, the creation will fail if some_cont already exists on the host. Avoiding such naming collisions may not always be trivial, depending on the use case.
  • Mount paths: if the container running the Docker CLI creates a container with a bind mount, the mount path must be relative to the host (otherwise, the Docker daemon on the host won't be able to perform the mount correctly).
  • Port mappings: if the container running the Docker CLI creates a container with a port mapping, the port mapping occurs at the host level, potentially colliding with other port mappings.
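
As an illustration of the permission problem (a rough sketch; the exact socket ownership and user IDs vary per host):

$ stat -c '%U:%G %a' /var/run/docker.sock
root:docker 660
$ docker run --rm -u 1000 -v /var/run/docker.sock:/var/run/docker.sock docker docker ps

The second command fails with a "permission denied" error on the Docker socket, because the non-root user inside the CLI container doesn't match the socket's ownership on the host. Common workarounds (running the inner CLI as root, or matching the host's "docker" group ID inside the container) all tie the container's configuration to the host's setup.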

This approach is also not a good idea if the containerized Docker is orchestrated by Kubernetes. In this case, any containers created by the containerized Docker CLI will not be encapsulated within the associated Kubernetes pod, and will thus be outside of Kubernetes’ visibility and control.

Finally, there are security concerns too: the container running the Docker CLI can manipulate any container running on the host. It can remove containers created by other entities on the host, or even create insecure privileged containers that put the host at risk.
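
For instance, nothing prevents the Docker CLI container from doing something like this (the container and image names here are hypothetical):

$ docker rm -f someone-elses-container
$ docker run --privileged -d some-image

Both commands act directly on the host's Docker daemon, well outside the CLI container's own scope.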

Depending on your use case and environment, these drawbacks may rule out this approach.

Solution: DinD with Nestybox System Containers

As described above, both the Docker DinD image and DooD approaches have some important drawbacks.

Nestybox offers an alternative solution that overcomes these drawbacks: run Docker-in-Docker using “system containers”. In other words, use Docker to deploy a system container, and run Docker inside the system container.

A Nestybox system container is a container designed to run system-level software (such as systemd and Docker) as well as applications. You deploy it with Docker, just like any other Docker container; you only need to point Docker to the Nestybox container runtime, "Sysbox", which you download and install on your machine. For example:

$ docker run --runtime=sysbox-runc -it my-dind-image

More info on system containers can be found in this Nestybox blog post.

Within a Nestybox system container, you can run Docker easily and securely, with total isolation between the Docker inside the system container and the Docker on the host. No need for insecure privileged containers anymore, as shown below:

The Sysbox container runtime takes care of setting up the system container such that Docker can run inside the container as if it were running on a physical host or VM (e.g., with a dedicated image cache, using its fast storage drivers, etc).
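
For example, assuming a system container image with Docker pre-installed (my-dind-image, as in the earlier example, and the prompt below are just placeholders), usage looks much like it would on a physical host:

$ docker run --runtime=sysbox-runc -it --rm my-dind-image
root@syscont:/# dockerd > /var/log/dockerd.log 2>&1 &
root@syscont:/# docker run -it --rm alpine echo "Hello from the inner Docker!"

The inner Docker pulls images into the system container's dedicated image cache, with no visibility into (or from) the Docker daemon on the host.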

This solution avoids the issues with DooD and enables use of DinD securely.

And it’s efficient: the Docker inside the system container uses it’s fast image storage driver and the volume sprawl problem described earlier is solved.

The screencast video at the beginning of this article shows the solution at work. There are written instructions in the Sysbox Quickstart Guide, as well as on the Nestybox blog.

The system container image that you deploy is fully configurable by you.

For example, you can choose to use Docker's official DinD image and deploy it per Docker's official instructions, except that you replace the "--privileged" flag with the "--runtime=sysbox-runc" flag in the "docker run" command:

$ docker run --runtime=sysbox-runc --name some-docker -d \
--network some-network --network-alias docker \
-e DOCKER_TLS_CERTDIR=/certs \
-v some-docker-certs-ca:/certs/ca \
-v some-docker-certs-client:/certs/client \
docker:dind
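
Once that container is running, a quick sanity check is to exec the inner Docker CLI (the docker:dind image ships with it) and confirm the inner daemon is healthy; for example, against the container named some-docker from the snippet above:

$ docker exec -it some-docker docker version
$ docker exec -it some-docker docker run --rm alpine echo "inner Docker is alive"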

Alternatively, you can create a system container image that works as a Docker sandbox, inside of which you can run Docker (both the CLI and the daemon) as well as any other programs you want (e.g., systemd, sshd, etc.). This Nestybox blog article has examples.
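
As a rough sketch of what such a sandbox image might look like (the base image and package below are illustrative assumptions; see the Nestybox examples for complete, tested images):

$ cat > Dockerfile <<'EOF'
FROM ubuntu:18.04
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y docker.io && \
    rm -rf /var/lib/apt/lists/*
CMD ["dockerd"]
EOF
$ docker build -t docker-sandbox .
$ docker run --runtime=sysbox-runc -d --name sandbox docker-sandbox
$ docker exec -it sandbox docker run --rm alpine echo "hello from the sandbox"

For brevity this only installs Docker; a fuller sandbox would typically add systemd, sshd, and whatever development tools you need.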

Fundamentally, this solution allows you to run one or more Docker instances on the same machine, securely and totally isolated from each other, thus enabling the use cases we mentioned earlier in this article. And without resorting to heavier VMs for the same purpose.

In a Nutshell

  • There are valid use cases for running Docker-in-Docker (DinD).
  • Docker’s officially supported DinD solution requires a privileged container. It’s not ideal. It may be fine in trusted scenarios, but it’s risky otherwise.
  • There is an alternative that consists of running only the Docker CLI in a container and connecting it to the Docker daemon on the host. It's nicknamed Docker-out-of-Docker (DooD). While it has some benefits, it also has several drawbacks that may rule out its use depending on your environment.
  • Nestybox system containers offer a new alternative. They support running Docker-in-Docker securely, without using privileged containers and with total isolation between the Docker in the system container and the Docker on the host. It’s very easy to use as shown above.

Nestybox is looking for early adopters to try our system containers. Download the software for free. Give it a shot; we think you'll find it very useful.


Originally published at https://blog.nestybox.com on September 14, 2019.
