A System Administrator’s Guide to Containers

Anybody who works in the IT industry will have come across the word “container” at some point. After all, it’s one of the most overused terms around right now, and it means different things to different people depending on the context. Standard Linux containers are nothing more than regular processes running on a Linux system, isolated from other groups of processes by Linux security constraints, resource limits (cgroups), and namespaces.

Identifying the Right Processes

When you boot a current Linux system and inspect any process with cat /proc/PID/cgroup, you will see that the process runs inside a cgroup. A closer look at /proc/PID/status reveals its capabilities. Checking /proc/self/attr/current shows its SELinux label, and /proc/PID/ns lists the namespaces the process is currently in.
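These per-process attributes can be inspected directly from a shell. A quick sketch using the standard Linux /proc interfaces (output varies by distribution and kernel configuration):

```shell
#!/bin/sh
# Inspect the "container" attributes of the current shell process.

# cgroup membership: every process belongs to one or more cgroups
cat /proc/self/cgroup

# Capabilities: the capability sets appear in the status file
grep Cap /proc/self/status

# Namespaces: each entry is a namespace this process is in
ls -l /proc/self/ns

# SELinux label (only meaningful on SELinux-enabled systems)
cat /proc/self/attr/current 2>/dev/null || echo "no SELinux label"
```

Run this on any modern Linux box and every process, containerized or not, will show a cgroup, capability sets, and namespace membership.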

Thus, if a container is defined as a process with resource constraints, namespaces, and Linux security constraints, it can be argued that every process on a Linux system is in a container. This is precisely why it is often said that containers are Linux, and Linux is containers.

Container Components

The term “container runtime” refers to a tool that sets up the resource limits, namespaces, and security constraints and then launches the container. The idea of a “container image” was introduced by Docker, and refers to a plain TAR file consisting of two parts:

  • JSON file: This determines how the rootfs must be run, including the entrypoint or command required to run in rootfs once the container starts, the container’s working directory, environment variables to establish for that particular container, along with a couple of other settings.
  • Rootfs: This is the container root filesystem which serves as a directory on the system resembling the regular root (/) of the OS.
What happens is, Docker tars up the rootfs together with the JSON file to form the base image. A user can then install additional content into the rootfs, create a new JSON file, and tar up only the difference between the original image and the new one along with the updated JSON file. The result is a layered image.
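The base-plus-layer idea can be sketched by hand with plain tar. The file names and JSON fields below are illustrative only, not the exact Docker or OCI schema:

```shell
#!/bin/sh
set -e

# Base layer: a rootfs plus a JSON file describing how to run it
mkdir -p image/rootfs/bin
echo '#!/bin/sh' > image/rootfs/bin/hello
chmod +x image/rootfs/bin/hello
cat > image/config.json <<'EOF'
{
  "Cmd": ["/bin/hello"],
  "WorkingDir": "/",
  "Env": ["PATH=/bin"]
}
EOF
tar -C image -cf base-layer.tar rootfs config.json

# Second layer: only the files added on top of the base
mkdir -p layer2/rootfs/etc
echo "extra config" > layer2/rootfs/etc/app.conf
tar -C layer2 -cf layer2.tar rootfs

# Each tarball is one layer of the resulting layered image
tar -tf base-layer.tar
tar -tf layer2.tar
```

The second tarball contains only the delta, which is what keeps layered images small.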

Building Blocks of Containers

The tools commonly used for creating container images are known as container image builders. In some cases container engines perform this task, but several standalone tools are also available for creating container images. Docker took these container images (tarballs) and moved them to a web service from which they could later be pulled; it developed a protocol for pulling them and called the web service a container registry.

A “container engine” is a program that can pull container images from container registries and reassemble them onto container storage. On top of that, container engines are also responsible for launching container runtimes.

Container storage is usually a copy-on-write (COW) layered filesystem. When a container image is pulled down from the container registry, its rootfs must first be untarred and placed on disk. If the image consists of multiple layers, each layer is downloaded and stored on a separate layer of the COW filesystem. Because each layer is stored separately, layers can be shared between images. Container engines tend to support multiple kinds of container storage, such as overlay, devicemapper, btrfs, aufs, and zfs.
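The copy-on-write layering can be sketched with overlayfs. Mounting requires root (CAP_SYS_ADMIN), so this sketch only lays out the directories and prints the mount command rather than executing it; all directory names are illustrative:

```shell
#!/bin/sh
set -e
# Each image layer becomes a read-only "lower" directory;
# the container's writable layer is the "upper" directory.
mkdir -p layers/base layers/app upper work merged
echo "from base image" > layers/base/os-release
echo "from app layer"  > layers/app/app.conf

# The actual mount would need root; here we just print it.
# "merged" would present the union of all layers plus any writes.
echo mount -t overlay overlay \
  -o lowerdir=layers/app:layers/base,upperdir=upper,workdir=work merged
```

Writes into the merged view land only in `upper`, which is why many containers can share the same read-only image layers on disk.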

Once the container engine has finished downloading the container image into container storage, it must create a container runtime configuration. This configuration combines input from the user or caller with the content of the container image specification. The layout of the container runtime configuration, as well as the exploded rootfs, are standardized by the OCI standards body.
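The runtime configuration is a JSON file (config.json in the OCI Runtime Specification). The fields below are a small illustrative subset written out by hand, not a complete or validated spec file:

```shell
#!/bin/sh
set -e
# Write a minimal, illustrative subset of an OCI-style config.json.
cat > config.json <<'EOF'
{
  "ociVersion": "1.0.2",
  "process": {
    "args": ["/bin/sh"],
    "cwd": "/",
    "env": ["PATH=/usr/bin:/bin"]
  },
  "root": { "path": "rootfs" },
  "linux": {
    "namespaces": [
      { "type": "pid" },
      { "type": "mount" },
      { "type": "network" }
    ]
  }
}
EOF
grep -q ociVersion config.json && echo "config written"
```

The engine merges user input (command, environment, mounts) with the image's JSON defaults to produce this file, then hands it to the runtime.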

The container engine then launches a container runtime, which reads the container runtime specification and modifies the Linux cgroups, namespaces, and security constraints accordingly. Afterward, the container command is launched to become PID 1 of the container. At this point, the container engine can relay stdin/stdout back to the caller and control the container.
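The “container command becomes PID 1” behavior can be observed with util-linux unshare when unprivileged user namespaces are permitted. A sketch, with a fallback because many hardened systems disallow this:

```shell
#!/bin/sh
# -U new user namespace (no root needed), -r map current user to root,
# -p new PID namespace, -f fork before exec,
# --mount-proc remount /proc so the new namespace is visible.
# Inside the fresh PID namespace, the first process sees itself as PID 1.
unshare -Urpf --mount-proc sh -c 'echo "my pid: $$"' 2>/dev/null \
  || echo "unshare not permitted here"
```

When it succeeds, the inner shell reports PID 1 even though the host sees it as an ordinary numbered process.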

Please keep in mind that several new container runtimes have been introduced that use different parts of Linux to isolate containers. Some can run containers with KVM separation, or apply other hypervisor strategies. Because a standard runtime specification exists, all of these tools can be launched by the same container engine. Even Windows can use the OCI Runtime Specification to launch Windows containers.

Container orchestrators sit at a higher level. These tools coordinate the execution of containers across multiple nodes. They interact with the container engines to manage containers: telling the engines to start containers and wire their networks together, monitoring the containers, and starting additional ones as load grows.

Benefits of Containers

Containers provide numerous benefits that enable DevOps workflows, including:

  • Simpler updates
  • A simple solution for consistent development, testing and production environments
  • Support for numerous frameworks

When a user writes, tests, and deploys an application within containers, the environment stays the same across every part of the delivery chain. Collaboration between separate teams becomes easier, since they all work in the same containerized environment.

Continuous delivery requires application updates to roll out on a constant, streamlined schedule. Containers make this possible because applying updates becomes easier: when an app is split into numerous microservices, each one is hosted in a different container, so one part of the app can be updated by restarting its container while the rest remains uninterrupted.

When practicing DevOps, it helps to have the agility to switch conveniently between deployment platforms or programming frameworks. Containers provide this agility because they are comparatively agnostic toward deployment platforms and programming languages: nearly any kind of app can run inside a container, irrespective of the language it is written in. What’s more, containers can be moved easily between different kinds of host systems.

Concluding Remarks

There are plenty of reasons why containers simplify DevOps. Once system administrators understand the basic nature of containers, they can apply that knowledge when planning a migration within their organization.