A Practical Introduction to Container Terminology

Containers might seem like a pretty straightforward concept, so why do you need to read about container terminology? In my work as a container technology evangelist, I’ve encountered misuse of container terminology that causes people to stumble on the road to mastering containers. Terms like containers and images are used interchangeably, but there are important conceptual differences. In the world of containers, repository has a different meaning than you might expect. Additionally, the landscape for container technologies is larger than just docker. Without a good handle on the terminology, it can be difficult to grasp the key differences between docker and (pick your favorite) CRI-O, rkt, or lxc/lxd, or to understand what the Open Container Initiative is doing to standardize container technology.

Background

It is deceptively simple to get started with Linux Containers. It takes only a few minutes to install a container engine like docker and run your first commands. Within another few minutes, you are building your first container image and sharing it. Next, you begin the familiar process of architecting a production-like container environment, and have the epiphany that it’s necessary to understand a lot of terminology and technology behind the scenes. Worse, many of the following terms are used interchangeably… often causing quite a bit of confusion for newcomers.

  • Container
  • Image
  • Container Image
  • Image Layer
  • Registry
  • Repository
  • Tag
  • Base Image
  • Platform Image
  • Layer

Understanding the terminology laid out in this technical dictionary will give you a deeper understanding of the underlying technologies. It will help you and your teams speak the same language and provide insight into how to better architect your container environment for your goals. As an industry and wider community, this deeper understanding will enable us to build new architectures and solutions. Note that this technical dictionary assumes the reader already knows how to run containers. If you need a primer, try starting with A Practical Introduction to Docker Containers on the Red Hat Developer Blog.

Continue reading “A Practical Introduction to Container Terminology”


Cockpit: Your entrypoint to the Containers Management World

Containers are one of the top trends today. Getting started working or playing with them can be hard, even if you have a good grasp of the theory behind them.

In this article, I’ll show you some useful tips and tricks for getting started in the container world, with the help of the great web interface provided by the Cockpit project.


Cockpit overview

Cockpit is an interactive server admin interface.  You’ll find some of its great features below:

  • Cockpit comes “out of the box” ready for the admin to interact with the system immediately, without installing stuff, configuring access controls, making choices, etc.
  • Cockpit has (as near as makes no difference) zero memory and process footprint on the server when not in use. The job of a server is not to show a pretty UI to admins, but to serve stuff to others. Cockpit starts on demand via socket activation and exits when not in use.
  • Cockpit does not take over your server in such a way that you can then only perform further configuration in Cockpit.
  • Cockpit itself does not have a predefined template or state for the server that it then imposes on the server. It is imperative configuration rather than declarative configuration.
  • Cockpit dynamically updates itself to reflect the current state of the server, within a time frame of a few seconds.
  • Cockpit is firewall friendly: it opens a single port for browser connections, 9090 by default (see the example after this list for enabling it).
  • Cockpit can look different on different operating systems, because it’s the UI for the OS, and not an external tool.
  • Cockpit is pluggable: it allows others to add additional UI pieces.
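If you want to try it, here is a minimal sketch of how Cockpit is typically enabled on a Fedora/RHEL-style system; the package, socket, and firewalld service names below are the common ones, but treat the exact commands as an assumption and adjust them for your distribution:

    # Install the web console (it may already be present on your system)
    sudo yum install cockpit

    # Socket activation: the service only starts when a browser connects
    sudo systemctl enable cockpit.socket
    sudo systemctl start cockpit.socket

    # Open the single port Cockpit uses (9090 by default)
    sudo firewall-cmd --permanent --add-service=cockpit
    sudo firewall-cmd --reload
    # If your firewalld build does not know the "cockpit" service, open 9090/tcp instead

Then point a browser at https://your-server:9090 and log in with a normal system account.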

Continue reading “Cockpit: Your entrypoint to the Containers Management World”


Docker project: Can you have overlay2 speed and density with devicemapper? Yep.

It’s been a while since our last deep-dive into the Docker project graph driver performance.  Over two years, in fact!  In that time, Red Hat engineers have made major strides in improving container storage.

All of that, in the name of providing enterprise-class stability, security and supportability to our valued customers.

As discussed in our previous blog, there are a particular set of behaviors and attributes to take into account when choosing a graph driver.  Included in those are page cache sharing, POSIX compliance and SELinux support.

Reviewing the technical differences between a union filesystem and the devicemapper graph driver as they relate to performance, standards compliance, and density, a union filesystem such as overlay2 is fast because:

  • It traverses less kernel and devicemapper code on container creation (devicemapper-backed containers get a unique kernel device allocated at startup).
  • Containers sharing the same base image start up faster because of a warm page cache.
  • For speed/density benefits, you trade POSIX compliance and SELinux (well, not for long!)

There was no single graph driver that could give you all these attributes at the same time — until now.
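As a quick aside, if you’re not sure which graph driver your daemon is currently using, docker info will tell you; a minimal check (the output shown is just an example) looks like this:

    # Show which storage (graph) driver the daemon is using
    docker info | grep -i 'storage driver'
    #   Storage Driver: overlay2    (or devicemapper, etc.)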

How we can make devicemapper as fast as overlay2

With the industry move towards microservices, 12-factor guidelines and dense multi-tenant platforms, many folks both inside Red Hat as well as in the community have been discussing read-only containers.  In fact, there has been a --read-only option in both the Docker project and Kubernetes for a long time.  What this does is create a mount point as usual for the container, but mount it read-only as opposed to read-write.  Read-only containers are also an important security improvement, as they reduce the container’s attack surface.  More details on this can be found in a blog post from Dan Walsh last year.

When a container is launched in this mode, it can no longer write to locations it may expect to (e.g. /var/log) and may throw errors because of this.  As discussed in the Processes section of 12factor.net, re-architected applications should store stateful information (such as logs or web assets) in a stateful backing service.  Attaching a persistent volume that is read-write fulfills this design aspect:  the container can be restarted anywhere in the cluster, and its persistent volume can follow it.

In other words, for applications that are not completely stateless, an ideal deployment would be to couple read-only containers with read-write persistent volumes.  This gets us to a place in the container world that the HPC (high performance/scientific computing) world has been at for decades:  thousands of diskless, read-only, NFS-root-booted nodes that mount their necessary applications and storage over the network at boot time.  If a node dies…boot another.  If a container dies…start another.
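To make that concrete, here is a minimal sketch of coupling a read-only container with a read-write volume using plain docker; the image name, volume name, and paths are placeholders, not a recommended layout:

    # Root filesystem is mounted read-only, so stray writes fail fast.
    # A named volume provides the one read-write location the app needs,
    # and a tmpfs covers scratch space like /run.
    docker run --read-only \
        --tmpfs /run \
        -v app-logs:/var/log \
        registry.example.com/myapp:latest

In Kubernetes, the equivalent knobs are readOnlyRootFilesystem in the pod’s securityContext plus a persistent volume claim for the writable data.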

Continue reading “Docker project: Can you have overlay2 speed and density with devicemapper? Yep.”


Introducing atomic scan – Container vulnerability detection

In the world of containers, there is a desperate need to be able to scan container images for known vulnerabilities and configuration problems, and as we proliferate containers and bundled applications into the enterprise, many groups and companies have started to build container scanning tools. Even Red Hat has been building a scanning tool based on the tried and true OpenSCAP project, but there were several problems we saw time and again.

The problems with existing container scanning tools included:

  • Either these tools had to be installed on the host, or, if they were installed via a container, the container had to be highly privileged; the containers themselves could therefore use that authority on the host machine.
  • They were hard-coded to support only a single container framework: docker.
  • We saw some very slow implementations, where the container scanning tools were actually copying all of the content out of the container image, then scanning the content, and finally removing the content.  You can imagine how inefficient this is for hundreds or thousands of images or containers.

We decided we wanted to build a tool, atomic scan, that understood the underlying architecture and could download and run containers with scanning tools in them. But we also didn’t want the scanning tools to be privileged; the atomic tool would instead mount a read-only rootfs from the host’s file system. These mounted file systems could then be passed to the scanning container, along with a writable directory for the scanner to place its output.
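In practice, invoking the scanner is intentionally simple; a minimal sketch looks like the following (the image name is just an example, and the exact options available depend on your version of the atomic CLI, so check atomic scan --help):

    # Scan a single image with the default OpenSCAP-based scanner
    sudo atomic scan registry.access.redhat.com/rhel7

    # The scanner itself runs as an unprivileged container over a
    # read-only mount of the image's root filesystem, and writes its
    # results to an output directory that is summarized on stdout.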

Continue reading “Introducing atomic scan – Container vulnerability detection”


A Practical Introduction to Docker Container Terminology

February 2018 – A completely revised and updated version of this article has been published.  See A Practical Introduction to Container Terminology. The update includes coverage of container technologies beyond docker, such as CRI-O, rkt, and lxc/lxd, as well as information on the Open Container Initiative (OCI).

January 13th, 2016

Background

When discussing an architecture for containerization, it’s important to have a solid grasp on the related vocabulary. One of the challenges people have is that many of the following terms are used interchangeably… often causing quite a bit of confusion for newcomers.

  • Container
  • Image
  • Container Image
  • Image Layer
  • Index
  • Registry
  • Repository
  • Tag
  • Base Image
  • Platform Image
  • Layer

The goal of this article is to clarify these terms, so that we can speak the same language and develop solutions and architectures leveraging the value of containers. Note that I am going to assume that you know how to run basic docker commands, but if you need a primer, I recommend starting with: A Practical Introduction to Docker Containers.

Continue reading “A Practical Introduction to Docker Container Terminology”


Imagine this – the life of an image

Imagine this: deploy an application from code commit to QA, validate it through automated testing, and then push the same image into production with no manual intervention, no outage, no configuration changes, and with full auditability through change records. A month and a half ago, we formed a tiger team and gave them less than 90 days to do it. How? Build an end-to-end CI/CD environment leveraging RHEL Atomic 7.1 as the core platform, integrating with key technologies like git, Jenkins, and packer.io, in a hybrid deployment model and in accordance with our enterprise standards. Oh, and make sure it doesn’t matter if we lose a couple of the nodes in the cluster when we’re running the application in production.

Disruptive technology that spawns disruptive business architecture. And it all starts with imagining the life of this thing called an image.

Continue reading “Imagine this – the life of an image”


Introducing the "rhel-tools" for RHEL Atomic Host

The rise of the purpose-built Linux distribution

Recently, several purpose-built distributions have been created specifically to run Linux containers.  There seem to be more popping up every day.  For our part, in April 2014 at the Red Hat Summit, Red Hat announced its intention to deliver a purpose-built, container-optimized version of Red Hat Enterprise Linux 7 called RHEL Atomic Host.  After over a year in the making, we are excited that launch day has finally come!

What’s important to know about Red Hat Enterprise Linux Atomic Host, you ask?  Well, plenty…but for the sake of this blog, I’ll stick to areas I know as a performance engineer:

  • RHEL Atomic leverages years of engineering effort that went into RHEL7.
  • It uses the same exact kernel as RHEL7.
  • Significantly reduced on-disk and in-memory footprint.
  • Utilizes OSTree technology for upgrades and rollbacks.
  • Optimized device-mapper container storage performance out of the box.
  • Optimized container scalability out of the box.
  • Includes a purpose-built rhel-tools container for system administration tasks (see the sketch after this list).
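For a feel of the day-to-day workflow on Atomic Host, here is a minimal sketch of the OSTree-style update commands and of pulling in the admin tooling; the rhel-tools image name is the one Red Hat publishes, but treat the exact invocations as illustrative:

    # Atomic (OSTree-based) updates move the whole host image forward...
    sudo atomic host upgrade

    # ...and can be rolled back just as easily if something misbehaves
    sudo atomic host rollback

    # Debug and admin tools live in a container rather than on the host
    sudo atomic run rhel7/rhel-tools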

Continue reading “Introducing the "rhel-tools" for RHEL Atomic Host”


Announcement: RHEL Atomic Host now generally available

Red Hat “has announced the general availability of Red Hat Enterprise Linux 7 Atomic Host, an operating system optimized for running the next generation of applications with Linux containers.”

“Based on RHEL, Red Hat Enterprise Linux Atomic Host enables enterprises to embrace a container-based architecture to take advantage of the benefits of development and deployment flexibility and simplified maintenance, without sacrificing performance, stability, security, or the value of Red Hat’s vast certified ecosystem.

“For building and maintaining container infrastructure, Red Hat Enterprise Linux 7 Atomic Host provides many benefits, including:

Continue reading “Announcement: RHEL Atomic Host now generally available”
