Container Images Compliance – what we built at ManageIQ to remove a security pain point – part 2

Part 2 of 2

In part one of this blog post, we described a pain point in container-based environments. We introduced SCAP as a means of measuring compliance in computer systems, and ManageIQ as a means of automating cloud and container workflows.

Continue reading “Container Images Compliance – what we built at ManageIQ to remove a security pain point – part 2”


Join Red Hat Developers, a developer program for you to learn, share, and code faster – and get access to Red Hat software for your development.  The developer program and software are both free!

 


For more information about Red Hat OpenShift and other related topics, visit: OpenShift, OpenShift Online.

Container Images Compliance – what we built at ManageIQ to remove a security pain point – part 1

Part 1 of 2

“Docker is about running random crap from the Internet as root on your host”  – Dan Walsh

Continue reading “Container Images Compliance – what we built at ManageIQ to remove a security pain point – part 1”



Cockpit: Your entrypoint to the Containers Management World

Containers are one of the top trends today. Getting started with them can be hard, even if you have a good grasp of the theory behind them.

In this article I'll show you some useful tips and tricks for getting started in the container world, with the help of the great web interface provided by the Cockpit project.


Cockpit overview

Cockpit is an interactive server admin interface.  You'll find some of its great features below:

  • Cockpit comes “out of the box” ready for the admin to interact with the system immediately, without installing stuff, configuring access controls, making choices, etc.
  • Cockpit has (as near as makes no difference) zero memory and process footprint on the server when not in use. The job of a server is not to show a pretty UI to admins, but to serve stuff to others. Cockpit starts on demand via socket activation and exits when not in use.
  • Cockpit does not take over your server in such a way that you can then only perform further configuration in Cockpit.
  • Cockpit itself does not have a predefined template or state for the server that it then imposes on the server. It is imperative configuration rather than declarative configuration.
  • Cockpit dynamically updates itself to reflect the current state of the server, within a time frame of a few seconds.
  • Cockpit is firewall friendly: it opens one port for browser connections: by default that is 9090.
  • Cockpit can look different on different operating systems, because it's the UI for the OS, and not an external tool.
  • Cockpit is pluggable: it allows others to add additional UI pieces.

Continue reading “Cockpit: Your entrypoint to the Containers Management World”



Take advantage of your Red Hat Developers membership and download RHEL today at no cost.

Container Orchestration Specification for better DevOps

The world is moving to microservices, where applications are composed of many components orchestrated into a coordinated topology.

Microservices have become increasingly popular as they increase business agility and reduce the time for changes to be made. On top of this, containers make it easier for organizations to adopt microservices.

Increasingly, containers are the runtimes used for composition, and many excellent solutions have been developed to handle container orchestration: Kubernetes/OpenShift; Mesos and its many frameworks, like Marathon; and even Docker Compose, Swarm, and SwarmKit.

But at what cost?

We’ve all experienced that moment when we’ve been working long hours and think “yes, that feature is ready to ship”. We release it into our staging environment and bang, nothing works, and we don’t really know why. What if you could consistently take the same topology you ran in your development workspace, and run it in other, enterprise grade, environments such as your staging or production, and expect it to always JUST WORK?

Continue reading “Container Orchestration Specification for better DevOps”



Docker project: Can you have overlay2 speed and density with devicemapper? Yep.

It’s been a while since our last deep-dive into the Docker project graph driver performance.  Over two years, in fact!  In that time, Red Hat engineers have made major strides in improving container storage:

All of that, in the name of providing enterprise-class stability, security and supportability to our valued customers.

As discussed in our previous blog, there are a particular set of behaviors and attributes to take into account when choosing a graph driver.  Included in those are page cache sharing, POSIX compliance and SELinux support.

Reviewing the technical differences between a union filesystem and the devicemapper graph driver as they relate to performance, standards compliance, and density: a union filesystem such as overlay2 is fast because

  • It traverses less kernel and devicemapper code on container creation (devicemapper-backed containers get a unique kernel device allocated at startup).
  • Containers sharing the same base image start up faster because of the warm page cache.
  • For speed/density benefits, you trade POSIX compliance and SELinux (well, not for long!)

There was no single graph driver that could give you all these attributes at the same time — until now.

How we can make devicemapper as fast as overlay2

With the industry move towards microservices, 12-factor guidelines, and dense multi-tenant platforms, many folks both inside Red Hat and in the community have been discussing read-only containers.  In fact, there has been a --read-only option in both the Docker project and Kubernetes for a long time.  What this does is create a mount point as usual for the container, but mount it read-only rather than read-write.  Read-only containers are an important security improvement as well, since they reduce the container's attack surface.  More details on this can be found in a blog post from Dan Walsh last year.

When a container is launched in this mode, it can no longer write to locations it may expect to (i.e. /var/log) and may throw errors because of this.  As discussed in the Processes section of 12factor.net, re-architected applications should store stateful information (such as logs or web assets) in a stateful backing service.  Attaching a persistent volume that is read-write fulfills this design aspect:  the container can be restarted anywhere in the cluster, and its persistent volume can follow it.

In other words, for applications that are not completely stateless an ideal deployment would be to couple read-only containers with read-write persistent volumes.  This gets us to a place in the container world that the HPC (high performance/scientific computing) world has been at for decades:  thousands of diskless, read-only NFS-root booted nodes that mount their necessary applications and storage over the network at boot time.  No matter if a node dies…boot another.  No matter if a container dies…start another.
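As a concrete sketch of that pattern: the flags below are real docker options, but the image name `myapp` and the volume name `appdata` are made-up placeholders. Since the image is hypothetical, the script composes and prints the command rather than executing it:

```shell
#!/bin/sh
# Compose (not execute) a read-only container with a read-write volume.
# --read-only mounts the container rootfs read-only; --tmpfs gives it a
# scratch /run; the named volume "appdata" holds its mutable state.
cmd="docker run --read-only --tmpfs /run -v appdata:/var/lib/app myapp"
echo "$cmd"
```

If the container dies, `docker run` the same line on any node in the cluster and attach the same volume: the writable state follows the container, just like the NFS-root pattern described above.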

Continue reading “Docker project: Can you have overlay2 speed and density with devicemapper? Yep.”




Keep it small: a closer look at Docker image sizing

A recent blog post, 10 things to avoid in docker containers, describes ten scenarios you should avoid when dealing with docker containers. However, recommendation #3 – Don't create large images – and the sentence "Don't install unnecessary packages or run "updates" (yum update) that download files to a new image layer" have generated quite a few questions.  Some of you are wondering how a simple "yum update" can create a large image. To clarify the point, this post explains how docker images work and offers some solutions for keeping an image small while still keeping it up to date.
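To make the layering point concrete, here is an illustrative single-RUN pattern (my own sketch, not taken from the post): because each RUN instruction commits its own layer, the cleanup must happen in the same RUN as the update, or the downloaded cache still ships in an earlier layer. The sketch just writes the Dockerfile and inspects it, since actually building it would require a docker daemon:

```shell
#!/bin/sh
# Write an example Dockerfile where update and cleanup are chained in ONE
# RUN, so the yum cache removed at the end never lands in a committed layer.
dir=$(mktemp -d)
cat > "$dir/Dockerfile" <<'EOF'
FROM rhel7
RUN yum -y update && yum -y clean all
EOF
# One layer-creating RUN instruction, not two.
grep -c '^RUN' "$dir/Dockerfile"   # prints 1
```

Splitting the update and the `yum -y clean all` into two RUN lines would leave the cache files baked into the first layer, even though the second layer deletes them.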

Continue reading “Keep it small: a closer look at Docker image sizing”



Red Hat Software Collections 2.0 Docker images, Beta release

I’m very happy to announce that Docker images based on collections from Red Hat Software Collections (RHSCL) 2.0 are in beta testing.  The images are available from the Red Hat Container Registry, and we’ve got the set of collections for language, databases and web servers covered – a complete list is below.

If you’ve not tried out the Docker package from RHEL7 Extras, you need to enable the Extras channel, install the docker package, and start the docker service; an extended guide for RHEL Docker is available here.  Once you are set up, pulling the RHSCL Docker images is very simple… for example, you can fetch the Python 3.4 image as follows:

Continue reading “Red Hat Software Collections 2.0 Docker images, Beta release”



Containers in the enterprise – Are you ready for this?

So here’s the deal: We’ve created what we’re calling “PaaS-Containers” in our IT production environment. It consists of core technologies like RHEL Atomic Host, Kubernetes, and Docker along with supporting CI/CD components like Jenkins, together as part of an offering that supports end-to-end automated deployment of applications, from a code-commit event through automated testing and roll-out through multiple environments (dev, QA, stage, prod). Oh, did I mention that it’s also integrated with our enterprise logging and monitoring as well as our change management process and tooling, so that we have a complete audit trail?

Everyone wants to jump on the bandwagon – they see the benefits of rapid deployment, atomicity, and enabling business capabilities faster through technology. But as we learned in the 90-day initiative to get it stood up and an existing application deployed on it, not all applications are ready for containers, and some may never be, based on their current architecture.

Here’s what we think about the deployment options in an enterprise context that allow us to enable innovation while managing enterprise risk…

Continue reading “Containers in the enterprise – Are you ready for this?”



Imagine this – the life of an image

Imagine this: deploy an application from code-commit to QA, validate through automated testing, and then push the same image into production with no manual intervention, no outage, no configuration changes, and with full auditability through change records. A month-and-a-half ago, we formed a tiger team and gave them less than 90 days to do it. How? Build an end-to-end CI/CD environment leveraging RHEL Atomic 7.1 as the core platform, integrating with key technologies like git, Jenkins, and packer.io, in a hybrid deployment model and in accordance with our enterprise standards. Oh, and make sure we don’t care if we lose a couple of the nodes in the cluster when we’re running the application in production.

Disruptive technology that spawns disruptive business architecture. And it all starts with imagining the life of this thing called an image.

Continue reading “Imagine this – the life of an image”



Introducing the Atomic command


‘/usr/bin/atomic’

The atomic command defines the entrypoint for Project Atomic hosts.  On an Atomic Host, there are at least two distinct software delivery vehicles: Docker (often used in combination with the traditional RPM/yum/dnf), and rpm-ostree to provide atomic upgrades of the host system.

The goal of the atomic command is to provide a high level, coherent entrypoint to the system, and fill in gaps in Linux container implementations.

For Docker, atomic can make it easier to interact with special kinds of containers, such as super-privileged debugging tools and the like.

The atomic host subcommand wraps rpm-ostree, click to read more …

The atomic command is also available on non-Atomic platforms.  You can run it on standard RHEL, Fedora, and CentOS systems.  We would like to see it used on other platforms in the future.

Container Image as a software delivery mechanism

One of the most exciting things about Docker is that Docker images are a new way of delivering software applications to a host operating system.

But, in a lot of ways Docker falls short as a software delivery mechanism.

The atomic command is our effort to close the gaps.

Red Hat has been using RPM to ship software for over 18 years now.  We can look at some of its features to see the shortcomings.

One of the big features of RPM that Docker images are missing is a mechanism to launch the application at boot time.  When I install the httpd package using yum/rpm, it includes the systemd unit file used to launch it at boot time.

We would like to be able to ship one object that a user could easily install.  Currently, when you want to install software using Docker images, there are two different objects: you do a docker pull to install the image, then you either build a systemd unit file to run the container or execute a complex Docker or Kubernetes command to run the software as a service.  The bottom line is that the developer needs to create not only the Docker image but also published installation procedures.  Those procedures are easy to misconfigure, and we could end up with a support headache.

Software Installation

We need a way for a container to ‘install’ itself into the system so that it can be automatically restarted.

I introduced the concept of something I called a Super Privileged Container back in November.

Introducing a *Super* Privileged Container Concept

The idea here is to allow a container the ability to see and potentially manipulate the container host. One large use of this would be to allow a container to install itself.

Problem Statement

I am developing a new Apache PHP-based application that I would like to allow my customers to install.  I want to use a systemd unit file, and I would like to allow my customers to install multiple copies of my application that they can customize to run for different accounts.

The current method would be to write up a complicated installation procedure where each user cuts and pastes a systemd unit file and copies it into /etc/systemd/system/APP.service.  They would download content and set up volume mounts for log directories, data directories, and config directories, and perhaps even set up some default data.

The LABEL Patch

We worked with the upstream Docker community for over a year to get the LABEL patch into Docker; it finally got merged into Docker 1.6.  This patch allows a software developer to add additional JSON fields (LABEL) to a Docker image.  For example, I could add a LABEL field to my Dockerfile like the following.

LABEL INSTALL="docker run --rm --privileged -v /:/host -e HOST=/host -e LOGDIR=${LOGDIR} -e CONFDIR=${CONFDIR} -e DATADIR=${DATADIR} --name NAME -e NAME=NAME -e IMAGE=IMAGE IMAGE /bin/install.sh"

Note that this docker run command is actually a Super Privileged Container command, in that it is privileged and the host OS is mounted at /host within the container.

If an application developer added a LABEL INSTALL line like that to a container image named apache_php, users could examine the docker image for the install line and execute the command.  One problem is that the command has NAME and IMAGE embedded in it rather than the container name and the image name.

Better yet, the user can let the atomic command install the application for him:

atomic install apache_php

The atomic command will:

  • Pull the apache_php image from a registry if it is not currently installed.
  • Read the LABEL INSTALL line from the container image's JSON metadata.
  • Replace any IMAGE values it sees with the installed image name.  This means -e IMAGE=IMAGE IMAGE gets substituted with -e IMAGE=apache_php apache_php.
  • Allow you to specify a name for your container, defaulting to the image name if no container name is specified.  In this case, since the user did not specify a container name, atomic replaces NAME with apache_php (--name apache_php -e NAME=apache_php).
  • Generate three directory names and pass them in as environment variables: LOGDIR=/var/log/NAME, DATADIR=/var/lib/NAME, and CONFDIR=/etc/NAME, where NAME is substituted with the container name (apache_php).  These directories can be used by the container installation procedure to create initial content and can eventually be volume mounted from the host.

If a user wanted to install multiple copies, he could just execute:

atomic install -n customer1 apache_php
atomic install -n customer2 apache_php

And two containers would be installed and ready to run.

Notice that the LABEL INSTALL line in my example executes the /bin/install.sh script that we packaged into the container.  This allows the developer to embed his installation script in the container image.  Since we are running the container as an SPC, we volume mount / at /host within the container.

We would like /host to become the standard location for mounting / into a container.  We set the environment variable $HOST within the container to point at /host.  This allows the application developer to write scripts that install content relative to $HOST/PATH.  For example, install.sh might create a systemd unit file on the host in /etc/systemd/system/APP.service.  If it creates the file as ${HOST}/etc/systemd/system/APP.service, the same script works both in an SPC container, where HOST=/host, and directly on a test machine outside a container, where $HOST is unset.

The install.sh could also use the $CONFDIR, $LOGDIR and $DATADIR to setup additional content for the container.
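A minimal sketch of the $HOST-relative trick, using a temporary directory as a stand-in host root (the paths and unit name follow the example above):

```shell
#!/bin/sh
# Outside a container, $HOST is unset and paths hit / directly; inside an
# SPC it is /host.  Here we point it at a scratch dir to demonstrate safely.
HOST=$(mktemp -d)
NAME=apache_php

# The same lines install.sh would run; only $HOST differs per environment.
mkdir -p "${HOST}/etc/systemd/system"
echo '[Unit]' > "${HOST}/etc/systemd/system/httpd_${NAME}.service"

ls "${HOST}/etc/systemd/system"   # prints httpd_apache_php.service
```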

Here is my example

/bin/install.sh

#!/bin/sh
# Make Data Dirs
mkdir -p ${HOST}/${CONFDIR} ${HOST}/${LOGDIR}/httpd ${HOST}/${DATADIR}

# Copy Config
cp -pR /etc/httpd ${HOST}/${CONFDIR}

# Create Container
chroot ${HOST} /usr/bin/docker create -v /var/log/${NAME}/httpd:/var/log/httpd:Z -v /var/lib/${NAME}:/var/lib/httpd:Z --name ${NAME} ${IMAGE}

# Install systemd unit file for running container
sed -e "s/NAME/${NAME}/g" /etc/systemd/system/httpd_template.service > ${HOST}/etc/systemd/system/httpd_${NAME}.service

# Enable systemd unit file
chroot ${HOST} /usr/bin/systemctl enable /etc/systemd/system/httpd_${NAME}.service

Notice how the install script is creating directories on the host for the container.  Also it modifies the systemd httpd template file below into a systemd unit file and enables the service.

/etc/systemd/system/httpd_template.service

[Unit]
Description=The Apache HTTP Server for NAME
After=docker.service

[Service]
ExecStart=/usr/bin/docker start NAME
ExecStop=/usr/bin/docker stop NAME
ExecReload=/usr/bin/docker exec -t NAME /usr/sbin/httpd $OPTIONS -k graceful

[Install]
WantedBy=multi-user.target

When the installation is done, the service is ready to run, and it will run on reboot.
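The templating step install.sh performs can be tried standalone. In this sketch the template is written inline (abbreviated) instead of being shipped in the image:

```shell
#!/bin/sh
# Stamp a container name into the unit template, as install.sh's sed does.
dir=$(mktemp -d)
NAME=customer1

printf '%s\n' \
  '[Unit]' \
  'Description=The Apache HTTP Server for NAME' \
  '[Service]' \
  'ExecStart=/usr/bin/docker start NAME' \
  > "$dir/httpd_template.service"

sed -e "s/NAME/${NAME}/g" "$dir/httpd_template.service" \
  > "$dir/httpd_${NAME}.service"

grep ExecStart "$dir/httpd_${NAME}.service"
# prints: ExecStart=/usr/bin/docker start customer1
```

Every NAME in the template becomes customer1, so each installed copy gets its own httpd_customer1.service, httpd_customer2.service, and so on.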

Software Removal

The atomic command can also be used to uninstall software.  It will use the LABEL UNINSTALL option if available.

In our example we will use a LABEL like:

LABEL UNINSTALL="docker run --rm --privileged -v /:/host -e HOST=/host -e IMAGE=IMAGE -e NAME=NAME IMAGE /bin/uninstall.sh"

Then the user can execute the following uninstall command:

atomic uninstall apache_php

The atomic command will execute the /bin/uninstall.sh script:

/bin/uninstall.sh

#!/bin/sh
chroot ${HOST} /usr/bin/systemctl disable /etc/systemd/system/httpd_${NAME}.service
rm -f ${HOST}/etc/systemd/system/httpd_${NAME}.service

Notice that the script disables the service and then removes the unit file.

Finally, atomic uninstall will attempt to docker rm the container, using the specified name or defaulting to the image name.

If the container name is the same as the image name, atomic uninstall will also docker rmi the container image.
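That removal logic can be sketched as follows (compose-only: the docker commands are built as strings rather than executed, since this is an illustration of the decision, not of the atomic source):

```shell
#!/bin/sh
# atomic uninstall, sketched: remove the container; if the container name
# matches the image name, also remove the image.
image=apache_php
name=${1:-$image}     # default to the image name when no name is given

actions="docker rm $name"
if [ "$name" = "$image" ]; then
  actions="$actions && docker rmi $image"
fi
echo "$actions"
```

Run with no argument it prints both the rm and the rmi; run with a custom container name such as customer1, only the container removal is printed and the shared image survives.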

How do I run the application?

Problem Statement

  • My application is nicely rolled into a container image.
  • My application runs mostly confined but needs additional privileges.
    — How do I tell the user to run it?

Let's look at an example.

The FreeIPA team has been experimenting with running the IPA daemons as separate containers.  One daemon they currently use is ntpd.  The ntpd container needs to run with --cap-add SYS_TIME in order to adjust the host's system time.  The ntpd container developer has to tell users to run the container with the following command:

docker run -d --name ntpd --cap-add SYS_TIME ntpd

The atomic command supports another label LABEL RUN, which the application developer can use to define how his application can be run.

FROM rhel7
RUN yum -y install ntpd; yum -y clean all
LABEL RUN="docker run -d --name NAME --cap-add SYS_TIME IMAGE"
CMD /usr/bin/ntpd

Now if the user examined the Docker image, he would know exactly how to run the container: he could inspect the installed image and then cut and paste the RUN line.  We automate this process with the atomic run command.  A user only needs to execute the following:

atomic run ntpd

This will do a docker pull of the ntpd container image onto your host and then execute the LABEL RUN command, if it exists.  If the label does not exist, the command defaults to:

docker create -ti --name ntpd ntpd

  • This gives us the ability to define how a specific container expects to be run. Specifically this includes the privilege level required, as well as special mounts and host access, etc.
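The fallback logic can be sketched like this, where run_label stands in for what atomic would read from the image metadata (empty meaning the label is absent):

```shell
#!/bin/sh
# atomic run, sketched: use the image's LABEL RUN if present, otherwise
# fall back to a default create command.
image=ntpd
run_label=""    # pretend this image carries no RUN label

if [ -n "$run_label" ]; then
  # Substitute placeholders, as with LABEL INSTALL.
  cmd=$(printf '%s' "$run_label" | sed -e "s/NAME/$image/g" -e "s/IMAGE/$image/g")
else
  cmd="docker create -ti --name $image $image"
fi
echo "$cmd"   # prints: docker create -ti --name ntpd ntpd
```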

Other features of the atomic command

One of the key features of rpm is its ability to list information about a package:

# rpm -qi docker
Name        : docker
Version     : 1.5.0
Release     : 25.git5ebfacd.fc23
Architecture: x86_64
Install Date: Thu 26 Mar 2015 03:05:47 PM EDT
Group       : Unspecified
Size        : 21735169
License     : ASL 2.0
Signature   : (none)
Source RPM  : docker-1.5.0-25.git5ebfacd.fc23.src.rpm
Build Date  : Thu 26 Mar 2015 01:01:50 AM EDT
Build Host  : buildhw-05.phx2.fedoraproject.org
Relocations : (not relocatable)
Packager    : Fedora Project
Vendor      : Fedora Project
URL         : http://www.docker.com
Summary     : Automates deployment of containerized applications
Description :
Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere.

Docker containers can encapsulate any payload, and will run consistently on and between virtually any server. The same container that a developer builds and tests on a laptop will run at scale, in production*, on VMs, bare-metal servers, OpenStack clusters, public instances, or combinations of the above.

We want to be able to include data similar to this in the container image.  We again take advantage of the LABEL patch and add data like the following to the Dockerfile:

LABEL Name=apache_php
LABEL Version=1.0
LABEL Vendor="Red Hat" License=GPLv3
LABEL Description="The Apache PHP application is an example of using the atomic command to install a service onto a machine."

atomic info apache_php
Name         : apache_php
Version      : 1.0
Vendor       : Red Hat
License      : GPLv3
INSTALL      : docker run --rm --privileged -v /:/host -e HOST=/host -e LOGDIR= -e CONFDIR= -e DATADIR= -e IMAGE=IMAGE -e NAME=NAME IMAGE /bin/install.sh
UNINSTALL    : docker run --rm --privileged -v /:/host -e HOST=/host -e IMAGE=IMAGE -e NAME=NAME IMAGE /bin/uninstall.sh
