Imagine this: deploy an application from code commit to QA, validate it through automated testing, and then push the same image into production with no manual intervention, no outage, no configuration changes, and full auditability through change records. A month and a half ago, we formed a tiger team and gave them less than 90 days to do it. How? Build an end-to-end CI/CD environment leveraging RHEL Atomic 7.1 as the core platform, integrating with key technologies like Git, Jenkins, and packer.io, in a hybrid deployment model and in accordance with our enterprise standards. Oh, and make sure we don’t care if we lose a couple of nodes in the cluster when we’re running the application in production.
Disruptive technology that spawns disruptive business architecture. And it all starts with imagining the life of this thing called an image.
Why the image? Of course there’s the image I get from the Docker registry (call it the “base” certified image); then there’s a layer on top of that where support hooks are added for the security and operations teams (call it the “blessed” image that I can use in the enterprise); finally, there’s the application code layered on top that gets deployed (the “augmented” image). Three layers, three images, for an application in an enterprise environment, ensuring supportability and security.
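Layered builds like this can be sketched as a pair of Dockerfiles. This is only a sketch: the registry, image names, packages, and agent paths below are hypothetical, not our actual images.

```dockerfile
# "blessed" image: start from the certified base and add enterprise hooks
# (image names, package choices, and agent paths are illustrative)
FROM registry.example.com/rhel7-base:7.1

# security and operations support hooks
RUN yum install -y audit logrotate && yum clean all
COPY ops-agent.conf /etc/ops-agent/ops-agent.conf
```

```dockerfile
# "augmented" image: layer the application on top of the blessed image
FROM registry.example.com/rhel7-blessed:7.1
COPY target/app.war /deployments/app.war
```

Each `FROM` line is where one lifecycle hands off to the next: patch the base, and both downstream builds must rerun.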
That’s fine for the containers, but what about the hosts themselves? Patch a host with zero outage. Lose a host and don’t blink an eye. The platform: RHEL Atomic built on self-healing, auto-scaling IaaS. This, of course, means host images on our IaaS platform.
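Host images can be baked with packer.io. A minimal template sketch, assuming an AWS-style IaaS; the source AMI ID, region, and names are placeholders, not real values:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-PLACEHOLDER",
    "instance_type": "t2.medium",
    "ssh_username": "cloud-user",
    "ami_name": "rhel-atomic-7.1-host-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo atomic host upgrade"]
  }]
}
```

The output is an immutable host image: rather than patching running hosts in place, we bake a new image and let the auto-scaling IaaS replace hosts with it.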
At this point your head is either spinning or maybe you’re just having visions of Matryoshka dolls.
But take half a step back and imagine what happens as we update or patch these various images, requiring rebuilds and redeployments. RHEL Atomic uses OSTree for updates rather than a typical yum update. So we’ll need to rebuild those images each time we patch and redeploy, to avoid any outage and to keep a consistent environment with traceability. When there’s a patch to a certified image, say for Red Hat JBoss EAP 6, we pull a new base image, bless it, kick off a new build of the augmented image, and then deploy. At each of these steps we’ll want to validate what’s been built and record that the actions have occurred. Build, test, deploy; build, test, deploy; over and over again.
In product lifecycle management there is the beginning of life, middle of life, and end of life. With the proliferation of containers and image-based deployments, these concepts are particularly germane once again. Certain actions need to be performed at different points in the lifecycle, from birth to death.
How did we imagine this? Take a look at the following model, which illustrates all of the functions and processes that matter in how we choose to support images in our environment. This is the Image Lifecycle (with the standard Beginning of Life, Middle of Life, and End of Life). Now we know the features and functions we need to implement irrespective of image type, which helps us focus on the specific implementation details for each image.
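One way to capture such a model in code: the phase names below come from product lifecycle management as used above, while the actions attached to each phase are purely illustrative examples, not an exhaustive set.

```python
# Image Lifecycle: phases and the actions attached to them.
# The actions listed are illustrative, not exhaustive.
IMAGE_LIFECYCLE = {
    "beginning_of_life": ["certify base", "bless with support hooks",
                          "register in the image registry"],
    "middle_of_life":    ["patch", "rebuild", "validate", "redeploy"],
    "end_of_life":       ["deprecate", "block new deployments",
                          "retire and archive"],
}

def actions_for(phase):
    """Look up the actions required at a given lifecycle phase."""
    return IMAGE_LIFECYCLE[phase]
```

The point of such a table is that it applies irrespective of image type; only the implementation of each action differs between base, blessed, augmented, and host images.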
Why care? In enterprise environments there are security and operational standards, traceability, and auditability requirements that certain applications must meet. Understanding how and where images intersect with these constraints and requirements helps us leverage disruptive technology faster and drive business change.