Convergence, Immutability, and Image-based Deployments

 

January 22, 2014
Jay Clark
Related topics:
DevOps
Related products:
Developer Tools

    As our industry continues to adopt lean methodologies in an effort to improve the flow of product deliverables, it's important that the products developed with these patterns remain reliable. From an application infrastructure perspective, the Ops side of DevOps, this means we must keep improving resiliency, predictability, and consistency while streamlining our development workflows to allow for failing fast and failing often.

    When faced with a critical incident, it's dissatisfying to find that the root cause was an environment delta that only affected a subset of your infrastructure. You begin asking questions like, "Why aren't all our nodes configured with the same parameters? Why aren't we running the same package versions on all of our nodes? Why is the staging environment different from production?"

    Around many IT shops, deploying gold-master machine images would seemingly solve many of the problems behind those questions. By using 'blessed' machine images across your environments, you ensure that the application stack is consistent across each of your node clusters, and because those stacks have been tested in the lower-level environments, you gain confidence in how the systems will perform under production load. As an added bonus, image-based deployments start application node builds 'further down the road', rather than beginning stack configuration on nodes that have nothing more than a base OS install.

    Let's merge this idea with the concept of configuration management. Inside (and outside) the DevOps paradigm, many shops build 'infrastructure as code', leveraging configuration management tools like Puppet, Chef, or Salt. While the use of such tools parallels image-based deployments, configuration management is meant to maintain a desired node state; when applying configuration management code to base OS installs, you arrive at the desired state by building up the node's packages and configuration parameters until they meet the configuration requirements. This takes time, though eventually you end up with an application infrastructure that is standardized across your environments.
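
    To make that 'desired state' idea concrete, here is a toy sketch in plain Python (deliberately not any tool's DSL; the file path and setting are hypothetical) of what convergence means for a single resource: the system is changed only when its current state differs from the declared state, so re-running against an already-correct node is a no-op.

        # Toy illustration of desired-state convergence; not how Puppet, Chef,
        # or Salt are implemented. The path and content below are hypothetical.
        from pathlib import Path

        def converge_file(path: str, desired_content: str) -> bool:
            """Bring a file to its desired state; return True if a change was made."""
            target = Path(path)
            current = target.read_text() if target.exists() else None
            if current == desired_content:
                return False           # already at the desired state: a no-op run
            target.write_text(desired_content)
            return True                # the run changed system state

        if __name__ == "__main__":
            changed = converge_file("/tmp/app.conf", "listen_port = 8080\n")
            print("changed" if changed else "already converged")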

    To speed up the rollout of application nodes, you can combine these two concepts by applying configuration management code to machine instances that already have 'most' of the application stack installed and configured. This way, there is less Puppet code (for example) to apply to a node when prepping it for deployment into your infrastructure.

    A marriage made in Eden, right? Well, let's consider this scenario:

    Your team has deployed nodes from machine images into all of the required environments. The state of these nodes is kept consistent with configuration management. Deployments are faster than they were previously because your images are now built with most of the elements required to deploy your applications. You've conducted test-driven deployments, so the number of nodes you need at various points in your application infrastructure is well established, and the infrastructure holds up well under production load.

    Next, new features are implemented in your application code that require changes to the infrastructure. No problem - the instances you've deployed run $CONFIG_MGMT_TOOL, so it's simple to account for the changes the application requires. Changes are tested and pushed through the environments, and therefore through the nodes that exist in them. This happens on a regular basis, and your nodes accept these changes without errors. After months of operating with nodes that have lived in production the whole time, the moment comes when you need to spin up a new instance. Your images are now months old, and your deployment pipeline has been designed to simply modify the existing nodes in place. How confident are you that new nodes can spin up without encountering issues, and how quickly will those nodes be placed into your environment and become useful inside your application's infrastructure? This is the quandary of convergence.

    In a PuppetCamp London 2013 presentation entitled "Taking Control of Chaos with Docker and Puppet", Tomas Doran of Yelp makes an argument that supports convergence and immutability. Doran argues that "you're doing it wrong, unless you converge in exactly one puppet run...you should never keep a machine... (and) unless you regularly rebuild, you don't know that you can rebuild..." For me, this presentation summarizes the case for building immutable instances in your continuous deployment work stream. By continuously rebuilding nodes from scratch, you prove that your configuration management system is capable of building clean instances every time, and if you are dynamically growing and shrinking your infrastructure, you remain confident that new nodes will deploy and immediately begin adding value. Comedic divergence - that's +10 points for using a derivative of the buzzword 'value-add', if you're keeping score :-).
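
    As a rough illustration of that rebuild-and-verify habit, here is a hedged sketch (the manifest name and the freshly built node are assumptions) that applies a Puppet manifest twice on a clean instance. With --detailed-exitcodes, an exit code of 2 means the run succeeded and made changes, while 0 means it succeeded with nothing to change, so a first run of 2 followed by a second run of 0 is evidence that the node converges in one run and that the code is idempotent.

        # Sketch: prove a fresh node converges in one Puppet run and is idempotent.
        # Assumes Puppet is installed on the node and 'site.pp' is a hypothetical
        # manifest; in practice this would run inside your image/CI pipeline.
        import subprocess
        import sys

        def puppet_apply(manifest: str) -> int:
            """Run 'puppet apply' and return its --detailed-exitcodes status."""
            result = subprocess.run(
                ["puppet", "apply", "--detailed-exitcodes", manifest],
                check=False,
            )
            return result.returncode

        first = puppet_apply("site.pp")   # expect 2: changes applied to a clean node
        second = puppet_apply("site.pp")  # expect 0: nothing left to change
        if first in (0, 2) and second == 0:
            print("node converged in one run and the manifest is idempotent")
        else:
            sys.exit(f"convergence check failed (exit codes: {first}, {second})")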

    Managing machine images (and the instances spawned from them) is a topic worthy of an article of its own. The question being addressed here is not whether machine images are useful inside your architecture; it is whether immutable node instances can be the deployable artifacts delivered to production, and how expensive taking that approach will be.

    The new maxim being heard around DevOps shops is, "Containerization is the new virtualization." VMs are currently our industry standard, but we now know that VMs come 'batteries included', with a heavy burden attached. Deploying VMs carries the fixed cost of managing an entire OS, even when the service(s) running on that VM are minimal. Ah ha! Now it makes sense. Containers are lightweight, they provide an appropriate level of isolation, and the convergence problem is minimized, since you're not wasting cycles deploying entire machines full of configurations and services that are unusable or unimportant to the mission.

    Technologies like Docker provide us with a way to package and ship isolated, purposeful applications and services inside a lightweight container, though as cutting-edge and 'sexy' as this technology is, the jury is still out on how well it measures up in production environments.
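
    For a flavor of that packaging workflow, here is a minimal sketch that drives the docker CLI to build an image from a Dockerfile and launch a single-service container; the image tag, port mapping, and build context are placeholders, and it assumes Docker is installed locally.

        # Sketch: build a single-purpose service image and run it as a container.
        # The tag, ports, and build context below are hypothetical placeholders.
        import subprocess

        IMAGE_TAG = "myapp:latest"       # hypothetical image name
        BUILD_CONTEXT = "."              # directory containing a Dockerfile
        HOST_PORT, CONTAINER_PORT = 8080, 8080

        # Build the image from the Dockerfile in the build context.
        subprocess.run(["docker", "build", "-t", IMAGE_TAG, BUILD_CONTEXT], check=True)

        # Run the service detached, mapping a single port, rather than deploying
        # (and converging) an entire machine's worth of configuration.
        subprocess.run(
            ["docker", "run", "-d", "-p", f"{HOST_PORT}:{CONTAINER_PORT}", IMAGE_TAG],
            check=True,
        )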

    To close, it's my view (as an Operations professional) that it's time for our industry to validate the viability of incorporating machine images, to deploy Linux containers, and ultimately to solve the environment-delta problem with a solution that conforms to our cycle-time goals. As we adopt the DevOps culture within our development organizations, CI/CD certainly should not translate to: "Let's deploy nodes to our application's infrastructure more quickly, and more often, without taking stability, standardization, and operational confidence into consideration."

    Last updated: September 26, 2024
