
Application lifecycle management for container-native development

 

June 11, 2019
Benjamin Holmes
Related topics:
CI/CD, Containers, DevOps, Open source
Related products:
Red Hat OpenShift Container Platform


    Container-native development is primarily about consistency, flexibility, and scalability. Legacy Application Lifecycle Management (ALM) tooling often provides none of these, leading to situations where it:

    • Places artificial barriers on development speed, and therefore time to value,
    • Creates single points of failure in the infrastructure, and
    • Stifles innovation through inflexibility.

    Ultimately, developers are expensive, but they are the domain experts in what they build. With development teams often being treated as product teams (who own the entire lifecycle and support of their applications), it becomes imperative that they control the end-to-end process on which they rely to deliver their applications into production. This means decentralizing both the ALM process and the tooling that supports that process. In this article, we'll explore this approach and look at a couple of implementation scenarios.

    Figure 1 - Move from centralized ALM to decentralized ALM

    Pipelines

    Although this approach places more control in the hands of developers, it also means that they are directly responsible for what they ship. To cope with that responsibility, automating application delivery becomes even more critical than in a non-containerized world. It turns the domain of trust on its head: as an organization, you should now trust the process of container delivery, rather than the content of the container itself.

    This is a more factory-oriented approach: it allows organizations to scale their application delivery without incurring a significant governance overhead for every single project running in the container platform. Red Hat has found through previous container-related projects that pipelines are an efficient way to achieve this, although the same net result can be reached with many popular build automation tools.

    To this end, we often recommend the creation of a Development Community of Practice (name subject to local influences) that would own pipeline development within the container platform. The Development Community of Practice would consist of representatives of development teams working on the platform and seek to drive standards around technologies and approaches, while also serving as a forum for knowledge transfer and enablement.

    The Development Community of Practice would create a library of pipeline steps (a Shared Library, in Jenkins terms) that could be used to create either technology-specific (Java, .NET, Node.js, etc.) reference pipelines for users with limited interest in directly engaging with the platform, or bespoke pipelines that cater to specific use cases.
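    To make this concrete, a single step in such a Shared Library could look something like the following sketch. It is an illustration rather than an excerpt from a real library: the step name mavenBuildAndTest, its parameters, and its defaults are assumptions about what a Community of Practice might agree constitutes "minimal good" for a Java/Maven stack.

        // vars/mavenBuildAndTest.groovy -- hypothetical Shared Library step (a sketch, not a real library)
        // Encapsulates an agreed "minimal good" build-and-test stage for a Java/Maven stack, so that
        // teams consume it from the library rather than reimplementing (or accidentally skipping) it.
        def call(Map config = [:]) {
            // Sensible defaults that a team's Jenkinsfile can override.
            String mavenArgs = config.get('mavenArgs', '-B -U clean verify')

            // Build and run unit tests.
            sh "mvn ${mavenArgs}"

            // Capture test results so that compliance can be audited centrally.
            junit allowEmptyResults: false, testResults: '**/target/surefire-reports/*.xml'
        }

    Because the step lives in the library rather than in each repository, the definition of "minimal good" can be evolved in one place and picked up by every pipeline that uses it.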

    This Shared Library would be derived in the following manner:

    • Capture the critical steps on the application delivery pathway for a given technology stack. Undertake this activity with Platform, Development, Business, and Security stakeholders to agree on a mutual definition of "minimal good."
    • Create Steps in the Shared Library to meet all of the requirements captured as part of the discovery assessment.
    • Actively audit those steps to prove compliance to interested parties. This could include things like automated test result capture, container CVE scanning, code coverage/quality assessments, and automated approvals.
    • Create reference pipelines that meet the definition of "minimal good" using this library of steps (a sketch of such a pipeline follows this list).
    • Open/Inner Source the Shared Library to the wider development community within the organization to allow stakeholders to extend, customize, and contribute repeatable steps and further reference pipelines that increase the capabilities of the Library within the environment.
    • Ensure both steps and reference pipelines are documented. Good documentation of the Shared Library is critical to driving adoption. A perfectly implemented, poorly documented solution is of no practical use to anyone. The steps are now part of the platform infrastructure and should be treated as such.
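    A technology-specific reference pipeline assembled from such steps might then be as small as the sketch below. The library name alm-pipeline-library and the steps scanContainerImage and promoteImage are hypothetical placeholders for whatever audited steps the Shared Library actually provides.

        // Jenkinsfile -- hypothetical Java reference pipeline assembled from Shared Library steps.
        // Teams that simply want "source code in, compliant image out" can use this as-is.
        @Library('alm-pipeline-library') _          // library name is an assumption

        pipeline {
            agent { label 'maven' }                 // agent label is an assumption
            stages {
                stage('Build & Test') {
                    steps {
                        mavenBuildAndTest()                     // step from the earlier sketch
                    }
                }
                stage('Scan') {
                    steps {
                        scanContainerImage image: env.IMAGE_REF // hypothetical audited CVE-scan step
                    }
                }
                stage('Deliver') {
                    steps {
                        promoteImage from: 'dev', to: 'prod'    // hypothetical governed promotion step
                    }
                }
            }
        }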

    The use of pipelines in this manner allows platform providers to drive two distinct behaviors:

    1. Users who have no interest in, or requirement for, interacting directly with the container platform can simply consume the build automation. A pipeline allows them to take source code and deliver production-ready container images via the requisite governance step gates with near-zero interaction with the container platform.
    2. Users who understand pipelines and containerization technologies, and who want or need to add their own bespoke steps on top of the core steps, are perfectly capable of doing so (see the sketch following this list). They must ensure that these steps also meet the governance requirements set out by the Development Community of Practice and associated stakeholders.
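    For the second group, a bespoke pipeline might layer team-specific stages on top of the same governed steps rather than bypassing them; again, every name in this sketch is illustrative.

        // Jenkinsfile -- hypothetical bespoke pipeline: a team adds its own stage on top of the
        // governed Shared Library steps instead of working around them.
        @Library('alm-pipeline-library') _

        pipeline {
            agent { label 'maven' }
            stages {
                stage('Build & Test') {
                    steps {
                        mavenBuildAndTest(mavenArgs: '-B -Pintegration clean verify') // overrides the default
                    }
                }
                stage('Contract Tests') {                       // bespoke, team-specific stage
                    steps {
                        sh './run-contract-tests.sh'            // hypothetical team-owned script
                    }
                }
                stage('Scan') {
                    steps {
                        scanContainerImage image: env.IMAGE_REF // the governance gate still applies
                    }
                }
            }
        }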

    These are not one-time actions. Management and maintenance of the pipeline lifecycle are just as critical as management and maintenance of the applications themselves.

    • The Development Community of Practice must continuously evaluate the requirements of the development community and improve and refine the pipelines and steps as required. The development community should also be permitted to fork, adapt, and push changes back to the Shared Library.
    • For Platform providers, the responsibility then becomes more about providing the technologies and capabilities that these pipelines rely on in a containerized manner. They must also ensure these capabilities are kept up-to-date, and the lifecycle of those containers is managed accordingly.
    • The Business must manage changing application requirements effectively and understand and accept the dependencies these changes may create in the automation solution.
    • Security teams must continuously assess new practices, requirements, standards, and technologies, and work with the Business, the Developers, and the Platform providers to implement these as required, in a sensible and controlled manner.

    All of these aspects rely on constant communication and a continuous feedback cycle among all stakeholders: understanding the environment, implementing changes, and reviewing both the effects of those changes on the pipelines and the use of the Shared Library as a whole.

    Figure 2 - Shared Library Lifecycle

    Ending up in a "Conway's Law" situation, where the pipelines simply mirror organizational silos, is a total waste of time and effort for all concerned. However, committing to standards-based good practice around pipelines and container-native development provides developers with a path of least resistance between their source code and production and allows every stakeholder to recognize the benefits of containerization quickly.

    Disconnected Environments

    In a disconnected environment, it is advisable to follow the principles of decentralized ALM as much as is practical. However, compromises will always be made. A key compromise often centers on dependency management: how do you ensure that an application has all of its build and runtime dependencies available to it in a container platform with no direct connection to the public internet?

    As with fully connected environments, it is good practice to use a dependency management solution (e.g., Sonatype Nexus, or JFrog Artifactory) to present dependencies to automated build processes in disconnected environments.

    Once content has been loaded into that centralized instance, developers can stand up their own dependency management solutions for their projects, proxying back to the centralized instance. This approach allows them to skirt the obvious pitfalls of a single, centralized source of truth for dependencies in the container platform.
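    From the pipeline's point of view, that can be as simple as routing all dependency resolution through the internal repository manager. The hypothetical step below does this for a Maven build; the repository URL is a placeholder, and in practice it would come from platform configuration rather than being hard-coded.

        // vars/mavenBuildDisconnected.groovy -- hypothetical step for builds in a disconnected environment.
        // All dependency resolution is routed through an internal repository manager (for example, a
        // team-level Nexus or Artifactory instance proxying the central one), never the public internet.
        def call(Map config = [:]) {
            // Placeholder URL; supplied by platform configuration in a real environment.
            String repoUrl = config.get('repoUrl', 'https://nexus.internal.example.com/repository/maven-group')

            // Minimal Maven settings that mirror everything to the internal repository group.
            writeFile file: 'disconnected-settings.xml', text: """
        <settings>
          <mirrors>
            <mirror>
              <id>internal</id>
              <mirrorOf>*</mirrorOf>
              <url>${repoUrl}</url>
            </mirror>
          </mirrors>
        </settings>
        """

            // Resolve, build, and test using only the internal mirror.
            sh "mvn -B -s disconnected-settings.xml clean verify"
        }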

    Typically, we talk about a "hard disconnect" (where there is no physical connection at all to the public internet) or a "soft disconnect" (where access to the public internet is heavily restricted to certain hosts or protocols). In either scenario, no direct curation of content should be required. Instead, step gates built into the pipeline would ideally be configured to automatically scan application dependencies and fail the build if vulnerabilities, errata, or license concerns are discovered.
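    A step gate of that kind could be sketched as follows. The scanner CLI, its JSON output shape, and the severity policy are all assumptions; the point is simply that the gate produces auditable evidence and fails the pipeline automatically, rather than relying on manual curation.

        // vars/dependencyGate.groovy -- hypothetical step-gate sketch for dependency scanning.
        // Fails the pipeline if vulnerabilities, errata, or license concerns are reported.
        def call(Map config = [:]) {
            // Run whichever scanner the platform provides; the CLI name and its JSON format are assumptions.
            sh 'dependency-scanner --format json --output scan-results.json .'

            // readJSON is provided by the Pipeline Utility Steps plugin.
            def results  = readJSON file: 'scan-results.json'
            def blockers = results.findings.findAll { it.severity in ['CRITICAL', 'HIGH'] }

            // Keep the raw report as evidence for the audit trail.
            archiveArtifacts artifacts: 'scan-results.json'

            if (!blockers.isEmpty()) {
                error "Dependency gate failed: ${blockers.size()} finding(s) at HIGH or CRITICAL severity"
            }
        }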

    Hard disconnect scenario

    In this scenario, content is downloaded on a public internet connection and then uploaded to the chosen dependency management solution in the disconnected environment, where it can be resolved by the automation tooling within the environment. This process is likely to be manual and time-consuming.

    Figure 3 - Hard Disconnect Scenario

    Soft disconnect scenario

    In this scenario, the dependency management solution is allowed to proxy through to, or is whitelisted for direct access to, the public internet. This permits a single, controlled connection to the repositories containing the dependencies. It is considerably more flexible than a hard disconnect, as no manual interaction is required to bring content into the environment.

    Figure 4 - Soft Disconnect Scenario

    Conclusion

    Sorry, I lied. There is no conclusion. You are creating the building blocks on which your container-native developments will rely for their entire lifecycle, and that lifecycle should be under constant review.

    However, by decentralizing your automation dependencies and open sourcing the way you interact with those dependencies, you give your development communities the means to scale their activities to meet the ever-changing needs of all stakeholders, without falling victim to the legacy of traditional approaches.

    Last updated: February 11, 2024
