Application lifecycle management for container-native development
Container-native development is primarily about consistency, flexibility, and scalability. Legacy Application Lifecycle Management (ALM) tooling often fails to deliver on those qualities, leading to situations where it:
- Places artificial barriers on development speed, and therefore time to value,
- Creates single points of failure in the infrastructure, and
- Stifles innovation through inflexibility.
Ultimately, developers are expensive, but they are the domain experts in what they build. With development teams often being treated as product teams (who own the entire lifecycle and support of their applications), it becomes imperative that they control the end-to-end process on which they rely to deliver their applications into production. This means decentralizing both the ALM process and the tooling that supports that process. In this article, we’ll explore this approach and look at a couple of implementation scenarios.
Although this approach places more control in the hands of developers, it also means that they are directly responsible for what they ship. To cope with this, the automation of application delivery becomes even more critical than in a non-containerized world. It turns the domain of trust on its head—as an organization, you should now trust the process of container delivery, rather than the content of the container itself.
This is a more factory-oriented approach that allows organizations to scale their application delivery without incurring a significant governance overhead when applied to every single project running in the container platform. Red Hat has found through previous container-related projects that an efficient way to address this is via pipelines, although the same net result can be achieved with many popular build automation tools.
To this end, we often recommend the creation of a Development Community of Practice (name subject to local influences) that would own pipeline development within the container platform. The Development Community of Practice would consist of representatives of development teams working on the platform and seek to drive standards around technologies and approaches, while also serving as a forum for knowledge transfer and enablement.
The Development Community of Practice would create a library of pipeline steps (A Shared Library in Jenkins terms) that could be used to create either technology-specific (Java, .NET, Node.js, etc.) reference pipelines for users with limited interest in directly engaging with the platform or bespoke pipelines that would cater for specific use cases.
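As a sketch of what such a library step might look like, a Jenkins Shared Library exposes reusable steps under its `vars/` directory. The step name `mavenBuild` and its parameters here are illustrative assumptions, not a prescribed standard:

```groovy
// vars/mavenBuild.groovy -- hypothetical Shared Library step.
// Wraps a Maven build so every team gets the same flags, the same
// test-report capture, and a single place to change build policy.
def call(Map config = [:]) {
    def goals = config.get('goals', 'clean verify')

    // Run the agreed build goals; -B keeps Maven output CI-friendly.
    sh "mvn -B ${goals}"

    // Always capture test results, even on failure, so the audit trail
    // required by the governance stakeholders is produced automatically.
    junit allowEmptyResults: true, testResults: '**/target/surefire-reports/*.xml'
}
```

Because the step lives in the Shared Library rather than in each team's Jenkinsfile, changing build policy (new flags, extra reporting) is a single change that every consuming pipeline picks up.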
This Shared Library would be derived in the following manner:
- Capture the critical steps on the application delivery pathway for a given technology stack. Undertake this activity with Platform, Development, Business, and Security stakeholders to agree on a mutual definition of “minimal good.”
- Create Steps in the Shared Library to meet all of the requirements captured as part of the discovery assessment.
- Actively audit those steps to prove compliance to interested parties. This could include things like automated test result capture, container CVE scanning, code coverage/quality assessments, and automated approvals.
- Create reference pipelines that meet the definition of “minimal good” using this library of steps.
- Open/Inner Source the Shared Library to the wider development community within the organization to allow stakeholders to extend, customize, and contribute repeatable steps and further reference pipelines that increase the capabilities of the Library within the environment.
- Ensure both steps and reference pipelines are documented. Good documentation of the Shared Library is critical to driving adoption. A perfectly implemented, poorly documented solution is of no practical use to anyone. The steps are now part of the platform infrastructure and should be treated as such.
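Taken together, a technology-specific reference pipeline built from such a library might look like the following Jenkinsfile. The library name `alm-shared-library` and the individual step names are assumptions for illustration:

```groovy
// Jenkinsfile -- hypothetical Java reference pipeline assembled from
// Shared Library steps that encode the agreed definition of "minimal good".
@Library('alm-shared-library') _

pipeline {
    agent { label 'maven' }
    stages {
        stage('Build & Test')  { steps { mavenBuild(goals: 'clean verify') } }
        stage('Code Quality')  { steps { codeQualityGate() } }    // coverage/quality assessment
        stage('Image Build')   { steps { buildContainerImage() } }
        stage('CVE Scan')      { steps { scanContainerImage() } } // container CVE scanning
        stage('Deploy')        { steps { deployToEnvironment('test') } }
    }
}
```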
The use of pipelines in this manner allows platform providers to drive two distinct behaviors:
- Users who have no interest in, or requirement for, direct interaction with the container platform can simply consume the reference pipelines. A pipeline allows them to take source code and deliver production-ready container images via the requisite governance step gates with near-zero interaction with the container platform itself.
- Users who understand pipelines and containerization technologies and want or need to add their own bespoke steps on top of the core steps are perfectly capable of doing so. They must ensure that these steps also meet the governance requirements set out by the Development Community of Practice and associated stakeholders.
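For the second group, extension is a matter of composing the audited core steps with their own stages. A sketch, with library and step names again illustrative:

```groovy
// Jenkinsfile -- bespoke pipeline that reuses the governed core steps
// but inserts a team-specific stage of its own.
@Library('alm-shared-library') _

pipeline {
    agent { label 'maven' }
    stages {
        stage('Build & Test')   { steps { mavenBuild() } }
        // Team-specific addition: must still satisfy the governance
        // gates defined by the Development Community of Practice.
        stage('Contract Tests') { steps { sh './run-contract-tests.sh' } }
        stage('Image Build')    { steps { buildContainerImage() } }
        stage('CVE Scan')       { steps { scanContainerImage() } }
    }
}
```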
These are not one-time actions. Management and maintenance of the pipeline lifecycle are just as critical as management and maintenance of the applications themselves.
- The Development Community of Practice must continuously evaluate the requirements of the development community and improve and refine the pipelines and steps as required. The development community should also be permitted to fork, adapt, and push changes back to the Shared Library.
- For Platform providers, the responsibility then becomes more about providing the technologies and capabilities that these pipelines rely on in a containerized manner. They must also ensure these capabilities are kept up-to-date, and the lifecycle of those containers is managed accordingly.
- The Business must manage changing application requirements effectively and understand and accept the dependencies these changes may create in the automation solution.
- Security teams must continuously assess new practices, requirements, standards, and technologies, and work with the Business, the Developers, and the Platform providers to implement these as required, in a sensible and controlled manner.
All of these aspects rely on constant communication and a continuous feedback cycle among all stakeholders: understanding the environment, implementing changes, and reviewing both the effects of those changes on the pipelines and the use of the Shared Library as a whole.
Ending up in a “Conway’s Law” situation, where your pipelines simply mirror your organizational silos, is a total waste of time and effort for all concerned. However, committing to standards-based good practice around pipelines and container-native development provides developers with a path of least resistance between their source code and production and allows every stakeholder to recognize the benefits of containerization quickly.
Decentralized ALM in disconnected environments
In a disconnected environment, it is advisable to follow the principles of decentralized ALM as much as is practical. However, compromises will always be made. A key compromise often centers around Dependency Management—how do you ensure that an application has all of its build and runtime dependencies available to it in a container platform with no direct connection to the public internet?
As with fully connected environments, it is good practice to use a dependency management solution (e.g., Sonatype Nexus, or JFrog Artifactory) to present dependencies to automated build processes in disconnected environments.
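In practice, this usually means the build steps in the Shared Library are wired to resolve dependencies through the internal repository manager rather than public repositories. For a Maven build, that could be as simple as the following fragment; the settings file ID is a placeholder, and `configFileProvider` comes from the Jenkins Config File Provider plugin:

```groovy
// vars/mavenBuild.groovy fragment -- resolve all dependencies via the
// internal repository manager (hypothetical config file ID).
def call(Map config = [:]) {
    // A platform-managed settings file points Maven's mirror at the
    // internal Nexus/Artifactory instance, so builds never reach the
    // public internet directly.
    configFileProvider([configFile(fileId: 'internal-maven-settings', variable: 'MAVEN_SETTINGS')]) {
        sh 'mvn -B -s $MAVEN_SETTINGS clean verify'
    }
}
```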
Once that content is available in the centralized instance, developers can then stand up their own dependency management solutions for their projects, proxying back to it. This approach lets them avoid the obvious pitfalls of relying on a single centralized source of truth for dependencies in the container platform.
Typically, we talk about a “hard disconnect” (whereby there is no physical connection at all to the public internet) or a “soft disconnect” (whereby access to the public internet is heavily restricted to certain hosts or protocols). In either scenario, no direct curation of content should be required. Step gates built into the pipeline would ideally be configured to automatically scan application dependencies and fail in the case of vulnerabilities, errata, or license concerns being discovered.
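As a sketch, such a step gate can itself be a Shared Library step that runs a scanner and fails the build on findings. The scanner invocation shown is a placeholder; swap in whichever tool your Security stakeholders mandate:

```groovy
// vars/dependencyScanGate.groovy -- hypothetical step gate that fails
// the pipeline when the dependency scan reports problems.
def call(Map config = [:]) {
    // Placeholder scanner invocation; returnStatus lets the step decide
    // how to fail rather than aborting on any non-zero exit code.
    def status = sh(script: './scan-dependencies.sh --fail-on vulnerabilities,errata,licenses',
                    returnStatus: true)
    if (status != 0) {
        error "Dependency scan gate failed (exit code ${status}); see the scan report for details."
    }
}
```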
Hard disconnect scenario
In this scenario, content is downloaded on a public internet connection and then uploaded to the chosen dependency management solution in the disconnected environment, where it can be resolved by the automation tooling within the environment. This process is likely to be manual and time-consuming.
Soft disconnect scenario
In this scenario, the dependency management solution is whitelisted to access the public internet directly, or proxies out through a controlled egress point. This permits a single controlled connection to the repositories containing the dependencies. This scenario is far more flexible, as no manual interaction is required to provide content to the environment.
Conclusion
Sorry, I lied. There is no conclusion. You are creating the building blocks on which your container-native developments will rely for their entire lifecycle—and that lifecycle should be under constant review.
However, by decentralizing your automation dependencies and open sourcing the means by which you interact with those dependencies, you are giving the development communities the means to scale their activities to meet the ever-changing needs of all stakeholders concerned, without falling victim to the legacy of traditional approaches.