Over the past few decades, application development has been evolving from bare metal hosting to virtualization to containers, leading to the adoption of the Kubernetes orchestration platform. This article traces these developments and explains how Red Hat OpenShift provides the next level of application support.
There has been an explosion in the modernization of application development and deployment over the past few years. Publications such as Forbes and Business Wire have quoted IDC's prediction that between 2018 and 2023, more than 500 million logical applications will be developed, equal to the number of applications built over the previous 40 years. In addition, businesses expect faster changes to applications.
Virtualization and the cloud
A great shift has taken place during the past 20 years from physical servers to virtual machines (VMs) and cloud hosting for applications. Although some improvements to deployment models accompanied the shifts, organizations were focused primarily on exploiting the agility in the infrastructure to make operations and deployment easier and less costly.
This evolution in infrastructure, however, did not by itself achieve organizations' goal of more rapid development and deployment of applications. Some of the major challenges that remained were:
- Lead time was very high when bringing application changes from development to production.
- Modifying an existing feature or adding a new feature could have ripple effects throughout the application code.
- Even small changes to an application could lead to regression-related failures.
- Deploying a change required retesting and redeploying the entire application.
- The application life cycle spawned complex change processes when moving from development to quality assurance (QA), integration testing, user acceptance testing, and production.
The primary reasons for all these challenges were tight coupling within applications and dependencies on elements of the application infrastructure, such as hardware, the operating system version, operating system packages, and libraries.
The release of Docker in 2013 revolutionized application development and delivery through the use of containers. The underlying technologies had been available in Unix and Linux for quite some time (usually under different names), but Docker made it easy for the first time to create containers by hiding the complexity of the underlying operating system.
A container packages an application binary and all its dependencies to facilitate uploading and running it in different environments. A container is generally much lighter-weight than a VM because the container leaves out much of the software infrastructure, leaving the host system to provide it.
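As a sketch, the packaging described above might look like the following Dockerfile for a hypothetical Python service (the base image, file names, and command are illustrative, not taken from the article):

```dockerfile
# Start from a slim base image; the host kernel is shared,
# so no full operating system or hypervisor is packaged.
FROM python:3.12-slim

WORKDIR /app

# Install the application's declared dependencies so the image
# carries everything the code needs to run.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself into the image.
COPY app.py .

# The resulting image runs unchanged on a laptop, a VM, or a cloud host.
CMD ["python", "app.py"]
```

Because the image bundles the code, libraries, and runtime versions together, the same artifact can be uploaded to a registry and run in any container-ready environment.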
A few important advantages of containers:
- Focus on the application layer.
- Decoupling the application from the underlying computing infrastructure.
- Facilitating agile deployment and scaling.
- Improving application portability across any kind of infrastructure—physical, virtual, or cloud.
- Including all the dependencies of the applications: code dependencies, libraries, versions, etc.
A modern service can now be designed as a collection of containers, packaging each service with all its dependencies in a container and shipping it to any container-ready platform. The combined application is loosely coupled to the underlying infrastructure and runs seamlessly. Thus, containers help solve many of the challenges discussed earlier.
Kubernetes and other management requirements
Running a large number of containers that have to communicate closely, while terminating and reappearing rapidly, calls for a management platform that can schedule, orchestrate, and deploy these containers at scale. Kubernetes became the de facto standard for container management after Google released that open source platform in 2014.
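For illustration, the declarative model behind this orchestration can be sketched with a minimal Kubernetes Deployment manifest (the service name and image below are hypothetical): you declare the desired number of replicas, and Kubernetes schedules, restarts, and replaces containers to match that desired state.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend            # hypothetical service name
spec:
  replicas: 3                   # desired state: three running copies
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: registry.example.com/web-frontend:1.0   # illustrative image
        ports:
        - containerPort: 8080
```

Applying this manifest with `kubectl apply -f deployment.yaml` asks the cluster to keep three replicas running; if a container terminates or a node fails, Kubernetes reschedules replacements automatically.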
Even though Kubernetes meets a lot of the requirements for deploying containers, still more support is needed by today's enterprises. A more complete enterprise platform calls for an ecosystem of components and tooling to provide the following features:
- Compatibility: Components must work together, as well as with other infrastructure investments and with cloud providers.
- Certification: The enterprise platform must be supported and certified on various underlying infrastructures, such as bare metal, virtualization, and cloud.
- Observability: Monitoring and logging are important for troubleshooting and debugging.
- Operational tooling: Updates and upgrades of the platform and applications must be rolled out in an automated fashion.
- Automation: All steps in the software supply chain, from development to production, should be automated through CI/CD, DevOps, and GitOps.
- Developer tooling: CI/CD should incorporate the developer experience, including the integrated development environment (IDE), application testing, and application builds. Onboarding developers into a standardized environment should be fast and smooth.
- Security: A well-integrated security plan should control access at network, application, and Kubernetes cluster levels.
- Integrated software-defined networking (SDN): This policy layer should be integrated with existing underlay networking.
- Integrated software-defined storage (SDS): The platform should offer a certified, integrated, automated system for persistent application data.
- Fleet management: It should be possible to manage the life cycle of multiple Kubernetes clusters from a single pane of glass.
Figure 1 shows the requirements for a model enterprise Kubernetes platform on the left, organized by the categories on the right. Such a platform lets organizations develop and deploy containerized applications faster.
In theory, you could build this platform within your own organization by integrating the many open source solutions available in the container/Kubernetes ecosystem from the Cloud Native Computing Foundation (CNCF), or by finding cloud providers for each service. The drawback of this do-it-yourself (DIY) approach is that it forces you to invest enormous resources and staff in building the enterprise Kubernetes platform and managing the life cycle (patching, security, updates, integration) of individual components, rather than focusing on modernizing and developing new cloud-native applications that can differentiate you from your competitors.
Adopting an enterprise-ready Kubernetes solution supported by a major enterprise could be a better approach.
Red Hat OpenShift
OpenShift is a highly secure, hybrid cloud Kubernetes platform supported by Red Hat. Built around full-stack automated operations and a consistent experience across all environments, OpenShift optimizes developer productivity and enables innovation without limits.
OpenShift includes all the capabilities listed in the previous section for an enterprise Kubernetes model. Figure 2 illustrates how OpenShift components map to development requirements.
Figure 3 displays the layers and key services in OpenShift.
OpenShift meets the demand for containerized applications
OpenShift provides a consistent cloud-based experience across different infrastructure environments, helping organizations deliver their business goals by modernizing their services and applications. There is an enormous demand for the rapid development of containerized, cloud-based applications. Red Hat OpenShift is a key platform to meet this demand.
Check out the Red Hat OpenShift Sandbox platform for developers, testers, and operations. Build, test, and deploy applications using container technology. It's a great place to get started with Kubernetes for free.