
Modernization: A reference approach

Where to begin and how

April 12, 2024
Yashwanth Maheshwaram
Related topics: Kubernetes, Microservices, Application modernization, Virtualization
Related products: Red Hat Advanced Cluster Management for Kubernetes, Red Hat Advanced Cluster Security for Kubernetes, Red Hat build of Quarkus, Red Hat OpenShift, Red Hat OpenShift Virtualization, Red Hat Service Interconnect


    As organizations grow, their software needs to evolve as well. Initially, a single team often handles end-to-end development of a monolithic application. However, as successful products expand, complexity increases with multiple teams, stakeholders, and integrations. Decision-making becomes slower, requiring extensive agreement.

    As development teams adapt to these demands, aligning architecture with the team structure becomes crucial. A hybrid cloud microservices architecture becomes invaluable.

    A hybrid cloud environment combines on-premise servers with cloud services, allowing independent deployment, scaling, and movement of services. This flexibility simplifies cost and availability management, enabling tailored deployment and scaling strategies.

    This series explores how Red Hat OpenShift technologies can aid the transition from monolith to microservices, simplifying the process and ensuring a robust and secure application. It follows a fictional bicycle sales company, Pedal, which initially managed sales and service booking through a monolithic application. Now, they're transitioning to a microservices architecture in a hybrid cloud environment.

    Refactor, replatform, or rehost?

    There are several methods to migrate an existing application to the cloud, each with its own advantages and disadvantages. Three popular approaches are refactoring, replatforming, and rehosting.

    Refactoring

    Refactoring entails moving services to the cloud while optimizing them for their new environment. This simultaneous approach is more intricate than the conventional “lift and shift” method of cloud migration (also known as rehosting, which we’ll get to shortly). Yet, unlike simply lifting and shifting, refactoring aims to deploy an application with cloud-native attributes, as if it had been designed for the cloud from the start. There’s no need for heavyweight wrappers around services to get them running, as would be the case with migrating to cloud-based virtual machines. Although refactoring may take more developer and operations time to implement, it’s a cost-effective and flexible solution.

    Replatforming

    Replatforming involves less change than refactoring. Instead of altering the code’s architecture, it primarily focuses on modifying resource interfaces. For instance, replatforming might involve migrating to a cloud-based elastic database service from a fixed-size self-managed host, while the core business logic and application structure remain largely unchanged. Replatforming can be a balanced, cost-effective method for cloud migration, especially for organizations just starting the process. Also, it doesn’t prevent conducting deeper refactoring later on.
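
    To make the scale of a replatforming change concrete, here is a minimal sketch of what swapping a fixed, self-managed database host for a managed cloud database can look like when the change is pure configuration. It assumes a Quarkus service using the quarkus-config-yaml extension; the hostnames, credentials, and database names are hypothetical rather than taken from the Pedal code.

        # application.yaml for a hypothetical Pedal service (quarkus-config-yaml extension).
        # Only the datasource endpoint changes; business logic is untouched.
        quarkus:
          datasource:
            db-kind: postgresql
            username: pedal_app
            # Credential injected from configuration/environment instead of hard-coded.
            password: ${DB_PASSWORD}
            jdbc:
              # Before replatforming: a fixed-size, self-managed host.
              # url: jdbc:postgresql://db01.pedal.internal:5432/pedal
              # After replatforming: a managed, elastic database service (hypothetical endpoint).
              url: jdbc:postgresql://pedal-db.example-cloud-provider.com:5432/pedal
              max-size: 20

    The application code and schema stay as they are; only where the database lives, and how it scales, changes.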

    Rehosting

    Rehosting, also known as “lift and shift,” is the most conservative approach. It aims to transfer an on-premise environment into the cloud with as few changes as possible. Cloud providers typically offer specialized tools facilitating this migration. Rehosting can be a good approach if:

    • You want to test what’s involved in moving to the cloud.
    • You wish to retire the service’s current hardware.
    • You need to migrate a large number of similar services.

    However, rehosting doesn’t yield many of the benefits associated with modern application hosting, such as improved resilience and the ability to independently scale parts of the application.

    Replatforming Pedal

    Our example application, Pedal, began as a monolithic Java application hosted on premise. As the business grew, the limitations of this monolithic architecture became evident, especially its limited scalability and flexibility. Pedal needed a strategy to harness the benefits of the cloud without completely overhauling its existing system.

    Replatforming is the ideal approach for Pedal, striking a balance between maintaining its existing application’s core structure and enhancing its capabilities through cloud services. The first step involves migrating from its on-premise deployment to a more cloud-friendly environment using Red Hat OpenShift. Thanks to Red Hat OpenShift’s container orchestration capabilities, Pedal can manage its application more efficiently, ensuring better scalability and reliability.

    Replatforming lays the groundwork for the next step: Virtualization.

    Virtualization

    Virtualization involves creating a virtual representation of hardware platforms, storage devices, or network resources. When modernizing a legacy Java application, this approach often means generating a virtual instance of a server, known as a virtual machine (VM). This VM contains a full copy of an operating system, the application, and necessary binaries and libraries. By abstracting the legacy application from the physical hardware, this process offers more flexibility in deploying environments and enhances the application’s adaptability to various computing resources.

    For legacy applications, this form of virtualization offers a pathway toward more modern architectures without requiring immediate, extensive changes to the application. Initially, you can move the application to a VM through “lift and shift” rehosting. This is often just the first step in a series of incremental changes.

    Following rehosting, the application can undergo replatforming, involving minor adjustments to optimize its performance within the new virtualized environment—potentially capitalizing on cloud-specific features like managed storage or improved scalability. The final step is refactoring, where the application is fundamentally restructured, often transitioning to a microservices architecture. Throughout this process, the application benefits from the isolated, flexible, and scalable nature of virtual environments. Virtualization makes these legacy systems more resilient and adaptable, aligning them better with modern computing paradigms.

    In Modernizing Pedal: Virtualization, we migrated Pedal’s monolithic application, originally deployed on standard operating systems, to Red Hat OpenShift—without any code changes. This transformation created a virtualized application that can be deployed easily wherever and whenever it’s needed. 
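
    As a rough illustration of the mechanics, the sketch below shows the general shape of a VirtualMachine manifest that OpenShift Virtualization (built on the open source KubeVirt project) uses to run a VM side by side with containers in the cluster. The name, sizing, and disk image here are hypothetical and are not taken from the Pedal series.

        apiVersion: kubevirt.io/v1
        kind: VirtualMachine
        metadata:
          name: pedal-monolith-vm          # hypothetical name
        spec:
          running: true
          template:
            metadata:
              labels:
                kubevirt.io/domain: pedal-monolith-vm
            spec:
              domain:
                cpu:
                  cores: 2
                resources:
                  requests:
                    memory: 4Gi
                devices:
                  disks:
                    - name: rootdisk
                      disk:
                        bus: virtio
              volumes:
                - name: rootdisk
                  containerDisk:
                    # Hypothetical disk image containing the existing OS plus the monolith.
                    image: quay.io/example/pedal-monolith-disk:latest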

    Read more about Red Hat Virtualization.

    Identifying and deploying microservices

    The first step to breaking down a monolith into individual microservices through refactoring is to determine the boundaries between services. One way to start is to think about how data is stored within the application. For example, if only part of the monolith uses a specific database, that database combined with the code that uses it makes a good microservice candidate.

    Another productive strategy, as previously mentioned, revolves around mirroring organizational structure. This method involves creating microservices that align with the teams responsible for managing specific functions. 

    As for the Pedal example, follow along as we use Quarkus and Red Hat Service Interconnect to break down the monolithic Java application, identifying and building the required microservices.

    Modernizing Java applications for the cloud

    One effective solution for modernizing Java applications for the cloud is Red Hat's migration toolkit for applications. Bundled with a Red Hat OpenShift subscription, this toolkit employs an extensive questionnaire and analyzes application artifacts, such as source code. Then, it generates an HTML report highlighting changes essential for cloud migration.

    The migration toolkit for applications supports various paths, including migrating from Java Spring Boot to Quarkus. Quarkus, a Kubernetes-native framework, amalgamates various libraries and standards, positioning Java as a first-class choice for running cloud applications. It’s optimized for executing Java microservices in Kubernetes with quick startup times and low resource requirements—enabling more applications to operate within the cluster.
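
    As an illustrative sketch of that Kubernetes-native positioning, the configuration below assumes a hypothetical Pedal microservice that has been migrated to Quarkus and uses the quarkus-kubernetes and quarkus-container-image-jib extensions, which can generate deployment manifests and a container image directly from application configuration. The names and sizing are assumptions, not values from the Pedal project.

        # application.yaml for a hypothetical Pedal microservice on Quarkus.
        quarkus:
          application:
            name: pedal-catalog            # hypothetical service name
          container-image:
            build: true
            group: pedal                   # hypothetical image namespace
          kubernetes:
            replicas: 2
            resources:
              requests:
                # Small footprint per instance, so more services fit in the cluster.
                memory: 128Mi
                cpu: 250m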

    Once applications are deployed to the cloud, their operational and maintenance requirements might change. Tools such as Cryostat, which provides JDK flight recorder (JFR) management for Kubernetes and OpenShift, simplify profiling and monitoring these running applications.

    You can view this process in Modernize Java applications for the cloud, where we prepared Pedal for its eventual hybrid cloud deployment environment. Though numerous steps are involved, the approach’s inherent flexibility, resilience, and efficiency make it more than worthwhile. 

    Containerization

    Containerization is a step forward from virtualization. Instead of placing an entire monolith within a VM, containerization takes a microservice and creates a more lightweight image file. Running images, known as containers, can make better use of the operating system (OS) on which they’re running compared to running an entire second OS virtually inside a VM.

    Because of the lower overheads, you can generally pack more services onto the same amount of computing power with containerization than with virtualization. Containerization also benefits from management frameworks specifically designed to keep these workloads running smoothly.
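
    To show what that looks like on the cluster, here is a minimal Kubernetes Deployment sketch for one hypothetical containerized Pedal microservice. The explicit resource requests and limits are what let the scheduler pack many such lightweight services onto shared nodes; the image name, labels, and sizing are illustrative only.

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: pedal-catalog
          labels:
            app: pedal-catalog
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: pedal-catalog
          template:
            metadata:
              labels:
                app: pedal-catalog
            spec:
              containers:
                - name: pedal-catalog
                  # Hypothetical image built from one Pedal microservice.
                  image: quay.io/example/pedal-catalog:1.0
                  ports:
                    - containerPort: 8080
                  resources:
                    # Modest requests mean many replicas can share a node.
                    requests:
                      cpu: 100m
                      memory: 128Mi
                    limits:
                      cpu: 500m
                      memory: 256Mi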

    You can learn how to prepare microservices and databases for containerization, enabling smooth and reproducible deployments, in Using containerization in modern hybrid cloud application deployment, where we containerized the modern Pedal application.

    Data management

    With monolithic application development in a self-hosted environment, creating new services such as databases or file storage can be challenging, requiring coordination across teams.

    For example, once you have a database, it’s natural to use that resource for all the monolith’s data management needs. If you’re lucky, you can use different schemas—or at least different tables—for different parts of the system. However, it’s typical to end up with a monolithic database structure mirroring the monolithic code structure, with code directly accessing the data it needs.

    A key tenet of a microservices architecture is the independent deployability of various services, implying that each microservice should be responsible for its own data. The objective is to avoid a situation where one team changes a database table’s structure when deploying their service, and a different service starts failing because it hasn’t been updated to work with the new schema.

    For Pedal, this approach involves creating specific databases for each service while ensuring restricted data access, allowing only the appropriate services to access databases. To see that process, check out Modernizing Pedal: Data management.
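
    One way to enforce that kind of restricted access at the platform level is a Kubernetes NetworkPolicy. The minimal sketch below allows only a hypothetical orders service to reach its own database over the PostgreSQL port; the namespace, labels, and port are assumptions rather than details from the Pedal series.

        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        metadata:
          name: orders-db-access
          namespace: pedal                 # hypothetical namespace
        spec:
          # Applies to the pods running the orders database.
          podSelector:
            matchLabels:
              app: pedal-orders-db
          policyTypes:
            - Ingress
          ingress:
            - from:
                - podSelector:
                    matchLabels:
                      app: pedal-orders    # only the orders microservice may connect
              ports:
                - protocol: TCP
                  port: 5432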

    Deployment and DevOps of modernized services

    Not too long ago, all software had very long release cycles, with new versions issued perhaps once a year. Correcting errors through patch updates was a slow and expensive process for manufacturers. And, with a year’s worth of changes, applying the updates was slow and expensive for the customer too.

    The growth of web-based software services enabled a rolling release cycle, letting developers introduce new features right after creating them, requiring minimal customer intervention apart from potential training. However, these services also generally followed a linear development process involving deploying changes to a test server and a dedicated quality assurance team manually checking them before releasing them into production. The rate of releases was still limited to maybe one or two a day.

    Today, industry best practices include using continuous integration/continuous delivery (CI/CD) pipelines. With CI/CD, engineers commit changes to a source control system, which an automated service checks. If the changes pass the check, they’re packaged up and delivered into the production system. When combined with containerization (see above), CI/CD enables quick identification of issues in the production system and fast rollback to earlier releases.

    The steps we’ve navigated through thus far establish a streamlined and uniform deployment process for Pedal. Red Hat OpenShift makes the process straightforward—to see it in action, take a look at Modernizing Pedal: Deployment.

    Automated testing with pipelines

    As highlighted earlier, automated testing is a critical component of any CI/CD process. When deploying to the Red Hat OpenShift container platform, you use testing tasks in the OpenShift pipeline for that part of the process. Deployment will only take place if the testing tasks pass.

    Red Hat actively contributes to the development of the open source Tekton framework, which OpenShift Pipelines uses to construct versatile CI/CD pipelines across diverse platforms.
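
    To give a feel for the shape of such a pipeline, here is a minimal Tekton Pipeline sketch in which a test task gates the build and deployment tasks through runAfter ordering. The referenced Tasks (pedal-run-tests, pedal-build-image, pedal-deploy) are hypothetical and would be defined separately; this is not the actual pipeline from the Pedal series.

        apiVersion: tekton.dev/v1
        kind: Pipeline
        metadata:
          name: pedal-build-test-deploy
        spec:
          workspaces:
            - name: source
          tasks:
            - name: unit-tests
              # Hypothetical Task that runs the service's test suite.
              taskRef:
                name: pedal-run-tests
              workspaces:
                - name: source
                  workspace: source
            - name: build-image
              # Only runs if the tests succeed; a failure stops the pipeline here.
              runAfter:
                - unit-tests
              taskRef:
                name: pedal-build-image
              workspaces:
                - name: source
                  workspace: source
            - name: deploy
              runAfter:
                - build-image
              taskRef:
                name: pedal-deploy

    Because each stage runs only after the previous one succeeds, a failing test run means nothing gets built or deployed.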

    Security for hybrid systems

    Hybrid cloud deployments have their own set of security concerns. To address them, Red Hat provides Advanced Cluster Security (ACS) and Advanced Cluster Management (ACM), services focused on ensuring and maintaining system security.

    • ACS: Helps protect containerized Kubernetes workloads in Red Hat OpenShift and the major cloud service providers. It continuously scans system-level events in the Kubernetes environment, checking them against security best practices. This process prevents deployments and configurations from exposing the system to security threats.
    • ACM: Provides a single console for managing deployments, with security baked in. You can set security policies centrally and have ACM apply those policies to all the apps and clusters that it manages (see the policy sketch after this list).
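
    As a sketch of what centrally defined policy can look like, the ACM Policy below would enforce a default-deny ingress NetworkPolicy in a hypothetical pedal namespace on whichever managed clusters the policy is bound to. The names and scope are assumptions; in practice you would also create a Placement and PlacementBinding to select the target clusters.

        apiVersion: policy.open-cluster-management.io/v1
        kind: Policy
        metadata:
          name: pedal-default-deny-ingress
          namespace: policies              # hypothetical policy namespace
        spec:
          remediationAction: enforce
          disabled: false
          policy-templates:
            - objectDefinition:
                apiVersion: policy.open-cluster-management.io/v1
                kind: ConfigurationPolicy
                metadata:
                  name: pedal-default-deny-ingress
                spec:
                  remediationAction: enforce
                  severity: high
                  object-templates:
                    - complianceType: musthave
                      objectDefinition:
                        # The object ACM ensures exists on each targeted cluster.
                        apiVersion: networking.k8s.io/v1
                        kind: NetworkPolicy
                        metadata:
                          name: default-deny-ingress
                          namespace: pedal
                        spec:
                          podSelector: {}
                          policyTypes:
                            - Ingress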

    Red Hat OpenShift has many security features apart from these tools, like built-in cluster security. To learn more about how OpenShift protects our Pedal application, read Overview of cluster security. 

    Conclusion

    Although the process of modernizing a monolithic application may seem intimidating, Red Hat offers numerous resources and guidance to help you along the way. A full refactoring exercise can be a significant undertaking, but you can use rehosting and replatforming to move one step at a time.

    A modern microservice architecture has many advantages over a traditional, monolithic, on-premise deployment. By splitting up the monolith, you enable the system’s parts to be independently developed and deployed, granting teams the autonomy to manage their own workloads. Containerization works hand-in-hand with CI/CD, so you can safely deploy new features to production as soon as they’re ready, supported by modern monitoring and scaling tools.

    Last updated: September 13, 2024
