This article is for developers who want to code, test, and build applications according to the most versatile and powerful modern techniques in software engineering. Perhaps you have read the Agile Software Development Manifesto, embraced the DevOps Culture, and even started down the Path to GitOps as a way of putting the first two documents into practice. This path is a great way to handle future green-field projects, but what about existing applications that might not have had the benefit of being launched in the age of DevOps?
We will look at a compelling use case that led to the development of KubeVirt, the upstream open source project behind Red Hat OpenShift Virtualization.
Red Hat OpenShift Virtualization is an Operator-based add-on available to anybody with a Red Hat OpenShift Container Platform subscription. Through OpenShift Virtualization, you can add virtual machines (VMs) as custom resources with their own controller and API server. The VMs are based on the same KVM technology used to run virtual machines on Red Hat Enterprise Linux and Red Hat Virtualization. But in OpenShift, the VMs are encapsulated in Kubernetes pods.
VirtualMachines in OpenShift are located in this pod layer and can be labeled, annotated, and targeted as endpoints just like any pod in the cluster.
Bookinfo example
For the purpose of this article, we will cheat a little bit and select an application already written as a collection of microservices, then install that application on a VM as our target for modernization. The application we're using is the Istio project's bookinfo sample app. Stitched together from a collection of different services, the app displays information about an example book.
Bookinfo contains four different services, all written in different languages. The productpage service displays the main application and calls two other services, details and reviews. Information about books is stored by the details service. The reviews service provides a pair of short reader reviews of the sample book and further calls the ratings service to provide a 1-5 star rating for each review.
As a microservices-based application, the different services refer to each other by name and expect to find all services listening on port 9080. To install the app on a single VM, a little code modification is required to change three of the services from the default 9080 port. The changes to do this are in a fork of the istio repository. Using environment variables and systemd unit files, all four services can be set up in a self-contained manner on one Fedora virtual machine to play the part of our legacy application.
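As an illustration, a unit file for one of the relocated services might look like the following sketch. The file path, environment variable name, and ExecStart command are assumptions for illustration; the actual fork may wire things up differently:

# /etc/systemd/system/details.service -- illustrative names and paths
[Unit]
Description=Bookinfo details service
After=network-online.target

[Service]
# Hypothetical variable that moves details off the default port 9080
Environment=DETAILS_PORT=9081
# The upstream details service is a Ruby app that takes its port as an argument
ExecStart=/usr/bin/ruby /opt/bookinfo/details.rb ${DETAILS_PORT}
Restart=on-failure

[Install]
WantedBy=multi-user.target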
Virtual legacy versus microservices
As noted earlier, a VM running in an OpenShift cluster is reachable in much the same manner as any deployment or pod in the cluster. The VirtualMachine configuration follows:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: bookinfo-legacy
spec:
  running: true
  template:
    metadata:
      annotations:
        vm.kubevirt.io/os: fedora
        vm.kubevirt.io/workload: server
      labels:
        kubevirt.io/domain: bookinfo-legacy
        vm.kubevirt.io/name: bookinfo-legacy
        app: bookinfo-legacy
    spec:
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        devices:
          disks:
            - bootOrder: 1
              disk:
                bus: virtio
              name: rootdisk
          interfaces:
            - masquerade: {}
              name: default
          networkInterfaceMultiqueue: true
          rng: {}
        features:
          acpi: {}
          smm:
            enabled: true
        firmware:
          bootloader:
            efi: {}
        machine:
          type: pc-q35-rhel8.6.0
        resources:
          requests:
            memory: 1Gi
      evictionStrategy: LiveMigrate
      hostname: bookinfo-legacy
      networks:
        - name: default
          pod: {}
      terminationGracePeriodSeconds: 180
      volumes:
        - dataVolume:
            name: bookinfo-rootdisk
          name: rootdisk
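Assuming the manifest above is saved as bookinfo-vm.yaml (the filename is arbitrary), you can apply it and confirm that the VM and its virtual machine instance come up:

$ oc apply -f bookinfo-vm.yaml
$ oc get vm bookinfo-legacy
$ oc get vmi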
To target the bookinfo-legacy VM from within the cluster, use a Service:
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
    service: productpage
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: bookinfo-legacy
Finally, to expose the Service outside the cluster, you need a Route:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: productpage
spec:
  port:
    targetPort: 9080
  to:
    kind: Service
    name: productpage
    weight: 100
  wildcardPolicy: None
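With all three resources applied, a quick sanity check from outside the cluster might look like this; the /productpage path comes from the upstream bookinfo app:

$ oc get route productpage -o jsonpath='{.spec.host}'
$ curl -s http://$(oc get route productpage -o jsonpath='{.spec.host}')/productpage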
If everything goes right, you should have a working website that shows a product page for the Shakespeare play The Comedy of Errors. Under the title and summary are two sections: Book Details, which displays metadata such as ISBN and publisher, and Book Reviews, with a pair of blurbs and one-to-five star ratings.
GitOps for the win
Once you complete the manual work of setting up the VM, it is a good idea to take a snapshot of the working configuration and place the disk image someplace where it can easily be cloned to create new instances.
In OpenShift Virtualization, this procedure is as simple as using a DataVolume (DV) to copy the VM's root disk to the openshift-virtualization-os-images namespace. An example follows of a DV that copies the image from the bookinfo namespace to the openshift-virtualization-os-images namespace:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: bookinfo
  namespace: openshift-virtualization-os-images
  annotations:
    cdi.kubevirt.io/storage.bind.immediate.requested: "true"
    kubevirt.ui/provider: Fedora
spec:
  source:
    pvc:
      namespace: bookinfo
      name: bookinfo-rootdisk
  storage:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 20Gi
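The clone can take a few minutes depending on your storage. One way to watch its progress (dv is the short name the Containerized Data Importer registers for DataVolume):

$ oc get dv bookinfo -n openshift-virtualization-os-images -w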
Once the VM's disk image is available for cloning, you can create a Git repository and a corresponding Argo CD Application under Red Hat OpenShift GitOps. The DV for the bookinfo-legacy VM demonstrated in this article employs a different image source from the one we just built. The new DV targets a local HTTP server running as a container on a management node next to the OpenShift cluster.
This is a good place to experiment with different DataVolume sources: a clone of a PVC, a downloaded Fedora cloud image, or even a container image.
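For instance, a DV spec that imports a disk image from an HTTP server instead of cloning a PVC might look like the following sketch; the URL is purely illustrative:

spec:
  source:
    http:
      url: "http://192.168.1.10/disk-images/bookinfo-rootdisk.qcow2"

A registry source (spec.source.registry with a docker:// URL) works the same way for container disk images.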
An example repository is available; it was created for GitOpsCon NA 2022. (This article is based on a talk at GitOpsCon, an event co-located with KubeCon + CloudNativeCon NA 2022.)
The gitopscon22 repository is laid out in a standard Kustomize pattern, with base and overlays directories. To run through the demonstration of the application migration, tags are used to advance through the commit log, and the Argo CD Application specifies a particular tag such as dev or prod.
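An Argo CD Application that pins the dev tag could be sketched as follows. The Application name, path, and target namespace are assumptions for illustration; the repoURL matches the example repository:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bookinfo-dev
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/cwilkers/gitopscon22.git
    targetRevision: dev
    path: overlays/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: bookinfo

Targeting production is then just a second Application whose targetRevision is prod.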
Starting at commit 7480090, there is a running VM with its DV, a set of Service resources, and a Route. Earlier, I described the Route and Service for productpage. The remaining services exist to redirect individual components of the bookinfo app to their constituent TCP ports on the VM.
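For example, a details Service of that kind might look like the sketch below, pairing the shared port 9080 with the VM's relocated port 9081 (the exact manifests live in the repository):

apiVersion: v1
kind: Service
metadata:
  name: details
spec:
  ports:
    - port: 9080        # port the other bookinfo services call
      targetPort: 9081  # port details actually listens on inside the VM
      name: http
  selector:
    app: bookinfo-legacy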
The first migration
At this point in the demo, we simulate a few sprints' worth of work and rewrite the application's landing page as a microservice. You can find the new productpage deployment in commit c616a60. Notable differences here include the addition of productpage.yaml to our base kustomization.yaml configuration file and a change in the application selector for the productpage service:
@@ -56,4 +56,4 @@ spec:
 - port: 9080
 name: http
 selector:
- app: bookinfo-legacy
+ app: productpage
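The accompanying Deployment is not reproduced in full here, but a minimal sketch, assuming the upstream Istio bookinfo container image, would resemble:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage-v1
  labels:
    app: productpage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
  template:
    metadata:
      labels:
        app: productpage
    spec:
      containers:
        - name: productpage
          # Upstream Istio bookinfo image; the tag is illustrative
          image: docker.io/istio/examples-bookinfo-productpage-v1:1.17.0
          ports:
            - containerPort: 9080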
To demonstrate the benefit of using GitOps to determine which code gets deployed in the development and production environments, force push an update to the dev tag. The update then gets deployed by Red Hat OpenShift GitOps to the development environment:
$ git tag -f dev c616a60
Updated tag 'dev' (was 7480090)
$ git push -f --tags
Total 0 (delta 0), reused 0 (delta 0), pack-reused 0
To github.com:cwilkers/gitopscon22.git
+ 7480090...c616a60 dev -> dev (forced update)
If you encounter any issues, you can fix them in dev before the change is promoted to prod. Once the update goes through (it might require a refresh on the Argo CD side), you can verify that the productpage service is now selecting app=productpage instead of app=bookinfo-legacy:
$ oc get svc -o wide
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE   SELECTOR
details       ClusterIP   172.30.231.130   <none>        9080/TCP   35d   app=bookinfo-legacy
productpage   ClusterIP   172.30.8.204     <none>        9080/TCP   35d   app=productpage
ratings       ClusterIP   172.30.83.87     <none>        9080/TCP   35d   app=bookinfo-legacy
reviews       ClusterIP   172.30.108.78    <none>        9080/TCP   35d   app=bookinfo-legacy
More importantly, you can check that the endpoints have changed:
$ oc get endpoints
NAME          ENDPOINTS           AGE
details       10.129.2.62:9081    35d
productpage   10.131.0.149:9080   35d
ratings       10.129.2.62:9083    35d
reviews       10.129.2.62:9082    35d
This output shows that the productpage service now points to a different endpoint IP address from the one shared by the rest of the services. Moreover, the details, ratings, and reviews endpoints have different port numbers. The differing ports spring from the changes made to the bookinfo application's microservices: they originally all listened on port 9080, but to play nicely together on a single VM, they must listen on distinct TCP ports.
The migrated application
Much like the instructions on a shampoo bottle, rinse and repeat. There is no need to explicitly go through the steps for the remaining services here. If you follow the repository commits page, you can find commits adding details, reviews, and ratings microservices.
With the last commit of the ratings service, the application is fully migrated. All services point at the results of deployments, not the virtual machine, and the old bookinfo-legacy VM is now redundant. Commit ddb4761 changes the VM definition to running: false, which causes the legacy application to shut down.
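The relevant change in the VM definition is a one-line diff along these lines:

 spec:
-  running: true
+  running: false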
After testing, promotion to production, and so on, you can delete the VM's YAML from the Git repository and let OpenShift GitOps prune the now-orphaned resources during its next sync operation.
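Pruning is opt-in in Argo CD. If the Application's sync policy enables it, resources removed from Git, such as the retired VM, are deleted on the next sync; a minimal sketch of that stanza:

spec:
  syncPolicy:
    automated:
      prune: true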
GitOps for new and legacy applications
Hopefully, this article will encourage teams with complicated legacy applications to try a migration to OpenShift. Whether the teams take on a full application migration journey from top to bottom, as demonstrated here, or use the platform's capabilities to develop new applications alongside old ones, OpenShift is a great place to give those legacy applications new life.