As we reviewed in my previous articles about blue/green deployment, a critical topic in cloud-native computing is the microservice architecture. We are no longer dealing with one monolithic application; we have several applications that depend on each other, plus other dependencies such as brokers or databases.
Each application has its own life cycle, so we should be able to execute canary deployments independently. The applications and their dependencies will not all change version at the same time.
Another important topic in the cloud-native space is continuous delivery. If we are going to have several applications doing canary deployment independently, we have to automate it. We will use Helm, Red Hat OpenShift Service Mesh, Red Hat OpenShift GitOps, and (of course) Red Hat OpenShift to help us.
In the next steps, we will see a real example of how to install, deploy and manage the life cycle of cloud-native applications doing canary deployment using OpenShift Service Mesh.
Let's start with some theory. After that, we will have a hands-on example.
Canary deployment
A canary deployment is a strategy in which the operator releases a new version of the application to a small percentage of the production traffic. The users in this small percentage can test the new version and provide feedback. If the new version works well, the operator increases the percentage until all the traffic uses the new version. Unlike blue/green deployments, canary deployments are more gradual, and failures have limited impact.
Shop application
We are going to use very simple applications to test canary deployment. We have created two Quarkus applications: Products and Discounts. Figure 1 shows the shop applications.
Products calls Discounts to get each product's discount and exposes an API that returns the list of products with their discounts.
Shop Canary
To achieve canary deployment with cloud-native applications using OpenShift Service Mesh, we have designed the architecture that Figure 2 shows in simplified form.
OpenShift components - online:
- Route, Gateway, and Virtual Services.
- Services mapped to the deployment.
In blue/green deployment, we always have an offline service to test the version that is not in production. In canary deployment, we do not need it, because the new version is moved into production progressively.
We have defined an active or online service, products-umbrella-online. The final user will always use products-umbrella-online. When a new version is deployed, OpenShift Service Mesh will send it the amount of traffic that has been defined in the Virtual Service. We have to take care of the number of replicas in the new release and the old release, based on the amount of traffic that we have defined in the Virtual Service.
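The weighted routing described above is implemented with an Istio VirtualService. A minimal sketch of the idea follows; the host, subset names, and weights are illustrative, not the exact resources the chart generates:

```yaml
# Hypothetical VirtualService splitting online traffic between two releases.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: products-umbrella-online
spec:
  hosts:
    - products-umbrella-online
  http:
    - route:
        - destination:
            host: products-umbrella-online
            subset: blue       # old release
          weight: 90
        - destination:
            host: products-umbrella-online
            subset: green      # canary release
          weight: 10
```

The subsets would be defined in a matching DestinationRule keyed on the pods' version labels.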
Shop umbrella Helm chart
One of the best ways to package cloud-native applications is Helm. In canary deployment, it makes even more sense. We have created a chart for each application that does not know anything about canary deployment. Then we pack everything together in an umbrella Helm chart.
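An umbrella chart is simply a chart whose dependencies are the per-application charts. A minimal Chart.yaml sketch (names, versions, and repository paths are illustrative):

```yaml
apiVersion: v2
name: quarkus-helm-umbrella
version: 1.0.0
dependencies:
  - name: products
    version: 1.0.0
    repository: file://../products
  - name: discounts
    version: 1.0.0
    repository: file://../discounts
```

Because the sub-charts know nothing about canary deployment, all the traffic-splitting values live in the umbrella chart's values files.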
Demo
Prerequisites
- Red Hat OpenShift 4.13 with admin rights
- You can download Red Hat OpenShift Local for OCP 4.13.
- Refer to the getting started guide.
- Git
- GitHub account
- oc 4.13 CLI
We have a GitHub repository for this demo. As part of the demo, you will have to make some changes and commits, so it is important that you fork the repository and clone it locally.
git clone https://github.com/your_user/cloud-native-deployment-strategies
Install OpenShift GitOps
Go to the folder where you have cloned your forked repository and create a new branch, canary-mesh:
git checkout -b canary-mesh
git push origin canary-mesh
Log in to OpenShift as a cluster admin and install the OpenShift GitOps operator with the following command. This may take a few minutes.
oc apply -f gitops/gitops-operator.yaml
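The manifest applied here is typically an OLM Subscription for the operator; a minimal sketch follows (the channel, source, and namespace are assumptions, so check the file in the repository for the actual values):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: latest
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```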
Once OpenShift GitOps is installed, an instance of Argo CD is automatically installed on the cluster in the openshift-gitops namespace, and a link to this instance is added to the application launcher in the OpenShift web console.
Log in to the Argo CD dashboard
Upon installation, Argo CD generates an initial admin password and stores it in a Kubernetes secret. Run the following command to retrieve and decode it:
oc extract secret/openshift-gitops-cluster -n openshift-gitops --to=-
Click on Argo CD from the OpenShift web console application launcher, then log in to Argo CD with the admin username and the password retrieved in the previous step.
Configure OpenShift with Argo CD
We are going to follow, as much as we can, a GitOps methodology in this demo. So we will have everything in our Git repository and use Argo CD to deploy it in the cluster.
In the current Git repository, the gitops/cluster-config directory contains OpenShift cluster configurations such as:
- The gitops namespace
- A role binding for Argo CD to the gitops namespace
- OpenShift Service Mesh
- Kiali Operator
- OpenShift Elasticsearch Operator
- Red Hat OpenShift distributed tracing platform
Let's configure Argo CD to recursively sync the content of the gitops/cluster-config directory into the OpenShift cluster.
Execute this command to add a new Argo CD application that syncs a Git repository containing cluster configurations with the OpenShift cluster.
oc apply -f canary-service-mesh/application-cluster-config.yaml
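An Application that recursively syncs a directory of plain manifests might look like the following sketch; the field values are assumptions, so see the actual file in the repository:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-configuration
  namespace: openshift-gitops
spec:
  destination:
    server: 'https://kubernetes.default.svc'
  source:
    path: gitops/cluster-config
    repoURL: https://github.com/change_me/cloud-native-deployment-strategies.git
    targetRevision: canary-mesh
    directory:
      recurse: true
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```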
Looking at the Argo CD dashboard, you will notice that an application has been created.
You can click on the cluster-configuration application to check the details of the synced resources and their status on the cluster.
Create the shop application
We are going to create the shop application, which we will use to test canary deployment. Because we will make changes in the application's GitHub repository, we have to use the repository that you have just forked. Edit the file canary-service-mesh/application-shop-mesh.yaml and set your own GitHub repository in repoURL and the OCP cluster domain in change_domain.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: shop
  namespace: openshift-gitops
spec:
  destination:
    name: ''
    namespace: gitops
    server: 'https://kubernetes.default.svc'
  source:
    path: helm/quarkus-helm-umbrella/chart
    repoURL: https://github.com/change_me/cloud-native-deployment-strategies.git
    targetRevision: canary-mesh
    helm:
      valueFiles:
        - values/values-mesh.yaml
      parameters:
        - name: "global.namespace"
          value: gitops
        - name: "domain"
          value: "change_domain"
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Apply the file to create the Argo CD application:
oc apply -f canary-service-mesh/application-shop-mesh.yaml
Looking at the Argo CD dashboard, you will notice that we have a new shop application.
Test the shop application
We have deployed the shop with Argo CD, so we can now test that it is up and running. First, get the route that we have created:
oc get routes shop-umbrella-products-route -n istio-system --template='https://{{.spec.host}}/products'
Notice that in each microservice response, we have added metadata information to better see the version of each application. This will help us to see the changes while we do the canary deployment. We can see that the current version is v1.0.1:
{
"products":[
{
...
"name":"TV 4K",
"price":"1500€"
}
],
"metadata":{
"version":"v1.0.1", <--
"colour":"none",
"mode":"online"
}
}
Products canary deployment
We have already deployed the products version v1.0.1 with 2 replicas, and we are ready to use a new products version, v1.1.1, that has a new description attribute.
Figure 3 has the current status.
We have split a cloud-native canary deployment into three automatic steps:
- Deploy canary version for 10%.
- Scale canary version to 50%.
- Scale canary version to 100%.
This is just an example. The key point here is that it's very easy for us to use the canary deployment that best fits our needs.
Step 1: Deploy canary version for 10%
We will deploy a new version, v1.1.1. To do that, we have already configured products-green with the new version, v1.1.1. Now we have to edit the file helm/quarkus-helm-umbrella/chart/values/values-mesh.yaml and make some changes:
In global.istio, change the weights to send 10% of the traffic to the new version:

global:
  istio:
    productsblueWeight: 90
    productsgreenWeight: 10
Increase the number of replicas to be able to support 10% of the traffic in the new version:

products-green:
  quarkus-base:
    replicaCount: 1
Push the changes to start the deployment:

git add .
git commit -m "Deploy products v1.1.1 with 10% traffic"
git push origin canary-mesh
Argo CD will refresh the status after a few minutes. If you don't want to wait, you can refresh it manually from the Argo CD UI or configure the Argo CD Git webhook.
Figure 4 has the current status.
In the products URL's response, you will have the new version in 10% of the requests.
New revision:
{
"products":[
{
"discountInfo":{...},
"name":"TV 4K",
"price":"1500€",
"description":"The best TV" <--
}
],
"metadata":{
"version":"v1.1.1", <--
}
}
Old revision:
{
"products":[
{
"discountInfo":{...},
"name":"TV 4K",
"price":"1500€"
}
],
"metadata":{
"version":"v1.0.1", <--
}
}
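To verify the split empirically, you can sample the endpoint repeatedly and count the versions that come back. The sketch below simulates 20 responses locally with a 90/10 mix; the commented-out curl lines show how you would sample the real route instead:

```shell
# In a real run, fetch the route and sample it, e.g.:
#   URL=$(oc get routes shop-umbrella-products-route -n istio-system --template='https://{{.spec.host}}/products')
#   for i in $(seq 1 20); do curl -sk "$URL"; echo; done > samples.jsonl
# Here we simulate 20 responses with a 90/10 old/new mix instead:
for i in $(seq 1 20); do
  if [ "$i" -le 2 ]; then
    echo '{"metadata":{"version":"v1.1.1"}}'
  else
    echo '{"metadata":{"version":"v1.0.1"}}'
  fi
done > samples.jsonl

# Tally how many sampled responses each version served:
grep -o '"version":"[^"]*"' samples.jsonl | sort | uniq -c
```

With a real cluster, the counts will fluctuate around the configured weights rather than matching them exactly, since the split is probabilistic per request.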
Step 2: Scale canary version to 50%
Now we have to make the changes to send 50% of the traffic to the new version, v1.1.1. Edit the file helm/quarkus-helm-umbrella/chart/values/values-mesh.yaml.
In global.istio, change the weights to send 50% of the traffic to the new version:

global:
  istio:
    productsblueWeight: 50
    productsgreenWeight: 50
Increase the number of replicas to be able to support 50% of the traffic in the new version:

products-green:
  quarkus-base:
    replicaCount: 2
Push the changes to start the deployment:

git add .
git commit -m "Deploy products v1.1.1 with 50% traffic"
git push origin canary-mesh
Figure 5 has the current status.
In the products URL's response, you will have the new version in 50% of the requests.
Step 3: Scale canary version to 100%
Now we have to make the changes to send 100% of the traffic to the new version, v1.1.1. Edit the file helm/quarkus-helm-umbrella/chart/values/values-mesh.yaml.
In global.istio, change the weights to send 100% of the traffic to the new version:

global:
  istio:
    productsblueWeight: 0
    productsgreenWeight: 100
We can decrease the number of replicas of the old version, because it will no longer receive traffic:

products-blue:
  quarkus-base:
    replicaCount: 0
Push the changes to start the deployment:

git add .
git commit -m "Delete product v1.0.1"
git push origin canary-mesh
Figure 6 has the current status.
In the products URL's response, you will only have the new version v1.1.1.
{
"products":[
{
"discountInfo":{...},
"name":"TV 4K",
"price":"1500€",
"description":"The best TV" <--
}
],
"metadata":{
"version":"v1.1.1", <--
}
}
Delete environment
To delete all the things that we have done for the demo, you have to:
- In GitHub, delete the branch canary-mesh.
- In Argo CD, delete the applications cluster-configuration and shop.
- In OpenShift, go to the project openshift-operators and delete the installed operators: OpenShift GitOps, OpenShift Service Mesh, Kiali Operator, OpenShift Elasticsearch Operator, and Red Hat OpenShift distributed tracing platform.