
Over the last two years, my coworkers and I have worked on developing a multicluster project for Kubernetes and Red Hat OpenShift. We needed a way to efficiently deploy applications, oversee access and authorization, and manage application placement across clusters. This need led us to develop with Argo CD and GitOps.

Recently, I switched to another team that also focuses on multicluster development. During my interviews, I promised to help create a catalog of our projects and develop a process to deploy them rapidly. Together, the catalog and process would let the team focus on building, rather than on figuring out how to get projects operational. However, I quickly hit a wall. With Argo CD alone, I couldn't control when, or in what order, cluster objects were deployed onto new or existing clusters. Eventually, I discovered Tekton, a powerful addition to my development toolset.

In this article, I briefly describe my process for developing the catalog and process tool. I'll introduce the components involved, explain a little about how Tekton Pipelines works, and leave you with a tool that you can share with your organization and teams.

Argo CD and Tekton

Argo CD watches cluster objects stored in a Git repository and manages the create, update, and delete (CRUD) processes for objects within the repository. Tekton is a CI/CD tool that handles all parts of the development lifecycle, from building images to deploying cluster objects.
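To make the Argo CD side concrete, here is a minimal sketch of an Argo CD Application manifest that watches a path in a Git repository and keeps a cluster in sync with it. The repository URL, names, and paths are hypothetical placeholders, not the project's actual values:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-definitions        # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/YOURUSERNAME/tekton-argocd.git  # your fork
    targetRevision: main
    path: clusters                 # directory Argo CD watches
  destination:
    server: https://kubernetes.default.svc   # the local cluster
    namespace: hive
  syncPolicy:
    automated:
      prune: true      # delete objects removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With automated sync enabled, Argo CD handles the create, update, and delete lifecycle for everything under the watched path.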

Tekton runs one or more tasks, which launch the appropriate container(s) and execute a specific set of commands. A user can combine a series of tasks to form a Tekton pipeline. In a pipeline to build a container image, for example, the developer pushes the code to a repository. Tekton sees the change and launches a pipeline, which builds the code, creates a container image, and pushes it to the image registry.
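As a rough illustration of those building blocks, the sketch below defines a trivial Tekton Task and a Pipeline that runs it. The names and image are placeholders for illustration only:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello                 # hypothetical task
spec:
  steps:
    - name: say-hello
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        echo "Hello from a Tekton step"
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-pipeline             # hypothetical pipeline
spec:
  tasks:
    - name: first
      taskRef:
        name: echo-hello           # reference the task above
```

Each step runs in its own container, and a pipeline orders tasks explicitly, which is exactly the sequencing control that motivated adding Tekton to this project.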

Developing the catalog project

For the catalog project, I needed to deploy two clusters and connect them using Amazon Web Services (AWS) Virtual Private Cloud (VPC) peering. Then, I needed to deploy objects to both clusters. To start, I installed the OpenShift Operators for Hive and Argo CD. I stored my Hive cluster deployments within a Git repository, then had Argo CD watch the repository to deploy the clusters.
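A Hive cluster deployment stored in Git looks roughly like the sketch below. This is a simplified, hypothetical example; the domain, secret names, and ClusterImageSet are assumptions, not the repository's actual values:

```yaml
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: east1
  namespace: hive
spec:
  baseDomain: example.com            # assumed base domain
  clusterName: east1
  platform:
    aws:
      region: us-east-1
      credentialsSecretRef:
        name: aws-creds              # hypothetical AWS credentials secret
  provisioning:
    imageSetRef:
      name: openshift-v4-imageset    # hypothetical ClusterImageSet
    installConfigSecretRef:
      name: east1-install-config     # hypothetical install-config secret
  pullSecretRef:
    name: pull-secret
```

Because these manifests live in Git, Argo CD can apply them, and Hive then provisions the clusters on AWS.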

The issue was that I couldn't get the cluster to import into Argo CD and programmatically create the Argo CD YAML files, which I needed to deploy the Kubernetes objects from the Git repository. To overcome the issue, I used Tekton. The diagram in Figure 1 shows how I used Tekton pipelines to define precisely when each item should be applied to the newly created clusters.

Figure 1: The Tekton pipeline for importing into Argo CD and programmatically creating the Argo CD YAML files.

Getting started

I set up the tekton-argocd project repository to help guide you through this workflow. Before getting started, fork the repository. You will add clusters to the /clusters directory. Modify the repository and create the application within Argo CD:

$ cd ~/git/tekton-argocd

$ sed -i 's/cooktheryan/YOURUSERNAME/g' argo-app/*.yaml

$ oc create -f argo-app

These commands create a multitude of tasks and a pipeline within the Hive namespace, all of which are managed by Argo CD.
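If you want to see what the sed step above does before running it against your fork, here is a self-contained demonstration on a throwaway file. The file path and manifest contents are invented for illustration; only the substitution itself mirrors the article's command:

```shell
# Create a demo directory with a manifest that references the upstream user.
mkdir -p /tmp/argo-app-demo
cat > /tmp/argo-app-demo/app.yaml <<'EOF'
spec:
  source:
    repoURL: https://github.com/cooktheryan/tekton-argocd.git
EOF

# Same in-place substitution as in the article, run against the demo file:
# every occurrence of "cooktheryan" becomes "YOURUSERNAME".
sed -i 's/cooktheryan/YOURUSERNAME/g' /tmp/argo-app-demo/*.yaml

grep repoURL /tmp/argo-app-demo/app.yaml
```

In the real workflow, you would replace YOURUSERNAME with your GitHub username so the Argo CD Application manifests point at your fork.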

Running a pipeline

Log into the OpenShift console, select the hive project, and choose Pipelines from the list of available components. You will find your newly created pipelines there. After adding the clusters to your Git repository, specify the required variables as shown:

Field              Value
pathToYamlFile1    clusters/east1
pathToYamlFile2    clusters/east2
cluster1           east1
cluster2           east2
pathToYamlFile3    acm
git-source         Create Pipeline Resource

Once the dialog box is filled out (see Figure 2), run the pipeline.

Figure 2: Specify the required variables, and run the pipeline in the OpenShift console.

You now have a repeatable process to deploy two clusters, peer them together, and deploy an application to each of the clusters. This operation is possible because Hive saves the kubeconfig for each newly created cluster as a secret. A task called remote-cluster-apply then uses that secret to issue kubectl create -f commands against the Hive-created clusters.
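The remote-cluster-apply mechanism can be sketched as a Tekton Task that mounts the Hive-generated kubeconfig secret and runs kubectl against the remote cluster. This is a simplified, assumed shape, not the actual task from the repository; the parameter names, image, and mount path are placeholders:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: remote-cluster-apply
spec:
  params:
    - name: pathToYamlFile          # directory of manifests to apply
      type: string
    - name: kubeconfigSecret        # hypothetical: Hive-created kubeconfig secret
      type: string
  steps:
    - name: apply
      image: quay.io/openshift/origin-cli:latest   # assumed image with kubectl
      script: |
        # Target the remote cluster via the mounted Hive kubeconfig.
        kubectl --kubeconfig /secrets/kubeconfig \
          create -f $(params.pathToYamlFile)
      volumeMounts:
        - name: kubeconfig
          mountPath: /secrets
  volumes:
    - name: kubeconfig
      secret:
        secretName: $(params.kubeconfigSecret)
```

Because the kubeconfig arrives as an ordinary secret, the same task works against any cluster Hive provisions, which is what makes the process repeatable.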

Within the repository, you will notice additional Tekton pipelines. You can use these pipelines to create a single cluster and remotely apply one or more objects to it. I have also included cleanup pipelines in the repository to help manage the cluster lifecycle.

Using Tekton for CI and Argo for CD

Think of pipelines as an extra pair of hands, ensuring that no request or workflow step is ever forgotten. Combining Argo CD and Tekton creates safer, more repeatable processes, which helps everyone on the team be successful. Watch this video to learn more about combining Argo CD and Tekton: GitOps Continued: Using Tekton for CI and Argo for CD.

Conclusion

There is no perfect tool to accomplish everything. With this project, I found a way to pair specific tools to create a consistent and repeatable experience. Sharing Tekton tasks among your organization, distributed by Argo CD, allows teams to collaborate and increases team efficiency across the development and operational lifecycles.

Last updated: October 31, 2023