Red Hat OpenShift GitOps and Red Hat OpenShift Pipelines are flexible tools for automating continuous integration and continuous delivery (CI/CD). But until now, it was cumbersome to trigger a pipeline based on specific changes in GitOps, a frequent requirement. OpenShift GitOps 1.6 introduces notifications, a new feature (in Technology Preview) that can help automate such transitions.
A typical CI/CD requirement arises when you merge a pull request in GitHub, which triggers an application synchronization in OpenShift GitOps. At that point, you might want certain automated processes to occur, such as running tests. But before OpenShift GitOps 1.6, this process had to be triggered manually via a combination of post-synchronization hooks and Kubernetes jobs, requiring additional effort.
The new notifications feature
The new OpenShift GitOps 1.6 notifications feature enables GitOps to send messages based on various events (also known as triggers) to various destinations, such as Slack or email. While notifications are often used to send notices or alerts to messaging systems, such as when an application has failed to sync, notifications can also drive use cases such as the aforementioned testing.
This article explores that use case to provide integration or end-to-end (E2E) testing each time an application has been synchronized. Figure 1 shows the overall flow for this use case. We cover the last steps in this sequence. Specifically, we use notifications in OpenShift GitOps to trigger a pipeline by sending a webhook (a REST message in JSON format) to a Pipeline event listener.
The processes used in this article are based on upstream open source tools invoked by the Red Hat services. Red Hat OpenShift GitOps relies on Argo CD to orchestrate activities, and Red Hat OpenShift Pipelines relies on Tekton to define pipelines. You'll see the upstream tools in the examples in this article.
How to configure notifications in OpenShift GitOps
Notifications in GitOps work by defining triggers, templates, and services. The Argo CD Notifications documentation goes into these in great detail. But as a quick introduction, here is a brief description of each:
- Service: The destination to which the notification is sent. Out of the box, notifications supports various services, including email, Slack, and GitHub. In our use case, we use the webhook service to send a payload to a Tekton EventListener.
- Trigger: The event that causes the notification to be sent: when an application is created, when it is deleted, when synchronization succeeds, when synchronization fails, and so on. You can view the catalog of available triggers in the Argo CD documentation.
- Template: The format of the message. For a webhook, we define the HTTP method that sends the message (POST in our case) as well as the body of the message, which is a JSON payload.
Notifications can be enabled in the Argo CD custom resource (CR) through a simple flag in the configuration:
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: argocd
spec:
  …
  notifications:
    enabled: true
  …
Enabling the notifications property causes the GitOps Operator to deploy the notifications controller. The Operator also creates a default ConfigMap named argocd-notifications-cm and a secret named argocd-notifications-secret. The ConfigMap contains a default configuration with many triggers and templates for common services. The secret can be used to protect sensitive data, such as authentication tokens and passwords.
This article focuses on the argocd-notifications-cm ConfigMap, because we are not using any authentication in our Tekton EventListener. Authentication isn't needed in this use case because traffic is routed internally in OpenShift between the GitOps and Pipelines services.
To send a notification to Tekton, you need to define a service, a template, and a subscription, as well as update the desired trigger. These steps are shown in the following subsections.
Defining the notification's service
The service is straightforward because it is just a URL pointing to the Tekton EventListener to which you want to send the message. Call the service server-post-prod and define it as follows:
service.webhook.server-post-prod: |-
  url: http://el-server-post-prod.product-catalog-cicd.svc:8080
The URL here includes the EventListener service name, el-server-post-prod, and its namespace, product-catalog-cicd. In my case, I am calling this service from a different namespace, so the namespace must be part of the address.
Defining the notification's template
With the service in place, you can now define the template that determines how the message is sent and the formatting of the message. Here is the template we are using:
template.server-post-prod-app-sync: |-
  webhook:
    server-post-prod:
      method: POST
      path: /
      body: |
        {
          {{if eq .app.status.operationState.phase "Running"}} "state": "pending"{{end}}
          {{if eq .app.status.operationState.phase "Succeeded"}} "state": "success"{{end}}
          {{if eq .app.status.operationState.phase "Error"}} "state": "error"{{end}}
          {{if eq .app.status.operationState.phase "Failed"}} "state": "error"{{end}},
          "description": "ArgoCD",
          "application": "{{.app.metadata.name}}"
        }
This template is largely adapted from the webhook example in the Argo CD documentation. The template includes three items: the application's synchronization state, a description noting that the message is coming from Argo CD, and the name of the application that was synchronized.
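To make the template's conditional logic concrete, here is a small Python sketch (an illustration only, not part of the actual setup) that reproduces the phase-to-state mapping and the JSON body the webhook would carry:

```python
import json

# Mirror of the Go-template conditionals in the notification template:
# each Argo CD operation phase maps to a "state" value in the webhook body.
PHASE_TO_STATE = {
    "Running": "pending",
    "Succeeded": "success",
    "Error": "error",
    "Failed": "error",
}

def render_payload(phase: str, app_name: str) -> str:
    """Build the JSON body the webhook template would produce."""
    body = {
        "state": PHASE_TO_STATE[phase],
        "description": "ArgoCD",
        "application": app_name,
    }
    return json.dumps(body)

print(render_payload("Succeeded", "product-catalog-prod"))
```

For a successful sync of product-catalog-prod, this yields a body whose state is success, which is exactly what the CEL filter in the EventListener later checks for.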
Updating the desired trigger
In order for the trigger to send the message to the appropriate service with the right template, we need to either define a new trigger or add our template to an existing one. When notifications is enabled, the Operator creates a default ConfigMap with a number of templates and triggers in it. For the purposes of this example, we simply add our template to the existing on-sync-succeeded trigger, as shown below. Note the new server-post-prod-app-sync entry, which matches our template name.
trigger.on-sync-succeeded: |-
  - description: Application syncing has succeeded
    send:
    - app-sync-succeeded
    - server-post-prod-app-sync
    when: app.status.operationState.phase in ['Succeeded']
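The service, template, and trigger shown so far are all keys in the data section of the argocd-notifications-cm ConfigMap. A condensed sketch of how they fit together (with the template body elided for brevity) might look like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.webhook.server-post-prod: |-
    url: http://el-server-post-prod.product-catalog-cicd.svc:8080
  template.server-post-prod-app-sync: |-
    webhook:
      server-post-prod:
        method: POST
        path: /
        body: |
          …
  trigger.on-sync-succeeded: |-
    - description: Application syncing has succeeded
      send:
      - app-sync-succeeded
      - server-post-prod-app-sync
      when: app.status.operationState.phase in ['Succeeded']
```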
Defining the notification's subscription
The final step is to create a subscription to the notification. Although you can define default subscriptions in the ConfigMap, here you simply add an annotation to the Argo CD Application object that should use the notification. An example of this annotation follows:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-succeeded.server-post-prod: ""
  name: product-catalog-prod
  namespace: product-catalog-gitops
spec:
  …
Receiving notifications in pipelines
OpenShift Pipelines can receive webhook requests by defining triggers (an overloaded term, since we also have triggers in notifications). In this case, you need to define an EventListener, which creates a service to listen on a specific port, as well as a TriggerBinding object, which defines the format of the payload to be received. Let's look at the TriggerBinding first:
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: argocd-notification
spec:
  params:
  - name: state
    value: $(body.state)
  - name: application
    value: $(body.application)
  - name: description
    value: $(body.description)
This TriggerBinding defines the three fields you will receive from the notification, identical to the items in the template you created in the previous section. As the name TriggerBinding implies, this object enables Pipelines to bind elements from the JSON body to specific fields that you can retrieve later.
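The bound parameters are typically consumed by a Tekton TriggerTemplate, which is what actually creates the PipelineRun; the EventListener shown next references one named server-post-prod. The article doesn't show that object, but a minimal sketch might look like the following. Note that the pipeline name server-post-prod-pipeline is hypothetical; substitute whatever pipeline runs your tests:

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: server-post-prod
spec:
  params:
  - name: state
  - name: application
  - name: description
  resourcetemplates:
  - apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      generateName: server-post-prod-
    spec:
      pipelineRef:
        name: server-post-prod-pipeline   # hypothetical test pipeline
      params:
      - name: application
        value: $(tt.params.application)
```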
Now let's have a look at the EventListener:
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: server-post-prod
spec:
  serviceAccountName: pipeline
  triggers:
  - name: server-post-prod-webhook
    interceptors:
    - name: "Only accept sync succeeded"
      ref:
        name: "cel"
      params:
      - name: "filter"
        value: "body.state in ['success']"
    bindings:
    - kind: TriggerBinding
      ref: argocd-notification
    template:
      ref: server-post-prod
This EventListener specifies the TriggerBinding to use, which you defined previously. The configuration also defines an interceptor using a Common Expression Language (CEL) filter. This interceptor ensures that the pipeline is triggered only when the application synchronization succeeds; failed or degraded states after synchronization are ignored.
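Conceptually, the interceptor and binding together act as a gate followed by a field extraction. The following Python sketch (purely illustrative; Tekton evaluates real CEL, not Python) mimics what happens to an incoming payload:

```python
import json

def handle_webhook(raw_body: str):
    """Mimic the EventListener: apply the CEL-style filter, then bind params.

    Returns the bound parameters if the filter passes, or None if the
    event is dropped.
    """
    body = json.loads(raw_body)

    # Interceptor: equivalent of the CEL filter "body.state in ['success']"
    if body.get("state") not in ["success"]:
        return None

    # TriggerBinding: map JSON body fields to named params.
    return {
        "state": body["state"],
        "application": body["application"],
        "description": body["description"],
    }

# A failed sync is dropped; a successful one yields bound parameters.
print(handle_webhook('{"state": "error", "description": "ArgoCD", "application": "product-catalog-prod"}'))
print(handle_webhook('{"state": "success", "description": "ArgoCD", "application": "product-catalog-prod"}'))
```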
Testing the flow
At this point, you can test the flow by simply navigating to your application in OpenShift GitOps and clicking the SYNC button highlighted in Figure 2.
The notification should trigger, and your pipeline will run to test the application (Figure 3).
The new GitOps feature triggers automated pipelines
We have seen that using OpenShift GitOps notifications to trigger pipelines after an operation is a straightforward process. Notifications can be beneficial in a wide variety of use cases. Although we looked at a specific example involving automated testing, the same basic steps are useful anytime you need to trigger more complex processes in response to changes in OpenShift GitOps.
Last updated: October 31, 2023