In this series of articles, we will demonstrate how to use Git, Argo CD, and Red Hat OpenShift GitOps to build a continuous delivery cycle that automatically synchronizes and deploys different solutions. In this article, we will discuss the single sign-on for Red Hat solutions.
In the world of Kubernetes, simplicity and complexity go hand in hand: you can deploy an application with flexible horizontal autoscaling, out-of-the-box load balancing, distributed management of components, and centralized control of multiple applications. However, with great power comes great responsibility, and complexity.
Strategies have been developed to manage this complexity and keep that newfound power accountable. In this article, we will take a closer look at continuous integration and continuous deployment (CI/CD). These systems usually work at a high level of abstraction to address four common needs: version control, change logging, consistency of deployments, and rollback functionality. One of the most popular approaches to this abstraction layer is called GitOps.
GitOps, originally proposed in a Weaveworks blog post in 2017, is built around using Git as the “single source of truth” for CI/CD processes.
Red Hat OpenShift Container Platform is a leading Kubernetes platform for GitOps deployments. It comes with out-of-the-box access to operators curated and supported by Red Hat, including the OpenShift GitOps Operator, which ships with Argo CD, a declarative continuous delivery tool.
By comprehensively managing deployments and their lifecycle, Argo CD provides version control for configurations and application definitions in Kubernetes environments, and organizes complex data behind an easy-to-understand user interface.
Argo CD has support for common ways of deploying Kubernetes manifests such as Kustomize applications, Helm charts, and regular YAML/JSON files.
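For illustration, here is a minimal sketch of what an Argo CD Application pointing at a Helm chart could look like (the application name, repository URL, and chart path are hypothetical; only the source stanza changes between Helm, Kustomize, and plain YAML sources):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-helm-app                # hypothetical name
  namespace: openshift-gitops
spec:
  project: default
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: my-app
  source:
    repoURL: 'https://github.com/example/charts.git'   # hypothetical repository
    targetRevision: HEAD
    path: charts/my-app
    helm:
      valueFiles:
        - values-dev.yaml          # values file layered on the chart defaults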
Prerequisites
To start this demonstration, you will need the following:
- Red Hat OpenShift cluster
- Admin user
- The following tools, available for macOS and Linux/Fedora:
  - Git
  - OpenShift client (oc) 4.11
  - Argo CD CLI
Note: These tools are available through the Web Terminal Operator.
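To quickly confirm the tools are available in your terminal, you can run the following (the output will vary with your installed versions):
$ git --version
$ oc version --client
$ argocd version --client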
Install OpenShift GitOps operator and Argo CD instance
For the scope of this article, we will use the following Git repository:
git clone https://github.com/ignaciolago/keycloak-gitops
cd keycloak-gitops
To install the OpenShift GitOps operator and Argo CD instance, take a look at the files first, as follows:
$ cat 00_argocd_namespace.yaml
We will use this file to create a namespace for our installation:
apiVersion: v1
kind: Namespace
# kubernetes api element kind for namespace
metadata:
  name: openshift-gitops
  # default name for the installation in OpenShift
spec: {}
Next, take a look at the subscription that installs the GitOps operator:
$ cat 01_argocd_subscription.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
# subscription that adds the operator to the cluster
# by default, this operator also installs an Argo CD instance
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: latest
  installPlanApproval: Automatic
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  # startingCSV: openshift-gitops-operator.v1.7.2
The cluster role binding for the Argo CD instance:
$ cat 02_argocd_rbac.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
# role-based access control binding for Argo CD permissions
metadata:
  name: argocd-rbac-ca
subjects:
  - kind: ServiceAccount
    # tied to the Argo CD service account
    name: openshift-gitops-argocd-application-controller
    # since we are using Applications, we bind the Argo CD application controller
    namespace: openshift-gitops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
  # in this example we use cluster-admin, but we can grant lesser permissions if needed
We are using Kustomize to package the manifests because it helps with debugging and lets us reuse code across multiple environments; we will rely on this in subsequent articles (see the overlay sketch after the listing below). We have a kustomization.yaml file containing references to all the other files:
$ cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- 00_argocd_namespace.yaml
- 01_argocd_subscription.yaml
- 02_argocd_rbac.yaml
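As a hint of how Kustomize enables reuse across environments, a hypothetical overlay could reference a folder like this one as its base and only override what differs per environment (the directory layout below is an assumption for illustration, not part of this repository):
# overlays/dev/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
# environment-specific overrides go here, for example:
commonLabels:
  environment: dev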
Now that we understand the files, we will proceed with installing the OpenShift GitOps operator and Argo CD instance. Use the Kustomize flag, -k (or --kustomize), as follows:
$ oc apply -k bootstrap/argocd
As we can see in the following output, the previous command creates three resources for us: the namespace, the cluster role binding, and the subscription that installs the operator.
namespace/openshift-gitops created
clusterrolebinding.rbac.authorization.k8s.io/argocd-rbac-ca created
subscription.operators.coreos.com/openshift-gitops-operator created
Now we will verify that the OpenShift GitOps operator and Argo CD components are installed by checking the status of the pods with the following command:
$ watch oc get pods -n openshift-gitops
The output:
NAME READY STATUS RESTARTS AGE
cluster-54f5bcdc85-5vs57 1/1 Running 0 4m5s
kam-7d7bfc8675-xmcdg 1/1 Running 0 4m4s
openshift-gitops-application-controller-0 1/1 Running 0 4m2s
openshift-gitops-applicationset-controller-5cf7bb9dbc-8qtzl 1/1 Running 0 4m2s
openshift-gitops-dex-server-6575c69849-knt27 1/1 Running 0 4m2s
openshift-gitops-redis-bb656787d-wglb7 1/1 Running 0 4m2s
openshift-gitops-repo-server-54c7998dbf-tglrc 1/1 Running 0 4m2s
openshift-gitops-server-786849cbb8-l9npk 1/1 Running 0 4m2s
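We can also confirm that the operator itself finished installing by checking its ClusterServiceVersion; its phase should eventually read Succeeded:
$ oc get csv -n openshift-operators | grep gitops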
Of course, we can also check how the installation is going in the GUI by accessing the OpenShift web console and the Operator Hub (Figure 1):
Now that the operator is installed, retrieve the URL of the Argo CD GUI by using the following command:
$ oc get route openshift-gitops-server -n openshift-gitops --template='https://{{.spec.host}}'
Copy the following URL and paste it into the browser to access the Argo CD login page.
https://openshift-gitops-server-openshift-gitops.apps.cluster-lvn9g.lvn9g.sandbox1571.opentlc.com
Click the Log in via OpenShift button.
Then enter your OpenShift credentials.
Authorize Argo CD to access your user information.
Now that we have granted the permissions to our user, we can see the Argo CD interface. It's time to learn how to deploy applications with it.
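If you prefer the terminal, you can also log in with the Argo CD CLI. On recent OpenShift GitOps versions, the local admin password is typically stored in the openshift-gitops-cluster secret; this may vary with your setup:
$ ARGOCD_HOST=$(oc get route openshift-gitops-server -n openshift-gitops --template='{{.spec.host}}')
$ ARGOCD_PASSWORD=$(oc extract secret/openshift-gitops-cluster -n openshift-gitops --keys=admin.password --to=-)
$ argocd login "$ARGOCD_HOST" --username admin --password "$ARGOCD_PASSWORD" --insecure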
Deploying single sign-on for a dev environment
For this part, we are going to learn how to automatically install all the components needed to run single sign-on (SSO) using GitOps and a Git repository as the source of truth.
We are going to install a namespace-contained installation of SSO, using the operator to deploy a managed SSO and PostgreSQL instance. Let us take a look at the files first:
$ cd ../../resources/01_rhsso-dev
As before, we have the namespace YAML:
$ cat 00_namespace_rhsso-dev.yaml
Again we create the namespace, but this time for SSO:
---
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "0"
    # now that we are deploying with Argo CD, we can specify the order of deployment
  name: rhsso-dev
spec: {}
$ cat 01_rhsso-operator_resourcegroups.yaml
Operator group for SSO:
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "1"
    # sync wave 1 since we need to have the namespace first
  name: rhsso-dev
  namespace: rhsso-dev
spec:
  targetNamespaces:
    - rhsso-dev
$ cat 02_rhsso-operator.yaml
Now we will create a subscription for the installation of the operator.
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "2"
    # sync wave 2 since we need the Namespace and OperatorGroup, and so on
  name: rhsso-operator
  namespace: rhsso-dev
spec:
  channel: stable
  installPlanApproval: Automatic
  name: rhsso-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: rhsso-operator.7.6.1-opr-005
$ cat 03_deploy_rhsso-dev.yaml
This file contains the Keycloak custom resource that tells the operator how to deploy the product:
---
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "3"
  name: rhsso-dev
  labels:
    app: rhsso-dev
  namespace: rhsso-dev
spec:
  multiAvailablityZones:
    enabled: true
    # we add this flag for an HA deployment; we need more than one pod for this
  externalAccess:
    enabled: true
  keycloakDeploymentSpec:
    imagePullPolicy: Always
  postgresDeploymentSpec:
    imagePullPolicy: Always
  instances: 2
  # we set a minimum of 2 pods for HA
  storageClassName: gp2
$ cat kustomization.yaml
This is the Kustomization file that contains all files we will use for this installation.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonAnnotations:
  argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
  # this annotation skips the dry run for resource types that do not exist yet
resources:
- 00_namespace_rhsso-dev.yaml
- 01_rhsso-operator_resourcegroups.yaml
- 02_rhsso-operator.yaml
- 03_deploy_rhsso-dev.yaml
Now that we understand the files that we are going to install, we have to take a look at the Argo CD application that is going to deploy and sync them:
$ cd ../../bootstrap/deploy/application/01_rhsso-dev
$ cat 01_rh-sso.yaml
Here we can take a look at a regular Argo CD application; we will deploy it in the same namespace as Argo CD and point it at our Git repository:
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "0"
  name: rhsso-dev
  namespace: openshift-gitops
spec:
  destination:
    name: ''
    namespace: openshift-gitops
    server: 'https://kubernetes.default.svc'
  source:
    path: resources/01_rhsso-dev
    # we specify the folder for the files
    repoURL: 'https://github.com/ignaciolago/keycloak-gitops.git'
    # the repository url
    targetRevision: HEAD
    # and the branch, in this case HEAD / main
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - PruneLast=true
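As a side note, the automated sync policy with pruning and self-healing defined above can also be toggled later on an existing application from the Argo CD CLI, for example:
$ argocd app set rhsso-dev --sync-policy automated --auto-prune --self-heal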
The kustomize file:
$ cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonAnnotations:
  argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
resources:
- 01_rh-sso.yaml
Now we are going to apply all these files to start the installation.
$ oc apply -k bootstrap/deploy/application/01_rhsso-dev
application.argoproj.io/rhsso-dev created
We have to wait for all pods to be in a running state.
$ watch oc get pods -n rhsso-dev
NAME READY STATUS RESTARTS AGE
keycloak-0 0/1 Init:0/1 0 4s
keycloak-postgresql-59f5b79f4b-bbck4 0/1 Pending 0 4s
rhsso-operator-7d8f749748-hc888 1/1 Running 0 6m56s
After a few minutes, all the pods reach the Running state:
NAME READY STATUS RESTARTS AGE
keycloak-0 1/1 Running 0 4m41s
keycloak-1 1/1 Running 0 3m40s
keycloak-postgresql-59f5b79f4b-bbck4 1/1 Running 0 5m30s
rhsso-operator-7d8f749748-hc888 1/1 Running 0 7m20s
We can track the process and status updates of the components, as shown in Figure 2.
When they are all in sync, the status page will appear as in Figure 3.
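If you would rather follow the synchronization from the terminal, the Argo CD CLI offers an equivalent view (assuming you logged in with argocd login earlier):
$ argocd app get rhsso-dev
$ argocd app wait rhsso-dev --sync --health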
Once the installation is ready, retrieve the single sign-on route as follows:
$ oc get route keycloak -n rhsso-dev --template='https://{{.spec.host}}'
https://keycloak-rhsso-dev.apps.cluster-lvn9g.lvn9g.sandbox1571.opentlc.com
We can access the GUI by entering this URL in the browser to verify that it is working (Figure 4).
We need the credentials in order to gain access. To retrieve them, we can use either the oc CLI or the OpenShift GUI to find the secret and decode it. The secret is called credential-rhsso-dev.
Tip: The second part of the name will depend on the name of the instance.
Retrieve the credentials from the GUI
Figure 5 shows the credentials secret details on OpenShift.
Retrieve the credentials from the oc CLI
We can retrieve the credentials from the oc CLI as follows:
$ oc get secret credential-rhsso-dev -n rhsso-dev -o jsonpath="{.data['ADMIN_PASSWORD']}" | base64 -d
n3XViKMKmg_ZFw==
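The same secret also holds the admin username (ADMIN_USERNAME). A convenient way to print all keys at once, assuming the same secret name, is oc extract:
$ oc extract secret/credential-rhsso-dev -n rhsso-dev --to=-
# prints each key (ADMIN_USERNAME, ADMIN_PASSWORD) followed by its decoded value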
We will use the credentials to authenticate. After logging in, we can check that everything is working (Figure 6).
Congratulations, you have successfully deployed a single sign-on using Argo CD!
Deploy single sign-on for the production environment
We cannot use a namespace-contained deployment for production, because the database would run inside a pod. That would be a single point of failure (SPOF), and we cannot allow it. Instead, we will learn how to deploy using an external (and hopefully highly available) database.
First, we need to add the following lines to our Keycloak resource:
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata: {}
spec:
  multiAvailablityZones:
    enabled: true
  externalDatabase: ## ADD THIS LINE
    enabled: true ## ADD THIS LINE
Take a look at the resources folder and see the files for a production deployment.
The namespace, operator group, and operator subscription are the same as before, and we can check this by running cat again:
$ cd ../../../../resources/02_rhsso-prod
$ cat 00_namespace_rhsso-prod.yaml 01_rhsso-operator_resourcegroups.yaml 02_rh-sso-operator.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "0"
  name: rhsso-prod
spec: {}
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "1"
  name: rhsso-prod
  namespace: rhsso-prod
spec:
  targetNamespaces:
    - rhsso-prod
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "2"
  name: rhsso-operator
  namespace: rhsso-prod
spec:
  channel: stable
  installPlanApproval: Automatic
  name: rhsso-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: rhsso-operator.7.6.1-opr-005
But we have one new file, 01_secret_rhsso-prod-database.yaml. This file is a secret containing the credentials for the external database.
$ cat 01_secret_rhsso-prod-database.yaml
---
apiVersion: v1
kind: Secret
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "1"
  name: keycloak-db-secret
  namespace: rhsso-prod
stringData:
  POSTGRES_DATABASE: "pgsql-rhsso-prod"
  POSTGRES_USERNAME: "pgsql-admin"
  POSTGRES_PASSWORD: "pgsql-password"
  POSTGRES_EXTERNAL_ADDRESS: "pgsql-database-url"
  POSTGRES_EXTERNAL_PORT: "5432"
  # of course, replace these placeholder values with real ones
And here are the new lines in our 03_deploy_rhsso-prod.yaml:
$ cat 03_deploy_rhsso-prod.yaml
---
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "3"
  name: rhsso-prod
  labels:
    app: rhsso-prod
  namespace: rhsso-prod
spec:
  multiAvailablityZones:
    enabled: true
  externalDatabase:
    # this is the new block
    enabled: true
  keycloakDeploymentSpec:
    imagePullPolicy: Always
  postgresDeploymentSpec:
    imagePullPolicy: Always
  instances: 2
  storageClassName: gp2
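The article does not show the corresponding Argo CD Application for production. A minimal sketch, mirroring the dev Application and assuming the same repository layout, would look like the following; apply it with oc apply -k, as we did for dev:
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rhsso-prod
  namespace: openshift-gitops
spec:
  destination:
    namespace: openshift-gitops
    server: 'https://kubernetes.default.svc'
  source:
    path: resources/02_rhsso-prod
    repoURL: 'https://github.com/ignaciolago/keycloak-gitops.git'
    targetRevision: HEAD
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - PruneLast=true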
We can get the logs for one of the pods and check if it is running by using the command line or the GUI. We can see in the following log that it is up.
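From the command line, for example (the exact pod name may differ in your cluster):
$ oc logs keycloak-0 -n rhsso-prod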
22:46:33,951 INFO [org.hibernate.annotations.common.Version] (ServerService Thread Pool -- 82) HCANN000001: Hibernate Commons Annotations {5.0.5.Final-redhat-00002}
22:46:34,070 INFO [org.hibernate.dialect.Dialect] (ServerService Thread Pool -- 82) HHH000400: Using dialect: org.hibernate.dialect.PostgreSQL95Dialect
22:46:34,099 INFO [org.hibernate.engine.jdbc.env.internal.LobCreatorBuilderImpl] (ServerService Thread Pool -- 82) HHH000424: Disabling contextual LOB creation as createClob() method threw error : java.lang.reflect.InvocationTargetException
22:46:34,102 INFO [org.hibernate.type.BasicTypeRegistry] (ServerService Thread Pool -- 82) HHH000270: Type registration [java.util.UUID] overrides previous : org.hibernate.type.UUIDBinaryType@33731349
22:46:34,106 INFO [org.hibernate.envers.boot.internal.EnversServiceImpl] (ServerService Thread Pool -- 82) Envers integration enabled? : true
22:46:34,301 INFO [org.hibernate.orm.beans] (ServerService Thread Pool -- 82) HHH10005002: No explicit CDI BeanManager reference was passed to Hibernate, but CDI is available on the Hibernate ClassLoader.
22:46:34,890 INFO [org.hibernate.validator.internal.util.Version] (ServerService Thread Pool -- 82) HV000001: Hibernate Validator 6.0.23.Final-redhat-00001
22:46:35,634 INFO [org.hibernate.hql.internal.QueryTranslatorFactoryInitiator] (ServerService Thread Pool -- 82) HHH000397: Using ASTQueryTranslatorFactory
22:46:36,014 INFO [org.keycloak.services] (ServerService Thread Pool -- 82) KC-SERVICES0050: Initializing master realm
22:46:36,744 INFO [org.keycloak.services] (ServerService Thread Pool -- 82) KC-SERVICES0006: Importing users from '/opt/eap/standalone/configuration/keycloak-add-user.json'
22:46:37,010 INFO [org.keycloak.services] (ServerService Thread Pool -- 82) KC-SERVICES0009: Added user 'admin' to realm 'master'
Even if we try to change or delete any of the components, such as the single sign-on instance, the operator, or the namespace, Argo CD will redeploy them according to the state defined in our Git repository.
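For example, because the Application we created uses selfHeal and prune, deleting the Keycloak custom resource should only be a temporary state; Argo CD will recreate it on its next reconciliation. A quick way to see this in a dev cluster:
$ oc delete keycloak rhsso-dev -n rhsso-dev
$ oc get keycloak -n rhsso-dev -w
# within moments, Argo CD re-creates the rhsso-dev resource from Git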
Deploying single sign-on is complete
We have demonstrated how we can leverage Argo CD to not only deploy and define the state of our single sign-on instance, but also manage the state without any external interference or input. This can be done in a dev and production environment using a GitOps approach.
If you have questions, comment below. We welcome your feedback.