In this article, you will learn how to deploy Red Hat’s single sign-on technology 7.4 on Red Hat OpenShift 4. For this integration, we'll use a PostgreSQL database whose persistent storage is provided by an external Network File System (NFS) server partition.
Persistent storage for PostgreSQL is not a hard requirement, but its advantage is that the data is preserved across single sign-on or PostgreSQL pod restarts.
Prerequisites
To follow the instructions in this article, you will need the following components in your development environment:
- An OpenShift 4 or higher cluster with leader and follower nodes.
- Red Hat’s single sign-on template.
- Red Hat OpenShift Data Foundation.
Note: For the purposes of this article, I will refer to leader and follower nodes, although the code output uses the terminology of master and worker nodes.
The OpenShift storage configuration described in this article is Persistent storage using NFS, because that is the configuration used in our lab to test this setup. OpenShift also provides many other persistent storage configurations and block storage services, but those are beyond the scope of this article.
Create persistent storage for an OpenShift 4.5 cluster
In this section, we'll walk through the steps to set up persistent storage on OpenShift 4.5 with an NFS server.
Set up the external NFS server
NFS allows remote hosts to mount file systems over a network and interact with them as though they are mounted locally. This lets system administrators consolidate resources on centralized servers on the network. For an introduction to NFS concepts and fundamentals, see the introduction to NFS in the Red Hat Enterprise Linux 7 documentation.
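The exact server setup depends on your environment, but as a rough sketch on a Red Hat Enterprise Linux host, exporting the directory used later in this article might look like the following. The client subnet (192.168.0.0/24) is an assumption; replace it with the network your OpenShift nodes use.
# On the NFS server: create the directory to export (path matches the one used later in this article)
$ sudo mkdir -p /persistent_volume1
# Add an export entry; the client subnet is an assumption, adjust it to your cluster network
$ echo '/persistent_volume1 192.168.0.0/24(rw,sync)' | sudo tee -a /etc/exports
# Start the NFS server and apply the exports
$ sudo systemctl enable --now nfs-server
$ sudo exportfs -rav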
Map OpenShift persistent storage to NFS
To access an NFS partition from an OpenShift cluster's follower (worker) nodes, you must manually map persistent storage to the NFS partition. In this section, you will do the following:
- Get a list of nodes.
- Access a follower node:
- Ping the NFS server.
- Mount the exported NFS server partition.
- Verify that the file is present on the NFS server.
Get a list of nodes
To get a list of the nodes, enter:
$ oc get nodes
The list of requested nodes displays as follows:
NAME                   STATUS   ROLES    AGE   VERSION
master-0.example.com   Ready    master   81m   v1.18.3+2cf11e2
worker-0.example.com   Ready    worker   72m   v1.18.3+2cf11e2
worker-1.example.com   Ready    worker   72m   v1.18.3+2cf11e2
Access the follower node
To access the follower node, use the oc debug node command and then chroot /host, as shown here:
$ oc debug node/worker-0.example.com
Starting pod/worker-example.com-debug ...
Run chroot /host before issuing further commands:
sh-4.2# chroot /host
Ping the NFS server
Next, ping the NFS server from the follower node in debug mode:
sh-4.2# ping node-0.nfserver1.example.com
Mount the NFS partition
Now, mount the NFS partition from the follower node (still in debug mode):
sh-4.2# mount node-0.nfserver1.example.com:/persistent_volume1 /mnt
Verify that the file is present on the NFS server
Create a dummy file from the follower node in debug mode:
sh-4.2# touch /mnt/test.txt
Verify that the file is present on the NFS server:
$ cd /persistent_volume1
$ ls -al
total 0
drwxrwxrwx.  2 root      root       22 Sep 23 09:31 .
dr-xr-xr-x. 19 root      root      276 Sep 23 08:37 ..
-rw-r--r--.  1 nfsnobody nfsnobody   0 Sep 23 09:31 test.txt
Note: You must issue the same command sequence for every follower node in the cluster.
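Rather than repeating the debug session by hand on each node, you can script the same checks. The following is a minimal sketch; it assumes the NFS server name and export path used above and that oc debug can run the commands non-interactively in your environment:
# Run the NFS mount test on every follower (worker) node
for n in $(oc get nodes -l node-role.kubernetes.io/worker -o name); do
  oc debug "$n" -- chroot /host sh -c \
    'mount node-0.nfserver1.example.com:/persistent_volume1 /mnt && touch /mnt/test.txt && umount /mnt'
done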
Persistent volume storage
The previous section showed how to define and mount an NFS partition. Now, you'll use the NFS partition to define and map an OpenShift persistent volume. The steps are as follows:
- Make the persistent volume writable on the NFS server.
- Map the persistent volume to the NFS partition.
- Create the persistent volume.
Make the persistent volume writable
Make the persistent volume writable on the NFS server:
$ chmod 777 /persistent_volume1
Map the persistent volume to the NFS partition
Define a storage class and specify it as the default storage class. For example, the following YAML defines a StorageClass named slow:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
Next, make the storage class the default class:
$ oc create -f slow_sc.yaml
$ oc patch storageclass slow -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
Note: The StorageClass is common to all namespaces.
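To verify the change, you can list the storage classes; after the patch, slow should be flagged as the default next to its name:
# The default storage class is marked (default) in the NAME column
$ oc get storageclass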
Create the persistent volume
You can create a persistent volume either from the OpenShift admin console or from a YAML file, as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  nfs:
    path: /persistent_volume2
    server: node-0.nfserver1.example.com

$ oc create -f pv.yaml
persistentvolume/example created

$ oc get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
example   5Gi        RWO            Retain           Available           slow                    5s
Deploy SSO on the OpenShift cluster
Next, you'll deploy Red Hat’s single sign-on technology on the OpenShift cluster. The steps are as follows:
- Create a new project.
- Download the sso-74 templates.
- Customize the sso74-ocp4-x509-postgresql-persistent template.
Create a new project
Create a new project using the oc new-project command:
$ oc new-project sso-74
Import the OpenShift image for Red Hat’s single sign-on technology 7.4:
$ oc -n openshift import-image rh-sso-7/sso74-openshift-rhel8:7.4 --from=registry.redhat.io/rh-sso-7/sso74-openshift-rhel8:7.4 --confirm
Note: If you need to delete and re-create the SSO project, first delete the secrets, which are project-specific.
Note: The command (oc -n openshift) needs to be run by a privileged cluster user (cluster administrator) or by a user with write access to the openshift namespace; otherwise, it won’t work.
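Optionally, you can confirm that the image stream is now available in the openshift namespace:
# List the imported image stream (assumes the import-image command above completed successfully)
$ oc -n openshift get imagestream sso74-openshift-rhel8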
Download the sso-74 templates
Here is the list of available templates:
$ oc get templates -n openshift -o name | grep -o 'sso74.\+'
sso74-https
sso74-ocp4-x509-https
sso74-ocp4-x509-postgresql-persistent
sso74-postgresql
sso74-postgresql-persistent
Customize the sso74-ocp4-x509-postgresql-persistent template
Next, you'll customize the sso74-ocp4-x509-postgresql-persistent template to allow a TLS connection to the persistent PostgreSQL database:
$ oc process sso74-ocp4-x509-postgresql-persistent -n openshift SSO_ADMIN_USERNAME=admin SSO_ADMIN_PASSWORD=password -o yaml > my_sso74-x509-postgresql-persistent.yaml
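If you want to see the other parameters the template accepts before processing it, you can list them first; this command only inspects the template and changes nothing:
# Show the template's available parameters and their defaults
$ oc process --parameters -n openshift sso74-ocp4-x509-postgresql-persistent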
Manually control pod replica scheduling
To set the pod replica scheduling, change the replicas field in the definition of both the sso and sso-postgresql deployment configs. Within the updated template file, my_sso74-x509-postgresql-persistent.yaml, set the replicas for both sso and sso-postgresql to zero (0).
Setting the replicas to zero (0) within each deployment config lets you manually control the initial pod rollout. If that's not enough, you can also increase the initialDelaySeconds value for the liveness and readiness probes. Here is the updated deployment config for sso:
kind: DeploymentConfig
metadata:
  labels:
    application: sso
    rhsso: 7.4.2.GA
    template: sso74-x509-postgresql-persistent
  name: sso
spec:
  replicas: 0
  selector:
    deploymentConfig: sso
Here is the updated config for sso-postgresql:
metadata:
  labels:
    application: sso
    rhsso: 7.4.2.GA
    template: sso74-x509-postgresql-persistent
  name: sso-postgresql
spec:
  replicas: 0
  selector:
    deploymentConfig: sso-postgresql
Process the YAML template
Use the oc create command to process the YAML template:
$ oc create -f my_sso74-x509-postgresql-persistent.yaml
service/sso created
service/sso-postgresql created
service/sso-ping created
route.route.openshift.io/sso created
deploymentconfig.apps.openshift.io/sso created
deploymentconfig.apps.openshift.io/sso-postgresql created
persistentvolumeclaim/sso-postgresql-claim created
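The template also creates the sso-postgresql-claim persistent volume claim. Before scaling up the database, it is worth checking that the claim has bound to the NFS-backed persistent volume created earlier; if it stays in Pending, verify that the volume's capacity, access mode, and storage class match the claim:
# Check that the claim reports a Bound status
$ oc get pvc sso-postgresql-claim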
Upscale the sso-postgresql pod
Use the oc scale command to upscale the sso-postgresql pod:
$ oc scale --replicas=1 dc/sso-postgresql
Note: Wait until the PostgreSQL pod has reached a ready state of 1/1. This might take a couple of minutes.
$ oc get pods
NAME                      READY   STATUS      RESTARTS   AGE
sso-1-deploy              0/1     Completed   0          10m
sso-postgresql-1-deploy   0/1     Completed   0          10m
sso-postgresql-1-fzgf7    1/1     Running     0          3m46s
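Rather than polling oc get pods, you can also block until the database pod reports ready. This sketch assumes the pods carry the deploymentConfig=sso-postgresql label used by the template's selector:
# Wait up to five minutes for the PostgreSQL pod to become ready
$ oc wait --for=condition=Ready pod -l deploymentConfig=sso-postgresql --timeout=300s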
When the sso-postgresql pod starts correctly, it provides log output similar to the following:
pg_ctl -D /var/lib/pgsql/data/userdata -l logfile start
waiting for server to start....2020-09-25 15:13:01.579 UTC [37] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-09-25 15:13:01.588 UTC [37] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2020-09-25 15:13:01.631 UTC [37] LOG: redirecting log output to logging collector process
2020-09-25 15:13:01.631 UTC [37] HINT: Future log output will appear in directory "log".
 done
server started
/var/run/postgresql:5432 - accepting connections
=> sourcing /usr/share/container-scripts/postgresql/start/set_passwords.sh ...
ALTER ROLE
waiting for server to shut down.... done
server stopped
Starting server...
2020-09-25 15:13:06.147 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2020-09-25 15:13:06.147 UTC [1] LOG: listening on IPv6 address "::", port 5432
2020-09-25 15:13:06.157 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-09-25 15:13:06.164 UTC [1] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2020-09-25 15:13:06.206 UTC [1] LOG: redirecting log output to logging collector process
2020-09-25 15:13:06.206 UTC [1] HINT: Future log output will appear in directory "log".
Upscale the sso pod
Use the oc scale command to upscale the sso pod as follows:
$ oc scale --replicas=1 dc/sso
deploymentconfig.apps.openshift.io/sso
Next, use the oc get pods command to verify that the SSO pod is fully up and running. It reaches a ready state of 1/1, as shown:
$ oc get pods
NAME                      READY   STATUS      RESTARTS   AGE
sso-1-d45k2               1/1     Running     0          52m
sso-1-deploy              0/1     Completed   0          63m
sso-postgresql-1-deploy   0/1     Completed   0          63m
sso-postgresql-1-fzgf7    1/1     Running     0          57m
Testing
To test the deployment, run the oc status command:
$ oc status
In project sso-74 on server https://api.example.com:6443

svc/sso-ping (headless):8888

https://sso-sso-74.apps.example.com (reencrypt) (svc/sso)
  dc/sso deploys openshift/sso74-openshift-rhel8:7.4
    deployment #1 deployed about an hour ago - 1 pod

svc/sso-postgresql - 172.30.113.48:5432
  dc/sso-postgresql deploys openshift/postgresql:10
    deployment #1 deployed about an hour ago - 1 pod
Visit the URL https://sso-sso-74.apps.example.com to access the administrator console of Red Hat's single sign-on technology 7.4, running on OpenShift 4.5. Provide the single sign-on administrator username and password when prompted.
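The hostname varies per cluster. If you are not sure of the exact URL in your environment, you can read it from the sso route created by the template; the administrator credentials are the SSO_ADMIN_USERNAME and SSO_ADMIN_PASSWORD values passed to oc process earlier:
# Print the hostname assigned to the sso route
$ oc get route sso -o jsonpath='{.spec.host}'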
Conclusion
This article highlighted the basic steps for deploying Red Hat's single sign-on technology 7.4 on OpenShift. Deploying single sign-on on OpenShift makes OpenShift features such as horizontal scaling available out of the box: it is very easy to increase your workload capacity by adding new single sign-on pods to your OpenShift deployment.
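For instance, once the deployment is running, adding single sign-on pods is a single command (the replica count below is just an illustration):
# Scale the sso deployment config out to two pods
$ oc scale --replicas=2 dc/sso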