OpenShift APIs for Data Protection (OADP) is an operator that facilitates the backup and restore of workloads in Red Hat OpenShift clusters. Based on the upstream open source project Velero, it allows you to back up and restore all Kubernetes resources for a given project, including persistent volumes.
The underlying mechanism within OADP that enables the backup and restore of persistent volumes is Restic, Kopia, CSI snapshots, or the CSI Data Mover. Backups are incremental by default.
This guide demonstrates a basic use case of OADP by simulating a disaster recovery scenario: we back up a simple database application running in a namespace, delete the namespace, and then restore the application from the backup.
What this guide is
- A simple demonstration in a home lab (with single node OpenShift version 4.16) using Red Hat OpenShift Data Foundation, connected to an external Ceph cluster.
- A demonstration of a very basic use-case of OADP.
What this guide is not
- An exploration of more sophisticated namespaces with different types of storage.
- A tutorial on how to install single node OpenShift, or OpenShift Data Foundation.
- A tutorial that factors in appropriate SSL/TLS configuration.
- A guide to using OADP via the Velero command-line interface (CLI).
Prerequisites
Prior to installing OADP, ensure you have the following:
- OpenShift cluster configured (we are using single node OpenShift v4.16 for this demo).
- oc (OpenShift client) CLI.
- An S3-compatible bucket for storing the resources (we are using OpenShift Data Foundation connected to external Ceph for this demo, which includes NooBaa, providing S3-compatible object bucket storage).
- CSI-compatible storage.
Installation of OADP operator
Log in to your OpenShift console, and from the Operator menu, select OperatorHub. Search for OADP and choose the Operator from Red Hat, similar to what is shown in Figure 1.
Once installed, you should see something similar to this (Figure 2).
Set up a default backing store
OADP requires a backing store in which to keep the backed-up data. As we have OpenShift Data Foundation set up with NooBaa, we will create an S3-compatible object store.
We use the following YAML file to create our object bucket:
$ cat > ./oadp_noobaa_objectbucket.yaml << EOF
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  labels:
    app: noobaa
    bucket-provisioner: openshift-storage.noobaa.io-obc
    noobaa-domain: openshift-storage.noobaa.io
  name: mys3bucket
  namespace: openshift-adp
spec:
  additionalConfig:
    bucketclass: noobaa-default-bucket-class
  bucketName: mys3bucket-mydemo-10000000
  generateBucketName: mys3bucket
  objectBucketName: obc-openshift-adp-mys3bucket
  storageClassName: openshift-storage.noobaa.io
EOF
$ oc apply -f ./oadp_noobaa_objectbucket.yaml
objectbucketclaim.objectbucket.io/mys3bucket created
Verify the bucket is created:
$ oc get obc
NAME STORAGE-CLASS PHASE AGE
mys3bucket openshift-storage.noobaa.io Bound 25s
If the bucket is not bound yet, the STORAGE-CLASS field will not be populated.
Or using the NooBaa CLI:
$ noobaa obc list
NAMESPACE NAME BUCKET-NAME STORAGE-CLASS BUCKET-CLASS PHASE
openshift-adp mys3bucket mys3bucket-mydemo-10000000 openshift-storage.noobaa.io noobaa-default-bucket-class Bound
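Rather than re-running oc get by hand until the claim binds, the wait can be scripted. The helper below is a minimal sketch (the function name is ours, not part of OADP or NooBaa) that polls the claim's status until it reports Bound:

```shell
# wait_for_obc NAME [NAMESPACE]: poll an ObjectBucketClaim until its
# phase is Bound, or give up after roughly 60 seconds. Hypothetical helper.
wait_for_obc() {
  local name=$1 ns=${2:-openshift-adp} phase i
  for i in $(seq 1 30); do
    phase=$(oc get obc "$name" -n "$ns" -o jsonpath='{.status.phase}' 2>/dev/null)
    if [ "$phase" = "Bound" ]; then
      echo "OBC $name is Bound"
      return 0
    fi
    sleep 2
  done
  echo "timed out waiting for OBC $name (last phase: $phase)" >&2
  return 1
}
```

For example, wait_for_obc mys3bucket blocks until the bucket above is ready.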
In order to allow OADP to access this bucket, we need to extract its access key ID and secret access key.
Obtain the access key ID:
$ oc get secret mys3bucket -o json | jq -r .data.AWS_ACCESS_KEY_ID | base64 -d
mY_aCceS_kEy
Obtain the secret access key:
$ oc get secret mys3bucket -o json | jq -r .data.AWS_SECRET_ACCESS_KEY | base64 -d
mY_sEcRet_kEy
Create a file with these credentials in the following format:
$ cat << EOF > ./credentials-velero
[default]
aws_access_key_id=mY_aCceS_kEy
aws_secret_access_key=mY_sEcRet_kEy
EOF
Create the credentials based on the file above:
$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
secret/cloud-credentials created
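The extract-and-write steps above can also be scripted end to end. This is a sketch with a hypothetical helper name; it reads both keys from the bucket's secret using JSONPath (equivalent to the jq commands above) and writes the credentials file:

```shell
# make_velero_credentials SECRET [NAMESPACE] [OUTFILE]: build a
# credentials-velero file from an ObjectBucketClaim's secret.
# Hypothetical helper, not part of OADP.
make_velero_credentials() {
  local secret=$1 ns=${2:-openshift-adp} out=${3:-./credentials-velero}
  local id key
  id=$(oc get secret "$secret" -n "$ns" -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
  key=$(oc get secret "$secret" -n "$ns" -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
  cat > "$out" << CREDS
[default]
aws_access_key_id=${id}
aws_secret_access_key=${key}
CREDS
}
```

After running make_velero_credentials mys3bucket, create the cloud-credentials secret with the same oc create secret generic command shown above.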
Create the default backing store based on the YAML below:
$ cat > ./mys3backuplocation.yaml << EOF
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  namespace: openshift-adp
  name: ts-dpa
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
      - csi
  backupLocations:
  - name: default
    velero:
      provider: aws
      objectStorage:
        bucket: mys3bucket-mydemo-10000000
        prefix: velero
      config:
        profile: default # should be same as the cloud-credentials/cloud
        region: noobaa
        s3ForcePathStyle: "true"
        # from oc get route -n openshift-storage s3
        s3Url: https://s3-openshift-storage.apps.sno1.local.momolab.io
        insecureSkipTLSVerify: "true"
      credential:
        name: cloud-credentials
        key: cloud
      default: true
EOF
$ oc apply -f ./mys3backuplocation.yaml
dataprotectionapplication.oadp.openshift.io/ts-dpa created
Once created, you should be able to verify the configuration was applied successfully:
$ oc get dpa
NAME AGE
ts-dpa 34s
Check the DPA status:
$ oc describe dpa ts-dpa | grep -A 5 "Status"
Status:
Conditions:
Last Transition Time: 2024-09-16T23:15:43Z
Message: Reconcile complete
Reason: Complete
Status: True
Type: Reconciled
Events: <none>
Afterwards, you should be able to see:
$ oc get bsl
NAME PHASE LAST VALIDATED AGE DEFAULT
default Available 17s 24s true
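If the backup storage location does not show Available, the usual culprits are wrong credentials, a wrong s3Url, or TLS issues. A quick scripted check (the helper name is ours) reads the phase directly from the CR:

```shell
# bsl_phase [NAME]: print the phase of a Velero BackupStorageLocation
# (Available or Unavailable). Hypothetical helper.
bsl_phase() {
  oc get backupstoragelocation "${1:-default}" -n openshift-adp \
    -o jsonpath='{.status.phase}'
}
```

For example, [ "$(bsl_phase default)" = "Available" ] && echo "BSL is ready" makes a convenient guard in automation.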
Create a basic application
Here, we create a simple to-do list database application using the following steps:
$ oc apply -f https://raw.githubusercontent.com/openshift/oadp-operator/master/tests/e2e/sample-applications/mysql-persistent/mysql-persistent.yaml
namespace/mysql-persistent created
serviceaccount/mysql-persistent-sa created
persistentvolumeclaim/mysql created
securitycontextconstraints.security.openshift.io/mysql-persistent-scc created
service/mysql created
deployment.apps/mysql created
service/todolist created
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
deploymentconfig.apps.openshift.io/todolist created
route.route.openshift.io/todolist-route created
Verify the application is created:
$ oc project mysql-persistent
Now using project "mysql-persistent" on server "https://api.sno1.local.momolab.io:6443".
$ oc get pods
NAME READY STATUS RESTARTS AGE
mysql-6d6c7fdb65-gwwbl 2/2 Running 0 12m
todolist-1-deploy 0/1 Completed 0 19m
todolist-1-t78k6 1/1 Running 0 19m
$ oc get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
mysql Bound pvc-fb0571b9-7999-459b-8b62-32803a9595a4 1Gi RWO ocs-external-storagecluster-ceph-rbd <unset> 19m
Add a few tasks, as shown in Figure 3.
Create a backup custom resource
Use the following YAML to create a backup CR:
$ cat > ./backup_mysql-persistent.yaml << EOF
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-mysql-persistent
  labels:
    velero.io/storage-location: default
  namespace: openshift-adp
spec:
  hooks: {}
  includedNamespaces:
  - mysql-persistent
  includedResources: []
  excludedResources: []
  storageLocation: default
  ttl: 720h0m0s
EOF
$ oc create -f backup_mysql-persistent.yaml
backup.velero.io/backup-mysql-persistent created
Check the CR:
$ oc get backups -n openshift-adp
NAME AGE
backup-mysql-persistent 30s
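The Backup CR's .status.phase moves from InProgress to Completed (or Failed/PartiallyFailed). A small polling sketch, using the backup name from above and a hypothetical helper name:

```shell
# wait_for_backup NAME: poll a Velero Backup in openshift-adp until it
# reaches a terminal phase; print the final phase. Hypothetical helper.
wait_for_backup() {
  local name=$1 phase i
  for i in $(seq 1 60); do
    phase=$(oc get backup "$name" -n openshift-adp -o jsonpath='{.status.phase}')
    case "$phase" in
      Completed|Failed|PartiallyFailed)
        echo "$phase"
        return 0
        ;;
    esac
    sleep 5
  done
  echo "timed out (last phase: $phase)" >&2
  return 1
}
```

For example, wait_for_backup backup-mysql-persistent prints Completed once the backup finishes.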
Verify that the backup succeeded:
$ oc describe backup -n openshift-adp backup-mysql-persistent
Name: backup-mysql-persistent
Namespace: openshift-adp
Labels: velero.io/storage-location=default
Annotations: velero.io/resource-timeout: 10m0s
velero.io/source-cluster-k8s-gitversion: v1.29.7+4510e9c
velero.io/source-cluster-k8s-major-version: 1
velero.io/source-cluster-k8s-minor-version: 29
API Version: velero.io/v1
Kind: Backup
Metadata:
Creation Timestamp: 2024-09-18T23:21:41Z
Generation: 6
Resource Version: 3043558
UID: 47ee04bf-3570-480b-971e-ca006821b370
Spec:
Csi Snapshot Timeout: 10m0s
Default Volumes To Fs Backup: false
Excluded Resources:
Hooks:
Included Namespaces:
mysql-persistent
Included Resources:
Item Operation Timeout: 4h0m0s
Snapshot Move Data: false
Storage Location: default
Ttl: 720h0m0s
Status:
Backup Item Operations Attempted: 1
Backup Item Operations Completed: 1
Completion Timestamp: 2024-09-18T23:21:55Z
Csi Volume Snapshots Attempted: 1
Csi Volume Snapshots Completed: 1
Expiration: 2024-10-18T23:21:41Z
Format Version: 1.1.0
Hook Status:
Phase: Completed
Progress:
Items Backed Up: 64
Total Items: 64
Start Timestamp: 2024-09-18T23:21:41Z
Version: 1
Events: <none>
The Phase of Completed, with all 64 items backed up and the CSI volume snapshot completed, indicates the backup was successful.
Verify data on NooBaa S3 bucket
To verify the content, we need to download and install the AWS CLI utility and provide it with the NooBaa details, as follows (if you have SSL problems, use http instead of https for the endpoint below):
$ cat > ./setup_alias.sh << 'EOF'
export NOOBAA_S3_ENDPOINT=https://s3-openshift-storage.apps.sno1.local.momolab.io
export NOOBAA_ACCESS_KEY=mY_aCceS_kEy
export NOOBAA_SECRET_KEY=mY_sEcRet_kEy
alias s3='AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint $NOOBAA_S3_ENDPOINT --no-verify-ssl s3'
EOF
Note that the heredoc delimiter is quoted ('EOF') so the shell writes the $NOOBAA_* references into the file literally instead of expanding them now.
$ source ./setup_alias.sh
Show the contents of the bucket (we skip SSL verification here, but ideally set up proper certificates):
$ s3 ls s3://mys3bucket-mydemo-10000000/ 2>/dev/null
PRE velero/
Note
2>/dev/null was added to the end of the command to hide a warning from the Python urllib3 module about certificates, for clarity purposes. See this documentation for further details.
The velero/ prefix (displayed as PRE velero/ by the AWS CLI) is where all the backups are stored.
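Velero lays its objects out under the configured prefix, as velero/backups/<backup-name>/ in this bucket. Assuming the NOOBAA_* variables from setup_alias.sh are set, you can drill into a specific backup with a helper like this (the function name is ours):

```shell
# list_backup_objects BUCKET BACKUP: list everything Velero stored for
# one backup. Assumes the NOOBAA_* variables exported by setup_alias.sh.
# Hypothetical helper; mirrors the s3 alias defined above.
list_backup_objects() {
  local bucket=$1 backup=$2
  AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY \
  AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY \
  aws --endpoint "$NOOBAA_S3_ENDPOINT" --no-verify-ssl \
    s3 ls --recursive "s3://${bucket}/velero/backups/${backup}/" 2>/dev/null
}
```

For example: list_backup_objects mys3bucket-mydemo-10000000 backup-mysql-persistent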
Test a restore
Delete the namespace mysql-persistent:
$ oc delete project mysql-persistent
project.project.openshift.io "mysql-persistent" deleted
Create the following restore CR:
$ cat > ./restore_mysql-persistent.yaml << EOF
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-mysql-persistent
  namespace: openshift-adp
spec:
  backupName: backup-mysql-persistent
EOF
$ oc create -f restore_mysql-persistent.yaml
restore.velero.io/restore-mysql-persistent created
Check the status of the restore CR:
$ oc get restore -n openshift-adp
NAME AGE
restore-mysql-persistent 20s
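Instead of re-running oc get, you can block until the restore reaches Completed using oc wait with a JSONPath condition (supported by recent oc releases, including the 4.16 client; the wrapper name is ours):

```shell
# wait_for_restore NAME: block until the Velero Restore's phase is
# Completed, or fail after 5 minutes. Hypothetical wrapper around `oc wait`.
wait_for_restore() {
  oc wait "restore/$1" -n openshift-adp \
    --for=jsonpath='{.status.phase}'=Completed --timeout=300s
}
```

For example: wait_for_restore restore-mysql-persistent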
Verify restore was successful:
$ oc describe restore -n openshift-adp restore-mysql-persistent
Name: restore-mysql-persistent
Namespace: openshift-adp
Labels: <none>
Annotations: <none>
API Version: velero.io/v1
Kind: Restore
Metadata:
Creation Timestamp: 2024-09-18T23:24:26Z
Finalizers:
restores.velero.io/external-resources-finalizer
Generation: 8
Resource Version: 3044806
UID: 156fef20-d2e2-42b2-9542-544723f595f2
Spec:
Backup Name: backup-mysql-persistent
Excluded Resources:
nodes
events
events.events.k8s.io
backups.velero.io
restores.velero.io
resticrepositories.velero.io
csinodes.storage.k8s.io
volumeattachments.storage.k8s.io
backuprepositories.velero.io
Item Operation Timeout: 4h0m0s
Status:
Completion Timestamp: 2024-09-18T23:24:51Z
Hook Status:
Phase: Completed
Progress:
Items Restored: 32
Total Items: 32
Start Timestamp: 2024-09-18T23:24:26Z
Warnings: 7
Events: <none>
Wait for the pods to come up, and verify the data is preserved as expected, as shown in Figure 4.
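A scripted sanity check after the restore might wait for the MySQL Deployment rollout and confirm the PVC is bound again. This is a sketch; the mysql Deployment and PVC names come from the sample application deployed earlier, and the helper name is ours:

```shell
# verify_restore: confirm the restored sample workload is healthy by
# waiting for the mysql Deployment rollout, then printing the PVC phase.
# Hypothetical helper for the mysql-persistent sample app.
verify_restore() {
  oc rollout status deployment/mysql -n mysql-persistent --timeout=180s &&
  oc get pvc mysql -n mysql-persistent -o jsonpath='{.status.phase}'
}
```

A Bound PVC plus a completed rollout is a good signal before checking the to-do list data in the UI.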
Conclusion
Using OADP, we backed up a basic application deployed in a specific namespace by capturing its resources and persistent volumes. After verifying the backup, the entire namespace was deleted, including all associated data and configurations. With a few simple commands, we initiated a full restoration, successfully recovering the namespace, application, and its data, demonstrating the tool’s reliability in ensuring seamless, end-to-end recovery of OpenShift workloads.
Acknowledgment
This article wouldn’t be possible without the help of Red Hat’s OADP team.
Last updated: November 8, 2024