Overview: Deployment of Red Hat OpenShift Data Foundation using GitOps
What is OpenShift Data Foundation?
As cloud-native workloads continue to grow in complexity, the need for reliable, scalable, and integrated storage solutions becomes more critical. Red Hat OpenShift Data Foundation (ODF) is a software-defined storage solution that integrates seamlessly with Red Hat OpenShift to provide your applications with persistent storage, data resiliency, and disaster recovery. It can be installed in the cloud or on premises and enables automatic provisioning of block, file, and object storage. As with any OpenShift configuration, it is good practice to install ODF through an automated GitOps process, which helps maintain the desired state by continually comparing the cluster's configuration against the Git repository. That said, other installation methods, such as deploying ODF through the OpenShift web console, are fully supported and remain valid options.
In this learning path, we'll walk through how to deploy OpenShift Data Foundation using OpenShift GitOps to declaratively manage ODF installation and configuration. This method enhances repeatability, consistency, and auditability for Day 1 and Day 2 operations.
OpenShift Data Foundation is a unified and highly scalable data storage solution that integrates with OpenShift to support containerized applications. Built on top of proven upstream technologies such as Ceph, Rook, and NooBaa, ODF provides:
- Block, file, and object storage for applications
- Dynamic provisioning of persistent volumes (see the example claim after this list)
- Storage replication and encryption
- Multi-cloud gateway and disaster recovery options
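As a small illustration of the dynamic provisioning point above, an application requests a persistent volume simply by referencing one of the StorageClasses that ODF creates. The following is a minimal sketch that assumes an internal-mode deployment with its default block StorageClass, ocs-storagecluster-ceph-rbd, and a hypothetical application namespace:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  namespace: my-app                  # hypothetical application namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # Block (Ceph RBD) StorageClass typically created by an internal-mode ODF deployment
  storageClassName: ocs-storagecluster-ceph-rbd
```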
ODF can be used in both internal (converged) and external (disaggregated) deployments, depending on your storage architecture and performance needs.
This learning path will demonstrate how to deploy ODF in internal mode using GitOps workflows.
Why use GitOps for ODF Deployment?
Traditionally, deploying ODF involves navigating the OpenShift web console or CLI to create the necessary resources and configurations. With OpenShift GitOps, you can instead store all deployment manifests in Git and let Argo CD apply them to the cluster automatically (see the example Application after the following list). This approach offers several benefits:
- Version-controlled deployments
- Automated reconciliation
- Easier rollbacks and audits
- Multi-cluster consistency
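For example, the ODF manifests stored in Git can be wrapped in an Argo CD Application so that OpenShift GitOps keeps them reconciled. The following is a minimal sketch; the repository URL and path are hypothetical, and the OpenShift GitOps instance is assumed to run in the openshift-gitops namespace:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: openshift-data-foundation
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/odf-gitops.git   # hypothetical repository
    targetRevision: main
    path: components/odf                                  # hypothetical folder containing the ODF manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: openshift-storage
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```

With automated sync, pruning, and self-healing enabled, Argo CD continuously reconciles the cluster against the repository, which is what makes rollbacks and audits straightforward.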
Always remember the GitOps principle: If it is not in Git, it does not exist.
Prerequisites:
- An OpenShift 4.x cluster with three or more worker nodes. The examples use OpenShift 4.18.
- The OpenShift GitOps Operator installed, and the ability to deploy and configure Operators.
- Cluster-admin privileges.
- A Git repository to store your ODF manifests.
- Raw block storage attached to each ODF node, to be consumed by ODF through the Local Storage Operator.
- Alternatively, as this learning path does, use the gp3-csi StorageClass that is already available on AWS (see the example StorageCluster after this list).
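To illustrate the AWS variant, the following StorageCluster sketch consumes the gp3-csi StorageClass for its device sets. It assumes the ODF operator is already installed in the openshift-storage namespace, and the capacity and replica values are examples that must be adjusted to your requirements:

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
    - name: ocs-deviceset-gp3
      count: 1                        # number of device sets; example value
      replica: 3                      # one device per replica, spread across the worker nodes
      portable: true
      dataPVCTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 512Gi          # example capacity per device
          storageClassName: gp3-csi   # backing StorageClass already provided by AWS
          volumeMode: Block
```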
This learning path assumes that the Git repository, as well as the Git folder structure, is already defined. If you need help defining a setup that suits your needs, the GitOps Repository Structure Guide article provides further information.
Example repositories and charts
The following ready-to-go repositories can be verified and used:
In addition, the blog post Automating Operator Installations with Argo CD describes how to automate Operator deployments using GitOps. Because installing an Operator takes some time, it is recommended to also create a Job that repeatedly verifies whether the installation has succeeded; once it has, the GitOps workflow continues. This is an opinionated way to achieve this, and there are other options as well, such as simple sync retries.
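A minimal sketch of such a verification Job is shown below. It assumes the odf-operator Subscription is applied in an earlier sync wave and that a ServiceAccount with permission to read ClusterServiceVersions exists; the names, sync wave, and retry budget are illustrative only:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: odf-operator-readiness-check
  namespace: openshift-storage
  annotations:
    argocd.argoproj.io/hook: Sync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
    argocd.argoproj.io/sync-wave: "2"   # runs after the wave that creates the Subscription
spec:
  backoffLimit: 10
  template:
    spec:
      serviceAccountName: odf-operator-readiness-check   # assumed to have RBAC to read CSVs
      restartPolicy: Never
      containers:
        - name: check
          image: registry.redhat.io/openshift4/ose-cli:latest
          command:
            - /bin/bash
            - -c
            - |
              # Poll until the odf-operator ClusterServiceVersion reports the Succeeded phase.
              until oc get csv -n openshift-storage --no-headers 2>/dev/null \
                    | grep odf-operator | grep -q Succeeded; do
                echo "Waiting for the ODF operator installation to finish..."
                sleep 10
              done
              echo "ODF operator installed successfully."
```

Resources placed in later sync waves, such as the StorageCluster, are then applied only after this Job completes.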
In this learning path, you will:
- Learn to deploy OpenShift Data Foundation (ODF), the unified and highly scalable data storage solution for OpenShift.
- Implement an automated GitOps process for managing ODF's lifecycle, from installation to configuration.
- Utilize OpenShift GitOps (Argo CD) to declaratively manage the ODF environment.
- Ensure enhanced repeatability, consistency, and auditability across all your OpenShift Data Foundation deployments.