Deployment of Red Hat OpenShift Data Foundation using GitOps

Deploy Red Hat OpenShift Data Foundation (ODF), a unified data storage solution for OpenShift, using GitOps. Manage installation declaratively with Argo CD for enhanced consistency and auditability.


Next, we will create a StorageCluster Custom Resource (CR). It defines the specific configuration, such as the number of replicas and the size and type of backing storage, that the installed ODF Operator uses to provision the actual OpenShift Data Foundation storage services.

In this lesson, you will:

  • Create a StorageCluster Custom Resource (CR).

Create a StorageCluster

Once the operator is installed, the StorageCluster resource can be created. This object defines the whole configuration of the storage environment you are going to install, whether it's object storage only or a full-fledged storage environment providing all types of storage.

The ODF Helm chart tries to provide all the useful options. In the end, however, the right configuration depends on your individual requirements and environment.

Warning

Selecting the appropriate sizing of your storage environment can be quite complex. The ODF Sizer may help in finding the right settings.
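
If you install these objects through the ODF Helm chart mentioned above, the tunables discussed below are typically exposed as chart values. The following is only a sketch of what such a values file might look like; the key names are assumptions rather than the schema of any particular chart, so check them against the chart you actually use:

    # Hypothetical values.yaml sketch for an ODF Helm chart.
    # All key names are assumptions; consult your chart's documented values.
    storageCluster:
      resourceProfile: balanced        # lean, balanced, or performance
      deviceSets:
        - name: ocs-deviceset
          count: 1                     # devices per set
          replica: 3                   # number of replicas
          size: 2Ti                    # capacity per device
          storageClassName: gp3-csi    # backing StorageClass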

  1. Let's create a minimum example for a full installation:

    apiVersion: ocs.openshift.io/v1
    kind: StorageCluster
    metadata:
      name: ocs-storagecluster
      namespace: openshift-storage
      annotations:
        argocd.argoproj.io/sync-wave: "3" # <1>
        argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
    spec:
      manageNodes: false
      monDataDirHostPath: /var/lib/rook
      storageDeviceSets:
        - name: ocs-deviceset # <2>
          count: 1 # <3>
          dataPVCTemplate:
            spec:
              accessModes: # <4>
                - ReadWriteOnce
              resources:
                requests:
                  storage: 2Ti # <5>
              storageClassName: gp3-csi # <6>
              volumeMode: Block # <7>
          replica: 3 # <8>
      resourceProfile: balanced # <9>
     1. The Argo CD sync wave is "3" here, instead of "0" as for the other objects.

     2. Name of the storage device set.

     3. Number of devices in each StorageClassDeviceSet; verify this setting with the ODF Sizer.

     4. Default access mode.

     5. Size of the storage. Might be 512Gi, 2Ti, 4Ti ...

     6. StorageClass on top of which ODF is virtualized. This class must exist already!

     7. Defines what type of volume is required by the claim.

     8. Number of replicas (see the capacity note after this list).

     9. Resource profile. Can either be lean, balanced (default), or performance.

  2. As an alternative, if you want to use object storage only, you can use the following example:

    apiVersion: ocs.openshift.io/v1
    kind: StorageCluster
    metadata:
      name: ocs-storagecluster
      namespace: openshift-storage
      annotations:
        argocd.argoproj.io/sync-wave: "3"
        argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
    spec:
      multiCloudGateway: # <1>
        dbStorageClassName: gp3-csi
        reconcileStrategy: standalone
     1. Creates a standalone MultiCloudGateway, that is, object storage only (see the ObjectBucketClaim sketch below).
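
Once the MultiCloudGateway is running, workloads typically consume it by requesting a bucket through an ObjectBucketClaim. The following is only a minimal sketch: the claim name and namespace are made up, and it assumes the openshift-storage.noobaa.io StorageClass that the MultiCloudGateway normally provides for bucket claims:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: my-bucket-claim                 # hypothetical name
      namespace: my-app                     # hypothetical namespace
    spec:
      generateBucketName: my-bucket         # prefix for the generated bucket name
      storageClassName: openshift-storage.noobaa.io # assumed MCG bucket class

When the claim is bound, the bucket provisioner typically creates a ConfigMap and a Secret with the same name as the claim, containing the S3 endpoint and credentials for the application.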
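As a rough capacity check for the full StorageCluster example above: with count: 1, replica: 3, and a 2Ti device size, ODF requests three 2Ti PVCs, about 6Ti of raw capacity, which yields roughly 2Ti of usable capacity under 3-way replication. Use the ODF Sizer mentioned earlier to validate these numbers for your environment.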