Deployment of Red Hat OpenShift Data Foundation using GitOps

Deploy Red Hat OpenShift Data Foundation (ODF), a unified data storage solution for OpenShift, using GitOps. Manage installation declaratively with Argo CD for enhanced consistency and auditability.

This end-to-end example demonstrates a complete ODF deployment using GitOps, configuring it to provide all storage types: block, file, and object storage, including an optional NFS feature.

In this lesson, you will:

  • Learn to deploy the full ODF using GitOps, enabling block, file, and object storage services, with options for configuring resource profiles and NFS.

View the deployment 

The second example shows a full deployment of ODF. Not only is object storage installed, but block and file storage are also available.

  1. We will use the same chart again for our configuration: the Argo CD repository for setting up OpenShift Data Foundation.

  2. This folder is automatically picked up to create an Argo CD Application. 
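
     For illustration, the Argo CD Application pointing at this folder might look like the following sketch. The repository URL, path, and names here are placeholder assumptions, not values taken from the chart:

     apiVersion: argoproj.io/v1alpha1
     kind: Application
     metadata:
       name: setup-odf                  # placeholder name
       namespace: openshift-gitops
     spec:
       project: default
       source:
         # Placeholder repository and path; point these at your own GitOps repository
         repoURL: https://github.com/example/gitops-config.git
         path: clusters/setup-odf
         targetRevision: main
       destination:
         server: https://kubernetes.default.svc
         namespace: openshift-storage
       syncPolicy:
         automated:
           prune: true
           selfHeal: true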

  3. The values file looks like the following:

    ###########################
    # Enable and configure ODF
    ###########################
    openshift-data-foundation:
     storagecluster:
       enabled: true
       syncwave: 3
    
       # There are two options:
        # Either install the MultiCloudGateway only. This is useful if, for example, you only need S3 for the Quay registry.
       multigateway_only:
         enabled: false # <1>
    
         # Name of the storageclass
         # The class must exist upfront and is currently not created by this chart.
         storageclass: gp3-csi
    
       # Second option is a full deployment, which will provide Block, File and Object Storage
       full_deployment:
         enabled: true # <2>
    
         # Enable NFS or not
         nfs: enabled # <3>
    
         # The label the nodes should have to allow hosting of ODF services
         # Default: cluster.ocs.openshift.io/openshift-storage
         # default_node_label: cluster.ocs.openshift.io/openshift-storage
    
          # -- Configure the performance profile. The following profiles are available: 
         # <ul>
          # <li>lean (24 CPU, 72 GiB): Use this in a resource-constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory.</li>
         # <li>balanced (30 CPU, 72 GiB, default): Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads.</li>
         # <li>performance (45 CPU, 96 GiB): Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads.</li>
         # </ul>
         # 
         # @default -- balanced
         resourceProfile: balanced # <4>
    
         # Define the DeviceSets
         storageDeviceSets: # <5>
             # Name of the DeviceSet
           - name: ocs-deviceset
             # Definitions of the PVC Template
             dataPVCTemplate:
               spec:
                 # Default AccessModes
                 # Default: ReadWriteOnce
                 # accessModes:
                 #   - ReadWriteOnce
    
                 # Size of the Storage. Might be 512Gi, 2Ti, 4Ti
                 # Default: 512Gi
                 resources:
                   requests:
                     storage: 512Gi
    
                 # Name of the storageclass
                 # The class must exist upfront and is currently not created by this chart.
                 storageClassName: gp3-csi
    
    # Install Operator Compliance Operator
    # Deploys Operator --> Subscription and Operatorgroup
    # Syncwave: 0
    helper-operator: # <6>
    
     console_plugins: # <7>
       enabled: true
       plugins:
         - odf-console
       job_namespace: kube-system
    
     operators:
       odf-operator: # <8>
         enabled: true
         syncwave: '0'
         namespace: # <9>
           name: openshift-storage
           create: true
         subscription: # <10>
           channel: stable-4.18
           approval: Automatic
           operatorName: odf-operator
           source: redhat-operators
           sourceNamespace: openshift-marketplace
          operatorgroup: # <11>
           create: true
    
    helper-status-checker: # <12>
     enabled: true
     # approver: false
    
     checks:
       - operatorName: odf-operator
         namespace:
           name: openshift-storage
         serviceAccount:
           name: "status-checker-odf"
  1. Disable the standalone MultiCloudGateway deployment. 

  2. Enable the full storage deployment. 

  3. Enable the NFS feature. 

  4. Resource profile. Can be either lean, balanced (default), or performance. 

  5. Settings for the storage DeviceSets. Compare with the section Step 2: Creating Custom Resource for Storage.

  6. helper-operator: Creates the namespace, subscription, and OperatorGroup. 

  7. Create a job that deploys the console plug-in of ODF. 

  8. Name of the operator subscription. 

  9. Namespace name and whether it will be created by this chart.  

  10. Subscription definition, such as channel and approval strategy. 

  11. Create the OperatorGroup.

  12. helper-status-checker: Enable the status check to verify whether the operator has been deployed successfully.
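
With the full deployment enabled, the chart ultimately renders a StorageCluster custom resource in the openshift-storage namespace. As a rough sketch of the result, the applied manifest resembles the following (simplified; the exact output depends on the chart version, and the count and replica values shown are illustrative, not taken from the chart):

    apiVersion: ocs.openshift.io/v1
    kind: StorageCluster
    metadata:
      name: ocs-storagecluster
      namespace: openshift-storage
    spec:
      resourceProfile: balanced
      nfs:
        enable: true
      storageDeviceSets:
        - name: ocs-deviceset
          count: 1        # illustrative; actual value comes from the chart
          replica: 3      # illustrative; actual value comes from the chart
          dataPVCTemplate:
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 512Gi
              storageClassName: gp3-csi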