End-to-end example #2 - Full Storage Deployment
This end-to-end example demonstrates a complete ODF deployment using GitOps, configuring it to provide all storage types: block, file, and object storage, including an optional NFS feature.
In this lesson, you will:
- Learn to deploy a full ODF installation using GitOps, enabling block, file, and object storage services, with options for configuring resource profiles and NFS.
View the deployment
The second example shows a full deployment of ODF. Not only is object storage installed, but block and file storage are also available.
We will use the same chart again for our configuration: Argo CD Repository for Setup OpenShift Data Foundation.
This folder will automatically be used for an Argo CD Application.
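For illustration, the Argo CD Application created for that folder might look roughly like the following sketch. The repository URL, path, and Application name are placeholders, not values from this lesson; in practice the repository's automation creates this object for you:

```yaml
# Hypothetical Argo CD Application deploying the ODF chart folder.
# repoURL, path, and name are placeholders - adjust to your GitOps repository.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: setup-odf
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-repo.git
    targetRevision: main
    path: clusters/management-cluster/setup-odf
    helm:
      valueFiles:
        - values.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: openshift-storage
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```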
The values file looks like the following:
```yaml
###########################
# Enable and configure ODF
###########################
openshift-data-foundation:
  storagecluster:
    enabled: true
    syncwave: 3

    # There are two options:
    # Either install the MultiCloudGateway only. This is useful if you just
    # need S3, for example for the Quay registry.
    multigateway_only:
      enabled: false # <1>

      # Name of the storageclass.
      # The class must exist upfront and is currently not created by this chart.
      storageclass: gp3-csi

    # Second option is a full deployment, which will provide block, file, and object storage.
    full_deployment:
      enabled: true # <2>

      # Enable NFS or not
      nfs:
        enabled: true # <3>

      # The label the nodes should have to allow hosting of ODF services.
      # Default: cluster.ocs.openshift.io/openshift-storage
      # default_node_label: cluster.ocs.openshift.io/openshift-storage

      # -- Configure performance. The following profiles are available:
      # <ul>
      # <li>lean (24 CPU, 72 GiB): Use this in a resource-constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory.</li>
      # <li>balanced (30 CPU, 72 GiB, default): Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads.</li>
      # <li>performance (45 CPU, 96 GiB): Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads.</li>
      # </ul>
      #
      # @default -- balanced
      resourceProfile: balanced # <4>

      # Define the DeviceSets
      storageDeviceSets: # <5>
        # Name of the DeviceSet
        - name: ocs-deviceset

          # Definition of the PVC template
          dataPVCTemplate:
            spec:
              # Default AccessModes
              # Default: ReadWriteOnce
              # accessModes:
              #   - ReadWriteOnce

              # Size of the storage. Might be 512Gi, 2Ti, 4Ti
              # Default: 512Gi
              resources:
                requests:
                  storage: 512Gi

              # Name of the storageclass.
              # The class must exist upfront and is currently not created by this chart.
              storageClassName: gp3-csi

# Install the ODF Operator
# Deploys Operator --> Subscription and OperatorGroup
# Syncwave: 0
helper-operator: # <6>
  console_plugins: # <7>
    enabled: true
    plugins:
      - odf-console
    job_namespace: kube-system
  operators:
    odf-operator: # <8>
      enabled: true
      syncwave: '0'
      namespace: # <9>
        name: openshift-storage
        create: true
      subscription: # <10>
        channel: stable-4.18
        approval: Automatic
        operatorName: odf-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
      operatorgroup: # <11>
        create: true

helper-status-checker: # <12>
  enabled: true
  # approver: false
  checks:
    - operatorName: odf-operator
      namespace:
        name: openshift-storage
      serviceAccount:
        name: "status-checker-odf"
```

1. Disable the standalone MultiCloudGateway deployment.
2. Enable the full storage deployment.
3. Enable the NFS feature.
4. Resource profile. Can be either lean, balanced (default), or performance.
5. Settings for the storage cluster. Compare with section Step 2: Creating Custom Resource for Storage.
6. helper-operator: Creates the namespace, subscription, and OperatorGroup.
7. Create a job that deploys the console plug-in of ODF.
8. Name of the subscription.
9. Namespace name and whether it will be created by this chart.
10. Subscription definition, such as channel and deployment strategy.
11. Create the OperatorGroup.
12. helper-status-checker: Enable the status check to verify that the operator has been deployed successfully.
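To make the connection to the StorageCluster custom resource concrete, the following is a rough sketch of what the chart renders from the values above. Field names follow the ocs.openshift.io/v1 API, but the exact output depends on the chart version, and count/replica are illustrative assumptions:

```yaml
# Approximate StorageCluster rendered from the values above (sketch, not literal chart output).
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  resourceProfile: balanced
  nfs:
    enable: true
  storageDeviceSets:
    - name: ocs-deviceset
      count: 1      # assumed value; the chart may expose this
      replica: 3    # assumed value; ODF typically replicates across three nodes
      dataPVCTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 512Gi
          storageClassName: gp3-csi
```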