Gluster for OpenShift - Part 1: Container-Ready Storage

August 21, 2017
Davis Phillips
Related topics:
Containers, Developer Tools, Kubernetes
Related products:
Red Hat OpenShift Container Platform, Red Hat Enterprise Linux


    OpenShift Container Platform (OCP) offers many different types of persistent storage. Persistent storage ensures that data remains intact across builds and container migrations. When choosing a persistent storage backend, ensure that the backend supports the scaling, speed, dynamic provisioning, RWX/RWO access modes, and redundancy that the project requires. Container-Ready Storage (CRS), or native Gluster for OCP, is built around the concept of persistent volumes: OCP-created objects that allow storage to be defined and then used by pods for data persistence.

    Persistent volumes (PVs) are requested by using a persistent volume claim (PVC). When the claim is successfully fulfilled by the system, the persistent storage is mounted to a specific directory within one or more pods. This directory is referred to as the mountPath and is facilitated using a bind mount.
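    As a minimal sketch of how a pod consumes a claim (the pod, image, and claim names below are hypothetical), the claim is referenced in a volume definition and the mountPath names the directory that is bind-mounted inside the container:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app
    spec:
      containers:
      - name: app
        image: registry.access.redhat.com/rhel7/rhel-tools
        volumeMounts:
        - name: data
          mountPath: /var/lib/data        # directory bind-mounted from the persistent volume
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: example-claim        # hypothetical PVC created beforehand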

    The OpenShift Ansible contrib repository provides reference architectures for many platform providers, including AWS, Azure, GCE, OpenStack, RHEV, and VMware.

    The GitHub repository with playbooks and scripts to deploy OpenShift on VMware, as well as CRS, is located here:

    https://github.com/openshift/openshift-ansible-contrib/tree/master/reference-architecture/vmware-ansible.

    These playbooks and scripts will guide you from start to finish in deploying OCP on VMware vCenter utilizing Container-Ready Storage.
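    For example, assuming Git is installed locally, the repository can be cloned and the VMware reference architecture directory inspected:

    $ git clone https://github.com/openshift/openshift-ansible-contrib.git
    $ cd openshift-ansible-contrib/reference-architecture/vmware-ansible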

    Deploying Container-Ready Storage

    A Python script named add-node.py is provided in the openshift-ansible-contrib Git repository. When add-node.py is used with the --node_type=storage option, the following steps are fully automated (provided the variable "container_storage=cns" is set in the ocp-on-vmware.ini file); a sample invocation follows the list.

    1. Create three VMware virtual machines with 32 GB of memory and 2 vCPUs each.
    2. Register the new machines with Red Hat.
    3. Install the prerequisites for CRS for Gluster on each machine.
    4. Add a VMDK volume to each node as an available block device to be used for CRS.
    5. Create a heketi topology.json file using the virtual machine hostnames and the new VMDK device name.
    6. Install the heketi and heketi-cli packages on one of the CRS nodes.
    7. Copy the heketi public key to all CRS nodes.
    8. Modify the heketi.json file with the user-supplied admin and user passwords and the other configuration necessary for passwordless SSH to all CRS nodes.
    9. Deploy the new CRS cluster using heketi-cli and the topology.json file.
    10. Create the heketi-secret and a new StorageClass object for PVC creation.
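    A sample invocation is sketched below; it assumes the ocp-on-vmware.ini file has already been populated as described above (the exact options accepted by add-node.py are documented in the repository):

    $ grep container_storage ocp-on-vmware.ini
    container_storage=cns
    $ ./add-node.py --node_type=storage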

    Here is an example of what is automated for step 9 above: loading the CRS topology.json file to create a new CRS Trusted Storage Pool (TSP). This is done from the CRS node where heketi was deployed. The topology.json file is archived on this node for future modification, such as adding more storage devices or more storage nodes.

    $ cat topology.json
    {
        "clusters": [
            {
                "nodes": [
                    {
                        "devices": [
                            "/dev/sdd"
                        ],
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "ocp3-crs0.dpl.local"
                                ],
                                "storage": [
                                    "172.0.10.215"
                                ]
                            },
                            "zone": 1
                        }
                    },
                    {
                        "devices": [
                            "/dev/sdd"
                        ],
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "ocp3-crs1.dpl.local"
                                ],
                                "storage": [
                                    "172.0.10.216"
                                ]
                            },
                            "zone": 2
                        }
                    },
                    {
                        "devices": [
                            "/dev/sdd"
                        ],
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "ocp3-crs2.dpl.local"
                                ],
                                "storage": [
                                    "172.0.10.217"
                                ]
                            },
                            "zone": 3
                        }
                    }
                ]
            }
        ]
    }

    Now export the heketi environment values and load the topology.json file:

    $ export HEKETI_CLI_SERVER=http://ocp3-crs-0.dpl.local:8080
    $ export HEKETI_CLI_USER=admin
    $ export HEKETI_CLI_KEY=myS3cr3tpassw0rd
    $ heketi-cli topology load --json=topology.json
    Creating cluster ... ID: bb802020a9c2c5df45f42075412c8c05
    	Creating node ocp3-crs-0.dpl.local ... ID: b45d38a349218b8a0bab7123e004264b
    				Adding device /dev/sdd ... OK
    	Creating node ocp3-crs-1.dpl.local ... ID: 2b3b30efdbc3855a115d7eb8fdc800fe
    				Adding device /dev/sdd ... OK
    	Creating node ocp3-crs-2.dpl.local ... ID: c7d366ae7bd61f4b69de7143873e8999
    				Adding device /dev/sdd ... OK
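    As a quick sanity check, heketi-cli can list the new cluster and print the loaded topology (a hedged sketch; the exact output depends on the environment):

    $ heketi-cli cluster list
    $ heketi-cli topology info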

    Creating Heketi secret and CRS StorageClass OCP objects

    For step 10 above, OCP allows the use of secrets so that sensitive items do not need to be stored in clear text. The admin password for heketi, specified during configuration of the heketi.json file, should be stored base64-encoded. OCP can then refer to this secret instead of specifying the password in clear text.

    $ echo -n myS3cr3tpassw0rd | base64
    bXlTM2NyM3RwYXNzdzByZA==
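    The string can be decoded to confirm it round-trips correctly (GNU coreutils shown):

    $ echo -n bXlTM2NyM3RwYXNzdzByZA== | base64 -d
    myS3cr3tpassw0rd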

    On the master, or on a workstation with the OCP client installed and cluster-admin privileges, use the base64 password string in the following YAML to define the secret in OCP’s default namespace.

    $ cat heketi-secret.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: heketi-secret
      namespace: default
    data:
      key: bXlTM2NyM3RwYXNzdzByZA==
    type: kubernetes.io/glusterfs

    Create the secret by using the following OCP CLI command.

    $ oc create -f heketi-secret.yaml
    secret "heketi-secret" created

    A StorageClass object requires certain parameters to be defined to successfully create the resource. Use the values of the exported environment variables from the previous steps to define the resturl, restuser, secretNamespace, and secretName. The key benefit of using a StorageClass object is that the persistent storage can be created with access modes ReadWriteOnce (RWO), ReadOnlyMany (ROX), or ReadWriteMany (RWX).

    $ cat storageclass.yaml
    apiVersion: storage.k8s.io/v1beta1
    kind: StorageClass
    metadata:
      name: crs-gluster
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://ocp3-crs-0.dpl.local:8080"
      restauthenabled: "true"
      restuser: "admin"
      secretNamespace: "default"
      secretName: "heketi-secret"

    Once the StorageClass YAML file has been created, use the oc create command to create the object in OpenShift.

    $ oc create -f storageclass.yaml

    To validate that the StorageClass object was created, run the following:

    $ oc get storageclass
    NAME          TYPE
    crs-gluster   kubernetes.io/glusterfs
    
    $ oc describe storageclass crs-gluster
    Name:		crs-gluster
    IsDefaultClass:	No
    Annotations:	<none>
    Provisioner:	kubernetes.io/glusterfs
    Parameters:	restauthenabled=true,resturl=http://ocp3-crs-0.dpl.local:8080,restuser=admin,secretName=heketi-secret,secretNamespace=default
    No events.

    Creating a Dynamic Persistent Volume Claim (PVC)

    The StorageClass created in the previous section allows storage to be dynamically provisioned using the CRS resources. The example below shows a dynamically provisioned Gluster volume being requested from the crs-gluster StorageClass object. A sample persistent volume claim is provided below:

    $ vi db-claim.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
     name: db
     annotations:
       volume.beta.kubernetes.io/storage-class: crs-gluster
    spec:
     accessModes:
      - ReadWriteOnce
     resources:
       requests:
         storage: 10Gi
    
    $ oc create -f db-claim.yaml
    persistentvolumeclaim "db" created
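    Once dynamic provisioning completes, the claim should report a status of Bound along with the name of the generated volume, which can be checked with:

    $ oc get pvc db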

    Dynamic PV claims can also be made by configuring OpenShift templates with the desired StorageClass object name. The example below shows how to modify the default OpenShift mysql-persistent template file. If no StorageClass object name is specified, the default StorageClass is used if one has been defined. Also, the default size for the PVC in this template is 1 GB, so make sure to increase it if a larger volume is needed.

    $ oc export template/mysql-persistent -n openshift -o yaml > mysql-persistent.yaml
    $ cat mysql-persistent.yaml
    ....omitted....
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: ${DATABASE_SERVICE_NAME}
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: ${VOLUME_CAPACITY}
    ....omitted....

    Modify the template with the desired StorageClass object name to create a dynamic PVC (Gluster volume) for the mountPath /var/lib/mysql/data.

    $ vim mysql-persistent.yaml
    ....omitted....
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: ${DATABASE_SERVICE_NAME}
        annotations:
          volume.beta.kubernetes.io/storage-class: crs-gluster
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: ${VOLUME_CAPACITY}
    ....omitted....
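    With the annotation in place, the modified template can be instantiated with oc process, overriding VOLUME_CAPACITY if a claim larger than the default is needed (the parameter value below is illustrative):

    $ oc process -f mysql-persistent.yaml -p VOLUME_CAPACITY=10Gi | oc create -f -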

    ... Gluster for OpenShift - Part 2: Container-Native Storage coming soon!


    Deploying a Red Hat OpenShift Container Platform 3 on VMware vCenter 6, Utilizing Gluster Container-Native Storage: This reference architecture describes how to deploy and manage Red Hat OpenShift Container Platform on VMware vCenter utilizing Gluster for persistent storage.

    Last updated: November 2, 2023
