Dynamic Persistent Storage Using the Red Hat Container Development Kit 3.0 Part 1

April 21, 2017
Andrew Block
Related topics:
Containers, Kubernetes
Related products:
Red Hat OpenShift Container Platform


    Note: This article describes the functionality found in the Red Hat Container Development Kit 3.0 Beta. Features and functionality may change in future versions.

    In a prior article, Adding Persistent Storage to the Container Development Kit 3.0, an overview was provided of how to utilize persistent storage with the Red Hat Container Development Kit 3.0, the Minishift-based solution for running the OpenShift Container Platform from a single developer machine. In that solution, persistent storage was applied to the environment by pre-allocating folders and assigning Persistent Volumes to the directories using the HostPath volume plugin. While this approach provided an initial entry point into how persistent storage could be utilized within the CDK, a number of issues limited its flexibility:

    • Directories had to be created manually on the file system to store files persistently.
    • Persistent Volumes had to be created manually and associated with the previously created directories.

    The common theme in these limitations is the manual creation of storage-related resources. Fortunately, OpenShift has a solution that automates the allocation of these resources using a storage plugin that is common in many environments.

    Starting in OpenShift version 3.4, new functionality allowed persistent storage to be dynamically requested and created. All of this was made possible using Storage Classes. A StorageClass describes and classifies the specific type of storage and can be used to provide the necessary parameters to enable dynamic provisioning.

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: standard
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2
    

    For example, a cluster administrator can define a StorageClass named “fast” that makes use of higher-quality backend storage, and another StorageClass called “slow” that provides commodity-grade storage. When requesting storage, an end user can annotate a PersistentVolumeClaim with volume.beta.kubernetes.io/storage-class, whose value specifies the StorageClass they would like to use.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: claim
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    

    An excellent overview of the functionality of StorageClasses and Dynamic Provisioning can be found in this blog article.

    The link between the StorageClass and the dynamic creation of a Persistent Volume is the provisioner. A provisioner contains the logic to bridge requests made against OpenShift (such as a newly created PersistentVolumeClaim) and communication with the backend storage. However, the majority of the included provisioners target cloud-based environments such as OpenStack, Amazon Web Services or Google Compute Engine. Since the CDK is running on a local developer’s machine and does not make use of cloud storage, an alternate solution must be utilized.

    Even though a number of provisioners are already included and supported, the Kubernetes community recognized a need for alternative options. The term “in-tree” provisioner was coined to refer to the set of included provisioners, while “out-of-tree” provisioners can be created to target alternative backends.

    NFS (Network File System) is one of the most popular storage plugins for OpenShift and is found in most enterprise environments. Because of this, an NFS provisioner was created as one of the first “out-of-tree” provisioners, providing support for dynamic provisioning that targets this storage type. While there are several approaches for deploying the NFS provisioner, the most common approach, and the one that aligns best with the CDK for dynamically providing storage, is to deploy the provisioner as a pod within OpenShift. The external-storage project within the Kubernetes Incubator organization contains the content for the NFS provisioner.

    Now, let’s walk through the process of adding the NFS provisioner to the CDK. The full set of steps to deploy the NFS provisioner to the CDK or a standalone OpenShift cluster can be found here.

    First, since the host within the CDK ultimately serves file system storage, several packages must be installed. One of the benefits of the CDK is that it includes a fully subscribed instance of Red Hat Enterprise Linux. Assuming the CDK is already running, execute the following to install the required packages over SSH and configure a few SELinux booleans related to NFS access to the file system:

    $ echo "
      sudo yum install -y nfs-utils nfs-utils-lib
      sudo setsebool -P virt_use_nfs 1
      sudo setsebool -P virt_sandbox_use_nfs 1
      " | minishift ssh
    

    With the underlying file system prerequisites complete, the OpenShift components can be created.

    First, create a new project for the NFS provisioner:

    $ oc new-project nfs-provisioner

    Next, create a new service account that will be used to run the provisioner:

    $ oc create serviceaccount nfs-provisioner

    Since the provisioner provides a cluster service, additional permissions are required. A new Security Context Constraint (SCC) must be created to provide access to the HostPath storage type, along with the Discretionary Access Control modifications needed to read file attributes.
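
    The actual SCC definition is provided in the file referenced below; conceptually, it resembles the following sketch, in which the field values are illustrative assumptions rather than the exact contents of that file:

    kind: SecurityContextConstraints
    apiVersion: v1
    metadata:
      name: nfs-provisioner
    allowHostDirVolumePlugin: true
    allowedCapabilities:
      - DAC_READ_SEARCH
    runAsUser:
      type: RunAsAny
    seLinuxContext:
      type: RunAsAny
    fsGroup:
      type: RunAsAny
    supplementalGroups:
      type: RunAsAny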

    $ oc create -f https://raw.githubusercontent.com/raffaelespazzoli/openshift-enablement-exam/master/misc/nfs-dp/nfs-provisioner-scc.yaml

    Add the newly created SCC to the nfs-provisioner service account created previously:

    $ oc adm policy add-scc-to-user nfs-provisioner -z nfs-provisioner

    Add several cluster roles to the service account:

    $ oc adm policy add-cluster-role-to-user cluster-reader system:serviceaccount:nfs-provisioner:nfs-provisioner
    $ oc adm policy add-cluster-role-to-user system:pv-provisioner-controller system:serviceaccount:nfs-provisioner:nfs-provisioner
    $ oc adm policy add-cluster-role-to-user system:pv-binder-controller system:serviceaccount:nfs-provisioner:nfs-provisioner
    $ oc adm policy add-cluster-role-to-user system:pv-recycler-controller system:serviceaccount:nfs-provisioner:nfs-provisioner
    

    Now, deploy the NFS provisioner:

    $ oc create -f https://raw.githubusercontent.com/raffaelespazzoli/openshift-enablement-exam/master/misc/nfs-dp/nfs-provisioner-dc-cdk.yaml

    The image will be deployed and configured to manage new requests for storage. Storage directories will be created on the CDK host machine in the /var/lib/minishift/export directory, which survives restarts of the CDK.

    You can monitor the pod status by executing the oc get pods command.
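
    For example, assuming the provisioner was deployed into the nfs-provisioner project created earlier, the following commands switch into that project and watch the pod until it reports a Running status:

    $ oc project nfs-provisioner
    $ oc get pods -w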

    As mentioned previously, the key to dynamic provisioning is the StorageClass along with the name of the provisioner that will ultimately manage the lifecycle of the Persistent Volume. The newly deployed NFS provisioner specified “local-pod/nfs” as its provisioner name, so this value needs to be specified within the StorageClass definition. The following is an example of a StorageClass called “local-pod”:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1beta1
    metadata:
      name: local-pod
    provisioner: local-pod/nfs
    

    This StorageClass can be added to OpenShift by executing the following command:

    $ oc create -f https://raw.githubusercontent.com/raffaelespazzoli/openshift-enablement-exam/master/misc/nfs-dp/nfs-provisioner-class.yaml
    

    To simplify the interaction between the user and the dynamic provisioning facility, a default storage class can be configured within OpenShift by adding the storageclass.beta.kubernetes.io/is-default-class: "true" annotation.

    kind: StorageClass
    apiVersion: storage.k8s.io/v1beta1
    metadata:
      name: local-pod-default
      annotations:
        storageclass.beta.kubernetes.io/is-default-class: "true"
    provisioner: local-pod/nfs
    

    Once a default StorageClass has been defined, a user is no longer required to specify the name of the storage class within their PersistentVolumeClaim.
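
    For example, a minimal claim such as the following (my-claim is a hypothetical name used purely for illustration) would be provisioned against the default StorageClass without referencing it explicitly:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: my-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi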

    Execute the following command to add the default storage class to the CDK:

    $ oc create -f https://raw.githubusercontent.com/raffaelespazzoli/openshift-enablement-exam/master/misc/nfs-dp/nfs-provisioner-class-default.yaml
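
    Both storage classes should now be registered with the cluster. As a quick check, they can be listed with:

    $ oc get storageclass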

    With the NFS provisioner running and the necessary storage classes configured, a new project can be created along with an application that makes use of persistent storage to validate the functionality of the dynamic storage provisioning process.

    First, create a new project:

    $ oc new-project dynamic-storage-app

    Instantiate the jenkins-persistent template that is provided by default in the CDK:

    $ oc new-app --template=jenkins-persistent

    Since the template contains a PersistentVolumeClaim, the NFS provisioner will automatically create a new PersistentVolume.
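
    If you would like to see the claim the template defines, you can inspect the template itself (assuming it resides in the openshift namespace, its default location in the CDK):

    $ oc get template jenkins-persistent -n openshift -o yaml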

    Execute the following command to confirm a PersistentVolume is bound to the PersistentVolumeClaim:

    $ oc get pvc
     
    NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
    jenkins   Bound     pvc-a0df54c0-2228-11e7-9c1c-080027d51fa0   1Gi        RWO           29m
    

    The parameters for the PersistentVolume are based on the configuration inside the PersistentVolumeClaim. The NFS provisioner also appends additional metadata to the PersistentVolume. Export the PersistentVolume to view the details:

    $ oc export pv pvc-a0df54c0-2228-11e7-9c1c-080027d51fa0

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      annotations:
        EXPORT_block: "\nEXPORT\n{\n\tExport_Id = 1;\n\tPath = /export/pvc-a0df54c0-2228-11e7-9c1c-080027d51fa0;\n\tPseudo
          = /export/pvc-a0df54c0-2228-11e7-9c1c-080027d51fa0;\n\tAccess_Type = RW;\n\tSquash
          = no_root_squash;\n\tSecType = sys;\n\tFilesystem_id = 1.1;\n\tFSAL {\n\t\tName
          = VFS;\n\t}\n}\n"
        Export_Id: "1"
        Project_Id: "0"
        Project_block: ""
        Provisioner_Id: 4e51aaa2-2224-11e7-9e9f-0242ac110003
        kubernetes.io/createdby: nfs-dynamic-provisioner
        pv.kubernetes.io/provisioned-by: local-pod/nfs
        volume.beta.kubernetes.io/storage-class: local-pod-default
      creationTimestamp: null
      name: pvc-a0df54c0-2228-11e7-9c1c-080027d51fa0
    

    The underlying storage can be viewed by browsing the /var/lib/minishift/export directory of the CDK host:

    $ minishift ssh
    $ cd /var/lib/minishift/export
    $ ls -l
    
    -rw-------.  1 root root   36 Apr 15 17:41 nfs-provisioner.identity
    drwxrwsrwx. 15 root root 4096 Apr 15 18:18 pvc-a0df54c0-2228-11e7-9c1c-080027d51fa0
    -rw-------.  1 root root  902 Apr 15 18:12 vfs.conf
    

    The name of the directory matches the name of the PersistentVolume and contains the data stored by the Jenkins instance.

    The directory will be retained as long as the PersistentVolumeClaim exists. As soon as the claim is deleted, the PersistentVolume and the underlying storage directory will also be removed.
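
    For example, once you are finished experimenting, deleting the Jenkins claim created above will cause the provisioner to remove the PersistentVolume and its exported directory:

    $ oc delete pvc jenkins -n dynamic-storage-app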

    The /var/lib/minishift/export directory was specifically chosen to store the contents of the persistent volumes because /var/lib/minishift is one of only a few directories that are persisted between restarts of the CDK. The changes made at the operating system level (yum packages and SELinux booleans), on the other hand, will be lost if the CDK is restarted. If a restart occurs, rerunning the yum and SELinux commands described previously will allow existing Persistent Volumes to be mounted by the deployed applications once again.

    Dynamic provisioning streamlines how persistent storage is utilized within the Red Hat Container Development Kit. It eliminates manual directory and volume management and accelerates how applications can be developed and utilized within OpenShift.


    The Red Hat Container Development Kit is available for download, and you can read more at the Red Hat Container Development Kit website.

    Last updated: April 3, 2023
