Note: This article describes the functionality found in the Red Hat Container Development Kit 3.0 Beta. Features and functionality may change in future versions.
In a prior article, Adding Persistent Storage to the Container Development Kit 3.0, an overview was provided of utilizing persistent storage with the Red Hat Container Development Kit 3.0, the Minishift-based solution for running the OpenShift Container Platform from a single developer machine. In that solution, persistent storage was applied to the environment by pre-allocating folders and assigning Persistent Volumes to those directories using the HostPath volume plugin. While this approach provided an initial entry point into using persistent storage within the CDK, several issues limited its flexibility:
- Directories had to be created manually on the file system to store files persistently.
- Persistent Volumes had to be created manually and associated with the previously created directories (a sketch of such a hand-crafted volume follows below).
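For illustration, the manual approach required hand-crafting a HostPath Persistent Volume similar to the following sketch (the name and path here are only examples, not taken from the prior article):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001                        # example name
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /var/lib/minishift/pv0001   # pre-allocated directory on the host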
The primary theme in these limitations is the manual creation of resources associated with storage. Fortunately, OpenShift has a solution that can automate the allocation of storage resources using a storage plugin common in many environments.
Starting in OpenShift version 3.4, new functionality allowed persistent storage to be dynamically requested and created. All of this was made possible using Storage Classes. A StorageClass describes and classifies the specific type of storage and can be used to provide the necessary parameters to enable dynamic provisioning.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
For example, a cluster administrator can define a StorageClass named “fast” that makes use of higher-quality backend storage and another StorageClass called “slow” that provides commodity-grade storage. When requesting storage, an end user can specify a PersistentVolumeClaim with an annotation called volume.beta.kubernetes.io/storage-class, whose value names the StorageClass they would like to use.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "fast"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
An excellent overview of the functionality of StorageClasses and Dynamic Provisioning can be found in this blog article.
The link between the StorageClass and the dynamic creation of a Persistent Volume is the provisioner. A provisioner contains the logic to bridge requests made against OpenShift (such as a newly created PersistentVolumeClaim) and communication with the backend storage. However, the majority of the included provisioners target cloud-based environments such as OpenStack, Amazon Web Services or Google Compute Engine. Since the CDK is running on a local developer’s machine and does not make use of cloud storage, an alternate solution must be utilized.
Even though there are already a number of included and supported provisioners, the Kubernetes community recognized a need to provide alternative options. The term “in-tree” provisioner was coined to refer to the set of included provisioners, leaving open the option of creating “out-of-tree” provisioners that target alternative backends.
NFS (Network File System) is one of the most popular storage plugins for OpenShift and is found in most enterprise environments. Because of this, an NFS provisioner was created as one of the first “out-of-tree” provisioners to support dynamic provisioning against this storage type. While there are several approaches for deploying the NFS provisioner, the most common approach, and the one that aligns best with the CDK, is to deploy the provisioner as a pod within OpenShift. An external storage project within the Kubernetes Incubator organization contains the content for the NFS provisioner.
Now, let’s walk through the process of adding the NFS provisioner to the CDK. The full set of steps to deploy the NFS provisioner to the CDK or a standalone OpenShift cluster can be found here.
First, since the host within the CDK ultimately serves file system storage, several packages must be installed. One of the benefits of the CDK is that it includes a fully subscribed instance of Red Hat Enterprise Linux. Assuming the CDK is already running, execute the following to invoke an SSH command to install the required packages and configure a few SELinux booleans related to NFS access to the file system:
$ echo " sudo yum install -y nfs-utils nfs-utils-lib sudo setsebool -P virt_use_nfs 1 sudo setsebool -P virt_sandbox_use_nfs 1 " | minishift ssh
With the underlying file system prerequisites complete, the OpenShift components can be created.
First, create a new project for the NFS provisioner:
$ oc new-project nfs-provisioner
Next, create a new service account that will be used to run the provisioner:
$ oc create serviceaccount nfs-provisioner
Since the provisioner provides a cluster service, additional permissions are required. A new Security Context Constraint must be created that grants access to the HostPath storage type along with the Discretionary Access Control modifications needed to read file attributes.
$ oc create -f https://raw.githubusercontent.com/raffaelespazzoli/openshift-enablement-exam/master/misc/nfs-dp/nfs-provisioner-scc.yaml
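For reference, the key fields of such an SCC look roughly like the following. This is a hedged sketch rather than the exact contents of the linked file:

kind: SecurityContextConstraints
apiVersion: v1
metadata:
  name: nfs-provisioner
allowHostDirVolumePlugin: true   # permits HostPath volumes
allowedCapabilities:
- DAC_READ_SEARCH                # bypass DAC checks to read file attributes
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny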
Add the newly created SCC to the nfs-provisioner service account created previously:
$ oc adm policy add-scc-to-user nfs-provisioner -z nfs-provisioner
Add several cluster roles to the service account:
$ oc adm policy add-cluster-role-to-user cluster-reader system:serviceaccount:nfs-provisioner:nfs-provisioner
$ oc adm policy add-cluster-role-to-user system:pv-provisioner-controller system:serviceaccount:nfs-provisioner:nfs-provisioner
$ oc adm policy add-cluster-role-to-user system:pv-binder-controller system:serviceaccount:nfs-provisioner:nfs-provisioner
$ oc adm policy add-cluster-role-to-user system:pv-recycler-controller system:serviceaccount:nfs-provisioner:nfs-provisioner
Now, deploy the NFS provisioner:
$ oc create -f https://raw.githubusercontent.com/raffaelespazzoli/openshift-enablement-exam/master/misc/nfs-dp/nfs-provisioner-dc-cdk.yaml
The image will be deployed and configured to manage new requests for storage. Storage directories will be created on the CDK host machine in the /var/lib/minishift/export directory, which survives restarts of the CDK.
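To confirm that the export directory has been created, a command can be passed directly to minishift ssh (assuming the CDK's minishift binary accepts a trailing command, which recent versions do):

$ minishift ssh -- ls /var/lib/minishift/export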
You can monitor the pod status by executing the oc get pods command.
As mentioned previously, the key to dynamic provisioning is the StorageClass, along with the name of the provisioner that ultimately manages the lifecycle of the Persistent Volume. The newly deployed NFS provisioner registered itself under the provisioner name “local-pod/nfs”, so this value needs to be specified within the StorageClass definition. The following is an example of a StorageClass called “local-pod”:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: local-pod
provisioner: local-pod/nfs
This StorageClass can be added to OpenShift by executing the following command:
$ oc create -f https://raw.githubusercontent.com/raffaelespazzoli/openshift-enablement-exam/master/misc/nfs-dp/nfs-provisioner-class.yaml
To simplify the interaction between the user and the dynamic provisioning facility, a default storage class can be configured within OpenShift by adding the storageclass.beta.kubernetes.io/is-default-class: "true" annotation.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: local-pod-default
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: local-pod/nfs
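As an aside, an existing StorageClass can also be promoted to the default after the fact with oc patch. This variant is not part of the original walkthrough, but the annotation it sets is the same one shown above:

$ oc patch storageclass local-pod -p '{"metadata": {"annotations": {"storageclass.beta.kubernetes.io/is-default-class": "true"}}}'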
Once a default StorageClass has been defined, a user is no longer required to specify the name of the storage class within their PersistentVolumeClaim.
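For example, a claim as minimal as the following (the name is illustrative) omits the storage class annotation entirely and is still dynamically provisioned by the default class:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi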
Execute the following command to add the default storage class to the CDK:
$ oc create -f https://raw.githubusercontent.com/raffaelespazzoli/openshift-enablement-exam/master/misc/nfs-dp/nfs-provisioner-class-default.yaml
With the NFS provisioner running and the necessary storage classes configured, a new project can be created along with an application that makes use of persistent storage to validate the functionality of the dynamic storage provisioning process.
First, create a new project:
$ oc new-project dynamic-storage-app
Instantiate the jenkins-persistent template that is provided by default in the CDK:
$ oc new-app --template=jenkins-persistent
Since the template contains a PersistentVolumeClaim, the NFS provisioner will automatically create a new PersistentVolume.
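The claim embedded in the template resembles the following simplified sketch (the jenkins name and 1Gi size match the output shown below; other template fields are omitted). Because no storage class annotation is present, the default class satisfies it:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jenkins
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi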
Execute the following command to confirm a PersistentVolume is bound to the PersistentVolumeClaim:
$ oc get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
jenkins   Bound     pvc-a0df54c0-2228-11e7-9c1c-080027d51fa0   1Gi        RWO           29m
The parameters for the PersistentVolume are based on the configuration inside the PersistentVolumeClaim. The NFS provisioner also appends additional metadata to the PersistentVolume. Export the PersistentVolume to view the details:
$ oc export pv pvc-a0df54c0-2228-11e7-9c1c-080027d51fa0
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    EXPORT_block: "\nEXPORT\n{\n\tExport_Id = 1;\n\tPath = /export/pvc-a0df54c0-2228-11e7-9c1c-080027d51fa0;\n\tPseudo = /export/pvc-a0df54c0-2228-11e7-9c1c-080027d51fa0;\n\tAccess_Type = RW;\n\tSquash = no_root_squash;\n\tSecType = sys;\n\tFilesystem_id = 1.1;\n\tFSAL {\n\t\tName = VFS;\n\t}\n}\n"
    Export_Id: "1"
    Project_Id: "0"
    Project_block: ""
    Provisioner_Id: 4e51aaa2-2224-11e7-9e9f-0242ac110003
    kubernetes.io/createdby: nfs-dynamic-provisioner
    pv.kubernetes.io/provisioned-by: local-pod/nfs
    volume.beta.kubernetes.io/storage-class: local-pod-default
  creationTimestamp: null
  name: pvc-a0df54c0-2228-11e7-9c1c-080027d51fa0
The underlying storage can be viewed by browsing the /var/lib/minishift/export directory of the CDK host:
$ minishift ssh
$ cd /var/lib/minishift/export
$ ls -l
-rw-------.  1 root root   36 Apr 15 17:41 nfs-provisioner.identity
drwxrwsrwx. 15 root root 4096 Apr 15 18:18 pvc-a0df54c0-2228-11e7-9c1c-080027d51fa0
-rw-------.  1 root root  902 Apr 15 18:12 vfs.conf
The name of the directory matches the name of the PersistentVolume and contains the data stored by the Jenkins instance.
The directory will be retained as long as the PersistentVolumeClaim exists. As soon as the claim is deleted, the PersistentVolume and the underlying storage directory will also be removed.
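For example, removing the Jenkins claim would trigger this cleanup (the commands are shown for illustration; the jenkins claim name comes from the earlier oc get pvc output):

$ oc delete pvc jenkins
$ oc get pv

Afterward, oc get pv no longer lists the dynamically created volume, and the matching pvc- directory under /var/lib/minishift/export is gone.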
The /var/lib/minishift/export directory was specifically chosen to store the contents of the persistent volumes because /var/lib/minishift is one of only a few directories that are persisted between restarts of the CDK. The changes made at the operating system level (yum packages and SELinux booleans), however, will be lost if the CDK is restarted. If a restart occurs, rerunning the yum and SELinux commands described previously will allow existing Persistent Volumes to be mounted by the deployed applications once again.
The use of dynamic storage streamlines how persistent storage can be utilized within the Red Hat Container Development Kit. It eliminates the manual directory and volume management and accelerates how applications can be developed and utilized within OpenShift.
The Red Hat Container Development Kit is available for download, and you can read more at the Red Hat Container Development Kit website.