A persistent volume (PV) is a common way to preserve data in case a container is accidentally lost in Kubernetes or Red Hat OpenShift. This article shows you how to manage persistent volumes with the NFS Provisioner Operator that I developed.
Difficulties with PVs
In the early days of container-based development, each user had to ask an administrator to create a PV for that user's containers. Usually the administrator created 100 PVs in advance when the cluster was installed. The administrator also had to clean up the used PVs when they were released. Obviously, this process was inefficient and really burdensome.
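For context, a statically provisioned PV that an administrator had to create by hand looked something like the following sketch (the server address and path are hypothetical):

```yaml
# A hand-created PV an administrator had to prepare in advance
# (hypothetical server and path, shown only for illustration)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-0001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com   # hypothetical NFS server
    path: /exports/pv-0001
```

Multiply this by a hundred pre-created volumes, plus manual cleanup after each release, and the burden becomes clear.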
Dynamic provisioning using StorageClass was developed to solve this problem. With StorageClass, you no longer have to manually manage your PVs; a provisioner manages them for you. Sounds good, right?
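As a sketch, dynamic provisioning hinges on a PVC that names a StorageClass; the provisioner referenced by the class then creates a matching PV on demand. The names below are illustrative only:

```yaml
# A StorageClass delegates PV creation to a provisioner (illustrative names)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc
provisioner: example.com/nfs   # the provisioner that creates PVs on demand
reclaimPolicy: Delete
---
# A PVC that triggers dynamic provisioning by naming the class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  storageClassName: example-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

When the claim is created, the provisioner builds the PV and binds it to the claim, so no administrator has to pre-create anything.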
But the next question is how to set up the StorageClass on the cluster without cost. If you can afford it, the easiest way is to use Red Hat OpenShift Dedicated, which provides the default gp2 StorageClass. But it is not free.
Let's say you want to play around with an OpenShift cluster installed on your laptop using Red Hat CodeReady Containers. The environment is absolutely free and under your control. Wouldn't it be great if this cluster had a StorageClass? With such an environment, you could test most scenarios without charge.
The NFS Provisioner Operator is open source and available on OperatorHub.io, which means that it can be easily installed via OpenShift's OperatorHub menu. Internally, the Operator uses the Kubernetes NFS subdir external provisioner from kubernetes-sigs.
Set up persistent volumes anywhere
To start, you need an OpenShift cluster, version 4.9.15 or later, and you need to log in to the cluster as a user with the cluster-admin role.
Begin by installing the NFS Provisioner Operator:
# Login
oc login -u kubeadmin -p kubeadmin https://api.crc.testing:6443
# Create a new namespace
oc new-project nfsprovisioner-operator
# Deploy the NFS Provisioner Operator from the terminal (you can also use the OpenShift Console)
cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: nfs-provisioner-operator
  namespace: openshift-operators
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: nfs-provisioner-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF
Next, create a directory on the node and add a label to that node:
# Check nodes
oc get nodes
NAME                 STATUS   ROLES           AGE   VERSION
crc-8rwmc-master-0   Ready    master,worker   54d   v1.22.3+e790d7f
# Set Env variable for the target node name
export target_node=$(oc get node --no-headers -o name|cut -d'/' -f2)
oc label node/${target_node} app=nfs-provisioner
# Open a debug shell on the node
oc debug node/${target_node}
# Create a directory and set the SELinux label.
$ chroot /host
$ mkdir -p /home/core/nfs
$ chcon -Rvt svirt_sandbox_file_t /home/core/nfs
$ exit; exit
Now you need to deploy an NFS server using the created folder on a HostPath volume. Note that you could use an existing persistent volume claim (PVC) for the NFS server as well.
# Create NFSProvisioner Custom Resource
cat << EOF | oc apply -f -
apiVersion: cache.jhouse.com/v1alpha1
kind: NFSProvisioner
metadata:
  name: nfsprovisioner-sample
  namespace: nfsprovisioner-operator
spec:
  nodeSelector:
    app: nfs-provisioner
  hostPathDir: "/home/core/nfs"
EOF
# Check if NFS Server is running
oc get pod
NAME                               READY   STATUS    RESTARTS   AGE
nfs-provisioner-77bc99bd9c-57jf2   1/1     Running   0          2m32s
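If you would rather back the NFS server with an existing PVC, the custom resource can reference a claim instead of a HostPath directory. The field name below is an assumption on my part, so verify it against the operator's CRD (for example with oc explain nfsprovisioner.spec) before using it:

```yaml
# Back the NFS server with an existing PVC instead of a HostPath directory.
# The pvcName field and the claim name are assumptions; confirm the exact
# spec field against the NFSProvisioner CRD.
apiVersion: cache.jhouse.com/v1alpha1
kind: NFSProvisioner
metadata:
  name: nfsprovisioner-sample
  namespace: nfsprovisioner-operator
spec:
  pvcName: existing-data-pvc   # hypothetical pre-created PVC
```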
Finally, you need to make the NFS StorageClass the default:
# Update annotation of the NFS StorageClass
oc patch storageclass nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
# Check the default next to nfs StorageClass
oc get sc
NAME            PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs (default)   example.com/nfs   Delete          Immediate           false                  4m29s
Congratulations, you have a StorageClass. Verify it as follows:
# Create a test PVC
oc apply -f https://raw.githubusercontent.com/Jooho/jhouse_openshift/master/test_cases/operator/test/test-pvc.yaml
persistentvolumeclaim/nfs-pvc-example created
# Check the test PV/PVC
oc get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                     STORAGECLASS   REASON   AGE
persistentvolume/pvc-e30ba0c8-4a41-4fa0-bc2c-999190fd0282   1Mi        RWX            Delete           Bound    nfsprovisioner-operator/nfs-pvc-example   nfs                     5s

NAME                                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nfs-pvc-example   Bound    pvc-e30ba0c8-4a41-4fa0-bc2c-999190fd0282   1Mi        RWX            nfs            5s
The output shown here indicates that the NFS server, NFS provisioner, and NFS StorageClass are all working fine. You can use the NFS StorageClass for any test scenario that needs a PVC.
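For example, a pod can consume storage from the new class through a claim such as the test PVC created above (the pod name and image are illustrative):

```yaml
# A pod mounting the dynamically provisioned NFS-backed PVC
# (pod name and image are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod
  namespace: nfsprovisioner-operator
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data   # files written here land on the NFS share
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfs-pvc-example   # the PVC from the verification step
```

Because the access mode is RWX, several pods can mount the same claim at once, which is handy for testing shared-storage scenarios.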
CodeReady Containers lets you do quite a bit with a local installation of Red Hat OpenShift on your own hardware. For more, read my earlier article on Red Hat Developer, Configure CodeReady Containers for AI/ML development.
Last updated: November 8, 2023