Red Hat OpenShift Dedicated on Google Cloud Integration

Managed Google Cloud NetApp Volumes using the NetApp Trident Operator

September 23, 2024
Mohamed Tleilia
Related topics:
Hybrid Cloud
Related products:
Red Hat OpenShift, Red Hat OpenShift Dedicated


Red Hat OpenShift needs storage for different purposes. One of the major uses is persistent storage for containerized workloads.

Many applications consume persistent storage to save data that needs to survive beyond the life cycle of the containers. This applies to different types of workloads running on the cluster, including databases, messaging systems, and big data solutions, as well as OpenShift cluster components like monitoring, logging, and the internal image registry.

Kubernetes defines different access modes that govern how the Red Hat OpenShift cluster mounts the underlying storage: ReadWriteMany (RWX), ReadOnlyMany (ROX), and ReadWriteOnce (RWO). Which one to use depends on the application's needs and on the storage exposed to the Red Hat OpenShift cluster.

In some cases, a stateful application needs to scale beyond one replica, so the storage it consumes must support access to the same volume from different containers. In such cases, the storage solution should provide ReadWriteMany (RWX) capabilities, as file protocols like Network File System (NFS) and Server Message Block (SMB) do.
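
For illustration, a claim that requests shared access simply declares the RWX access mode; the storage class name below is a placeholder for any RWX-capable class:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany                      # several pods, possibly on different nodes, mount read/write
  storageClassName: rwx-capable-class    # placeholder: any class backed by NFS or SMB
  resources:
    requests:
      storage: 50Gi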

In this article, we focus on the file storage solution provided by Google Cloud called Google Cloud NetApp Volumes (GCNV), a fully managed file storage service. It offers capabilities such as snapshots, data replication, and multi-protocol access (NFS, SMB).

Google Cloud NetApp Volumes is a managed data service that offers high-performance file storage with advanced data management capabilities and guarantees high availability.

We use the Trident operator to orchestrate the cluster's storage consumption through the Container Storage Interface (CSI), so that Red Hat OpenShift can consume storage from Google Cloud NetApp Volumes and claim persistent volumes from the GCNV storage pools.

With the Trident operator, we can dynamically provision persistent storage for our stateful applications through StorageClasses; the underlying volumes are created on demand.

With the help of StorageClasses, we can allow OpenShift to dynamically provision different storage types (Premium, Standard, etc.) that map to the different performance tiers offered by Google Cloud NetApp Volumes.

In the following use case, we are going to integrate Red Hat OpenShift Dedicated with Google Cloud NetApp Volumes using the Trident operator.

    Before we dive in, let’s have a look at Red Hat OpenShift Dedicated. 

Red Hat OpenShift Dedicated is a fully managed application platform backed by a team of Red Hat SREs who make sure that your cluster is healthy, stable, and working as expected, with an SLA of 99.95%.

OpenShift Dedicated is the ideal solution for teams looking to enhance their performance and remove the platform management burden. It also comes in a compliant setup with most of the industry compliance certifications (HIPAA, PCI DSS, ISO, etc.), so you can start developing and deploying applications efficiently.

First, let's have a look at the architecture of a Red Hat OpenShift Dedicated (managed) cluster consuming file storage from Google Cloud NetApp Volumes.

    Important

This is not an official reference architecture; we use it to clarify how the components integrate.

    Figure 1 shows a diagram depicting the Red Hat OpenShift and Google Cloud NetApp Volumes integration.

    Figure 1: Red Hat OpenShift and Google Cloud NetApp Volumes integration.

    How to deploy Trident for Google Cloud NetApp Volumes on Red Hat OpenShift Dedicated

    Prerequisites:

    • A Red Hat OpenShift Dedicated cluster installed on Google Cloud.
    • Access to Google Cloud NetApp Volumes.
    • Full privileges on your Red Hat OpenShift Dedicated cluster.
    • The capability to mount volumes from all of the Kubernetes worker nodes.
    • A Linux host with oc and kubectl installed and configured to manage your Red Hat OpenShift Dedicated cluster that you want to use.
    • The KUBECONFIG environment variable set to point to your Red Hat OpenShift cluster configuration. 
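
As a quick sanity check for the last two prerequisites (the kubeconfig path below is a placeholder):

$ export KUBECONFIG=/path/to/your/kubeconfig   # placeholder path
$ oc whoami --show-server                      # confirms which cluster you are targeting
$ oc whoami                                    # should return a user with cluster-admin privileges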

First, choose your installation method. There are two ways to install Trident: using the Trident operator or using tridentctl. Select the method that's right for you, and review the considerations for moving between methods before making your decision.

    We are going to use the Trident operator.

    Whether deploying manually or using Helm, the Trident operator is a great way to simplify installation and dynamically manage Trident resources. You can even customize your Trident operator deployment using the attributes in the TridentOrchestrator custom resource (CR).
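
For instance, a minimal TridentOrchestrator CR looks like the following sketch (the debug attribute is optional; later in this article we apply the stock CR shipped in the installer bundle):

apiVersion: trident.netapp.io/v1
kind: TridentOrchestrator
metadata:
  name: trident
spec:
  debug: true          # enables verbose Trident logging
  namespace: trident   # namespace where Trident will be installed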

    Step 1: Download and install Trident on Red Hat OpenShift Dedicated

Download a copy of Trident to your local computer that has oc installed and that can access your Red Hat OpenShift cluster using a cluster-admin user. The Trident artifacts can be found in this NetApp GitHub repo:

    $ wget https://github.com/NetApp/trident/releases/download/v24.06.0/trident-installer-24.06.0.tar.gz

Extract the archive:

    $ tar -xf trident-installer-24.06.0.tar.gz

Then navigate to the trident-installer directory; its deploy subdirectory contains all the resources needed to deploy and install the Trident operator.
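
The commands in the rest of this step assume you run them from that directory:

$ cd trident-installer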

    First, install the custom resource definition (CRD) for the Trident operator:

    $ oc create -f deploy/crds/trident.netapp.io_tridentorchestrators_crd_post1.16.yaml

Then, create the trident namespace and deploy the operator along with its service account and role-based access control (RBAC):

    $ oc create ns trident
    namespace/trident created
    $ oc create -f deploy/bundle_post_1_25.yaml
    serviceaccount/trident-operator created
    clusterrole.rbac.authorization.k8s.io/trident-operator created
    clusterrolebinding.rbac.authorization.k8s.io/trident-operator created
    deployment.apps/trident-operator created

You should now see the operator pod running in your cluster:

    $ oc get pods -n trident

    Note

If you face any issues while deploying the operator, follow the troubleshooting instructions in the Trident documentation.

    Deploy the Trident orchestrator CR:

    $ oc apply -f deploy/crds/tridentorchestrator_cr.yaml
    tridentorchestrator.trident.netapp.io/trident created

This resource will deploy several pods: a controller pod and a node pod on each worker node. This may take a while.

    You can check using the following command:

    $ oc get pods -n trident
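
You can also inspect the orchestrator status directly; torc is the short name the Trident CRD registers for tridentorchestrators, and once installation completes its status should read Installed:

$ oc describe torc trident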

Trident is now up and running. The next step will be to configure the Google Cloud NetApp Volumes managed service.

    Step 2: Create the Trident backend

First, we need to enable the NetApp API in order to use the Google Cloud NetApp Volumes service, as shown in Figure 2.

    Figure 2: Enabling the NetApp API.
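
If you prefer the command line, the same API can be enabled with gcloud (assuming the CLI is authenticated against your project):

$ gcloud services enable netapp.googleapis.com --project=openenv-x7l85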

Then, we need to create the storage pools. Storage pools act as containers for volumes. All volumes in a storage pool share the same location, service level, Virtual Private Cloud (VPC) network, Active Directory policy, and customer-managed encryption key (CMEK) policy. You can assign the capacity of the pool to volumes within the pool.

To create storage pools, you need to enable private services access for the VPC you intend to use. If you are using the Google Cloud console, you can also enable private services access as part of the storage pool creation.

    Create an allocated IP address range within your VPC network for the Cloud Volumes Service mount points (the following example assumes that a VPC network already exists in the project):

$ gcloud --project=openenv-x7l85 compute addresses create netapp-addresses-production-vpc1 \
    --global --purpose=VPC_PEERING --prefix-length=20 \
    --network=s9k5s6s0i0q9l3b-qzhck-network --no-user-output-enabled

    Create a private service connection to the Cloud Volumes Service endpoint:

$ gcloud --project=openenv-x7l85 services vpc-peerings connect \
    --service=cloudvolumesgcp-sds-api-network.netapp.com \
    --ranges=netapp-addresses-production-vpc1 \
    --network=s9k5s6s0i0q9l3b-qzhck-network

    Note

If the servicenetworking.googleapis.com API is not enabled on your project, you will need to enable it.
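
You can enable it with:

$ gcloud services enable servicenetworking.googleapis.com --project=openenv-x7l85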

    Enable custom route propagation:

$ gcloud --project=openenv-x7l85 compute networks peerings update netapp-sds-nw-customer-peer \
    --network=s9k5s6s0i0q9l3b-qzhck-network \
    --import-custom-routes --export-custom-routes

    Check that the connection is established:

$ gcloud --project=openenv-x7l85 services vpc-peerings list \
    --network=s9k5s6s0i0q9l3b-qzhck-network

Now that the peering is done, we are ready to create the private services access connection. You can do this via the Google Cloud console.

    As part of the storage pool creation, when you set up the connections, you will be able to configure the private service access connection, as shown in Figure 3.

    Figure 3: Creating a private service access connection.

Now, we can create the storage pools that will be used by Red Hat OpenShift through the Trident operator (Figure 4).

    Figure 4: Creating a storage pool.

We created three storage pools: Standard, Premium, and Extreme. Each offers a different performance tier (Figure 5).

    Figure 5: Three storage pools: Standard, Premium, and Extreme.

     Create a service account that will be used by the Trident operator to manage volumes:

    $ gcloud iam service-accounts create trident-admin --display-name "Trident Admin"
    

    The service account will need the following roles assigned to it:

    • roles/netapp.admin 
    • roles/netapp.viewer

See below (replace {PROJECT} with your project ID):

$ gcloud projects add-iam-policy-binding {PROJECT} \
    --member serviceAccount:trident-admin@{PROJECT}.iam.gserviceaccount.com \
    --role roles/netapp.admin --condition=None
$ gcloud projects add-iam-policy-binding {PROJECT} \
    --member serviceAccount:trident-admin@{PROJECT}.iam.gserviceaccount.com \
    --role roles/netapp.viewer --condition=None

Create a service account key via the Google Cloud console or gcloud; remember to replace {PROJECT} with your project ID:

    $ gcloud iam service-accounts keys create trident-admin.json --iam-account=trident-admin@{PROJECT}.iam.gserviceaccount.com
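
If you have jq installed, you can pull out the two values the next step needs from the key file (an optional convenience):

$ jq -r '.private_key_id' trident-admin.json   # value for the secret's private_key_id field
$ jq '.private_key' trident-admin.json         # quoted, keeps the \n escapes to match the secret format below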

Then we need to create a secret to allow Trident to access the NetApp storage pools.

Create a secret containing private_key_id and private_key fields, populated with the values from the service account key you just created (trident-admin.json):

    apiVersion: v1
    kind: Secret
    metadata:
      name: trident-admin
      namespace: trident 
    type: Opaque
    stringData:
      private_key_id: "3b36e74dd9f61f1c427d44ed96ab3cda8e80fed6"
  private_key: "-----BEGIN PRIVATE KEY-----\n dummy key \n-----END PRIVATE KEY-----\n"

We create the secret in the trident namespace:

    $ oc create -f secret.yaml
    secret/trident-admin created

The following backend example files can create and access volumes in both the Standard and Premium NetApp Volumes storage pools.

A backend file like the examples below will create volumes in any configured storage pool with the matching characteristics, but you can get more granular by specifying individual storage pool names: add storagePools to the backend file, either at the top level or under a virtual pools section (Premium, Standard, etc.).

As shown, the backend file includes your project number, location, backend name, and so on.

Get the projectId and projectNumber via gcloud:

    $ gcloud projects describe {PROJECT_ID}
Sample output:
    createTime: '2023-03-29T12:05:58.955104Z'
    lifecycleState: ACTIVE
    name: kimambo-sandbox
    parent:
      id: '682351974682'
      type: organization
    projectId: kimambo-sandbox
    projectNumber: '735461050625'

Create the TridentBackendConfig YAML as shown below.

You can paste the service account key trident-admin.json as is into the apiKey field; just make sure to delete the private_key_id and private_key fields from the content, since those are supplied by the secret instead.

    Standard:

    apiVersion: trident.netapp.io/v1
    kind: TridentBackendConfig
    metadata:
      name: gcv-backend
    spec:
      version: 1
      storage:
      - labels:
          performance: standard
        serviceLevel: standard
      storageDriverName: google-cloud-netapp-volumes
      backendName: volumes-for-kubernetes
      projectNumber: '735461050625'
      location: europe-west3
      apiKey: {
        "type": "service_account",
        "project_id": "{PROJECT_ID}",
        "client_email": "trident-admin@{PROJECT_ID}.iam.gserviceaccount.com",
        "client_id": "113715711968682935182",
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://oauth2.googleapis.com/token",
        "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
        "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/trident-admin%40{PROJECT_ID}.iam.gserviceaccount.com",
        "universe_domain": "googleapis.com"
      }
      
      credentials:
        name: trident-admin

     Premium:

    apiVersion: trident.netapp.io/v1
    kind: TridentBackendConfig
    metadata:
  name: gcv-backend-premium
    spec:
      version: 1
      storage:
      - labels:
          performance: premium
        serviceLevel: premium
      storageDriverName: google-cloud-netapp-volumes
      backendName: volumes-for-ocp
      storagePools: 
        - ocp-max-storage
      projectNumber: '735461050625'
      location: europe-west3
      apiKey: {
        "type": "service_account",
        "project_id": "kimambo-sandbox",
        "client_email": "trident-admin@kimambo-sandbox.iam.gserviceaccount.com",
        "client_id": "113715711968682935182",
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://oauth2.googleapis.com/token",
        "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
        "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/trident-admin%40kimambo-sandbox.iam.gserviceaccount.com",
        "universe_domain": "googleapis.com"
      }
      
      credentials:
        name: trident-admin

    We create the backend:

    $ oc create -f gcnv_backend.yaml -n trident           
    tridentbackendconfig.trident.netapp.io/gcv-backend created

    And check to be sure the backend is bound:

$ oc get tridentbackendconfig gcv-backend -n trident

    Troubleshooting

    If there are any issues, use $ oc describe tridentbackendconfig <name> -n trident to debug. 

    Checking the logs of the trident-controller pod can also be helpful:

$ oc logs trident-controller-76ccc6b6c8-p5lc6 -n trident

Check Kubernetes events; if there are errors, they will also show up there:

$ oc events -n trident

    Step 3: Create storage classes

We will create two storage classes, one for each storage pool tier.

    Storage class Standard: 

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gcnv-standard-openshift
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: csi.trident.netapp.io
    parameters:
      backendType: "google-cloud-netapp-volumes"
      selector: "performance=standard"
    allowVolumeExpansion: true

     Storage class Premium:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gcnv-premium-openshift
    provisioner: csi.trident.netapp.io
    parameters:
      backendType: "google-cloud-netapp-volumes"
      selector: "performance=premium"
    allowVolumeExpansion: true 

    We create both:

    $ oc create -f scstandard.yaml
$ oc create -f scpremium.yaml

And you can see your two new storage classes created:

NAME                                PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
gcnv-premium-openshift              csi.trident.netapp.io   Delete          Immediate           true                   18s
gcnv-standard-openshift (default)   csi.trident.netapp.io   Delete          Immediate           true                   3m4s

We will create two PVCs: one mapping to the Standard service level and one mapping to the Premium service level. The ReadWriteMany (RWX), ReadOnlyMany (ROX), and ReadWriteOncePod (RWOP) access modes are also supported.

    Standard PVC YAML:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: standard-pv-claim
spec:
  storageClassName: gcnv-standard-openshift
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi

    Premium PVC YAML:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: premium-pv-claim
spec:
  storageClassName: gcnv-premium-openshift
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi

    We create both PVCs:

$ oc create -f pvcstandard.yaml
persistentvolumeclaim/standard-pv-claim created
$ oc create -f pvcpremium.yaml
persistentvolumeclaim/premium-pv-claim created
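
You can verify that both claims reach the Bound status once Trident has provisioned the volumes:

$ oc get pvc standard-pv-claim premium-pv-claim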

We can see the persistent volumes (PVs) being created under Volumes in the Google Cloud NetApp Volumes service (Figure 6).

Figure 6: Persistent volumes created in the Google Cloud NetApp Volumes service.

Now, all we need to do is attach our application to the PVC, and we'll have high-performance, reliable storage for our stateful Kubernetes applications.
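
As a minimal sketch of that last step, a hypothetical deployment (the image and mount path are placeholders) consumes the standard claim through a regular volume mount:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: web
        image: registry.access.redhat.com/ubi9/nginx-120   # placeholder image
        volumeMounts:
        - name: data
          mountPath: /data                                 # placeholder mount path
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: standard-pv-claim                     # the PVC created above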
