OpenShift Data Foundation and HashiCorp Vault securing data

June 18, 2025
Will Cushen
Related topics: Security
Related products: Red Hat OpenShift, Red Hat OpenShift Data Foundation


    Managing secrets securely is non-negotiable for enterprises today, across both cloud and on-premises environments. Deeper integration between HashiCorp Vault and Red Hat OpenShift gives organizations an opportunity to strengthen their security posture.

    In this article, we’ll demonstrate the integration of Red Hat OpenShift Data Foundation with HashiCorp Vault to enable cluster-wide encryption, using Vault as a key management system (KMS). We will deploy this demo on Amazon Web Services (AWS) using local devices as the backing store for the OpenShift Data Foundation cluster.

    5 steps to deploy Vault on OpenShift

    While many enterprises use an external Vault instance, for demo purposes we will deploy Vault within the OpenShift cluster using Helm.

    • Follow the instructions in this guide to deploy the Vault instance via Helm with TLS certificates managed by cert-manager (a minimal sketch follows this list).
    • The TLS certs will be created with a locally generated root CA.
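
    If you don’t have that guide handy, the core of the Helm deployment looks roughly like the following. This is a minimal sketch: the release name, namespace, and the values-tls.yaml file (which would wire in your cert-manager-issued server certificate and enable TLS on the listener) are assumptions to adapt to your environment.

    # Add the official HashiCorp Helm repository
    $ helm repo add hashicorp https://helm.releases.hashicorp.com
    $ helm repo update

    # Create a namespace/project for Vault
    $ oc new-project vault

    # Install the chart with a hypothetical TLS values file
    $ helm install vault hashicorp/vault -n vault -f values-tls.yaml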

    Step 1: Initialize Vault

    Once Vault is running, initialize it. You can either perform this series of steps by executing each command with an oc exec wrapper, or simply obtain your tokens and run oc exec -ti vault-0 -n vault -- sh to run everything within the Vault shell:

    $ oc exec -n vault vault-0 -- vault operator init -key-shares=1 -key-threshold=1 -format=json > init-keys.json

    This command generates the unseal key and a root token. Make sure to store them securely; Vault starts sealed and will need to be unsealed again after every restart.

    Unseal Vault with the following:

    $ cat init-keys.json | jq -r ".unseal_keys_b64[]"
    hmeMLoRiX/trBTx/xPZHjCcZ7c4H8OCt2Njkrv2yXZY=
    $ VAULT_UNSEAL_KEY=$(cat init-keys.json | jq -r ".unseal_keys_b64[]")
    $ oc exec -n vault vault-0 -- vault operator unseal $VAULT_UNSEAL_KEY
    $ cat init-keys.json | jq -r ".root_token"
    s.XzExf8TjRVYKm85xMATa6Q7U
    $ VAULT_ROOT_TOKEN=$(cat init-keys.json | jq -r ".root_token")
    $ oc exec -n vault vault-0 -- vault login $VAULT_ROOT_TOKEN
    
    Success! You are now authenticated. The token information displayed below
    is already stored in the token helper. You do NOT need to run "vault login"
    again. Future Vault requests will automatically use this token.
    
    Key                  Value
    ---                  -----
    token                s.P3Koh6BZikQPDxPSNwDzmKJ5
    token_accessor       kHFYypyS2EcYpMyrsyXUQmNa
    token_duration       ∞
    token_renewable      false
    token_policies       ["root"]
    identity_policies    []
    policies             ["root"]
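
    At any point, you can confirm whether Vault is initialized and unsealed with vault status:

    $ oc exec -n vault vault-0 -- vault status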

    Step 2: Configure Vault

    Next, configure Vault for Kubernetes Auth and enable the KV backend path by running the following:

    oc -n vault exec pods/vault-0  -- \
            vault auth enable kubernetes
    
    oc -n vault exec pods/vault-0  -- \
            vault secrets enable -path=odf kv-v2

    A note on OpenShift Data Foundation Vault roles

    The official OpenShift Data Foundation documentation recommends creating an odf-vault-auth service account in the openshift-storage namespace and binding it to the auth-delegator cluster role (sketched below). This allows Vault to validate tokens against the OpenShift TokenReview API using a long-lived service account token (token_reviewer_jwt).
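
    For reference, that documented approach boils down to something like the following (the binding name here is arbitrary):

    # Create the dedicated service account recommended by the docs
    $ oc -n openshift-storage create serviceaccount odf-vault-auth

    # Bind it to the auth-delegator cluster role so Vault can call
    # the TokenReview API with its token
    $ oc create clusterrolebinding odf-vault-auth-delegator \
        --clusterrole=system:auth-delegator \
        --serviceaccount=openshift-storage:odf-vault-auth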

    For this demo, instead of introducing a dedicated service account (SA) for delegation, we have bound the authentication role directly to the existing OpenShift Data Foundation operator-managed service accounts. While keeping token authentication separate from the OpenShift Data Foundation operator’s service accounts is generally recommended for more flexible access control, binding directly simplifies our setup.

    The odf-rook-ceph-op role in Vault is bound to the rook-ceph-osd SA. This SA is used by the object storage daemon (OSD) pods when creating OSD encryption keys in Vault.

    Since v1.22, Kubernetes has discouraged the use of long-lived SA tokens in favor of short-lived tokens issued via the TokenRequest API. We align with this best practice by setting a short time to live (TTL) on the Vault role, ensuring tokens are automatically rotated on the Vault side.

    Step 3: Configure Vault roles

    The next step is to configure the Vault roles as follows:

    oc -n vault exec pods/vault-0  -- \
            vault write auth/kubernetes/role/odf-rook-ceph-op \
            bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa \
            bound_service_account_namespaces=openshift-storage \
            policies=odf \
            ttl=1h
    
    oc -n vault exec pods/vault-0 -- \
            vault write auth/kubernetes/role/odf-rook-ceph-osd \
            bound_service_account_names=rook-ceph-osd \
            bound_service_account_namespaces=openshift-storage \
            policies=odf \
            ttl=1h
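
    To confirm the bindings and TTL took effect, you can read a role back:

    $ oc -n vault exec pods/vault-0 -- \
            vault read auth/kubernetes/role/odf-rook-ceph-op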

    Step 4: Configure the K8s authentication

    Note:

    If Vault is hosted on Kubernetes, kubernetes_ca_cert can be omitted from the config of a Kubernetes Auth mount.

    To configure the K8s authentication method with the location of the Kubernetes API, run the following:

    oc -n vault exec pods/vault-0  -- \
        vault write auth/kubernetes/config \
        kubernetes_host="https://kubernetes.default.svc:443"
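
    You can read the mount configuration back to double-check it:

    $ oc -n vault exec pods/vault-0 -- \
        vault read auth/kubernetes/config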

    Step 5: Create an ACL policy

    Create an access control list (ACL) policy as follows:

    oc -n vault exec -i pods/vault-0 -- vault policy write odf - <<EOF
    path "odf/*" {
      capabilities = ["create", "read", "update", "delete", "list"]
    }
    path "sys/mounts" {
      capabilities = ["read"]
    }
    EOF
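
    You can verify the policy with vault policy read:

    $ oc -n vault exec pods/vault-0 -- vault policy read odf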

    Deploy OpenShift Data Foundation cluster

    Now that Vault is configured, we’re ready to head over to the OpenShift console and step through the OpenShift Data Foundation wizard to create our storage cluster.

    First, install both the Local Storage Operator (LSO) and the OpenShift Data Foundation Operator via OperatorHub.

    Typically, when calling a Vault instance external to the cluster, we’re required to explicitly delegate authentication decisions to the Kubernetes API server. In our scenario, Vault is set up to authenticate Kubernetes service accounts via its Kubernetes auth method.

    When deployed internally, the OpenShift Data Foundation-Vault token handshake looks a little different:

    • OpenShift Data Foundation Rook passes its service account token (automatically mounted in the pod) to Vault.
    • Vault is responsible for verifying that token with the Kubernetes API.
    • Vault’s service account has permission to perform TokenReview requests, as the following output confirms:
    $ oc adm policy who-can create tokenreviews.authentication.k8s.io | grep vault
            system:serviceaccount:vault:vault

    In short, explicitly defining an auth-delegator RoleBinding is not required here.

    Create the storage system

    With the operators deployed, select Create StorageSystem (Figure 1) to get to the OpenShift Data Foundation configuration screen.

    Create StorageSystem
    Figure 1: Click Create StorageSystem to get to the OpenShift Data Foundation configuration screen.

    You will then be presented with the StorageSystem wizard (Figure 2). Because we opt for local devices, the LSO will kick off a discovery of our drives.

    StorageSystem - Screen 1
    Figure 2: StorageSystem wizard—first page (backing storage).

    On the next screen in Figure 3, we’ll set our LocalVolumeSet (LVS) and StorageClass name. The selected capacity confirms that all our disks have been picked up correctly. 

    Click Confirm to create the LVS.

    StorageSystem - Screen 2
    Figure 3: Create the local volume set on the StorageSystem wizard’s second page.

    Once we've provisioned our localblock-backed PersistentVolumes (PVs), we can view the target hosts for the OpenShift Data Foundation cluster. 

    $ oc get localvolumeset -A
    NAMESPACE                 NAME          AGE
    openshift-local-storage   local-block   3m7s
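
    To see the localblock-backed PVs themselves, filter the PV list on the StorageClass name:

    $ oc get pv | grep localblock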

    For this demo, we will select a performance profile of Lean (Figure 4).

    StorageSystem - Screen 3
    Figure 4: Select the capacity and nodes on the StorageSystem wizard’s third screen.

    The screen depicted in Figure 5 shows where we’ll set up everything we need for KMS integration with Vault.

    By default, it uses Kubernetes as the authentication method. Enter the connection name "odf". This is a suitable choice, since we’re configuring it specifically for OpenShift Data Foundation.

    Provide the address of the Vault cluster. Since the deployment is within the cluster, you can use the service address along with the default Vault port, 8200.
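
    If you’re unsure of the service address, look up the Vault service; with the Helm defaults assumed earlier, it resolves to https://vault.vault.svc:8200:

    $ oc -n vault get svc vault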

    StorageSystem - Screen 4
    Figure 5: Security and network settings on the StorageSystem wizard’s fourth screen.

    Click Advanced settings. 

    In line with the Vault settings we configured earlier, set the backend and authentication path to odf and kubernetes, respectively (Figure 6). Then upload the root CA that was used to issue your cert-manager (or other issuer) backed Vault certificates. This will generate a secret in the openshift-storage namespace.
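
    If your root CA lives in a Kubernetes secret, you can extract it to a file first. The secret name vault-server-tls here is a placeholder; use whatever secret your issuer populated:

    $ oc -n vault get secret vault-server-tls \
        -o jsonpath='{.data.ca\.crt}' | base64 -d > vault-ca.crt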

    StorageSystem wizard--Key Management Service Advanced Settings page.
    Figure 6: This is the Key Management Service advanced settings page.

    Under Review and create, select Create StorageSystem (Figure 7).

    StorageSystem wizard--fifth page (Review and create).
    Figure 7: Select Create StorageSystem on the wizard’s fifth page (Review and create).

    The OpenShift Container Storage (OCS) operator should spawn an ocs-kms-ca-secret-<hash> K8s secret in addition to a csi-kms-ca-secret-<hash> secret (if it doesn’t do this immediately, delete the operator pod and let it recreate). A ConfigMap containing the KMS connection details should also be present in the openshift-storage namespace.

    Inspect its contents to verify that all the relevant data you entered via the OpenShift Data Foundation console is there.
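
    You can dump the ConfigMap directly (its name, ocs-kms-connection-details, matches the output below):

    $ oc -n openshift-storage get configmap ocs-kms-connection-details -o yaml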

    It should look something like this:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: ocs-kms-connection-details
      namespace: openshift-storage
    data:
      KMS_SERVICE_NAME: odf
      VAULT_AUTH_KUBERNETES_ROLE: odf-rook-ceph-op
      VAULT_AUTH_METHOD: kubernetes
      VAULT_NAMESPACE: ''
      VAULT_CACERT: ocs-kms-ca-secret-pzm5nwjd
      VAULT_AUTH_MOUNT_PATH: kubernetes
      VAULT_TLS_SERVER_NAME: ''
      VAULT_BACKEND_PATH: odf
      VAULT_ADDR: 'https://vault.vault.svc:8200'
      KMS_PROVIDER: vault

    You can tail the logs of the rook-ceph operator pod to check for any errors. However, the ultimate test is to verify that the encryption keys have been created in the backend path in Vault.

    There’s a chance you will encounter the following error. It simply means that Vault needs to be unsealed again.

    $ oc logs -f rook-ceph-operator-65d9d759c9-6g4s5
    ...
    URL: PUT https://vault.vault.svc:8200/v1/auth/kubernetes/login
    Code: 503. Errors:
    
    * Vault is sealed

    Simply rerun the oc exec -n vault vault-0 -- vault operator unseal $VAULT_UNSEAL_KEY command from earlier. Once unsealed, list the keys in the odf backend to confirm the encryption keys were created:

    $ oc -n vault exec vault-0 -- vault kv list odf
    Keys
    ----
    rook-ceph-osd-encryption-key-ocs-deviceset-localblock-0-data-0xlflh
    rook-ceph-osd-encryption-key-ocs-deviceset-localblock-1-data-0vzjf7
    rook-ceph-osd-encryption-key-ocs-deviceset-localblock-2-data-0xc84g

    Alternatively, you can view the keys in the Vault console, as shown in Figure 8.

    Vault OSD Keys
    Figure 8: Vault OSD keys.

    With these present in Vault, you should now see the OSD pods come up.

    $ oc get pod -n openshift-storage -l app=rook-ceph-osd
    NAME                               READY   STATUS    RESTARTS   AGE
    rook-ceph-osd-0-84755dcfbc-jsfhk   2/2     Running   0          92s
    rook-ceph-osd-1-bb58bf7f6-qzsb4    2/2     Running   0          90s
    rook-ceph-osd-2-54d85d8b55-kz6hd   2/2     Running   0          84s

    And there we have it. The OpenShift Data Foundation cluster is complete and ready for use, with the following storage classes available for PV consumption on the cluster:

    $ oc get sc
    NAME                                    PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    gp2-csi                                 ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   7h2m
    gp3-csi (default)                       ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   7h2m
    localblock                              kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  34m
    ocs-storagecluster-ceph-rbd (default)   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   7m26s
    ocs-storagecluster-cephfs               openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   7m56s
    openshift-storage.noobaa.io             openshift-storage.noobaa.io/obc         Delete          Immediate              false                  4m39s

    Wrap up

    Integrating OpenShift Data Foundation with HashiCorp Vault gives you a secure, centralized way to manage encryption for your data at rest. Whether you’re using Vault as your KMS or looking to enhance security with HSM-backed key storage, there’s a lot of flexibility in how you approach the setup. Vault’s policies also allow for fine-grained access control, ensuring that only the right services can access your sensitive data, a guarantee that some industries, particularly regulated ones, will value more than others.

    This guide covers the basics, but the true value of this setup lies in its flexibility to adapt to your organization’s specific needs. OpenShift Data Foundation encryption doesn’t stop at cluster-wide encryption for OSDs. You can also configure what I would loosely call double encryption: layering StorageClass-level encryption on top of the cluster-wide encryption, so that PVs are encrypted at rest per namespace, really locking things down. I’ll show you how in the next article in this series.
