In this article, we will show you how to install Red Hat OpenShift Container Platform with confidential nodes using AMD SEV-SNP and Intel TDX-enabled confidential virtual machines (cVMs) on Google Cloud Platform (GCP).
This is part 3 in our series covering Red Hat OpenShift deployment on confidential nodes. The first article introduced OpenShift confidential nodes and how they help address some privacy and security concerns associated with cloud environments. By running OpenShift on confidential nodes, the cluster state and workload are isolated from other tenants, and most importantly, the cloud provider. In a follow-up post, we explained how to install an OpenShift cluster with confidential nodes on Azure by configuring AMD SEV-SNP on the control plane and worker nodes.
Launching nodes on top of cVMs guarantees that their memory always remains hidden from the underlying infrastructure. In future releases, we plan to add remote attestation support for each node, which will allow us to deliver a fully confidential cluster solution with Red Hat OpenShift. Currently, we rely on the attestation services offered by the cloud provider.
This article assumes you are familiar with basic OpenShift installation customization, Google Cloud Platform's Compute Engine, and confidential computing. In addition to reading the previous articles from this series, we recommend reading the confidential computing primer.
Prerequisites
Before starting the installation, ensure the following requirements are met.
First, you need a Red Hat account. If you don’t have one, you can create one by filling out the registration form.
In addition, you need the key of a service account with an owner role (or equivalent), in a GCP project already configured for an OpenShift installation. You can find more information on how to configure a GCP project for OpenShift installation in the documentation. In short, it requires:
- Creating a GCP project or having access to one.
- Enabling a valid billing method.
- Enabling API services in GCP.
- Configuring a domain name (you have to own one).
- Creating a service account.
- Granting the owner role (or equivalent) to the service account.
- Creating a key for that service account (see the gcloud sketch after this list for these last three steps).
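If you prefer the command line for the last three steps, the following gcloud sketch shows one way to do it. It assumes the gcloud CLI is installed and authenticated against your project; the account name openshift-installer and the key path are placeholders you can adjust:
$ gcloud iam service-accounts create openshift-installer \
    --project=<project-id> --display-name="OpenShift installer"  # create the service account
$ gcloud projects add-iam-policy-binding <project-id> \
    --member="serviceAccount:openshift-installer@<project-id>.iam.gserviceaccount.com" \
    --role="roles/owner"  # grant the owner role (or an equivalent set of roles)
$ gcloud iam service-accounts keys create ~/openshift-installer-key.json \
    --iam-account=openshift-installer@<project-id>.iam.gserviceaccount.com  # create the key file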
After that, you should be ready to follow the rest of this article and install an OpenShift cluster on GCP confidential nodes.
Create an SSH key
You must complete a few additional steps before cluster installation. First, you need to create an SSH key pair. The OpenShift installer will require this SSH key later and configure the cluster nodes accordingly. That way, you can open SSH sessions to the nodes or run installation troubleshooting commands such as openshift-install gather.
To create the SSH key pair, simply do the following:
$ ssh-keygen -t ed25519 -f ~/.ssh/confidential_nodes_gcp
Then, start the SSH agent (if you haven’t already done so) and add the key:
$ eval $(ssh-agent)
$ ssh-add ~/.ssh/confidential_nodes_gcp
Download the OpenShift installer
We will install OpenShift on GCP with installer-provisioned infrastructure. In other words, the installer will provision and configure the nodes and infrastructure in GCP based on the cluster configuration.
Download the OpenShift installer to install the cluster. You can download the installer binary from the Install OpenShift on GCP with installer-provisioned infrastructure section of the Red Hat Hybrid Cloud Console. This brings you to a webpage, shown in Figure 1.

Once on that webpage, click Download installer. Afterwards, we will create a folder to store the installation files and move the installer into it. This assumes that the installer was automatically downloaded to ~/Downloads/openshift-install-linux.tar.gz.
$ mkdir gcp-confidential-nodes
$ cd gcp-confidential-nodes
$ mv ~/Downloads/openshift-install-linux.tar.gz .
$ tar -xvzf ./openshift-install-linux.tar.gz && rm ./openshift-install-linux.tar.gz
Obtain an OpenShift pull secret
The installer will ask for an OpenShift pull secret. You can download it from the same page as the installer. We suggest storing the pull secret in a plain text file in the directory for the installation files you created above.
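For example, assuming your browser saved the secret to ~/Downloads/pull-secret.txt (the exact file name depends on how you obtained it), you could move it next to the other installation files:
$ mv ~/Downloads/pull-secret.txt ./pull-secret.txt  # keep the pull secret with the installation files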
Optional: Download the OpenShift command-line interface
Once the installation is complete, you will likely want to interact with the OpenShift cluster. You can do this via the OpenShift command-line interface, oc. It is available on the same installer webpage. Assuming that the web browser downloaded the file to ~/Downloads:
$ mv ~/Downloads/openshift-client-linux.tar.gz .
$ tar -xvzf openshift-client-linux.tar.gz && rm openshift-client-linux.tar.gz
Alternatively, you can store the oc binary in a directory anywhere in your $PATH.
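For example, to make oc available from any directory (assuming /usr/local/bin is already in your $PATH):
$ sudo mv oc /usr/local/bin/  # move the extracted binary into a directory on the PATH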
Install the cluster
The installation consists of two steps. First, you need to create an install-config.yaml file and modify it to configure your own confidential computing choices. Then, you'll install the cluster based on the customized configuration.
Create the install-config.yaml as follows:
$ ./openshift-install create install-config --dir=installation-dir
The program will lead you through a series of prompts asking for basic cluster information, covering:
- An SSH public key: We created one earlier; it's probably ~/.ssh/confidential_nodes_gcp.pub.
- Cloud platform: Select gcp, as we want to install the cluster on Google Cloud Platform.
- Service account: Enter the absolute path to the file containing the service account key you created previously.
- Project ID: Select the project you configured previously as a prerequisite.
- Region: Select a region that supports AMD SEV-SNP or Intel TDX confidential computing machines. A safe choice for testing purposes that works for both is us-central1 (see the availability check after this list).
- The base domain of your cluster.
- A name for the cluster (e.g., gcp-confidential-nodes).
- The pull secret: Paste in the pull secret you downloaded from the Red Hat Hybrid Cloud Console earlier.
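If you want to double-check that a region actually offers the machine types used later in this article, one option (not a required installation step, and assuming the gcloud CLI is installed and authenticated) is to list them per zone:
$ gcloud compute machine-types list \
    --zones=us-central1-a,us-central1-b,us-central1-c,us-central1-f \
    --filter="name=n2d-standard-4 OR name=c3-standard-4"  # machine types used in the examples below
Note that a machine type being listed does not by itself guarantee confidential computing support in that zone; consult the GCP confidential computing documentation for the authoritative availability matrix.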
This will create an install-config.yaml file under installation-dir.
Now, you need to modify the install-config file to match your choices. In this case, we will set the default machine platform to be a confidential node. This ensures that the bootstrap, control plane, and compute nodes will all run on confidential virtual machines (VMs).
To do this, we need to find the platform field, add the defaultMachinePlatform configuration subfield, and:
- Select a machine type that supports AMD SEV-SNP or Intel TDX.
- Set confidentialCompute to AMDEncryptedVirtualizationNestedPaging or IntelTrustedDomainExtensions to configure AMD SEV-SNP or Intel TDX, respectively.
- Set onHostMaintenance to Terminate. Currently, GCP doesn't support live migration of AMD SEV-SNP or Intel TDX machines.
In other words, to install an OpenShift cluster on GCP AMD SEV-SNP machines, we need to find the platform field in the install-config file and edit it so it looks like the following:
platform:
  gcp:
    projectID: <project-id>
    region: <gcp-compute-region>
    defaultMachinePlatform:
      secureBoot: Enabled
      confidentialCompute: AMDEncryptedVirtualizationNestedPaging
      type: n2d-standard-4
      onHostMaintenance: Terminate
To install the cluster on Intel TDX machines instead:
platform:
  gcp:
    projectID: <project-id>
    region: <gcp-compute-region>
    defaultMachinePlatform:
      secureBoot: Enabled
      confidentialCompute: IntelTrustedDomainExtensions
      type: c3-standard-4
      onHostMaintenance: Terminate
Note how the confidentialCompute and type fields vary from one configuration snippet to the other.
When you're done editing, save and close the file.
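One small, optional precaution: the installer consumes install-config.yaml when it creates the cluster, so if you expect to reinstall later, keep a copy outside installation-dir (the backup file name below is just an example):
$ cp installation-dir/install-config.yaml ./install-config.yaml.bak  # preserve the customized config for reuse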
Finally, install the cluster on confidential nodes on GCP by running the following command:
$ ./openshift-install create cluster --dir=installation-dir
After you follow the previous steps and wait for the installation to complete, a cluster running on confidential nodes on GCP should be ready.
Verify the installation
You successfully installed the cluster. Now, you can deploy a simple workload, such as an NGINX server, to check that it works. To do so, use the OpenShift command-line interface you downloaded earlier. However, first, you have to export the KUBECONFIG environment variable so it points to the kubeconfig file of the freshly installed cluster:
$ export KUBECONFIG=$PWD/installation-dir/auth/kubeconfig
$ ./oc run nginx-demo --image=nginx --port=80 # To deploy an nginx server
pod/nginx-demo created
$ ./oc rsh nginx-demo curl localhost # To check that nginx is up and running
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
As you might notice, we haven't run boot-time attestation to ensure that the nodes are indeed encrypted. We will cover that in future articles. However, you can verify that the provisioned nodes are indeed AMD SEV-SNP or Intel TDX VM nodes by checking their dmesg.
To do so, we can simply iterate through the cluster's nodes and look for specific keywords in their dmesg output:
$ oc get nodes -o=custom-columns=NAME:.metadata.name --no-headers | while IFS= read -r nodename; do oc debug node/$nodename -- chroot /host dmesg | grep -iE 'sev|tdx'; done
Starting pod/gcp-conf-nodes-t8j2w-master-0-debug-678t2 ...
To use host binaries, run `chroot /host`
[ 0.000000] tdx: Guest detected
[ 1.504598] process: using TDX aware idle routine
[ 1.585560] Memory Encryption Features active: Intel TDX
[ 3.719072] systemd[1]: Detected confidential virtualization tdx.
[ 19.554842] systemd[1]: Detected confidential virtualization tdx.
Removing debug pod ...
Starting pod/gcp-conf-nodes-t8j2w-master-1-debug-gcxt7 ...
To use host binaries, run `chroot /host`
[ 0.000000] tdx: Guest detected
[ 1.511404] process: using TDX aware idle routine
[ 1.591260] Memory Encryption Features active: Intel TDX
[ 3.709439] systemd[1]: Detected confidential virtualization tdx.
[ 25.130726] systemd[1]: Detected confidential virtualization tdx.
Removing debug pod ...
Starting pod/gcp-conf-nodes-t8j2w-master-2-debug-8qpgb ...
To use host binaries, run `chroot /host`
[ 0.000000] tdx: Guest detected
[ 1.503914] process: using TDX aware idle routine
[ 1.582455] Memory Encryption Features active: Intel TDX
[ 6.699533] systemd[1]: Detected confidential virtualization tdx.
[ 25.749073] systemd[1]: Detected confidential virtualization tdx.
Removing debug pod ...
Starting pod/gcp-conf-nodes-t8j2w-worker-a-sp5tj-debug-knrvn ...
To use host binaries, run `chroot /host`
[ 0.000000] tdx: Guest detected
[ 1.516088] process: using TDX aware idle routine
[ 1.595273] Memory Encryption Features active: Intel TDX
[ 3.728367] systemd[1]: Detected confidential virtualization tdx.
[ 14.594085] systemd[1]: Detected confidential virtualization tdx.
Removing debug pod ...
Starting pod/gcp-conf-nodes-t8j2w-worker-b-rm5cs-debug-jt8v4 ...
To use host binaries, run `chroot /host`
[ 0.000000] tdx: Guest detected
[ 1.502138] process: using TDX aware idle routine
[ 1.580371] Memory Encryption Features active: Intel TDX
[ 3.732747] systemd[1]: Detected confidential virtualization tdx.
[ 17.717727] systemd[1]: Detected confidential virtualization tdx.
Removing debug pod ...
Starting pod/gcp-conf-nodes-t8j2w-worker-c-jrl4s-debug-bqpv2 ...
To use host binaries, run `chroot /host`
[ 0.000000] tdx: Guest detected
[ 1.502594] process: using TDX aware idle routine
[ 1.582148] Memory Encryption Features active: Intel TDX
[ 3.774506] systemd[1]: Detected confidential virtualization tdx.
[ 17.726609] systemd[1]: Detected confidential virtualization tdx.
Removing debug pod ...
The command matches kernel messages confirming that each node is running in a memory-encrypted environment; the matched messages mention SEV or TDX, depending on the chosen confidential computing platform. The output above shows what this looks like for Intel TDX nodes. If the nodes weren't running on memory-encrypted VMs, the command wouldn't match anything.
Destroy the cluster
Once you are done playing around with the cluster you just installed, you can destroy it and free all of its resources as follows:
$ ./openshift-install destroy cluster --dir=installation-dir
Summary
In this article, we walked through the process of installing an OpenShift cluster on GCP with confidential nodes, leveraging Intel TDX and AMD SEV-SNP technologies. We explained the prerequisites, showed how to set up a local environment to run an installer-provisioned infrastructure, and covered the required configuration for the installation.
GCP confidential nodes are supported starting with OpenShift 4.19. You can read more about this OpenShift release in the documentation and in the blog post announcing it.