As support for confidential hardware technologies such as AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP), Intel Trust Domain Extensions (TDX), and ARM Confidential Compute Architecture (CCA) finds its way into the mainline kernel, we see continuously increasing interest in adopting confidential computing far beyond the public cloud.
The promise of confidential virtual machines (CVMs) is simple, clear, and appealing: CVMs isolate their contents from the underlying infrastructure, making it possible to protect the workload's intellectual property from tampering and to ensure that the results it produces are trustworthy.
CVM capabilities
In contrast to conventional virtual machines and containers, which are designed to protect the host (and other workloads) from malicious workloads running in guest environments, confidential virtual machines additionally ensure that a malicious host cannot interfere with a deployed workload. This is particularly important when the workload is deployed on third-party compute resources, even if the resource provider is trustworthy.
However, ownership of the underlying hardware is not by itself the sole factor determining whether CVMs are required. A great example is edge computing in manufacturing and the power grid. Even though the infrastructure in this use case is typically owned, controlled, and maintained by the workload owner, it is often impossible to guarantee that physical access to the on-site hardware can be successfully restricted.
For example, at primary and secondary substations in the power utility industry, a thin sheet of metal is often, quite literally, the only thing separating an intruder from unauthorized access to hardware deployed in the field. Failing to guarantee the isolation of critical workloads from a potentially malicious host in this situation could jeopardize essential elements of the critical infrastructure. It is also worth mentioning that the industrial edge use case often involves control loops and therefore typically requires real-time support from the infrastructure.
In general, when talking about real-time in the context of Linux operating systems, we normally refer to soft real-time, meaning that there are no formal guarantees that the operating system will meet specific deadlines for deployed workloads. Instead, we can demonstrate that the latency introduced by the operating system and the underlying hardware has an upper bound and does not exceed a specific value (unique to that operating system and hardware combination). To gain a high level of confidence in this value, latency benchmarking has to run for an extended period of time (up to 7 days) to accumulate sufficient statistics.
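For illustration, such an extended measurement can be driven with cyclictest from the realtime-tests package. The invocation below is only a sketch (the priority, interval, histogram size, and CPU affinity are placeholder values, not the exact parameters we used), running for several days while collecting a latency histogram:
cyclictest -m -q -p95 --policy=fifo -D 7d -h60 -i 200 -t 1 -a 1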
As most confidential computing technologies rely on isolating guest memory from the underlying host operating system and on memory encryption, it is natural to assume that they come at the cost of added latency: after all, every memory page has to be decrypted on the fly. This assumption motivated us to look into the details of how confidential virtual machines perform with regard to latency compared to their conventional counterparts.
Latency analysis: CVM versus VM
For our latency analysis, we focus on AMD SEV-SNP, which is available in both Red Hat Enterprise Linux (RHEL) 9 and CentOS Stream 9. Furthermore, unlike Intel TDX and ARM CCA, whose upstream enablement is still in progress, SEV-SNP support is already complete in the mainline kernel.
To evaluate the real-time capabilities of SEV-SNP CVMs, we provisioned a Dell PowerEdge R7625 server equipped with two AMD EPYC 9124 CPUs. We followed the guidelines for configuring the BIOS to enable SEV-SNP support at the firmware level and performed the standard changes to improve the real-time performance of the server. We then provisioned the system with the latest version of CentOS Stream 9. Before proceeding with the setup, we ensured that SEV-SNP was actually exposed to the kernel by running the following:
dmesg | grep SEV
The following output confirmed that the SEV-SNP settings were correctly applied in the BIOS:
[ 0.000000] SEV-SNP: RMP table physical range [0x0000000015a00000 - 0x00000000562fffff]
[ 0.003980] SEV-SNP: Reserving start/end of RMP table on a 2MB boundary [0x0000000056200000]
[ 25.976242] ccp 0000:02:00.5: SEV firmware update successful
[ 28.797246] ccp 0000:02:00.5: SEV API:1.55 build:37
[ 28.797260] ccp 0000:02:00.5: SEV-SNP API:1.55 build:37
[ 30.569238] kvm_amd: SEV enabled (ASIDs 8 - 1006)
[ 30.569240] kvm_amd: SEV-ES enabled (ASIDs 1 - 7)
[ 30.569241] kvm_amd: SEV-SNP enabled (ASIDs 1 - 7)
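As an additional, optional sanity check, on kernels with SNP host support the kvm_amd module exposes a sev_snp parameter, which should read Y when SEV-SNP is enabled (this assumes the parameter is present on your kernel build):
cat /sys/module/kvm_amd/parameters/sev_snp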
We continued by following the standard steps to install required packages on the provisioned system:
dnf config-manager --set-enabled nfv
dnf config-manager --set-enabled rt
dnf install kernel-rt kernel-rt-kvm tuned-profiles-nfv-host realtime-tests stress-ng
In the two final steps, we updated the isolated_cores variable in /etc/tuned/realtime-virtual-host-variables.conf to reflect the desired configuration.
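As a rough sketch, the setting might look like the line below; the exact CPU list here is an assumption and depends on the host topology and on which cores you want to dedicate to real-time workloads:
isolated_cores=8-15,24-31,40-47
We then applied the realtime-virtual-host profile, disabled the irqbalance service, and restarted the system for the changes to take effect: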
tuned-adm profile realtime-virtual-host
systemctl stop irqbalance && systemctl disable irqbalance
reboot
After rebooting, we confirmed that the real-time tuning was effective by executing cyclictest and observing a maximal latency of 15µs. With the host configured, we proceeded to install the following required virtualization packages:
dnf install qemu-kvm libvirt virt-install virt-viewer
for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
With all the required bits in place, we created a virtual machine using the virt-install command, making sure a virtio-scsi controller was present (required by the current confidential computing implementation):
virt-install -n CentOS9-RT --os-variant=centos-stream9 --ram=8192 --vcpus=2 --numatune=1 --controller type=scsi,model=virtio-scsi --disk cache=none,format=raw,io=threads,size=30 --graphics none --console pty,target_type=serial -l /tmp/CentOS-Stream-9-latest-x86_64-dvd1.iso --extra-args 'console=ttyS0,115200n8' --boot uefi,loader_secure=no
However, before you can start testing, you must make some changes to the VM's libvirt XML file. To apply these changes, we powered off the newly created guest and patched the libvirt XML as follows.
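We used virsh for this step; assuming the domain name from the virt-install command above, the sequence looks like this:
virsh shutdown CentOS9-RT
virsh edit CentOS9-RT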
First, we replaced this section:
<os firmware='efi'>
<type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
<firmware>
<feature enabled='no' name='enrolled-keys'/>
<feature enabled='no' name='secure-boot'/>
</firmware>
<loader readonly='yes' secure='no' type='pflash' format='raw'>/usr/share/edk2/ovmf/OVMF_CODE.fd</loader>
<nvram template='/usr/share/edk2/ovmf/OVMF_VARS.fd' templateFormat='raw' format='raw'>/var/lib/libvirt/qemu/nvram/CentOS9-RT2_VARS.fd</nvram>
<boot dev='hd'/>
</os>
with the following:
<os firmware='efi'>
<type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
<loader stateless='yes'/>
<boot dev='hd'/>
</os>
Next, we added the following after the devices section:
<launchSecurity type='sev-snp'>
<cbitpos>51</cbitpos>
<reducedPhysBits>1</reducedPhysBits>
<policy>0x00030000</policy>
</launchSecurity>
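The cbitpos and reducedPhysBits values are platform properties and must match what the host reports; one way to double-check them is shown below (the grep pattern is just an illustration). The policy value of 0x00030000 keeps the reserved bit that the SEV-SNP ABI requires to be set and allows SMT in the guest.
virsh domcapabilities | grep -E 'cbitpos|reducedPhysBits'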
Then we configured memory backing with memfd as follows:
<memoryBacking>
<source type='memfd'/>
<access mode='shared'/>
</memoryBacking>
We also removed the virtio-rng and tpm-crb devices by deleting the corresponding sections of the libvirt XML:
<rng model='virtio'>
<backend model='random'>/dev/urandom</backend>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</rng>
<tpm model='tpm-crb'>
<backend type='emulator' version='2.0'/>
</tpm>
We also disabled memory ballooning by setting the following:
<memballoon model='none'>
With these changes in place, we booted up the VM and verified that the guest has SEV-SNP functionality enabled, as shown here:
dmesg | grep -i sev
[ 1.234874] Memory Encryption Features active: AMD SEV SEV-ES SEV-SNP
[ 1.236849] SEV: Status: SEV SEV-ES SEV-SNP
[ 1.352854] SEV: APIC: wakeup_secondary_cpu() replaced with wakeup_cpu_via_vmgexit()
[ 3.069918] SEV: Using SNP CPUID table, 38 entries present.
[ 3.069934] SEV: SNP running at VMPL0.
With all the bits in place, we achieved proper real-time tuning of the guest by pinning the virtual machine's vCPUs to isolated physical CPUs of the host using the following libvirt tuning:
<cputune>
<vcpupin vcpu='0' cpuset='24'/>
<vcpupin vcpu='1' cpuset='25'/>
<emulatorpin cpuset='41,11'/>
<vcpusched vcpus='0-1' scheduler='fifo' priority='1'/>
<emulatorsched scheduler='fifo' priority='1'/>
</cputune>
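Once the guest is running, the effective vCPU and emulator pinning can be sanity-checked from the host, for example (assuming the domain name used earlier):
virsh vcpupin CentOS9-RT
virsh emulatorpin CentOS9-RT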
For the guest, we followed the same steps as on the host to install the required real-time packages and apply the realtime-virtual-guest tuned profile.
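The sketch below shows our assumption of the minimal guest-side package set and commands for the realtime-virtual-guest profile, mirroring the host-side install above (the exact package names may differ on your distribution):
dnf config-manager --set-enabled nfv
dnf config-manager --set-enabled rt
# tuned-profiles-nfv-guest provides the realtime-virtual-guest profile
dnf install kernel-rt tuned-profiles-nfv-guest realtime-tests stress-ng
# set isolated_cores in /etc/tuned/realtime-virtual-guest-variables.conf (e.g., isolated_cores=1 for a 2-vCPU guest) before applying the profile
tuned-adm profile realtime-virtual-guest
reboot
After applying the tuned profile and rebooting the guest, we performed latency measurements using cyclictest, with stress-ng deployed alongside to simulate a noisy neighbor scenario: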
stress-ng -C 1
cyclictest -m -q -p95 --policy=fifo -D 10m -h60 -t 1 -a 1 -i 200 -b 100 --mainaffinity=0
To obtain a complete picture of real-time performance in CVMs compared to conventional VMs, we first collected baseline results in a conventional VM backed by 4K pages. We then repeated the same set of tests in a CVM and compared the results.
Latency results
Here are the maximal latency results obtained in these experiments:
Maximal latency (µs) measured with cyclictest in a VM with stress-ng deployed alongside:

| Conventional RT VM | Confidential RT VM |
|--------------------|--------------------|
| 49                 | 62                 |
As you can see, the difference in maximal latency between confidential and conventional guests is non-negligible. In a non-confidential guest, the maximal latency measured with cyclictest typically does not exceed the 49µs mark, while the confidential guest demonstrated a maximal latency of 62µs.
It is truly impressive how close the real-time performance of AMD SEV-SNP-based CVMs comes to that of conventional real-time VMs. Even though real-time CVMs still show a higher maximal latency, these latency values are sufficient for the majority of use cases in the manufacturing and electric utility industries. For example, according to the vPAC Alliance Software Specification, cyclictest latency in VMs must not exceed the 100µs mark, which is substantially higher than the observed values.
What's next
Stay tuned as we continue evaluating the real-time performance of CVMs backed by different technologies and see how the results compare to each other and to conventional VMs. In the meantime, try out real-time CVMs in RHEL 10 and tell us about your experience with latency.