
Confidential virtual machines versus VMs: Latency analysis

July 28, 2025
Marcelo Tosatti, Alexander Lougovski
Related topics: Edge computing, Linux, Security, Virtualization
Related products: Red Hat Enterprise Linux, Red Hat Enterprise Linux for Edge


    As support for confidential hardware technologies such as AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP), Intel Trust Domain Extensions (TDX), and Arm Confidential Compute Architecture (CCA) finds its way into the mainline kernel, we see steadily increasing interest in adopting confidential computing far beyond the public cloud.

    The promise of confidential virtual machines (CVMs) is simple, clear, and appealing. CVMs isolate a guest's contents from the underlying infrastructure, making it possible to protect a workload's intellectual property from tampering and to ensure that the results it produces are trustworthy.

    CVM capabilities

    In contrast to conventional virtual machines and containers, which are designed to protect the host (and other workloads) from malicious workloads running in guest environments, confidential virtual machines also let you ensure that a malicious host cannot interfere with a deployed workload. This is particularly important when the workload is deployed on third-party compute resources, even if the resource provider is trustworthy.

    However, ownership of the underlying hardware is not the only reason one might require CVMs. A good example is edge computing in manufacturing and the power grid. Even though the infrastructure in this use case is typically owned, controlled, and maintained by the workload owner, it is often impossible to guarantee that physical access to the on-site hardware can be restricted.

    For example, in primary and secondary substations in the power utility industry, a thin sheet of metal is often the only thing separating an intruder from the hardware, which is quite literally located in a field. Failing to isolate critical workloads from a potentially malicious host in this situation can jeopardize essential elements of the critical infrastructure. It is also worth mentioning that the industrial edge use case often involves control loops and therefore typically requires real-time support from the infrastructure.

    In general, real-time in the context of Linux operating systems refers to soft real-time: there is no formal guarantee that the operating system will meet specific deadlines for deployed workloads. Instead, we can demonstrate that the latency introduced by the operating system and the underlying hardware is bounded and does not exceed a specific value (unique to each operating system and hardware combination). To gain a high level of confidence in this value, latency benchmarking has to run for an extended period of time (up to 7 days) to accumulate sufficient statistics.
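
    To give a feel for such a long-running measurement, here is a rough sketch of an extended cyclictest run with a latency histogram. The duration, priority, interval, and CPU affinity are assumptions to adapt to your own setup, not the exact commands used later in this article:

    # Run cyclictest for 7 days with SCHED_FIFO priority 95, one measurement
    # thread pinned to CPU 1, and a histogram of observed latencies (in µs)
    # written to a file for later analysis.
    cyclictest -m -q -p95 --policy=fifo -D 7d -h 400 -t 1 -a 1 -i 200 > cyclictest-7d.txt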

    Because most confidential computing technologies rely on isolating guest memory from the underlying host operating system and on memory encryption, it stands to reason that they can come at the cost of added latency; after all, every memory page has to be decrypted on the fly. This assumption motivated us to look into how confidential virtual machines perform, latency-wise, compared to their conventional counterparts.

    Latency analysis: CVM versus VM

    For our latency analysis, we focus on AMD SEV-SNP, which is available in both Red Hat Enterprise Linux (RHEL) 9 and CentOS Stream 9. Furthermore, unlike Intel TDX and Arm CCA, whose enablement is still pending upstream work, SEV-SNP support is already complete in the mainline kernel.

    To evaluate the real-time capabilities of SEV-SNP CVMs, we provisioned a Dell PowerEdge R7625 server equipped with two AMD EPYC 9124 CPUs. We followed the guidelines for configuring the BIOS to enable SEV-SNP support at the firmware level, and we performed the standard changes that improve the server's real-time performance. We then installed the latest version of the CentOS Stream 9 operating system. Before proceeding with the setup, we ensured that SEV-SNP was actually exposed to the kernel by running the following:

    dmesg | grep SEV

    The following output confirmed that the SEV-SNP settings were correctly applied in the BIOS:

    [    0.000000] SEV-SNP: RMP table physical range [0x0000000015a00000 - 0x00000000562fffff]
    [    0.003980] SEV-SNP: Reserving start/end of RMP table on a 2MB boundary [0x0000000056200000]
    [   25.976242] ccp 0000:02:00.5: SEV firmware update successful
    [   28.797246] ccp 0000:02:00.5: SEV API:1.55 build:37
    [   28.797260] ccp 0000:02:00.5: SEV-SNP API:1.55 build:37
    [   30.569238] kvm_amd: SEV enabled (ASIDs 8 - 1006)
    [   30.569240] kvm_amd: SEV-ES enabled (ASIDs 1 - 7)
    [   30.569241] kvm_amd: SEV-SNP enabled (ASIDs 1 - 7)

    We continued by following the standard steps to install required packages on the provisioned system:

    dnf config-manager --set-enabled nfv
    dnf config-manager --set-enabled rt
    dnf install kernel-rt kernel-rt-kvm tuned-profiles-nfv-host realtime-tests stress-ng

    In the final steps, we updated the isolated_cores variable in /etc/tuned/realtime-virtual-host-variables.conf to reflect the desired configuration. We then applied the realtime-virtual-host profile, disabled the irqbalance service, and restarted the system for the changes to take effect.
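
    For reference, the isolated_cores setting might look like the following sketch; the CPU numbers are assumptions and should match the host cores you intend to dedicate to the guest:

    # /etc/tuned/realtime-virtual-host-variables.conf (example values only)
    # Isolate at least the host CPUs that will later back the guest's pinned
    # vCPUs (CPUs 24 and 25 in the <cputune> example further below).
    isolated_cores=24,25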

    tuned-adm profile realtime-virtual-host
    systemctl stop irqbalance && systemctl disable irqbalance
    reboot

    After rebooting, we confirmed that the real-time tuning was effective by running cyclictest and observing a maximal latency of 15µs. With the host configured, we proceeded to install the following required virtualization packages:

    dnf install qemu-kvm libvirt virt-install virt-viewer 
    for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
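
    At this point, it can also be useful to confirm that libvirt sees the host's SEV capabilities. One optional check (an extra step, not part of the original procedure) is to query the domain capabilities and look for the SEV section:

    # Look for a <sev supported='yes'> element in the output
    virsh domcapabilities | grep -A 5 "<sev"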

    With all required bits in place, we created a virtual machine using the virt-install command and ensured the presence of a virtio-scsi device (required by the current implementation of confidential computing):

    virt-install -n CentOS9-RT --os-variant=centos-stream9 \
      --ram=8192 --vcpus=2 --numatune=1 \
      --controller type=scsi,model=virtio-scsi \
      --disk cache=none,format=raw,io=threads,size=30 \
      --graphics none --console pty,target_type=serial \
      -l /tmp/CentOS-Stream-9-latest-x86_64-dvd1.iso \
      --extra-args 'console=ttyS0,115200n8' \
      --boot uefi,loader_secure=no

    However, before you can start testing, you must make some changes to the VM's libvirt XML definition. To apply these changes, we powered off the newly created guest and patched its libvirt XML as follows.
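
    A minimal sketch of this step, assuming the VM name from the virt-install command above, could look like this:

    # Shut down the guest and open its libvirt XML definition for editing
    virsh shutdown CentOS9-RT
    virsh edit CentOS9-RT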

    First, we replaced this section:

      <os firmware='efi'>
        <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
        <firmware>
          <feature enabled='no' name='enrolled-keys'/>
          <feature enabled='no' name='secure-boot'/>
        </firmware>
        <loader readonly='yes' secure='no' type='pflash' format='raw'>/usr/share/edk2/ovmf/OVMF_CODE.fd</loader>
        <nvram template='/usr/share/edk2/ovmf/OVMF_VARS.fd' templateFormat='raw' format='raw'>/var/lib/libvirt/qemu/nvram/CentOS9-RT2_VARS.fd</nvram>
        <boot dev='hd'/>
      </os>

    with the following:

      <os firmware='efi'>
        <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
        <loader stateless='yes'/>
        <boot dev='hd'/>
      </os>

    Next we added the following after the device section:

      <launchSecurity type='sev-snp'>
        <cbitpos>51</cbitpos>
        <reducedPhysBits>1</reducedPhysBits>
        <policy>0x00030000</policy>
      </launchSecurity>

    Then we configured memory backing with memfd as follows:

      <memoryBacking>
        <source type='memfd'/>
        <access mode='shared'/>
      </memoryBacking>

    We also removed the virtio-rng and tpm-crb devices by deleting the corresponding sections of the libvirt XML:

      <rng model='virtio'>
        <backend model='random'>/dev/urandom</backend>
        <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
      </rng>
      <tpm model='tpm-crb'>
        <backend type='emulator' version='2.0'/>
      </tpm>

    We also disabled memory ballooning by setting the following:

    <memballoon model='none'/>

    With these changes in place, we booted the VM and verified that the guest has SEV-SNP functionality enabled, as shown here:

    dmesg | grep -i sev
    [    1.234874] Memory Encryption Features active: AMD SEV SEV-ES SEV-SNP
    [    1.236849] SEV: Status: SEV SEV-ES SEV-SNP 
    [    1.352854] SEV: APIC: wakeup_secondary_cpu() replaced with wakeup_cpu_via_vmgexit()
    [    3.069918] SEV: Using SNP CPUID table, 38 entries present.
    [    3.069934] SEV: SNP running at VMPL0.

    Next, we achieved proper real-time tuning of the guest by pinning the virtual machine's vCPUs to isolated physical CPUs of the host with the following libvirt tuning:

    <cputune>
      <vcpupin vcpu='0' cpuset='24'/>
      <vcpupin vcpu='1' cpuset='25'/>
      <emulatorpin cpuset='41,11'/>
      <vcpusched vcpus='0-1' scheduler='fifo' priority='1'/>
      <emulatorsched scheduler='fifo' priority='1'/>
    </cputune>
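
    Once the guest is running again, a quick optional check (not part of the original procedure) to confirm that the pinning took effect is:

    # Show the vCPU-to-physical-CPU affinity for the guest
    virsh vcpupin CentOS9-RT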

    Inside the guest, we followed the same steps as on the host to install the required real-time packages, this time applying the realtime-virtual-guest tuned profile. After applying the profile and rebooting the guest, we performed latency measurements using cyclictest, with stress-ng deployed alongside to simulate a noisy-neighbor scenario, as follows:

    stress-ng -C 1
    cyclictest -m -q -p95 --policy=fifo -D 10m -h60 -t 1 -a 1 -i 200 -b 100 --mainaffinity=0
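
    The number we compare below is the maximal latency that cyclictest reports at the end of the run. If you redirect the output to a file (cyclictest.out is just an assumed name), the histogram summary can be extracted like this:

    # With -q and -h, cyclictest prints "# Max Latencies:" summary lines
    grep "Max Latencies" cyclictest.out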

    To obtain a complete picture of real-time performance in CVMs compared to conventional VMs, we first collected baseline results in a conventional VM using 4K pages. We then repeated the same set of tests in a CVM and compared the results.

    Latency results

    Here are the maximal latency measurements obtained in these experiments:

    Maximal latency (µs) measured with cyclictest in a VM with stress-ng deployed alongside:

    Conventional RT VM    Confidential RT VM
    ------------------    ------------------
    49                    62

    As you can see, the difference in maximal latency between confidential and conventional guests is non-negligible. In a non-confidential guest, maximal latency typically does not exceed the 49µs mark when measured with cyclictest, while the confidential guest demonstrated a maximal latency of 62µs.

    It is impressive how close the real-time performance of CVMs based on AMD SEV-SNP comes to that of conventional real-time VMs. Even though real-time CVMs still show a higher maximal latency, these values are sufficient for the majority of use cases in the manufacturing and electric utility industries. For example, according to the vPAC Alliance Software Specification, cyclictest latency in VMs must not exceed the 100µs mark, which is substantially higher than the values we observed.

    What's next

    Stay tuned as we continue evaluating the real-time performance of CVMs backed by different technologies and compare the results to each other and to conventional VMs. In the meantime, try out real-time CVMs in RHEL 10 and tell us about your experience with latency.

    Last updated: August 5, 2025
