
Best Practice Configuration and Tuning for Linux and Windows VMs

May 6, 2026
Jenifer Abrams, Joe Mario
Related topics:
Virtualization
Related products:
Red Hat OpenShift Virtualization

    In this guide we’ll walk through critical configuration details and some further “last mile” tuning options for running workloads in both Linux and Windows VMs on OpenShift Virtualization.

    As always, you can find more details on all available tuning options in the OpenShift Virtualization - Tuning & Scaling Guide, and keep up with what the Red Hat Performance & Scale Engineering team is up to on our blog.  

     

    VM Definition

    OS Preferences

    When defining a VM (Virtual Machine) using a provided Template or InstanceType, some critical best practice configurations are automatically applied which optimize performance for the guest OS type. This is especially important for Windows VMs because of fundamental optimizations related to the guest clock, Hyper-V enlightenments, and other bus type preferences.

    If you choose to implement custom VM definitions that don’t follow these provided settings, it is highly recommended, especially for Windows VMs, to apply the appropriate VirtualMachinePreference to the VM, which will configure the necessary tuning.

    To see all provided VM Preferences: 

    oc get VirtualMachineClusterPreference

    An example of applying a Preference to a VM definition:

    spec:
      preference:
        name: windows.2k25.virtio
      runStrategy: [...]

    Note: if the VM is already running, a reboot is required to pick up the configuration changes.

    To see all the settings automatically applied by the Preference, you can query the running VMI (VirtualMachineInstance) definition, for example:

    oc get vmi <vm_name> -o yaml

    If there is a need to customize these VirtualMachinePreferences, the recommendation is to create the modified versions under new Preference names so that the default provided settings always remain available as a reference. This article provides an example workflow for customizing a VirtualMachinePreference.
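    As a rough sketch of that workflow, a customized Preference could be defined under a new name; the name, namespace, and field values below are illustrative assumptions, not provided defaults:

    ```yaml
    apiVersion: instancetype.kubevirt.io/v1beta1
    kind: VirtualMachinePreference
    metadata:
      name: windows.2k25.virtio-custom   # new name, so the shipped default stays untouched
      namespace: my-vms                  # hypothetical namespace
    spec:
      devices:
        preferredDiskBus: virtio         # carry forward the performant bus defaults
        preferredInterfaceModel: virtio
    ```

    A VM in the same namespace can then reference it via spec.preference.name, just as with the cluster-wide Preferences.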

    Disk

    In general, bus: virtio is the best-performing option; for best performance, Windows VMs should not configure bus: sata.

    For multi-threaded storage workloads, especially for larger VMs with many vCPUs, consider enabling the supplementalPool ioThreadsPolicy (starting in 4.19).

    By default, all Block volumes apply an io: native optimization. To utilize this optimization for Filesystem volumes, use preallocation.
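    The disk recommendations above can be sketched in a VM definition as follows; the disk name is illustrative, and the supplementalPool policy assumes OpenShift Virtualization 4.19 or later:

    ```yaml
    spec:
      domain:
        ioThreadsPolicy: supplementalPool  # dedicated I/O thread pool for multi-threaded storage workloads
        devices:
          disks:
          - name: rootdisk                 # hypothetical disk name
            disk:
              bus: virtio                  # best-performing bus; avoid sata on Windows
    ```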

    Network

    Use model: virtio; for best performance, Windows VMs should not configure model: e1000e.

    In most scenarios, to achieve maximum network throughput it is recommended to include networkInterfaceMultiqueue: true. 
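    Both network recommendations can be sketched together in a VM definition; the interface name and binding are illustrative:

    ```yaml
    spec:
      domain:
        devices:
          networkInterfaceMultiqueue: true  # scale packet processing across vCPUs
          interfaces:
          - name: default                   # hypothetical interface name
            model: virtio                   # best-performing model; avoid e1000e on Windows
            masquerade: {}
    ```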

     

    Host Tuning

    Tuned Profile

    By default, OpenShift nodes apply a throughput-performance-based Tuned profile, which performs well for a wide variety of cases. Other Tuned profiles are available for specific workload scenarios and can be managed using the default Node Tuning Operator.

    CPU Sleep States

    If your workload is sensitive to CPU wakeup latency, or in some cases if your hardware does not support optimized sleep state transitions, it can be useful to limit how “deep” CPUs are allowed to sleep. This tuning can be applied using the default Node Tuning Operator and does not require any reboots. For instance, to limit CPUs to the “C1” state, apply this change:

    apiVersion: tuned.openshift.io/v1
    kind: Tuned
    metadata:
      name: c1-lowlatency
      namespace: openshift-cluster-node-tuning-operator
    spec:
      profile:
      - data: |
          [main]
          summary=Pins to C1 cstate for low latency
    
          include=openshift-node
          [cpu]
          force_latency=1
        name: c1-lowlatency
      recommend:
      - machineConfigLabels:
          machineconfiguration.openshift.io/role: "worker"
        priority: 20
        profile: c1-lowlatency

    CPU Allocation Ratio

    The default allowed CPU overcommit ratio is 10:1. This ratio can be configured to the desired overcommit level by changing the cluster CPU Allocation Ratio, or CPU resource requests can be configured per-VM if desired.
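    As a sketch, the cluster-wide ratio is changed through the HyperConverged resource; this assumes the standard openshift-cnv namespace, and the 4:1 ratio is purely illustrative:

    ```yaml
    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      resourceRequirements:
        vmiCPUAllocationRatio: 4  # e.g. 4:1 overcommit instead of the default 10:1
    ```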

     

    High Performance Workload Tuning

    Hugepages

    By default, RHEL host kernels use THP (Transparent HugePages), which tries to automatically promote page sizes when possible. However, statically reserved 1GB hugepages may improve performance for applications that are sensitive to having pages “locked down” or to hardware TLB (Translation Lookaside Buffer) misses. See the documentation for more information on backing VM memory with hugepages.
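    A VM can request 1GB hugepage backing roughly as follows; the guest memory size is illustrative, and the node must already have 1GB hugepages statically reserved (for example via kernel boot arguments managed through the Node Tuning Operator):

    ```yaml
    spec:
      domain:
        memory:
          guest: 8Gi            # illustrative VM memory size
          hugepages:
            pageSize: "1Gi"     # back guest memory with statically reserved 1GB pages
    ```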

    Isolation and Pinning

    In most cases, the above tuning will be enough for applications to perform well, but some workloads are very sensitive to scheduler disruptions or require very low latency configurations. Further pinning and isolation of the VM may be required in these scenarios; see the Pinning sections in the Tuning Guide for more details.
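    As one illustrative example of such pinning, a VM can request dedicated (pinned) CPUs; this assumes CPU Manager is enabled on the target nodes, and the core count is illustrative:

    ```yaml
    spec:
      domain:
        cpu:
          cores: 4
          dedicatedCpuPlacement: true  # pin vCPUs to dedicated host CPUs
    ```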

    Disclaimer: Please note the content in this blog post has not been thoroughly reviewed by the Red Hat Developer editorial team. Any opinions expressed in this post are the author's own and do not necessarily reflect the policies or positions of Red Hat.
