Red Hat OpenShift Virtualization 4.22: New virtualization platform autopilot feature

May 7, 2026
Simone Tiraboschi and Fabian Deutsch
Related topics: Cloud automation, Operators
Related products: Red Hat OpenShift GitOps, Red Hat OpenShift Virtualization

    The latest releases of Red Hat OpenShift's assisted installer now include a virtualization bundle, which provides the recommended set of operators commonly utilized with Red Hat OpenShift Virtualization. This simplification of operator delivery and installation in disconnected environments is the foundation for a new feature: Virtualization platform autopilot, available as a Developer Preview in Red Hat OpenShift Virtualization 4.22.

    Virtualization platform autopilot manages operators it knows about and OpenShift components to optimize the entire cluster for virtualization workloads, based on recommended and validated reference configurations. This new feature reduces the need for a cluster administrator to know about complex cluster and adjacent operator configurations. From a cluster administrator's perspective, the cluster now reconfigures itself for virtualization during the installation of OpenShift Virtualization.

    What is OpenShift Virtualization?

    OpenShift Virtualization is a feature of Red Hat OpenShift for running virtual machines (VMs) on OpenShift. Specifically, OpenShift Virtualization is implemented and delivered as an operator that contains and manages subcomponents such as KubeVirt. However, enabling the complete feature set of OpenShift Virtualization requires a cluster administrator to:

    • Configure OpenShift itself
    • Configure related operators

    This composability provides a lot of flexibility by allowing you to adjust OpenShift to many use cases. While an advanced user can take advantage of this flexibility, we're seeing that newcomers sometimes struggle because they don't have all the knowledge they need up front. For example:

    • Enabling load-aware balancing requires installing and configuring the descheduler operator and deploying additional machine configurations
    • Enabling higher density requires provisioning swap using machine configs and applying a custom kubelet configuration
    • Enabling high availability requires installing and configuring Node Health Check (NHC) and at least one of the remediators, such as Fence Agents Remediation (FAR) or Self Node Remediation (SNR)

    Virtualization platform autopilot meets the needs of newcomers and advanced users by autoconfiguring parts of OpenShift and its operators, and stepping out of the way when the cluster administrator provides a custom configuration.

    Virtualization platform autopilot

    Today, platform and operator configuration is expected to be performed by the cluster administrator. This has two issues:

    • The cluster administrator must know that a certain feature requires configuration
    • The cluster administrator must know how an operator needs to be configured

    On one hand, this gives control to the cluster administrator, but on the other hand it requires the admin to stay current with the product's documentation, including changes between versions. With the introduction of the virtualization platform autopilot, available as a Developer Preview in version 4.22, OpenShift Virtualization now provides a component that can apply these configurations automatically, without input from the cluster administrator.

    This moves the burden of configuration from the cluster admin to code (technically, to a controller). This controller (the autopilot) automatically applies the configurations that are documented and otherwise expected to be applied manually by the cluster administrator. The documentation does not describe a static set of YAML files; some values are computed dynamically at runtime. For example, tuning parameters may depend on the number of nodes in the cluster, and certain capabilities are only enabled when specific hardware is detected on the nodes.
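To make the idea of runtime-computed values concrete, here is a hypothetical sketch of how a controller might derive a tuning value from cluster state at render time. The scaling rule, the per-node budget, and the field names are illustrative assumptions, not the actual autopilot logic.

```python
# Hypothetical sketch: deriving descheduler eviction limits from the
# number of nodes observed at render time. The scaling rule and cap
# are illustrative assumptions, not the real autopilot implementation.

def render_eviction_limits(node_count: int) -> dict:
    """Scale eviction limits with cluster size, capped to avoid churn."""
    per_node = 2  # illustrative per-node eviction budget
    return {
        "node": per_node,
        # cap total evictions per cycle so large clusters don't thrash
        "total": min(per_node * node_count, 20),
    }

small = render_eviction_limits(3)
large = render_eviction_limits(50)
```

A small cluster gets a proportional total, while a large cluster hits the cap; the point is that the rendered configuration is a function of live cluster state, not a static file.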

    Consider the potential difference between manual configuration and autopilot:

    Day 1 and 2 flow today without autopilot

    1. Install OpenShift
    2. Install the OpenShift Virtualization operator
    3. Reconfigure descheduler operator for load-aware balancing
    4. Apply MachineConfigs for higher density enablement
    5. Configure node health check (NHC) for high availability
    6. On minor releases:
      1. Revisit documentation and release notes
      2. Apply new suggested defaults

    Day 1 and 2 flow with autopilot

    1. Install OpenShift
    2. Install the OpenShift Virtualization operator

    Design considerations

    We expect this to be a helpful tool for default configurations, but we understand that optimized deployments require custom configuration of the platform and operators. The autopilot can be disabled (fully or partially) to avoid conflicts with such custom deployments and keep full control in the administrator's hands.

    As part of the OpenShift Virtualization 4.22 Developer Preview, the autopilot automatically configures the descheduler to enable load-aware balancing. Over time, the autopilot will manage even more operators.

    A key requirement on the autopilot was to support newcomers without constraining advanced users in any way. With that in mind, we built autopilot around the following design principles:

    • Zero API surface: We decided against adding new Custom Resource Definitions (CRDs). You don't need to learn a new API or monitor new Status fields. The autopilot uses the APIs you already have.
    • Controller pattern: The autopilot runs as an independent controller that watches the HyperConverged (HCO) resource and manages all dependencies not once, but over their complete lifetime.
    • Soft dependencies: The tool is designed to be composable. If an optional component (like the descheduler operator) is available, then autopilot configures it. If it's not available, then it gracefully waits and retries later.
    • Silent but observable: If the autopilot is doing its job, then you won't hear from it. It only alerts you, using Prometheus or Kubernetes Events, when human intervention is required.

    The "patched baseline" approach

    Virtualization platform autopilot moves the configuration burden to a dedicated controller. The controller uses a "patched baseline" approach to ensure your cluster stays in a healthy, optimized state:

    • Renders opinionated defaults: It starts by rendering the baseline configuration, following the documentation and accounting for cluster specifics, as encoded into the controller as part of each release.
    • Applies user customizations: It surgically applies specific overrides provided via annotations (in memory) to create a modified desired state.
    • Detects drift and reconciles: It uses server-side apply (SSA) dry-runs to detect differences, and automatically reconciles the cluster to the desired state without controller conflict.

    This approach starts with reasonable defaults and allows a cluster administrator to manually manage specific configurations, while leaving others to the autopilot. Without any overrides, the autopilot manages the cluster fully automatically.
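The three steps above can be sketched in a few lines of Python. This is a simplified in-memory model: the real autopilot uses server-side apply dry-runs against the Kubernetes API, while this sketch renders defaults, applies a JSON-patch-style "replace" override, and compares against a live state. All function names and values here are illustrative.

```python
# Minimal sketch of the "patched baseline" flow: render opinionated
# defaults, apply user overrides (JSON-patch-style "replace" ops), then
# detect drift against the live state. Illustrative only; the real
# controller uses server-side apply dry-runs.

import copy

def render_baseline() -> dict:
    # Opinionated defaults as encoded in a release (values borrowed
    # from the descheduler example later in this article).
    return {"deschedulingIntervalSeconds": 60,
            "evictionLimits": {"node": 2, "total": 5}}

def apply_overrides(baseline: dict, patches: list) -> dict:
    desired = copy.deepcopy(baseline)
    for op in patches:  # only "replace" on nested paths, for brevity
        parts = op["path"].strip("/").split("/")
        target = desired
        for key in parts[:-1]:
            target = target[key]
        target[parts[-1]] = op["value"]
    return desired

def drift(desired: dict, live: dict) -> bool:
    return desired != live  # real controller diffs via SSA dry-run

baseline = render_baseline()
desired = apply_overrides(
    baseline,
    [{"op": "replace", "path": "/evictionLimits/total", "value": 10}])
live = {"deschedulingIntervalSeconds": 60,
        "evictionLimits": {"node": 2, "total": 5}}
needs_reconcile = drift(desired, live)
```

With no overrides, `desired` equals the baseline and the controller manages everything; with an override, only the patched field diverges from the rendered defaults.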

    Try it

    The autopilot is disabled by default in the OpenShift Virtualization 4.22 Developer Preview, but you can enable it.

    First, install OpenShift 4.22 and OpenShift Virtualization 4.22.

    Then add an annotation to your HyperConverged resource:

    $ oc annotate hco kubevirt-hyperconverged \
    -n openshift-cnv platform.kubevirt.io/autopilot="true"

    After the autopilot is enabled, there are a few additional configurations you can try.

    Configuration options

    The value of the platform.kubevirt.io/autopilot annotation controls what the autopilot manages. Use true to enable all available features, or pass one or more specific names (comma-separated) to enable only what you need:

    • true: All features
    • prometheus-alerts: Autopilot-specific alerting rules
    • swap-enable: Swap on worker nodes (higher VM density)
    • descheduler-loadaware: Load-aware descheduling
    • kubelet-cpu-manager: CPU Manager
    • pci-passthrough: PCI passthrough
    • mtv-operator: VM migration
    • metallb-operator: MetalLB
    • observability-operator: Observability UI plugin

    For example:

    • platform.kubevirt.io/autopilot="true" enables everything
    • platform.kubevirt.io/autopilot="swap-enable,descheduler-loadaware,observability-operator" enables swap, the descheduler, and the cluster observability operator (COO)

    Currently, the autopilot is an optional feature, but after it reaches General Availability, it will be active by default for new clusters.

    Existing clusters will stay manually configured. Administrators who want to retain full manual control will need to opt out using an annotation.

    Dry-run and debugging

    Instead of (or before) enabling the autopilot, you can inspect exactly what it would apply to your cluster. This allows you to audit the suggested configuration against your current cluster state.

    First, forward a port to provide access from your local machine:

    $ oc port-forward \
    -n openshift-cnv deploy/virt-platform-autopilot \
    8081:8081 &

    Then render all included assets as YAML:

    $ curl http://localhost:8081/debug/render
    Handling connection for 8081
    ...
    ---
    # Asset: descheduler-loadaware
    # Path: active/descheduler/recommended.yaml.tpl
    # Component: KubeDescheduler
    # Status: INCLUDED
    apiVersion: operator.openshift.io/v1
    kind: KubeDescheduler
    metadata:
      name: cluster
      namespace: openshift-kube-descheduler-operator
    spec:
      deschedulingIntervalSeconds: 60
      evictionLimits:
        node: 2
        total: 5
      managementState: Managed
      mode: Automatic
      profiles:
      - DevKubeVirtRelieveAndMigrate
    ---

    This is the full computed configuration as the autopilot would apply it, including context-aware assets evaluated against your actual cluster hardware and state. Nothing is actually applied during this step, so you can use this to understand exactly what the autopilot would change before you opt in.

    You can take this one step further and diff the rendered output directly against your live cluster:

    $ curl 'http://localhost:8081/debug/render?only-installed=true' | oc diff -f -

    The only-installed=true parameter filters out any asset that has no CRD present in the cluster (for example, NodeHealthCheck would be excluded if the remediation operator is not installed), so oc diff can run cleanly without errors.

    Monitoring

    Unlike other operators or controllers, autopilot works without a dedicated custom resource (CR), so there is no CR status. However, you can monitor the autopilot's actions using standard tools, such as oc and the OpenShift console.

    Monitor events

    To see events such as Applied Descheduler profile KubeVirtRelieveAndMigrate when the autopilot makes a move, use the oc command:

    $ oc describe hco -n openshift-cnv

    Monitor metrics and alerts

    Check for kubevirt_autopilot_compliance_status, kubevirt_autopilot_customization_info, and kubevirt_autopilot_missing_dependency metrics in the OpenShift console to verify your components are currently aligned with the recommended reference.

    Based on the values of these metrics, alerts are raised when human intervention is needed. Use kubevirt_autopilot_paused_resources to identify resources in an edit loop that require manual intervention to resolve.
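As an illustration, the metrics above could also feed a custom alerting rule of your own. The following is a hypothetical sketch, not something the autopilot ships (its own rules come via the prometheus-alerts feature); the rule name, alert name, and threshold are assumptions:

```yaml
# Hypothetical PrometheusRule sketch using the metric names above.
# The autopilot ships its own rules; names and threshold are assumed.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: autopilot-dependency-watch   # hypothetical name
  namespace: openshift-cnv
spec:
  groups:
  - name: autopilot.custom
    rules:
    - alert: AutopilotMissingDependency   # hypothetical alert name
      expr: kubevirt_autopilot_missing_dependency > 0
      for: 30m
      annotations:
        summary: "Autopilot has been waiting on an optional dependency for 30 minutes"
```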

    Automation with manual overrides

    A single configuration doesn't suit everyone, so the autopilot provides ways for an administrator to opt out and take manual control of managed objects. For transparency and discoverability, every object managed by the autopilot is automatically labeled with platform.kubevirt.io/managed-by: virt-platform-autopilot.

    Built for GitOps compatibility

    Many administrators use Argo CD to manage their clusters. A core design goal was to ensure the autopilot works with OpenShift GitOps (Argo CD). By using specific annotations, you can define a "symbiotic" relationship: Argo CD manages your custom organizational requirements, while the autopilot handles the complex, virtualization-specific details.

    To achieve this balance, the autopilot provides four "escape hatches":

    Specific overrides (JSON patch)

    If you need to change a specific field, such as adjusting a systemd unit in a MachineConfig, while letting the autopilot manage the rest, use the platform.kubevirt.io/patch annotation. This is fully compatible with GitOps: you define the annotation on the object in the manifest managed by Argo CD, and the annotation, stored on the live object (and in the Git repo), influences the autopilot's behavior.

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 90-worker-swap-online
      annotations:
        platform.kubevirt.io/patch: |
          [
            {"op": "replace", "path": "/spec/config/systemd/units/0/contents", "value": "..."},
            {"op": "add", "path": "/spec/config/storage/files/-", "value": {...}}
          ]

    Field masking (loose ownership)

    Use the platform.kubevirt.io/ignore-fields annotation to forbid autopilot from altering specific fields. This prevents "edit wars" between autopilot and Argo CD, and allows you to manually tune specific parameters without losing the autopilot's management of the overall resource.

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      annotations:
        platform.kubevirt.io/ignore-fields: "/spec/liveMigrationConfig/parallelMigrationsPerCluster,/spec/featureGates/enableCommonBootImageImport"
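Conceptually, field masking means the controller drops the listed paths from its desired state before applying it. The following is an illustrative sketch of that idea, assuming the annotation value is a comma-separated list of JSON-pointer-style paths; the helper name and data shapes are hypothetical.

```python
# Illustrative sketch of field masking: before applying its desired
# state, the controller strips any field the admin has claimed via
# ignore-fields. Helper name and shapes are hypothetical.

import copy

def mask_fields(desired: dict, ignore_fields: str) -> dict:
    result = copy.deepcopy(desired)
    for path in ignore_fields.split(","):
        parts = path.strip("/").split("/")
        target = result
        try:
            for key in parts[:-1]:
                target = target[key]
            target.pop(parts[-1], None)  # leave this field to the admin
        except (KeyError, TypeError):
            pass  # path absent from desired state; nothing to mask
    return result

desired = {"spec": {"liveMigrationConfig": {
    "parallelMigrationsPerCluster": 5,
    "progressTimeout": 150}}}
masked = mask_fields(
    desired, "/spec/liveMigrationConfig/parallelMigrationsPerCluster")
```

After masking, the controller never writes `parallelMigrationsPerCluster`, so a manual or Argo CD-managed value survives reconciliation while the rest of the resource stays autopilot-managed.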

    Full resource opt-out

    If you need to take total control of a specific resource for troubleshooting or for a unique edge case, you can stop the reconciliation loop for that object by setting platform.kubevirt.io/mode: "unmanaged". The cluster admin or Argo CD can then take over the resource entirely without interference.

    metadata:
      annotations:
        platform.kubevirt.io/mode: unmanaged

    Root exclusion (Day 0 prevention)

    In some environments, you may want to prevent a resource from ever being created. You can use the platform.kubevirt.io/disabled-resources annotation on the HyperConverged CR. It accepts a YAML array and supports globbing (wildcards) to completely exclude objects by name or kind:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
      annotations:
        platform.kubevirt.io/disabled-resources: |
          - kind: KubeDescheduler  # Cluster-scoped resource
            name: cluster
    
          - kind: ConfigMap
            namespace: openshift-cnv
            name: virt-tuning-*      # Wildcard for multiple configs
    
          - kind: Service
            namespace: prod-*        # Namespace wildcard
            name: metrics
    
          - kind: Secret             # Omit namespace = all namespaces
            name: credentials-*

    Get the source and get back to us

    Adopting the autopilot isn't an all-or-nothing decision. You can keep your existing GitOps workflows for the parts of the platform you care about, and delegate the rest of the complex, inter-operator configuration to the autopilot. It provides the documented configuration as a service, but you always have the final say.

    The virtualization platform autopilot is currently a Developer Preview, and your feedback is crucial to its evolution. Are there adjacent operators or platform configurations you find difficult to manage today? Did the escape hatches give you enough flexibility for your environment? Reach out through our standard support channels or open an issue in the virt-platform-autopilot Git repository to help shape the future of zero-ops virtualization.
