
Deploy hosted control planes with OpenShift Virtualization: Split hub

April 27, 2026
Prakash Rajendran
Related topics:
Artificial intelligence, Kubernetes, Virtualization
Related products:
Red Hat Advanced Cluster Management for Kubernetes, Red Hat OpenShift, Red Hat OpenShift Virtualization

    In Part 1 of this article series, we deployed Red Hat Advanced Cluster Management for Kubernetes, hosted control planes (HCP), and Red Hat OpenShift Virtualization entirely inside one Red Hat OpenShift cluster. While the all-in-one model is simple and easy to understand, enterprises rarely operate this way in production. Large organizations prefer to separate fleet management from cluster hosting, allowing different teams and infrastructure zones to scale independently and cleanly.

    OpenShift clusters

    In this topology, we need two OpenShift clusters.

    Cluster A (the Red Hat Advanced Cluster Management hub):

    • Runs Red Hat Advanced Cluster Management
    • Does not host control planes
    • Does not run worker VMs
    • Manages lifecycle and governance of many clusters

    Cluster B (management/hosting cluster: multicluster engine for Kubernetes, hosted control plane, and OpenShift Virtualization):

    • Runs multicluster engine for Kubernetes
    • Runs hosted control plane
    • Runs OpenShift Virtualization
    • Creates HostedClusters and NodePools
    • Hosts both control plane pods and worker VMs

    Hosted clusters created in Cluster B are then imported into Cluster A for Day-2 operations (Figure 1).

    A diagram shows the hub and spoke architecture.
    Figure 1: Hub and spoke architecture.

    Enterprises choose this model for these reasons:

    • Clear separation of duties
      • Hub team controls governance, policy, visibility
      • Hosting platform team manages actual cluster creation + compute resources
    • Red Hat Advanced Cluster Management cluster stays lightweight
      • No hosted workloads or control planes overload the hub.
    • Multicluster engine for Kubernetes cluster can scale independently
      • Add storage, CPU, worker nodes as demand for HostedClusters grows.
    • Supports multi-datacenter strategies
      • Hub cluster can run centrally; hosting cluster may run in different regions.

    Common prerequisites:

    • DNS entries for the hub cluster pointing to an IP on the same subnet as the cluster nodes:

    • API: api.cluster.example.com

    • Ingress: *.apps.cluster.example.com

    • Plan for firewall rules, load balancers, and DNS entries across clusters.
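
    As an illustration, the DNS entries above could be expressed as the following BIND-style zone-file fragment. The IP addresses here are placeholders; replace them with addresses from your cluster's node subnet:

```
; Hypothetical zone entries for the hub cluster (replace the IPs with your own)
api.cluster.example.com.     IN  A  192.168.34.20
*.apps.cluster.example.com.  IN  A  192.168.34.21
```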

    Tools/credentials required:

    • OpenShift installer
    • oc CLI
    • Pull secret
    • SSH keys
    • hcp CLI
    • clusteradm CLI plug-in
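
    Before starting, it can help to confirm the CLIs are on your PATH. This is a minimal sketch; the version output will vary with your environment:

```shell
# Confirm the required CLIs are installed and report their versions
openshift-install version
oc version --client
hcp version
clusteradm version
```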

    Cluster A (Red Hat Advanced Cluster Management)

    For demonstration purposes, we tested this on VMs running on vSphere using nested virtualization. 

    Node sizing (compact 3-node cluster): 3 x master/worker nodes, each with 8 vCPUs / 32 GB memory / 1 x 125 GB disk

    Once all the prerequisites are met, install the OpenShift cluster using your preferred installation method.

    Then install the Red Hat Advanced Cluster Management operator.

    Check whether all the required operators are installed on Cluster A (Figure 2).

    This table shows the operators installed on Cluster A.
    Figure 2: Installed operators on Cluster A.

    Prerequisites for Cluster B (multicluster engine for Kubernetes + hosted control plane + OCP-V):

    For demonstration purposes, we tested this on VMs running on vSphere using nested virtualization.

    3 x master nodes: 8 vCPUs / 32 GB memory / 1 x 125 GB disk

    3 x worker nodes: 16 vCPUs / 48 GB memory / 1 x 125 GB disk + 1 x 500 GB disk

    To keep things simple, we are going to use the LVM storage class for both the etcd PVs of the hosted clusters and the OpenShift Virtualization (OCP-V) requirements. In a real-world scenario, consult the official Red Hat documentation to choose the right storage classes.

    Install the LVM Storage operator.

    Once the LVMS operator is installed, create the following CR.

    $ cat <<EOF | oc apply -f -
    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: lvmcluster-sample
      namespace: openshift-storage
    spec:
      storage:
        deviceClasses:
          - fstype: xfs
            thinPoolConfig:
              chunkSizeCalculationPolicy: Static
              metadataSizeCalculationPolicy: Host
              sizePercent: 90
              name: thin-pool-1
              overprovisionRatio: 10
            default: true
            name: vg1
    EOF
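
    Before moving on, it may help to confirm the LVMCluster is healthy. A quick check along these lines (resource names as created above):

```shell
# Wait until the LVMCluster reports a Ready status
oc get lvmcluster lvmcluster-sample -n openshift-storage

# The lvms-vg1 StorageClass should have been created automatically
oc get storageclass lvms-vg1
```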

    Once the LVMCluster is available, you should see a new StorageClass named lvms-vg1. However, we also need to create a new StorageClass with volumeBindingMode: Immediate.

    Note: This step is required only when you are using the LVM StorageClass for OpenShift Virtualization and for testing purposes. In a production scenario, you will likely choose a different storage solution that supports RWX, where volumeBindingMode: WaitForFirstConsumer is recommended.

    $ cat <<EOF | oc apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: lvm-immediate
      annotations:
        description: Provides RWO and RWOP Filesystem & Block volumes
        storageclass.kubernetes.io/is-default-class: "true"
      labels:
        owned-by.topolvm.io/group: lvm.topolvm.io
        owned-by.topolvm.io/kind: LVMCluster
        owned-by.topolvm.io/name: lvmcluster-sample
        owned-by.topolvm.io/namespace: openshift-storage
        owned-by.topolvm.io/version: v1alpha1
    provisioner: topolvm.io
    parameters:
      csi.storage.k8s.io/fstype: xfs
      topolvm.io/device-class: vg1
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    volumeBindingMode: Immediate
    EOF

    Make sure to remove the default StorageClass annotation from lvms-vg1.

    $ oc patch storageclass lvms-vg1 \
      -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": null}}}'
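
    To confirm the change took effect, you can list the storage classes; only lvm-immediate should now carry the (default) marker:

```shell
# lvm-immediate should be the only class flagged as (default)
oc get storageclass
```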

    Install the MetalLB Operator.

    Configure the IPAddressPool and L2Advertisement resources as per the official documentation.

    Once you've installed the MetalLB Operator, create the following:

    $ cat <<EOF | oc apply -f -
    apiVersion: metallb.io/v1beta1
    kind: MetalLB
    metadata:
      name: metallb
      namespace: metallb-system
    EOF
    
    $ cat <<EOF | oc apply -f -
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: metallb
      namespace: metallb-system
    spec:
      addresses:
      - 192.168.34.205-192.168.34.215
    EOF
    
    $ cat <<EOF | oc apply -f -
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: l2advertisement
      namespace: metallb-system
    spec:
      ipAddressPools:
       - metallb
    EOF
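
    You can then verify that MetalLB is up and that the resources were created; a possible check (namespace and names as used above):

```shell
# Confirm the MetalLB controller and speaker pods are running
oc get pods -n metallb-system

# Confirm the address pool and L2 advertisement exist
oc get ipaddresspool,l2advertisement -n metallb-system
```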

    Patch the default IngressController to allow wildcard routes:

    $ oc patch ingresscontroller -n openshift-ingress-operator default \
      --type=json \
      -p '[{"op": "add", "path": "/spec/routeAdmission", "value": {"wildcardPolicy": "WildcardsAllowed"}}]'

    Install the OpenShift Virtualization operator and create the HyperConverged CR.

    Once you've installed all the operators successfully, it's time to test by creating a simple VM to make sure OpenShift Virtualization is functioning. Once VM validation is successful, proceed with the next steps to configure the multicluster engine for Kubernetes.
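
    For the smoke test, a minimal VirtualMachine CR along these lines can be used; the name, namespace, and container disk image below are illustrative, not prescribed by this setup:

```yaml
# Hypothetical smoke-test VM using a Fedora container disk
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: smoke-test-vm
  namespace: default
spec:
  runStrategy: Always
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
```

    If the corresponding VirtualMachineInstance reaches the Running phase, OpenShift Virtualization is working; delete the VM afterwards to free resources.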

    Install the multicluster engine for Kubernetes operator.

    Once you've installed the multicluster engine for Kubernetes successfully, make sure the hub cluster is listed as a managed cluster:

    $ oc get managedclusters local-cluster

    Check whether all the required operators are installed (Figure 3).

    This table shows the operators installed on Cluster B.
    Figure 3: Installed operators on Cluster B.

    Prepare the multicluster engine for Kubernetes cluster before importing it into the Red Hat Advanced Cluster Management cluster.

    It is important to complete all the steps outlined in the official documentation so that the multicluster engine for Kubernetes cluster can be managed from Red Hat Advanced Cluster Management and hosted clusters are auto-discovered and imported using policies.

    Do not proceed with the next section before completing this one. Once the multicluster engine for Kubernetes cluster is imported, it appears in Red Hat Advanced Cluster Management (Cluster A), as shown in Figure 4.

    This table shows the cluster view of Cluster A.
    Figure 4: Cluster view from Cluster A.

    On the multicluster engine for Kubernetes cluster, there is just the local cluster (Figure 5).

    This table shows the cluster view of Cluster B.
    Figure 5: Cluster view from Cluster B.

    Create a hosted cluster on Cluster B

    You cannot create a hosted cluster from the Red Hat Advanced Cluster Management hub cluster. Connect to the multicluster engine for Kubernetes cluster and run the hcp create cluster command to create the hosted cluster.

    This single hcp create cluster command provisions the hosted cluster on the multicluster engine for Kubernetes cluster (Cluster B), with the NodePool worker VMs created through KubeVirt.

    $ export KUBECONFIG=kubeconfig-mce
    
    $ hcp create cluster kubevirt \
        --name mce-hc1 \
        --pull-secret pull-secret.txt \
        --node-pool-replicas 2 \
        --memory 8Gi \
        --cores 2 \
        --etcd-storage-class=lvm-immediate \
        --namespace clusters \
        --release-image quay.io/openshift-release-dev/ocp-release:4.18.28-multi

    Wait 10 to 15 minutes for the hosted cluster to be created. While waiting, you can check the status in the GUI of the multicluster engine for Kubernetes (a.k.a. hosting) cluster (Cluster B).
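
    Instead of polling the console, you can also block until the hosted cluster reports availability. This sketch assumes the cluster name and namespace used in the create command above:

```shell
# Block until the HostedCluster's Available condition is True (up to 20 minutes)
oc wait hostedcluster/mce-hc1 -n clusters \
  --for=condition=Available --timeout=20m
```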

    We can check the status of the hosted cluster from Red Hat Advanced Cluster Management (Cluster A) and the NodePool status from the multicluster engine for Kubernetes cluster (Cluster B) as follows:

    $ oc --kubeconfig=kubeconfig-acm get managedcluster 
    NAME                    HUB ACCEPTED   MANAGED CLUSTER URLS               JOINED   AVAILABLE   AGE
    local-cluster           true           https://api.acm.example.com:6443   True     True        46h
    mce-with-ocpv           true           https://api.mce.example.com:6443   True     True        4h10m
    mce-with-ocpv-mce-hc1   true           https://192.168.34.205:6443        True     True        4h2m

    Notice that the hosted cluster mce-hc1 is prefixed with mce-with-ocpv because it was configured to be imported automatically under the name of its managed cluster.

    $ oc --kubeconfig=kubeconfig-mce get managedcluster 
    NAME            HUB ACCEPTED   MANAGED CLUSTER URLS               JOINED   AVAILABLE   AGE
    local-cluster   true           https://api.mce.example.com:6443   True     True        4h26m
    mce-hc1         true           https://192.168.34.205:6443        True     True        4h10m
    
    $ oc --kubeconfig=kubeconfig-mce get hostedcluster -n clusters
    NAME       VERSION   KUBECONFIG               PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
    mce-hc1   4.18.28   mce-hc1-admin-kubeconfig   Completed   True        False         The hosted control plane is available
    
    $ oc --kubeconfig=kubeconfig-mce get vmi -n clusters-mce-hc1
    NAME                  AGE    PHASE     IP             NODENAME      READY
    mce-hc1-j6vtm-8c5kd   4h4m   Running   10.130.0.158   mce-worker1   True
    mce-hc1-j6vtm-gnpc9   4h5m   Running   10.130.0.157   mce-worker1   True

    Figure 6 shows the cluster view from Red Hat Advanced Cluster Management (Cluster A).

    Cluster view of Cluster A.
    Figure 6: Cluster view of Cluster A.

    Finally, we can check the hosted cluster and run commands.

    $ export KUBECONFIG=kubeconfig-mce ; hcp create kubeconfig --name mce-hc1 --namespace clusters > kubeconfig-mce-hc1
    
    $ oc --kubeconfig=kubeconfig-mce-hc1 get nodes
    NAME                  STATUS   ROLES    AGE    VERSION
    mce-hc1-j6vtm-8c5kd   Ready    worker   4h7m   v1.31.13
    mce-hc1-j6vtm-gnpc9   Ready    worker   4h8m   v1.31.13
    
    $ oc --kubeconfig=kubeconfig-mce-hc1 whoami --show-console
    https://console-openshift-console.apps.mce-hc1.apps.mce.example.com

    Notice that the console URL points to the wildcard DNS of Cluster B.

    $ oc --kubeconfig=kubeconfig-mce-hc1 whoami --show-server
    https://192.168.34.205:6443

    Notice that the API address points to an IP address from the MetalLB range we configured earlier.

    The following lists the pros and cons of this topology.

    Pros:

    • The most stable and scalable production design
    • Hub is lightweight and secure
    • Hosting resources scale independently
    • Enterprise teams can align with operational boundaries

    Cons:

    • Slightly more operational complexity
    • HostedClusters must be imported manually (unless automated)
    • Networking dependencies between clusters

    Wrap up

    In this installment, we separated Red Hat Advanced Cluster Management (hub) and the multicluster engine for Kubernetes/hosted control plane/OpenShift Virtualization (management and hosting) across two clusters. This pattern ensures clean isolation, predictable scaling, and clear team responsibilities. But some customers operate on yet another level of scale and flexibility: they split control-plane hosting from worker-node hosting.

    In the next installment, we'll discuss an advanced architecture with the following layout:

    • Red Hat Advanced Cluster Management Hub on Cluster A
    • Multicluster engine for Kubernetes + hosted control plane on Cluster B
    • OpenShift Virtualization (NodePool VMs only) on Cluster C
    • NodePools live on Cluster C even though control planes run on Cluster B

    This allows incredible flexibility in worker VM placement and multi-zone hosting strategies. Stay tuned for Part 3.
