Hosted control plane operations

December 4, 2024
Valentino Uberti, Marco Betti
Related topics:
Containers, DevOps, Hybrid cloud, Kubernetes, Virtualization
Related products:
Red Hat OpenShift, Red Hat OpenShift Virtualization

Hosted control plane (HCP) technology, through the API hypershift.openshift.io, provides a way to create and manage lightweight, flexible, heterogeneous Red Hat OpenShift Container Platform clusters at scale. The API exposes two user-facing resources: HostedCluster and NodePool. A HostedCluster resource encapsulates the control plane and common data plane configuration. When you create a HostedCluster resource, you have a fully functional control plane with no attached nodes. A NodePool resource is a scalable set of worker nodes attached to a HostedCluster resource.

This article focuses on a hosted control plane topology that uses Red Hat OpenShift Virtualization and the kubevirt provider to provision the worker nodes of the HostedCluster's NodePool. This is currently the only supported way to run OpenShift clusters on top of OpenShift Virtualization.

We will explain how to expose an application running on the hosted cluster and make it reachable from the outside using a NodePort service, as this operation may seem complex at first glance.

Having OpenShift Virtualization as the hypervisor for your OpenShift clusters, together with the HCP form factor, offers several key benefits:

  • Enhanced resource utilization by consolidating hosted clusters on the same underlying bare metal infrastructure.
  • Strong isolation between hosted control planes and hosted clusters.
  • Faster cluster provisioning by eliminating the bare metal node bootstrapping process.
  • Simplified management of multiple releases under a single base OpenShift Container Platform cluster.

To install hosted clusters with the kubevirt provider, refer to the official documentation.
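For orientation, creating a hosted cluster with the kubevirt provider usually comes down to a single hcp CLI invocation. The following is only an illustrative sketch; the cluster name, pull secret path, and sizing values are assumptions, not values from this article:

# Illustrative only: create a hosted cluster whose worker nodes are
# KubeVirt virtual machines running on the hosting cluster.
hcp create cluster kubevirt \
  --name cluster1 \
  --node-pool-replicas 2 \
  --pull-secret /path/to/pull-secret.json \
  --memory 8Gi \
  --cores 2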

Figure 1 shows the high-level design of a hosted control plane with the kubevirt provider.

Figure 1: Hosted control plane using OpenShift Virtualization.

Let's now start deploying an application called "petclinic" on the hosted cluster using this YAML file, as shown in Figure 2.

Figure 2: Example application deployment.

 The YAML file shows that the application listens on TCP ports 8080, 8443, and 8778:

 ports:
   - containerPort: 8080
     protocol: TCP
   - containerPort: 8443
     protocol: TCP
   - containerPort: 8778
     protocol: TCP 
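The full deployment manifest is shown only in Figure 2; as a rough sketch, it might look like the following (the image reference and the app: petclinic labels are illustrative assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic
  labels:
    app: petclinic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: petclinic
  template:
    metadata:
      labels:
        app: petclinic
    spec:
      containers:
        - name: petclinic
          # Illustrative image reference; the original manifest is in Figure 2.
          image: quay.io/example/spring-petclinic:latest
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8443
              protocol: TCP
            - containerPort: 8778
              protocol: TCP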

In a typical OpenShift installation, the next step would be to expose the application using one of the following Kubernetes services:

  • ClusterIP
  • NodePort
  • LoadBalancer

Considering that the hosted cluster's nodes run as virtual machines (pods) inside the software-defined network (SDN) of the OpenShift bare metal hosting cluster, we need to find a simple way to reach our application running on the hosted cluster from the external world.

If the application running on the hosted cluster is exposed using a ClusterIP service, the assigned cluster IP address belongs to the isolated hosted cluster network and cannot be reached from the outside as it is.

When exposing the application running on the hosted cluster using a LoadBalancer Kubernetes service, several requirements must be met before the configuration:

  • MetalLB must be installed on the hosting cluster.
  • A secondary network interface controller (NIC) must be configured on the hosting cluster.
  • A Network Attachment Definition must be configured on the hosting cluster.
  • A secondary NIC must be configured on the virtual machines acting as a worker node for the hosted cluster.
  • The correct IPAddressPool must be configured on the cluster (there is no need to specify the host interface in layer 2 mode); a minimal sketch follows this list.
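As a rough, illustrative example of that last item (the pool name, namespace, and address range are assumptions, not values from this article), a MetalLB layer 2 configuration might look like:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: hosted-cluster-pool
  namespace: metallb-system
spec:
  addresses:
    # Assumed address range on the hosting cluster's external network.
    - 192.168.123.200-192.168.123.210
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: hosted-cluster-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - hosted-cluster-pool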

Exposing an application running on the hosted cluster using a NodePort service can be a more straightforward solution in some cases, especially for non-HTTP/HTTPS services. In this case, the required steps are:

  • Create the NodePort service on the hosted cluster targeting the Pod application.
  • Create a NodePort service on the hosting cluster that targets the virtual machines' pods, which act as the hosted cluster's worker nodes.

Focusing on the NodePort exposition scenario, let's see how to implement the required steps.

Create the NodePort service on the hosted cluster

Apply the NodePort service YAML file on the hosted cluster as shown in Figure 3.

Figure 3: NodePort service.

The Service is accessible using the IP address of each OpenShift node at a designated static port (TCP 30001, 30002, and 30003 in this example), referred to as the NodePort. To make the node port available, Kubernetes also allocates a cluster IP address for the Service, just as it does when a Service of type ClusterIP is requested.
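The manifest applied in Figure 3 is not reproduced in text; a minimal sketch of such a Service might look like this (the app: petclinic selector and the port names are assumptions consistent with the deployment sketch above):

apiVersion: v1
kind: Service
metadata:
  name: petclinic-nodeport
spec:
  type: NodePort
  # Assumed selector; it must match the labels of the petclinic pods.
  selector:
    app: petclinic
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      nodePort: 30001
      protocol: TCP
    - name: https
      port: 8443
      targetPort: 8443
      nodePort: 30002
      protocol: TCP
    - name: monitoring
      port: 8778
      targetPort: 8778
      nodePort: 30003
      protocol: TCP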

The hosted cluster nodes are shown in Figure 4.

Figure 4: Hosted cluster node list.

These are the hosted cluster nodes' IP addresses:

$ oc get nodes -o wide | awk {'print $1" "$3" "$6'} | column -t
NAME                     ROLES   INTERNAL-IP
cluster1-985defa1-cqv97  worker  10.131.0.192
cluster1-985defa1-cvmn6  worker  10.128.2.82

Because we are using the HCP kubevirt provider, the hosted cluster's nodes are virtual machines from the point of view of the hosting cluster. See Figure 5.

Figure 5: Hosting cluster virtual machines.

Note that the IPs 10.131.0.192 and 10.128.2.82 assigned to the hosted cluster nodes are the same as the IPs of the virtual machines running on the hosting cluster.
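You can verify this from the hosting cluster by listing the VirtualMachineInstances in the hosted cluster's namespace; the namespace name below is an assumption, as hosted control planes typically create a dedicated namespace per hosted cluster:

$ oc get vmi -n clusters-cluster1 -o wide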

It should be clear now that when we create a NodePort service on the hosted cluster, we open some ports on the virtual machines running on the hosting cluster. Those virtual machines run as pods on the hosting cluster and are attached by default to its pod network. See Figure 6.

Figure 6: Hosting cluster virtual machine pods.

For confirmation, the virt-launcher pod of one virtual machine has the same IP address (10.131.0.192), which is part of the hosting cluster's pod network, as depicted in Figure 7.

Figure 7: virt-launcher pod's labels.
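To check this yourself, you can list the virt-launcher pods together with their IPs and labels; again, the namespace is an assumption:

$ oc get pods -n clusters-cluster1 -o wide --show-labels | grep virt-launcher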

Create the NodePort service on the hosting cluster

We must now create a NodePort service on the hosting cluster to expose the TCP ports 30001, 30002, and 30003 listening on the virtual machine pods. For the Service's selector, we have several options because the hosted control plane process adds different labels to each NodePool at installation time; in this case, the following label was chosen because it selects all the virtual machines (pods) acting as OpenShift nodes for the hosted cluster:

 cluster.x-k8s.io/cluster-name=cluster1-8p5rc
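A sketch of such a hosting-cluster NodePort Service, using that label as the selector, might look like the following (the Service name, namespace, and port names are assumptions; the port numbers match the node ports opened above):

apiVersion: v1
kind: Service
metadata:
  name: cluster1-petclinic-nodeport
  # Assumed namespace: the one containing the hosted cluster's virtual machine pods.
  namespace: clusters-cluster1
spec:
  type: NodePort
  selector:
    cluster.x-k8s.io/cluster-name: cluster1-8p5rc
  ports:
    - name: http
      port: 30001
      targetPort: 30001
      nodePort: 30001
      protocol: TCP
    - name: https
      port: 30002
      targetPort: 30002
      nodePort: 30002
      protocol: TCP
    - name: monitoring
      port: 30003
      targetPort: 30003
      nodePort: 30003
      protocol: TCP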

After creating the NodePort service on the hosting cluster, we see that it opened TCP ports 30001, 30002, and 30003 on the nodes of the hosting cluster, as shown in Figure 8.

Figure 8: NodePort service.

Also, we see that the NodePort service correctly selected the virtual machine pods (Figure 9).

Figure 9: NodePort service selected pods.
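If you prefer the CLI to the console, the endpoints behind the Service can also be listed with a command along these lines (the Service name and namespace are the assumed values from the sketch above):

$ oc get endpoints cluster1-petclinic-nodeport -n clusters-cluster1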

The hosting cluster nodes have the following IP addresses: 

$ oc get nodes -o wide | awk {'print $1" "$3" "$6'} | column -t
NAME                          ROLES                 INTERNAL-IP 
ocp4-master1.aio.example.com  control-plane,master  192.168.123.101
ocp4-master2.aio.example.com  control-plane,master  192.168.123.102 
ocp4-master3.aio.example.com  control-plane,master  192.168.123.103
ocp4-worker1.aio.example.com  worker                192.168.123.104
ocp4-worker2.aio.example.com  worker                192.168.123.105
ocp4-worker3.aio.example.com  worker                192.168.123.106

To confirm that the ports are correctly opened on the hosting cluster's worker nodes, let's open a debug pod on one of the nodes and check whether one of the expected ports is in a listening state:

$ oc debug node/ocp4-worker3.aio.example.com
Temporary namespace openshift-debug-cphkq is created for debugging node... Starting pod/ocp4-worker3aioexamplecom-debug-mzrzx
To use host binaries, run `chroot /host`
Pod IP: 192.168.123.106
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-5.1# ss -tlnp | grep 30002
LISTEN 0  4096      *:30002       *:*   users:(("ovnkube",pid=6177,fd=26))

Now it's time to check whether our "petclinic" application running on the hosted cluster is reachable from the outside through the hosting cluster worker nodes' IP addresses (192.168.123.104, 192.168.123.105, and 192.168.123.106) and TCP port 30001:

[root@ocp4-bastion ~]# curl -s 192.168.123.104:30001 | grep "pet"
<img class="img-responsive" src="/resources/images/pets.png"/>
[root@ocp4-bastion ~]# curl -s 192.168.123.105:30001 | grep "pet"
<img class="img-responsive" src="/resources/images/pets.png"/>
[root@ocp4-bastion ~]# curl -s 192.168.123.106:30001 | grep "pet"
<img class="img-responsive" src="/resources/images/pets.png"/>

As we can see, our "petclinic" application running on the hosted cluster is now accessible from the outside world.

Conclusion

Exposing an application running on a hosted cluster, created by the hosted control plane with the kubevirt provider, through a NodePort service opens up many opportunities. And, of course, all of the described procedures can be fully automated with a GitOps approach, thanks to Red Hat Advanced Cluster Management and OpenShift GitOps.

Have fun!
