
Hosted control plane operations

December 4, 2024
Valentino Uberti, Marco Betti
Related topics:
Containers, DevOps, Hybrid Cloud, Kubernetes, Virtualization
Related products:
Red Hat OpenShift, Red Hat OpenShift Virtualization


Hosted control plane (HCP) technology, through the API hypershift.openshift.io, provides a way to create and manage lightweight, flexible, heterogeneous Red Hat OpenShift Container Platform clusters at scale. The API exposes two user-facing resources: HostedCluster and NodePool. A HostedCluster resource encapsulates the control plane and common data plane configuration. When you create a HostedCluster resource, you have a fully functional control plane with no attached nodes. A NodePool resource is a scalable set of worker nodes attached to a HostedCluster resource.
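For orientation, a heavily abbreviated HostedCluster and NodePool pair for the kubevirt provider might look like the following sketch (names, namespace, release image, and sizing are illustrative, not taken from this article):

apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: cluster1        # illustrative name
  namespace: clusters   # illustrative namespace
spec:
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.17.0-x86_64  # illustrative release
  pullSecret:
    name: cluster1-pull-secret
  platform:
    type: KubeVirt
---
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: cluster1
  namespace: clusters
spec:
  clusterName: cluster1
  replicas: 2           # a scalable set of worker nodes
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.17.0-x86_64
  platform:
    type: KubeVirt
    kubevirt:
      compute:
        cores: 4        # illustrative sizing
        memory: 8Gi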

This article focuses on the hosted control plane topology that uses Red Hat OpenShift Virtualization and the kubevirt provider to provision the worker nodes of the HostedCluster's NodePools. This is currently the only supported way to run OpenShift clusters on top of OpenShift Virtualization.

We will explain how to expose an application running on the hosted cluster and make it reachable from the outside using a NodePort service, as this operation may seem complex at first glance.

Having OpenShift Virtualization as the hypervisor for your OpenShift clusters, together with the HCP form factor, offers several key benefits:

  • Enhanced resource utilization by consolidating hosted clusters on the same underlying bare metal infrastructure.
  • Strong isolation between hosted control planes and hosted clusters.
  • Faster cluster provisioning by eliminating the bare metal node bootstrapping process.
  • Simplified management of multiple releases under a single base OpenShift Container Platform cluster.

To install hosted clusters with the kubevirt provider, refer to the official documentation.
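In practice, the hcp CLI generates these resources for you; a minimal invocation with the kubevirt provider (all values are placeholders) looks roughly like:

$ hcp create cluster kubevirt \
    --name cluster1 \
    --node-pool-replicas 2 \
    --cores 2 \
    --memory 8Gi \
    --pull-secret ./pull-secret.json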

Figure 1 shows the high-level design of a hosted control plane with the kubevirt provider.

Figure 1: Hosted control plane using OpenShift Virtualization.

Let's now deploy an application called "petclinic" on the hosted cluster using this YAML file, as shown in Figure 2.

Figure 2: Example application deployment.

The YAML file shows that the application listens on TCP ports 8080, 8443, and 8778:

 ports:
   - containerPort: 8080
     protocol: TCP
   - containerPort: 8443
     protocol: TCP
   - containerPort: 8778
     protocol: TCP 
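Putting it together, a deployment along these lines might look like the following sketch (the image reference and the app label are assumptions for illustration; the actual manifest is in the linked YAML file):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: petclinic          # assumed label, reused by the NodePort service later
  template:
    metadata:
      labels:
        app: petclinic
    spec:
      containers:
        - name: petclinic
          image: quay.io/example/spring-petclinic:latest  # hypothetical image reference
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8443
              protocol: TCP
            - containerPort: 8778
              protocol: TCP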

In a typical OpenShift installation, the next step would be to expose the application using one of the following Kubernetes services:

  • ClusterIP
  • NodePort
  • LoadBalancer

Considering that the hosted cluster runs as virtual machines (pods) inside the software-defined network (SDN) of the OpenShift bare metal cluster, we need to find a simple way to reach our application running on the hosted cluster from the external world.

If the application running on the hosted cluster is exposed using a ClusterIP service, the assigned cluster IP address belongs to the isolated hosted cluster network and cannot be reached from the outside as is.

When exposing the application running on the hosted cluster using a LoadBalancer Kubernetes service, several requirements must be met before configuration:

  • MetalLB must be installed on the hosting cluster.
  • A secondary network interface controller (NIC) must be configured on the hosting cluster.
  • A Network Attachment Definition must be configured on the hosting cluster.
  • A secondary NIC must be configured on the virtual machines acting as a worker node for the hosted cluster.
  • The correct IPAddressPool must be configured on the cluster (there is no need to specify the host interface in layer 2 mode); see the sketch after this list.
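As a sketch of that last requirement, a layer 2 MetalLB configuration on the hosting cluster could look like this (pool name and address range are placeholders):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: hosted-cluster-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.123.200-192.168.123.220   # placeholder range reachable on the secondary network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: hosted-cluster-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - hosted-cluster-pool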

Exposing an application running on the hosted cluster using a NodePort service can be a more straightforward solution in some cases, especially for non-HTTP/HTTPS services. In this case, the required steps are:

  • Create the NodePort service on the hosted cluster targeting the Pod application.
  • Create a NodePort service on the hosting cluster that targets the pods of the virtual machines acting as the hosted cluster's worker nodes.

Focusing on the NodePort exposure scenario, let's see how to implement the required steps.

Create the NodePort service on the hosted cluster

Apply the NodePort service YAML file on the hosted cluster as shown in Figure 3.

Figure 3: NodePort service.
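The exact manifest is in the linked YAML file; a representative version (port names and the app selector are assumptions matching the deployment sketch above) is:

apiVersion: v1
kind: Service
metadata:
  name: petclinic-nodeport
spec:
  type: NodePort
  selector:
    app: petclinic            # assumed pod label from the deployment
  ports:
    - name: port-8080
      port: 8080
      targetPort: 8080
      nodePort: 30001
      protocol: TCP
    - name: port-8443
      port: 8443
      targetPort: 8443
      nodePort: 30002
      protocol: TCP
    - name: port-8778
      port: 8778
      targetPort: 8778
      nodePort: 30003
      protocol: TCP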

The service is accessible through the IP address of each OpenShift node at a designated static port (TCP 30001, 30002, and 30003 in this example), referred to as the NodePort. To make the node port available, Kubernetes also allocates a cluster IP address, just as it does when a Service of type ClusterIP is requested.

The hosted cluster nodes are shown in Figure 4.

Figure 4: Hosted cluster node list.

These are the hosted cluster nodes' IP addresses:

$ oc get nodes -o wide | awk '{print $1" "$3" "$6}' | column -t
NAME                     ROLES   INTERNAL-IP
cluster1-985defa1-cqv97  worker  10.131.0.192
cluster1-985defa1-cvmn6  worker  10.128.2.82

Because we are using the HCP kubevirt provider, the hosted cluster's nodes are virtual machines from the point of view of the hosting cluster. See Figure 5.

Figure 5: Hosting cluster virtual machines.

Note that the IPs 10.131.0.192 and 10.128.2.82 assigned to the hosted cluster nodes match the IPs of the virtual machines running on the hosting cluster.

It should be clear now that when we create a NodePort service on the hosted cluster, we open ports on the virtual machines running on the hosting cluster. Each virtual machine on the hosting cluster runs as a pod and is attached by default to the hosting cluster's pod network. See Figure 6.

Figure 6: Hosting cluster virtual machine pods.

For confirmation, the virt-launcher pod of one virtual machine has the same IP address (10.131.0.192), which belongs to the hosting cluster's pod network, as depicted in Figure 7.

Figure 7: virt-launcher pod's labels.
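To reproduce this check, you can list the virt-launcher pods together with their IP addresses on the hosting cluster (the namespace name here is an assumption; it typically follows the <clusters-namespace>-<cluster-name> pattern):

$ oc get pods -n clusters-cluster1 -o wide | grep virt-launcher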

Create the NodePort service on the hosting cluster

We must now create a NodePort service on the hosting cluster to expose TCP ports 30001, 30002, and 30003 listening on the virtual machine pods. For the service's pod selector, we have several options, because the hosted control plane process adds different labels to each NodePool at installation time. In this case, the following label was chosen because it selects all the virtual machines (pods) acting as OpenShift nodes for the hosted cluster:

 cluster.x-k8s.io/cluster-name=cluster1-8p5rc
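A minimal service built on that label might look like the following sketch (the service name and namespace are assumptions; the namespace is the one where the hosted cluster's virtual machine pods run):

apiVersion: v1
kind: Service
metadata:
  name: cluster1-apps-nodeport   # hypothetical name
  namespace: clusters-cluster1   # hypothetical hosted control plane namespace
spec:
  type: NodePort
  selector:
    cluster.x-k8s.io/cluster-name: cluster1-8p5rc
  ports:
    - name: port-30001
      port: 30001
      targetPort: 30001
      nodePort: 30001
      protocol: TCP
    - name: port-30002
      port: 30002
      targetPort: 30002
      nodePort: 30002
      protocol: TCP
    - name: port-30003
      port: 30003
      targetPort: 30003
      nodePort: 30003
      protocol: TCP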

After creating the NodePort service on the hosting cluster, we see that it opened TCP ports 30001, 30002, and 30003 on the nodes of the hosting cluster, as shown in Figure 8.

Figure 8: NodePort service.

Also, we see that the NodePort service correctly selected the virtual machine pods (Figure 9).

Figure 9: NodePort service selected pods.

The hosting cluster nodes have the following IP addresses: 

$ oc get nodes -o wide | awk '{print $1" "$3" "$6}' | column -t
NAME                          ROLES                 INTERNAL-IP
ocp4-master1.aio.example.com  control-plane,master  192.168.123.101
ocp4-master2.aio.example.com  control-plane,master  192.168.123.102
ocp4-master3.aio.example.com  control-plane,master  192.168.123.103
ocp4-worker1.aio.example.com  worker                192.168.123.104
ocp4-worker2.aio.example.com  worker                192.168.123.105
ocp4-worker3.aio.example.com  worker                192.168.123.106

To confirm that the ports are correctly opened on the hosting worker nodes, let's open a debug pod on one of the nodes and check whether one of the desired ports is in a listening state:

$ oc debug node/ocp4-worker3.aio.example.com
Temporary namespace openshift-debug-cphkq is created for debugging node... Starting pod/ocp4-worker3aioexamplecom-debug-mzrzx
To use host binaries, run `chroot /host`
Pod IP: 192.168.123.106
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-5.1# ss -tlnp | grep 30002
LISTEN 0  4096      *:30002       *:*   users:(("ovnkube",pid=6177,fd=26))

Now it's time to check whether our "petclinic" application running on the hosted cluster is reachable from the outside through the hosting worker nodes' IP addresses (192.168.123.104, 192.168.123.105, and 192.168.123.106) on TCP port 30001:

[root@ocp4-bastion ~]# curl -s 192.168.123.104:30001 | grep "pet"
<img class="img-responsive" src="/resources/images/pets.png"/>
[root@ocp4-bastion ~]# curl -s 192.168.123.105:30001 | grep "pet"
<img class="img-responsive" src="/resources/images/pets.png"/>
[root@ocp4-bastion ~]# curl -s 192.168.123.106:30001 | grep "pet"
<img class="img-responsive" src="/resources/images/pets.png"/>

As we can see, our "petclinic" application running on the hosted cluster is now accessible from the outside world.

Conclusion

Exposing an application running on a hosted cluster, created by a hosted control plane with the kubevirt provider, through a NodePort service opens up many possibilities. And, of course, the whole procedure can be fully automated with a GitOps approach, thanks to Red Hat Advanced Cluster Management and OpenShift GitOps.

Have fun!
