How to build hosted clusters on the OpenStack platform

December 19, 2024
Emilien Macchi
Related topics: Kubernetes
Related products: Red Hat OpenShift

    As cloud-native infrastructure evolves, managing Red Hat OpenShift clusters demands greater scalability and efficiency. Hosted control planes (HCP) represent a transformative approach in OpenShift, decoupling control planes and hosting them in lightweight, flexible pods. This innovation enhances resource utilization, multi-tenancy, and hybrid-cloud capabilities—key priorities for modern infrastructure.

    Learn more about Shift-On-Stack enhanced with hosted control planes.

    Cluster preparation

A more comprehensive guide is available in the project documentation. This section highlights the requirements and steps that are specific to the OpenStack platform.

    Requirements

    The following requirements must be met:

    • Admin access to an OpenShift cluster (version 4.17+) specified by the KUBECONFIG environment variable. This cluster is referred to as the Management cluster. It can run on any platform supported by HCP. The HyperShift Operator must be installed in this Management cluster.
• The OpenStack Octavia service must be running in the cloud hosting the Hosted Cluster infrastructure when ingress is configured with an Octavia load balancer. In the future, we'll explore other ingress options, such as MetalLB.
    • The default external network (on which the kube-apiserver LoadBalancer type service is created) of the Management OpenShift Container Platform cluster must be reachable from the Hosted Clusters.
• The Red Hat Enterprise Linux CoreOS (RHCOS) image must be uploaded to OpenStack prior to deploying a Hosted Cluster. The image can be uploaded to the admin project and made available to other tenants (by being made public), or it can be uploaded to every project used by HostedClusters (see the example below).
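
As a sketch of that upload step, the following uses the standard openstack CLI. The image file name here is an example; download the RHCOS OpenStack QCOW2 image matching your OpenShift release first, and note that --public typically requires admin rights (omit it for per-project uploads):

# Upload the RHCOS image to Glance. The file name is an example;
# --public makes it visible to all tenants and needs admin rights.
openstack image create \
  --disk-format qcow2 \
  --container-format bare \
  --file rhcos-openstack.x86_64.qcow2 \
  --public \
  rhcos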

    Prerequisites

    In addition to the above requirements, the HyperShift Operator and the HCP CLI must also be installed.

    Because OpenStack is not yet a supported platform in HCP, the following procedure must be followed:

# Build the hypershift and hcp binaries inside a Go container
podman run --rm --privileged -it -v \
$PWD:/output docker.io/library/golang:1.22 /bin/bash -c \
'git clone https://github.com/openshift/hypershift.git && \
cd hypershift/ && \
make hypershift product-cli && \
mv bin/hypershift /output/hypershift && \
mv bin/hcp /output/hcp'

# Install the binaries system-wide and clean up the build output
sudo install -m 0755 -o root -g root $PWD/hypershift /usr/local/bin/hypershift
sudo install -m 0755 -o root -g root $PWD/hcp /usr/local/bin/hcp
rm $PWD/hypershift $PWD/hcp

# Install the HyperShift Operator on the Management cluster
export KUBECONFIG=<path to management cluster admin kubeconfig>
hypershift install --tech-preview-no-upgrade
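
As a quick sanity check, you can confirm that both binaries landed on the PATH:

# Both commands should print an install path under /usr/local/bin
command -v hypershift hcp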

    In production, the procedure would typically look like this:

1. Install Red Hat Advanced Cluster Management for Kubernetes; see the documentation and follow the installation manual.
    2. Enable the HCP feature by following the procedure documented in the HCP manual.

Note that the HyperShift Operator has to be installed with a TechPreview flag. This can be accomplished by creating a ConfigMap named hypershift-operator-install-flags in the local-cluster namespace:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: hypershift-operator-install-flags
      namespace: local-cluster
    data:
      installFlagsToAdd: "--tech-preview-no-upgrade"
      installFlagsToRemove: ""
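
A minimal sketch of creating that ConfigMap, assuming the manifest above was saved as hypershift-operator-install-flags.yaml:

# Apply the manifest on the Management cluster
oc apply -f hypershift-operator-install-flags.yaml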
3. Install the HCP command-line interface by following the procedure documented in the HCP manual (refer to the latest version of OpenShift).

    Prerequisites for Ingress

    To get Ingress healthy in a HostedCluster without manual intervention, you need to create a floating IP that will be used by the Ingress service:

    export OS_CLOUD=<name of the credentials used for the Hosted Cluster>
    openstack floating ip create <external-network-id>

You’ll also need to create a DNS record for the wildcard domain *.apps.<cluster-name>.<base-domain>, pointing at the Ingress floating IP.
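
Once the record is in place, a quick way to verify it (the hostname below uses example values for the cluster name and base domain):

# Any name under the wildcard should resolve to the Ingress floating IP
dig +short test.apps.example.hypershift.lab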

    Create a HostedCluster

Multiple options are available on OpenStack for choosing how the NodePools will be configured:

    export CLUSTER_NAME=example
    export BASE_DOMAIN=hypershift.lab
    export PULL_SECRET="$HOME/pull-secret"
    export WORKER_COUNT="3"
    # "openstack-tenant-dev" is an entry in clouds.yaml
    # to create OpenStack resources in the cloud with enough
    # quotas and permissions to deploy the needed resources.
    export OS_CLOUD="openstack-tenant-dev"
    # RHCOS image name that must already exist in OpenStack Glance
    # This image can be shared across multiple projects or unique
    # per project.
    export IMAGE_NAME="rhcos"
    # Flavor for the nodepool which would be the same flavor
    # as a regular worker
    export FLAVOR="m1.large"
    # Floating IP for Ingress (previously created)
    export INGRESS_FLOATING_IP="<ingress-floating-ip>"
    # External network to use for the Ingress endpoint.
    export EXTERNAL_NETWORK_ID="5387f86a-a10e-47fe-91c6-41ac131f9f30"
    # SSH Key for the nodepool VMs
    export SSH_KEY="$HOME/.ssh/id_rsa.pub"
    hcp create cluster openstack \
      --name $CLUSTER_NAME \
      --base-domain $BASE_DOMAIN \
      --node-pool-replicas $WORKER_COUNT \
      --pull-secret $PULL_SECRET \
      --ssh-key $SSH_KEY \
      --openstack-external-network-id $EXTERNAL_NETWORK_ID \
      --openstack-node-image-name $IMAGE_NAME \
      --openstack-node-flavor $FLAVOR \
      --openstack-ingress-floating-ip $INGRESS_FLOATING_IP

    The time to deploy a HostedCluster with 3 NodePool replicas (3 virtual machines) is usually less than 15 minutes.

    You can check that it’s deployed with this command:

    oc get --namespace clusters hostedclusters
    NAME            VERSION   KUBECONFIG                       PROGRESS   AVAILABLE   PROGRESSING   MESSAGE
    example         4.18.0    example-admin-kubeconfig         Completed  True        False         The hosted control plane is available

    Since we deployed 3 replicas of the default NodePool, you can check that you have 3 virtual machines (VMs) running in OpenStack:

    openstack server list

    Accessing the HostedCluster

CLI access to the guest cluster is gained by retrieving its kubeconfig. Below is an example of how to retrieve it using the hcp command line:

    hcp create kubeconfig --name $CLUSTER_NAME > $CLUSTER_NAME-kubeconfig
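
You can then point oc at the guest cluster, for example to list its nodes (they appear once the NodePool machines have joined):

# Use the retrieved kubeconfig to inspect the guest cluster
oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes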

    Scale the NodePools

    Not only can you scale (up or down) the number of replicas for a given NodePool, you can also create new NodePools by specifying a name, number of replicas, and platform-specific information like the additional ports to create for each node and availability zone for the VMs.
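As a sketch of the scale-up path, NodePools can be scaled like any resource with a scale subresource. The NodePool name below is an assumption (the default NodePool is typically named after the cluster); list them first to confirm:

# List the NodePools, then scale one of them to 5 replicas
oc get nodepools --namespace clusters
oc scale nodepool $CLUSTER_NAME --namespace clusters --replicas 5
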

    Here is an example of a NodePool that will be deployed in a specific OpenStack Nova Availability Zone, with additional network connectivity to the SR-IOV network for running Containerized Network Functions (CNF):

    # "openstack-tenant-nfv" is an entry in clouds.yaml
    # to create OpenStack resources in the cloud with enough
    # quotas and permissions to deploy the NFV resources.
    export OS_CLOUD="openstack-tenant-nfv"
    export NODEPOOL_NAME=$CLUSTER_NAME-cnf
    export WORKER_COUNT="3"
    export IMAGE_NAME="rhcos"
    export FLAVOR="m1.xlarge.nfv"
# OpenStack Nova Availability Zone
    export AZ="az1"
    # OpenStack Neutron Network UUID for SR-IOV
    export SRIOV_NEUTRON_NETWORK_ID="<NEUTRON-NETWORK-UUID>"
    hcp create nodepool openstack \
      --cluster-name $CLUSTER_NAME \
      --name $NODEPOOL_NAME \
      --node-count $WORKER_COUNT \
      --openstack-node-image-name $IMAGE_NAME \
      --openstack-node-flavor $FLAVOR \
      --openstack-node-availability-zone $AZ \
      --openstack-node-additional-port "network-id:$SRIOV_NEUTRON_NETWORK_ID,vnic-type:direct,disable-port-security:true"

Once the NodePool has been deployed, the Cluster Instance Admin can easily deploy the SR-IOV Network Operator so CNF workloads can run on these nodes alongside a Performance Profile.

    Destroy the HostedCluster

    To delete a HostedCluster, run the following:

    hcp destroy cluster openstack --name $CLUSTER_NAME

The process will take a few minutes to complete and will destroy all resources associated with the HostedCluster, including OpenStack resources.
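
Afterward, you can confirm on the OpenStack side that the NodePool VMs are gone:

# The servers created for the NodePools should no longer be listed
openstack server list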

    Wrapping up

    The integration of OpenStack with hosted control planes represents a pivotal advancement in the "Shift-On-Stack" journey, offering Cluster Service Providers an efficient, scalable, and streamlined way to manage Kubernetes control planes.

    For more details, read our companion article on the Red Hat Blog. In upcoming posts, we’ll dive into more advanced scenarios and share real-world examples. Stay tuned!

    Last updated: January 17, 2025
