
So far in this series, we have covered the required Red Hat OpenShift Container Platform operators, their roles, and the format used to create the Red Hat OpenStack Services on OpenShift (RHOSO) control plane. We then walked through control plane deployment and data plane configuration.

We’re now ready to add OpenStack compute nodes to the control plane to run virtual machines. The deployment, unsurprisingly, relies on another YAML file. 

With the release, you can either join an already deployed Red Hat Enterprise Linux (RHEL) 9.4 node or drive a bare metal deployment from scratch. For the scope of this article, we are going with the first option. The RHEL compute server has two network cards (eth0 and eth1). Details about the exact process can be found here; we will focus on the YAML files used to join the compute node to the control plane.

One key element: the openstack namespace in OpenShift Container Platform needs a secret containing the SSH key that the cluster will use to connect to the compute node and complete its configuration.
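For reference, a minimal sketch of creating that secret from an existing key pair follows. The secret name matches the ansibleSSHPrivateKeySecret referenced later in the node set, while the key file paths and the ssh-privatekey/ssh-publickey entry names are assumptions to verify against the documentation for your release.

# Sketch only: key file paths and entry names are assumptions; adjust to your environment.
oc create secret generic dataplane-ansible-ssh-private-key-secret \
  -n openstack \
  --from-file=ssh-privatekey=$HOME/.ssh/id_rsa \
  --from-file=ssh-publickey=$HOME/.ssh/id_rsa.pub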

The deployment of the data plane is just two commands:

oc apply -f osp-ng-dataplane-node-set-deploy.yaml
oc apply -f osp-ng-dataplane-deployment.yaml
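The first file contains the node set definition we will walk through below. The second, osp-ng-dataplane-deployment.yaml, is typically just a short OpenStackDataPlaneDeployment resource that points at the node set and triggers the rollout. A minimal sketch, assuming the resource name mirrors the node set and that it is created in the same namespace:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-edpm-ipam  # assumed name; any unique name works
spec:
  nodeSets:
    - openstack-edpm-ipam    # must match the OpenStackDataPlaneNodeSet name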

Deployment

Main YAML

As noted, the core of the configuration is in the first YAML; the second merely triggers the application of that configuration. Let's dive into the actual data plane configuration.

Overall, this defines an OpenStackDataPlaneNodeSet resource, a custom resource (CR) used on OpenShift to deploy and manage OpenStack data plane nodes.

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet

The key sections in this YAML are:

  • metadata
  • spec
    • Environment variables
    • preProvisioned
    • services
    • nodes
    • networkAttachments
    • nodeTemplate

metadata

This section includes basic information about the resource, such as its name (openstack-edpm-ipam) and the namespace it belongs to (openstack-operators).

metadata:
  name: openstack-edpm-ipam

spec

Environment variables: Defines environment variables such as ANSIBLE_FORCE_COLOR and ANSIBLE_VERBOSITY to control Ansible's behavior during the deployment, making logs more readable and adjusting the log detail level. 

spec:
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
    - name: ANSIBLE_VERBOSITY
      value: "2"

preProvisioned

Indicates that the nodes are already provisioned (preProvisioned: true), meaning that the configuration will be applied to existing nodes rather than dynamically provisioning new ones.

  preProvisioned: true
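If you instead wanted the operator to provision bare metal nodes from scratch (the second option mentioned earlier), you would set preProvisioned: false and add a bare metal template to the node set. The following is only a rough sketch; the field values are assumptions and the exact schema should be checked against the documentation for your release:

  preProvisioned: false
  baremetalSetTemplate:
    bmhLabelSelector:           # selects the BareMetalHost resources to use (assumed label)
      app: openstack
    ctlplaneInterface: enp1s0   # interface wired to the control plane network (assumed)
    cloudUserName: cloud-admin  # user the deployment will connect as (assumed)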

services

Lists the services to be deployed or actions to be taken on the data plane nodes, including bootstrapping, network configuration, OS installation, and OpenStack services like ovn, neutron-metadata, libvirt, nova, and telemetry.

  services:
    - bootstrap
    - download-cache
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - run-os
    - ovn
    - neutron-metadata
    - libvirt
    - nova
    - telemetry
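Each entry in this list maps to an OpenStackDataPlaneService resource shipped with the operator, and the services run in the order listed. To see which services are available in your environment, a quick check (assuming they live in the openstack namespace) is:

oc get openstackdataplaneservice -n openstack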

nodes

Specifies node-specific configurations, including the hostname (edpm-compute-0), Ansible connection details, and network attachments. Each network attachment defines a network name, subnet, and optionally a fixed IP and whether it's the default route. The ansibleHost and ansibleUser values define the IP address and username that the deployment will use to connect to the compute node over SSH.

The ansibleUser value in the edpm-compute-0 section overrides the default user that would otherwise apply to all nodes via the nodeTemplate.ansible.ansibleUser section (see below):

  nodes:
      edpm-compute-0:
        hostName: edpm-compute-0
        ansible:
          ansibleHost: 172.22.0.100
          ansibleUser: root
        networks:
        - name: ctlplane
          subnetName: subnet1
          defaultRoute: false
          fixedIP: 172.22.0.100
        - name: internalapi
          subnetName: subnet1
        - name: storage
          subnetName: subnet1
        - name: tenant
          subnetName: subnet1
        - name: external
          subnetName: subnet1
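Scaling out later is just a matter of adding more entries under nodes. The sketch below shows a hypothetical second node following the same pattern; the host name and IP addresses are made up for illustration:

      edpm-compute-1:
        hostName: edpm-compute-1
        ansible:
          ansibleHost: 172.22.0.101   # hypothetical address of the second node
        networks:
        - name: ctlplane
          subnetName: subnet1
          defaultRoute: false
          fixedIP: 172.22.0.101
        - name: internalapi
          subnetName: subnet1
        - name: storage
          subnetName: subnet1
        - name: tenant
          subnetName: subnet1
        - name: external
          subnetName: subnet1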

networkAttachments

Specifies global network attachments that apply to all nodes in the NodeSet, ensuring consistent network configuration across the data plane.

  networkAttachments:
    - ctlplane

nodeTemplate

Provides a template for configuring all compute nodes, including Ansible SSH details, the management network, and a detailed network configuration template (edpm_network_config_template) for setting up network interfaces and VLANs.

The template uses Jinja templating to dynamically configure network settings based on variables such as service_net_map, MTU settings, VLAN IDs, and network addresses. You will recognize the VLAN IDs here; they match the ones defined during control plane creation.

Additional configurations include firewall settings, SELinux mode, container registry credentials (to pull OpenStack container images), and more, aimed at preparing and securing the OpenStack data plane nodes for operation.

The edpm_network_config_template section is based on a multiple NIC with VLANs j2 template, adjusted for this deployment; for your own deployment, you can also look at the other options available in this j2 template folder:

  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    managementNetwork: ctlplane
    ansible:
      ansibleUser: root
      ansiblePort: 22
      ansibleVars:
         service_net_map:
           nova_api_network: internalapi
           nova_libvirt_network: internalapi
         timesync_ntp_servers:
           - hostname: pool.ntp.org
         edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in role_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: interface
              name: nic1
              mtu: {{ min_viable_mtu }}
              # force the MAC address of the bridge to this interface
              primary: true
          {% for network in role_networks if network != 'external' %}
            - type: vlan
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask:
                  {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
          {% endfor %}
          {% if 'external' in role_networks or 'external_bridge' in role_tags %}
          - type: ovs_bridge
            name: br-ex
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            use_dhcp: false
            members:
            - type: interface
              name: nic2
              mtu: 1500
              primary: true
          {% endif %}
          {% if 'external' in role_networks %}
            routes:
            - ip_netmask: 0.0.0.0/0
              next_hop: {{ external_gateway_ip | default('192.168.123.1') }}
            addresses:
            - ip_netmask: {{ external_ip }}/{{ external_cidr }}
          {% endif %}
         edpm_network_config_hide_sensitive_logs: false
          #
          # These vars are for the network config templates themselves and are
          # considered EDPM network defaults (for all computes).
         ctlplane_host_routes: []
         ctlplane_dns_nameservers:
         - 172.22.0.89
         - 10.11.5.160
         ctlplane_subnet_cidr: 24
         dns_search_domains: aio.example.com
         ctlplane_vlan_id: 1
         ctlplane_mtu: 1500
         external_mtu: 1500
         external_vlan_id: 44
         external_cidr: '24'
         external_host_routes: []
         internalapi_mtu: 1500
         internalapi_vlan_id: 20
         internalapi_cidr: '24'
         internalapi_host_routes: []
         storage_mtu: 1500
         storage_vlan_id: 21
         storage_cidr: '24'
         storage_host_routes: []
         tenant_mtu: 1500
         tenant_vlan_id: 22
         tenant_cidr: '24'
         tenant_host_routes: []
         neutron_physical_bridge_name: br-osp
         # name of the first network interface on the compute node:
         neutron_public_interface_name: eth0
         role_networks:
         - internalapi
         - storage
         - tenant
         networks_lower:
           external: external
           internalapi: internalapi
           storage: storage
           tenant: tenant
         # edpm_nodes_validation
         edpm_nodes_validation_validate_controllers_icmp: false
         edpm_nodes_validation_validate_gateway_icmp: false
         gather_facts: false
         enable_debug: false
         # edpm firewall, change the allowed CIDR if needed
         edpm_sshd_configure_firewall: true
         edpm_sshd_allowed_ranges: ['172.22.0.0/16']
         # SELinux module
         edpm_selinux_mode: enforcing
         edpm_podman_buildah_login: true
         edpm_container_registry_logins:
          registry.redhat.io:
            testuser: testpassword
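To make the templating concrete, here is roughly what the loop renders for the internalapi network on this node. The VLAN ID (20), MTU (1500), CIDR (24), and empty host routes come from the variables above; the IP address itself is assigned by IPAM, so the value shown is only an assumed example:

  - type: vlan
    mtu: 1500
    vlan_id: 20
    addresses:
    - ip_netmask: 172.17.0.100/24   # example only; the real IP comes from IPAM
    routes: []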

Once everything is to our liking, we can deploy the data plane.
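After the two oc apply commands, the operator executes each service as an Ansible job against the node. A few commands along these lines help follow the rollout (a sketch, assuming the resources live in the openstack namespace):

# Watch the node set and the deployment reach the Ready condition
oc get openstackdataplanenodeset -n openstack
oc get openstackdataplanedeployment -n openstack
# Each service runs as a Kubernetes job; inspect these if something fails
oc get jobs -n openstack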

Conclusion

Congratulations! We have now successfully added OpenStack compute nodes to the control plane to run virtual machines. With the data plane deployed, you have reached the end of this series.

Next, learn more about containers, Kubernetes, and OpenShift in your browser using the Developer Sandbox for Red Hat OpenShift.