In the previous post, we went over the required Red Hat OpenShift Container Platform operators, their roles, and the format used to create the Red Hat OpenStack Services on OpenShift (RHOSO) control plane. In this article, let’s review the deployment process.
We’ll base our observations on the Development Preview 3 code from https://github.com/rh-osp-demo/dp-demo/.
Let’s begin with the OpenStack Operator.
The OpenStack Operator
The OpenStack Operator consists of three parts (a CatalogSource, an OperatorGroup, and a Subscription), each defining a different resource for managing Operators within an OpenShift/Kubernetes cluster using the Operator Lifecycle Manager (OLM). The resources, which set up an Operator for OpenStack to manage the OpenStack services within the cluster, are as follows:
---
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: openstack-operator-index
  namespace: openstack-operators
spec:
  sourceType: grpc
  secrets:
  - "osp-operators-secret"
  grpcPodConfig:
    securityContextConfig: legacy
  # adjust the repo link below to match your environment:
  image: quay.apps.uuid.dynamic.redhatworkshops.io/quay_user/dp3-openstack-operator-index:latest
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openstack
  namespace: openstack-operators
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openstack-operator
  namespace: openstack-operators
spec:
  name: openstack-operator
  channel: alpha
  source: openstack-operator-index
  sourceNamespace: openstack-operators
These resources collectively set up an environment where the OpenStack Operator is available to be installed in the openstack-operators namespace. The CatalogSource provides the metadata about available operators, including the OpenStack Operator, sourced from a specified image. The OperatorGroup defines the scope within which the Operator can operate, and the Subscription triggers the installation and management of the OpenStack Operator according to the specified channel and source catalog.
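Assuming the three resources above are saved in a single file (the file name below is just an example), applying them and watching the resulting ClusterServiceVersion is enough to confirm the installation went through:

oc apply -f openstack-operator.yaml
oc get subscription openstack-operator -n openstack-operators
# the openstack-operator ClusterServiceVersion should eventually report a Succeeded phase:
oc get csv -n openstack-operators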
Let’s focus on the CatalogSource part:
- metadata:
  - name: openstack-operator-index - The name of the CatalogSource.
  - namespace: openstack-operators - The namespace where the CatalogSource is created.
- spec:
  - sourceType: grpc - Indicates that the catalog source uses gRPC to serve the index of available operators.
  - secrets: A list of secrets, in this case osp-operators-secret, that might be used by the catalog source, potentially for accessing private repositories.
  - grpcPodConfig: Contains configuration specific to the pod serving the gRPC requests.
    - securityContextConfig: legacy - Specifies a security context configuration for the pod. The exact meaning of "legacy" can depend on the cluster configuration.
  - image: The container image URL for the operator index image, which should be adjusted to match the environment. This image hosts metadata about the operators available for installation, including the OpenStack operator.
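If the index image requires authentication, the osp-operators-secret referenced above must already exist in the openstack-operators namespace. A quick way to confirm the catalog is being served (the label selector below is the one OLM normally applies to catalog pods, but verify it in your cluster):

oc get catalogsource openstack-operator-index -n openstack-operators
oc get pods -n openstack-operators -l olm.catalogSource=openstack-operator-index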
Network isolation
Now that the operator is installed, let’s prepare the networking for the control plane, then the data plane.
First, we’ll work with the NodeNetworkConfigurationPolicy (NNCP) file, which configures the interfaces for each isolated network on the OpenShift worker nodes. It looks like the following (source file):
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: osp-enp1s0-worker-ocp4-worker1
spec:
  desiredState:
    interfaces:
    - description: internalapi vlan interface
      ipv4:
        address:
        - ip: 172.17.0.10
          prefix-length: 24
        enabled: true
        dhcp: false
      ipv6:
        enabled: false
      name: enp1s0.20
      state: up
      type: vlan
      vlan:
        base-iface: enp1s0
        id: 20
    - description: storage vlan interface
      ipv4:
        address:
        - ip: 172.18.0.10
          prefix-length: 24
        enabled: true
        dhcp: false
      ipv6:
        enabled: false
      name: enp1s0.21
      state: up
      type: vlan
      vlan:
        base-iface: enp1s0
        id: 21
    - description: tenant vlan interface
      ipv4:
        address:
        - ip: 172.19.0.10
          prefix-length: 24
        enabled: true
        dhcp: false
      ipv6:
        enabled: false
      name: enp1s0.22
      state: up
      type: vlan
      vlan:
        base-iface: enp1s0
        id: 22
    - description: Configuring enp1s0
      ipv4:
        address:
        - ip: 172.22.0.10
          prefix-length: 24
        enabled: true
        dhcp: false
      ipv6:
        enabled: false
      mtu: 1500
      name: enp1s0
      state: up
      type: ethernet
  nodeSelector:
    kubernetes.io/hostname: ocp4-worker1.aio.example.com
    node-role.kubernetes.io/worker: ""
This YAML file defines a NodeNetworkConfigurationPolicy for use with the NMState Operator in an OpenShift or Kubernetes environment. The policy specifies desired network configurations for nodes that match the defined nodeSelector. Here's a breakdown of the key components:
- apiVersion: Specifies the version of the NMState API used.
- kind: Identifies the resource type as NodeNetworkConfigurationPolicy, indicating that it's a policy for configuring network interfaces on nodes.
- metadata:
  - name: The name of the policy, osp-enp1s0-worker-ocp4-worker1, uniquely identifies it within the namespace.
- spec:
  - desiredState: Describes the desired network configuration for the selected nodes.
  - interfaces: A list of interface configurations to be applied.
    - The first three are VLAN interfaces (type: vlan) with the names enp1s0.20, enp1s0.21, and enp1s0.22. Each interface is configured with a static IPv4 address (172.17.0.10/24, 172.18.0.10/24, 172.19.0.10/24, respectively) and specifies that IPv6 is disabled. DHCP is also disabled for IPv4, and each interface is brought to the up state. They are all based on the parent interface enp1s0 and have VLAN IDs 20, 21, and 22, respectively.
    - The fourth interface configuration applies to enp1s0 itself, setting it as an Ethernet interface (type: ethernet) with a static IPv4 address 172.22.0.10/24, DHCP disabled, and IPv6 disabled. The interface is also set to the up state with an MTU of 1500.
  - nodeSelector: Specifies a node's criteria for the policy to be applied. In this case, it selects a node with the hostname ocp4-worker1.aio.example.com with a worker role.

This policy aims to configure multiple VLANs on a specific worker node's enp1s0 interface in an OpenShift or Kubernetes cluster, assigning static IPv4 addresses to each VLAN and the parent interface. It effectively segregates network traffic into different VLANs for purposes such as separating internal API traffic, storage traffic, and tenant traffic, while also configuring the parent interface for another network segment. The policy targets a specific node identified by its hostname and role, ensuring that these configurations are only applied to the intended node.
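To apply the policy and check that NMState actually enacted it on the node, commands along these lines should work (the file name is illustrative; nncp and nnce are the short names for the policy and its per-node enactments):

oc apply -f osp-ng-nncp-worker1.yaml
oc get nncp osp-enp1s0-worker-ocp4-worker1
oc get nnce | grep ocp4-worker1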
NetworkAttachmentDefinition (NAD) file
This YAML snippet defines a NetworkAttachmentDefinition object, part of the Kubernetes Network Custom Resource Definition (CRD) framework enabled by the Multus CNI plugin. This CRD is used to create multiple network interfaces in a Kubernetes pod. We will configure a NAD resource for each isolated network to attach a service pod to the network:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ctlplane
  namespace: openstack
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "ctlplane",
      "type": "macvlan",
      "master": "enp1s0",
      "ipam": {
        "type": "whereabouts",
        "range": "172.22.0.0/24",
        "range_start": "172.22.0.30",
        "range_end": "172.22.0.70"
      }
    }
Let’s look at it:
- apiVersion: k8s.cni.cncf.io/v1: Specifies the API version for the CRD. The k8s.cni.cncf.io/v1 indicates it's related to the CNI (Container Network Interface) plug-ins managed under the CNCF (Cloud Native Computing Foundation).
- kind: NetworkAttachmentDefinition: This tells Kubernetes that the defined resource is a NetworkAttachmentDefinition, which Multus uses to understand how to attach secondary networks to pods.
- metadata: Contains metadata about the network attachment.
  - name: ctlplane: The name of the NetworkAttachmentDefinition, which will be referenced by pods that want to use this network configuration.
  - namespace: openstack: Specifies the namespace where this NetworkAttachmentDefinition is created, indicating it's intended for use by pods running in the openstack namespace.
- spec: Defines the specification of the network attachment.
  - config: A JSON-formatted string specifying the network interface configuration to be attached to the pod.
    - cniVersion: The version of the CNI specification to use.
    - name: A name for this specific network configuration.
    - type: Specifies the CNI plug-in to use, in this case, macvlan, which allows a Kubernetes pod to have a unique MAC address via a parent host interface.
    - master: The master interface on the host that the macvlan interface will be created on top of. Here, it's enp1s0, the physical interface configured by the NNCP above.
    - ipam: Stands for IP Address Management. It specifies how IP addresses are assigned to the pod interface.
      - type: The type of IPAM plugin to use, here whereabouts, which supports assigning IP addresses across multiple host nodes, avoiding IP address conflicts.
      - range: The CIDR range from which IP addresses will be allocated.
      - range_start, range_end: Define the start and end of the IP allocation pool within the specified range.
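For illustration, here is a minimal sketch of how a pod would attach to this network: Multus reads the k8s.v1.cni.cncf.io/networks annotation and wires up a secondary interface from the ctlplane NAD. The pod name and image below are arbitrary; in practice the RHOSO operators add this kind of annotation to the service pods for you.

apiVersion: v1
kind: Pod
metadata:
  name: nad-test
  namespace: openstack
  annotations:
    k8s.v1.cni.cncf.io/networks: ctlplane
spec:
  containers:
  - name: test
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]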
MetalLB resources
As described earlier, we must define IP address pools and L2 advertisements for MetalLB. We must create an IPAddressPool resource to specify the range of IP addresses MetalLB can assign to services. Let's have a look at our osp-ng-metal-lb-ip-address-pool. It contains several entries, one per IP address pool we define. Let's just pick one to detail, the ctlplane one:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: ctlplane
spec:
  addresses:
  - 172.22.0.80-172.22.0.90
Here is what we have:
- apiVersion: metallb.io/v1beta1: Specifies the API version of MetalLB being used.
- kind: IPAddressPool: Denotes the kind of Kubernetes resource. Here, IPAddressPool is a resource type provided by MetalLB for defining a pool of IP addresses.
- metadata: Begins the metadata section, which provides additional data about the IPAddressPool resource.
  - namespace: metallb-system: Specifies the namespace where the resource is located. MetalLB's resources typically reside in a dedicated namespace, metallb-system, isolated from other workloads.
  - name: ctlplane: The name of the IPAddressPool resource. This name is used to identify the pool within the MetalLB configuration.
- spec: Starts the specification section that contains the actual configuration data for the IPAddressPool.
  - addresses: Lists the IP address ranges that MetalLB can allocate to LoadBalancer services.
    - 172.22.0.80-172.22.0.90: Defines a specific range of IP addresses (from 172.22.0.80 to 172.22.0.90) that MetalLB is allowed to assign. This range should be within the network subnet accessible by the cluster and not used by other devices or services to avoid IP conflicts.
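Once applied, the defined pools can be listed to make sure MetalLB picked them up:

oc get ipaddresspools.metallb.io -n metallb-system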
As we are using MetalLB in Layer 2 mode, we must also define an L2Advertisement resource. This tells MetalLB to advertise the IP addresses from the specified pool(s) on your network. Let's have a look at our osp-ng-metal-lb-l2-advertisement YAML file. It contains several entries; let's just pick the one relevant to ctlplane:
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: ctlplane
  namespace: metallb-system
spec:
  ipAddressPools:
  - ctlplane
  interfaces:
  - enp1s0
Here's a succinct explanation of its contents:
- apiVersion: metallb.io/v1beta1: Specifies the version of the MetalLB API being used.
- kind: L2Advertisement: Indicates the resource type, an L2Advertisement. This type controls how MetalLB advertises IP addresses to the local network.
- metadata: Contains metadata about the L2Advertisement resource.
  - name: ctlplane: The name of the L2Advertisement resource.
  - namespace: metallb-system: The namespace where the resource is deployed, typically MetalLB's dedicated namespace.
- spec: The specification section where the advertisement behavior is defined.
  - ipAddressPools: Lists the names of the IP address pools that MetalLB should advertise. In this case, it references the IPAddressPool ctlplane, which we defined earlier.
  - interfaces: Specifies which network interfaces MetalLB should use to advertise IP addresses. Here, it's configured to use the interface named enp1s0.

This file tells MetalLB to advertise IP addresses from the ctlplane IP address pool over the enp1s0 network interface, making these IP addresses reachable on the local network through standard L2 networking mechanisms (ARP for IPv4, NDP for IPv6).
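To make the relationship concrete, here is a hedged sketch of a LoadBalancer Service that would consume an IP from this pool. The service name, selector, and chosen IP are made up for illustration, but the metallb.universe.tf annotations are the standard way to pin a service to a specific pool and address:

apiVersion: v1
kind: Service
metadata:
  name: demo-lb
  namespace: openstack
  annotations:
    metallb.universe.tf/address-pool: ctlplane
    metallb.universe.tf/loadBalancerIPs: 172.22.0.85
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080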
Let’s regroup what we have seen so far before we continue deploying our OpenStack control plane. When deploying MetalLB, you first apply the MetalLB resource to install MetalLB itself. Then, you define one or more IPAddressPool resources to specify the range of IPs MetalLB can manage. Finally, you use L2Advertisement resources to control the advertisement of these IPs on your network in Layer 2 mode.
MetalLB and NAD (NetworkAttachmentDefinition) serve different purposes. MetalLB is used to expose Kubernetes services of type LoadBalancer externally, allowing them to be accessible from outside the Kubernetes cluster. It's particularly useful in bare-metal environments where you don't have a cloud provider to provision external load balancers automatically.
NAD with Multus allows for attaching additional network interfaces to pods. This is useful in scenarios where pods need to communicate over different networks or require specific network configurations that the default Kubernetes network doesn't provide.
In essence, MetalLB simplifies external access to services, while Multus and NAD enhance pod networking capabilities within the cluster.
Data plane network configuration
The data plane network configuration file will configure the topology for each data plane network. Its YAML file contains a NetConfig header and then various network sub-sections, each defining a network to expose to the data plane.
Here is an extract of this sample configuration file:
apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
  name: openstacknetconfig
  namespace: openstack
spec:
  networks:
  - name: ctlplane
    dnsDomain: ctlplane.aio.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.22.0.120
        start: 172.22.0.100
      - end: 172.22.0.200
        start: 172.22.0.150
      cidr: 172.22.0.0/24
      gateway: 172.22.0.1
  - name: internalapi
    dnsDomain: internalapi.aio.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.17.0.250
        start: 172.17.0.100
      excludeAddresses:
      - 172.17.0.10
      - 172.17.0.12
      cidr: 172.17.0.0/24
      vlan: 20
(...)
The YAML snippet defines a custom resource named NetConfig under the API group network.openstack.org/v1beta1. This is not a standard Kubernetes API group, which implies it's part of a specific operator that extends OpenShift to integrate OpenStack networking capabilities with Kubernetes.
Here's a breakdown of what this YAML does:
- apiVersion: network.openstack.org/v1beta1: Specifies the version of the API that the resource definition is compatible with. This is a custom resource definition (CRD) related to OpenStack networking under the v1beta1 version.
- kind: NetConfig: This indicates the type of the resource. The resource is used to configure how networking should be set up for OpenStack-managed resources within Kubernetes.
- metadata: Contains metadata about the resource.
  - name: openstacknetconfig: The name of the NetConfig resource.
  - namespace: openstack: This resource is in the openstack namespace.
- spec: The specification of the network configuration.
  - networks: A list of network configurations.
    - name: ctlplane: Specifies the name of the network. It refers to a control plane network used for management and orchestration traffic in OpenStack.
    - dnsDomain: ctlplane.aio.example.com: Defines the DNS domain used for the network.
    - subnets: Defines subnets within the ctlplane network.
      - name: subnet1: The name of the subnet.
      - allocationRanges: Specifies ranges within the subnet from which IP addresses can be allocated. It lists two ranges of IP addresses for allocation:
        - From 172.22.0.100 to 172.22.0.120
        - From 172.22.0.150 to 172.22.0.200
      - cidr: 172.22.0.0/24: Typically, the CIDR should match the network of the allocation ranges and the gateway.
      - gateway: 172.22.0.1: Specifies the gateway for the subnet, which is the IP address used as the default route for traffic leaving the subnet.

From the internalapi section, we also see we can define VLAN IDs and exclusion ranges:

- excludeAddresses: IP addresses that the data plane should not use (these are the IP addresses used by the OCP cluster compute nodes; check the NNCP section above).
- vlan: The VLAN ID used by the internalapi network. The lack of this entry in the ctlplane section denotes using a flat network.
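Applying and inspecting the NetConfig works like any other custom resource (the file name below is illustrative):

oc apply -f osp-ng-netconfig.yaml
oc get netconfig -n openstack
oc describe netconfig openstacknetconfig -n openstack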
OpenStack Control Plane deployment
Now that we have all of our networking defined, and provided we have our storage configured (the sample file we use relies on NFS, but we did not discuss it here), we can deploy the control plane.
The control plane deployment YAML defines the different OpenStack services that should be instantiated and, for each service, its configuration. The file is too long to copy here in full, but you can check a sample file here.
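To give a feel for its structure without reproducing the whole file, here is a heavily trimmed sketch of an OpenStackControlPlane resource; the exact field layout can vary between preview releases, so treat the sample file linked above as the reference. Note how a service template pins its internal endpoint to a MetalLB address pool and IP:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  secret: osp-secret
  keystone:
    template:
      databaseInstance: openstack
      secret: osp-secret
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
(...)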
Here is, from the above sample, the list of defined Services and Key Configurations (note that some services are disabled):
- DNS:
  - Utilizes MetalLB for LoadBalancer with an IP of 172.22.0.89.
  - Configured to use an external DNS server at 192.168.123.100.
- Cinder (block storage service):
  - Database instance named openstack, using a secret osp-secret.
  - Cinder API exposed via MetalLB with an IP of 172.17.0.80.
  - NFS backend for Cinder Volumes with specific NFS configurations.
- Glance (image service):
  - Storage backend set to use Cinder with specific Glance and Cinder configurations.
  - Glance API exposed via MetalLB with an IP of 172.17.0.80.
  - Uses NFS for storage with a request of 10G.
- Keystone (identity service):
  - Exposed via MetalLB with an IP of 172.17.0.80.
  - Uses a database instance named openstack and a secret osp-secret.
- Galera (database service):
  - Enabled with storage requests set for the database and cell instances.
  - Uses a secret osp-secret.
- Memcached:
  - Deployed with a single replica.
- Neutron (networking service):
  - Exposed via MetalLB with an IP of 172.17.0.80.
- Horizon (dashboard):
  - Deployed with a single replica, using a secret osp-secret.
- Nova (compute service):
  - API and Metadata services exposed via MetalLB with an IP of 172.17.0.80.
- Manila (shared file system service):
  - API exposed via MetalLB with an IP of 172.17.0.80.
- OVN (networking):
  - Configuration for northbound and southbound DBs, as well as the OVN Controller.
- Placement:
  - Exposed via MetalLB with an IP of 172.17.0.80.
- RabbitMQ (messaging service):
  - Exposed via MetalLB with specific IPs for RabbitMQ services.
- Heat (orchestration service):
  - API and Engine exposed via MetalLB with an IP of 172.17.0.80.
- Ironic (bare metal service):
  - Disabled in this configuration.
- Telemetry:
  - Ceilometer enabled, with configurations for autoscaling and metric storage.
- Swift (object storage service):
  - Disabled in this configuration.
- Octavia (load balancer service):
  - Disabled in this configuration.
- Redis:
  - Disabled in this configuration.
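Once the control plane YAML is applied (the file name below is illustrative), the deployment can be followed until the OpenStackControlPlane resource reports it is ready and all service pods settle:

oc apply -f osp-ng-ctlplane.yaml
oc get openstackcontrolplane -n openstack
oc get pods -n openstack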
Conclusion
In summary, MetalLB is extensively used to expose various OpenStack services externally via LoadBalancer type services, with annotations specifying address pools and IPs. Storage utilizes both Cinder (block storage) and NFS, with specific service configurations detailed for different services.
Each service utilizes a specific database instance and secrets for configuration and credentials management. Replica counts are defined for certain services, indicating considerations for availability and scaling. And, finally, several services specify network attachments, indicating integration with specific network configurations for service communication.
We’ll complete the deployment process in the final part of this series: Red Hat OpenShift 101 for OpenStack admins: Data plane deployment