Installing Red Hat OpenShift Container Platform on Red Hat OpenStack Platform

Red Hat OpenShift Container Platform is a platform-as-a-service (PaaS) that orchestrates and manages containerized applications through Kubernetes. It supports both cloud-native and custom-built applications, and it can run in a hybrid cloud configuration, providing the flexibility to expand and grow.

Red Hat OpenStack Platform is an infrastructure-as-a-service (IaaS): a cloud platform that provides virtual servers and other resources. Users manage it through a web-based dashboard, command-line tools, or RESTful web services.

If you are considering Red Hat OpenShift Container Platform on OpenStack Platform, there are several advantages, including easily increasing the number of compute nodes and using dynamic storage.

In this article, I will outline the main points required to successfully install Red Hat OpenShift Container Platform on OpenStack Platform. Because my OpenStack knowledge is limited, I reached out to my colleagues for help and will not address too many OpenStack technical details here.

Prerequisites

Before beginning your installation, you will need an OpenStack Platform environment provisioned with certain requirements. These are mainly authentication and Red Hat subscription requirements. The following sections address these.

OpenStack Environment

You need an environment set up as described in the following documentation:

https://docs.openshift.com/container-platform/3.11/install_config/configuring_openstack.html

Therefore, before proceeding, ensure you have the following:

  1. Access to a deployment instance with all the required repositories enabled (see the example after this list) and the correct SSH keys to access your nodes
  2. Valid keystone authentication credentials
  3. Enough computing resources to create the cluster you need, as well as any potential growth requirements
  4. DNS services that automatically add new hosts that are provisioned (Personally, I had some challenges here)
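
For item 1, here is a hedged example of enabling the repositories typically required for OpenShift Container Platform 3.11 on the deployment instance. The repository IDs below are the documented 3.11 set; confirm them against your own subscription or Satellite configuration:

$ subscription-manager repos \
--enable="rhel-7-server-rpms" \
--enable="rhel-7-server-extras-rpms" \
--enable="rhel-7-server-ose-3.11-rpms" \
--enable="rhel-7-server-ansible-2.6-rpms"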

OpenStack keystone authentication requirements

There are specific requirements for keystone authentication. These allow the OpenStack cloud provider configured in OpenShift Container Platform to authenticate with OpenStack Platform (primarily for Cinder storage).

The main requirement is that your OpenStack Platform user and project exist in the same domain. If you do not have them in the same domain, the installation will fail (as of OpenShift Container Platform 3.11.69).

Therefore, before attempting installation, ensure you can authenticate with the project ID and user via the following command:

$ openstack \
--os-identity-api-version "3" \
--os-auth-url "https://openstack-default.mydomain:13000/v3" \
--os-username "myorgusername" \
--os-password "mypassword" \
--os-project-id "myprojectid" \
--os-domain-name "myorg" \
server list

If the above command fails (an empty server list still counts as success), your OpenShift Container Platform installation will also fail, because the OpenShift Container Platform cloud provider cannot authenticate when the project is in a different domain than the user.

When running this command, ensure you have NOT sourced the rc file (which contains all of the above details); otherwise, you will get false results.
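
For reference, these same credentials feed the OpenShift cloud provider configuration in your inventory. Here is a minimal sketch using the openshift-ansible 3.11 variable names (verify them against your version's documentation; the values simply reuse the placeholders from the command above):

openshift_cloudprovider_kind: openstack
openshift_cloudprovider_openstack_auth_url: "https://openstack-default.mydomain:13000/v3"
openshift_cloudprovider_openstack_username: "myorgusername"
openshift_cloudprovider_openstack_password: "mypassword"
openshift_cloudprovider_openstack_domain_name: "myorg"
openshift_cloudprovider_openstack_tenant_id: "myprojectid"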

Populate your Red Hat OpenShift inventory with your values

Here are some inventory values that will be required:

$ cat inventory/group_vars/all.yml |grep rhsub
rhsub_server: 'satellite.mydomain'
rhsub_ak: 'openshift'
rhsub_orgid: 'MyOrg'
rhsub_pool: '123456789012345678901234567890'
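
The subscription values above are only part of the picture. Here is a hedged sketch of the core OpenStack provisioning entries you will typically also set in all.yml (the variable names come from the openshift-ansible sample inventory; all values shown here are placeholders):

openshift_openstack_clusterid: "openshift"
openshift_openstack_public_dns_domain: "mydomain"
openshift_openstack_dns_nameservers: ["10.0.0.2"]
openshift_openstack_keypair_name: "openshift-keypair"
openshift_openstack_external_network_name: "public"
openshift_openstack_default_image_name: "rhel-7-server"
openshift_openstack_default_flavor: "m1.large"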

Provision your base stack

If you have all the required OpenStack settings, running the following playbook creates all the required nodes, in the numbers you have specified in your all.yml file:

$ source openrc.sh
$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/provision.yml

This will create your OpenShift nodes (as per your all.yml file). In this example, the following settings were used:

openshift_openstack_num_masters: 3
openshift_openstack_num_infra: 3
openshift_openstack_num_cns: 0
openshift_openstack_num_nodes: 3
openshift_openstack_num_etcd: 0
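
Because openshift-ansible provisions these nodes through an OpenStack Heat stack, you can sanity-check the result with the standard OpenStack CLI before moving on (run with your rc file sourced):

$ openstack stack list
$ openstack server list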

Check your base stack

After the previous playbook is complete, check whether your dynamic inventory has been updated:

$ source openrc.sh
$ /usr/share/ansible/openshift-ansible/playbooks/openstack/inventory.py --list

Your dynamic inventory must be set in your ansible.cfg file. You should also make sure all nodes are contactable and have correct DNS settings.
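
A minimal ansible.cfg sketch, assuming the deployment instance layout used in this article (adjust the paths and user to your own setup):

[defaults]
inventory = /home/cloud-user/inventory,/usr/share/ansible/openshift-ansible/playbooks/openstack/inventory.py
remote_user = openshift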

Most of the problems I encountered during the installations were due to DNS. Make sure the hostname of every OpenShift node is its fully qualified domain name (FQDN), and that all nodes can be resolved via DNS.
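
A quick way to check connectivity and hostnames across all nodes at once is an ad hoc Ansible command against the dynamic inventory (the openshift remote user matches the one used elsewhere in this article; adjust as needed):

$ ansible --user openshift \
-i /usr/share/ansible/openshift-ansible/playbooks/openstack/inventory.py \
all -m command -a "hostname -f"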

Start your installation

To begin your installation, run the following commands (I have personally increased the timeout for my Ansible playbooks):

$ ansible-playbook --timeout=120 /usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/prerequisites.yml
$ ansible-playbook --timeout=120 /usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/install.yml
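
Once install.yml completes, a quick sanity check is to log in to one of the master nodes and confirm that all nodes have registered and are Ready:

$ oc get nodes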

Scaling up compute nodes

If you are scaling up OpenShift Container Platform compute nodes on a different platform, then you may follow the standard recommended procedure to add a new node to your OpenShift Container Platform cluster.

However, when you are running on OpenStack Platform, you first need to get the new nodes into the dynamic inventory before you can join them to the cluster. How to do this on OpenStack Platform is not obvious.

The only place I have been able to find documentation for this is in the following file (on the deployment instance/jump host):

/usr/share/ansible/openshift-ansible/playbooks/openstack/configuration.md

Here is an excerpt from that file, taken from the section "2. Scale the Cluster", with the exact instructions required to perform a node scale-up on OpenStack Platform:

```
$ ansible-playbook --user openshift \
-i openshift-ansible/playbooks/openstack/scaleup_inventory.py \
-i inventory \
openshift-ansible/playbooks/openstack/openshift-cluster/node-scaleup.yml
```

This will create the new OpenStack nodes, optionally create the DNS records
and subscribe them to RHN, configure the `new_masters`, `new_nodes` and
`new_etcd` groups and run the OpenShift scaleup tasks.

When the playbook finishes, you should have new nodes up and running.

Run `oc get nodes` to verify.

In my case, I was interested in scaling up from three compute nodes to five compute nodes. To do that, the node count in the all.yml file must be updated.

Step 1: Update the all.yml file (example shown here):

$ cat inventory/group_vars/all.yml | grep openshift_openstack_num_nodes
openshift_openstack_num_nodes: 3

to:

$ cat inventory/group_vars/all.yml | grep openshift_openstack_num_nodes
openshift_openstack_num_nodes: 5

Step 2: Run the OpenStack Platform-specific node-scaleup.yml playbook:

$ ansible-playbook \
-i /home/cloud-user/inventory \
-i /usr/share/ansible/openshift-ansible/playbooks/openstack/scaleup_inventory.py \
/usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/node-scaleup.yml

Infra-node scaleup requires additional adjustments, which are available in /usr/share/ansible/openshift-ansible/playbooks/openstack/configuration.md.

I have not attempted to scale up master nodes and have not tested this.

Dynamic storage

OpenStack Platform Cinder storage is available to an OpenShift Container Platform cluster configured to utilize OpenStack Platform features.

However, you should understand that Cinder volumes are ReadWriteOnce (RWO) storage, which means multiple pods cannot share the same volume. Consider this when designing your Red Hat OpenShift Container Platform cluster and the applications that run on it.
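
As an illustration, here is a minimal sketch of a persistent volume claim that Cinder-backed dynamic provisioning would satisfy. The storage class name is an assumption; run oc get storageclass to see what your installation actually created:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cinder-claim
spec:
  accessModes:
    - ReadWriteOnce        # Cinder volumes can be mounted read-write by only one node at a time
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard   # assumption: verify with 'oc get storageclass'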

Conclusion

Installing Red Hat OpenShift Container Platform on OpenStack Platform provides many features and benefits. One main benefit is being able to scale up nodes with relative ease, and another is the ability to use OpenStack Platform Cinder storage.

Although good documentation is available on how to install a base cluster, the documentation on scaling up was more difficult to find. My aim in this article was to highlight all the information required to successfully install and scale up a Red Hat OpenShift cluster.

I didn't address OpenStack Platform technical details here primarily because of my own lack of expertise, but I did find that setting up Red Hat OpenShift Container Platform on OpenStack Platform is relatively straightforward once you have all the right information and have all the infrastructure services up and running (mainly DNS).

I thank my colleagues for their selfless help in completing this procedure.

Last updated: May 1, 2019