The charter of Open Innovation Labs is to help our customers accelerate application development and realize the latest advancements in software delivery by providing skills, mentoring, and tools. Some of the challenges I frequently hear about from customers involve Platform as a Service (PaaS) environment provisioning and configuration. This article is the first in a series that guides you through the installation, configuration, and usage of the Red Hat OpenShift Container Platform (OCP) on Amazon Web Services (AWS).

This installation handles cloud-based security group creation and Amazon Route 53 based DNS, creates a server farm that survives power cycles, and configures OCP with web-based authentication and a persistent registry. This article and the companion video (below) walk you through a push-button installation and validation of a four-node Red Hat OCP cluster on AWS, eliminating the usual pain points.

By the end of the tutorial, you should have a working Red Hat OCP PaaS that is ready to facilitate your team’s application development and DevOps pipeline.

Please note: The setup process uses Red Hat Ansible and an enhanced version of the openshift-ansible aws community installer.

https://www.youtube.com/watch?v=4_ckNfjg_GU

Installer Enhancements

Security group setup necessary for a default installation

In AWS, Security Groups act as a virtual firewall and are used to control traffic between nodes in the OCP cluster. The enhanced installer includes additional Ansible-based logic to set up the security group rules OCP needs for all of its infrastructure components.
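For context, a single rule of the kind the installer automates could be added by hand with the AWS CLI. This is only a sketch: the group name is the one you will pass to the installer later, and port 8443 is the master console/API port referenced later in this article.

# allow inbound traffic to the OCP master console/API (sketch only)
>> aws ec2 authorize-security-group-ingress \
   --group-name <security group to be created> \
   --protocol tcp --port 8443 --cidr 0.0.0.0/0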

Static private and public IP address based configuration

The Ansible-based community AWS provisioner didn't create EC2 instances with fixed IP addresses. This meant that after the initial configuration the OCP environment would work, but it would not survive a restart of the cluster. The enhanced installer addresses this by statically assigning in-VPC private IP addresses, as well as an individual public Elastic IP address, to the network interfaces tied to each OCP node.
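For context, the manual equivalent of what the installer automates for each node looks roughly like the following sketch (the instance and allocation IDs are placeholders):

# allocate an in-VPC Elastic IP, then attach it to a node
>> aws ec2 allocate-address --domain vpc
>> aws ec2 associate-address --instance-id <instance id> --allocation-id <allocation id>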

DNS setup

The enhanced installer follows a convention-based domain naming scheme. It also creates entries for convenience; one of them points to the master node at master-1.<cluster-id>.rht-labs.com.

OCP infrastructure node set up for public DNS

To let you view OCP projects in a web browser at a domain name such as <project1>-<app1>.apps-1.openshift.rht-labs.com, the enhanced installer uses Route 53, AWS's DNS service, to create a wildcard DNS entry.
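As an illustration of the record the installer creates (a sketch only; the hosted zone ID and the target IP address are placeholders), a wildcard entry can be upserted with the AWS CLI:

# describe the wildcard record in a change batch file
>> cat > wildcard.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "*.apps-1.<cluster-id>.rht-labs.com.",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [{ "Value": "<infrastructure node public IP>" }]
    }
  }]
}
EOF
# apply it to your Route 53 hosted zone
>> aws route53 change-resource-record-sets --hosted-zone-id <hosted zone id> --change-batch file://wildcard.json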

HTTP authentication for OCP web based management console

The stock installer ships with no authentication configured. The enhanced installer sets up web-based authentication with two pre-configured user names. Please refer to the Red Hat OCP documentation for how to further customize authentication.
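If the pre-configured users are backed by an htpasswd file, which is a common choice but an assumption here, adding or updating a user on the master node is a one-liner (the file path is also an assumption; check your master configuration):

# prompts for the new user's password; the file path is an assumption for this sketch
>> htpasswd /etc/origin/master/htpasswd <new user name>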

OCP persistent Docker registry

The stock installer doesn't set up a persistent registry, which means that any work performed on the cluster is ephemeral (i.e., it doesn't survive a restart). The enhanced installer mitigates this by configuring OCP to persist the Docker registry to a directory.
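After installation, a quick hedged way to confirm how the registry stores its data is to inspect the registry deployment configuration in the default project (the names below follow OCP defaults; adjust if your cluster differs):

>> oc describe dc docker-registry -n default | grep -i -A 3 volumes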

Checklist for AWS and the enhanced OCP installer:

  • You possess a key pair that is imported into AWS and configured with your ssh-agent (ssh-add -l).
  • The SSH key pair is set up in your environment so you can authenticate without providing a password, e.g.:
  >> ssh ec2-user@bastion.rht-labs.com
  Last login: Thu Jul 28 15:28:57 2016 from some ISP lease
  [ec2-user@ip-172-31-7-21 ~]$
  • You have AWS credentials on hand to fill in the variable values below.
  • You have picked an empty subnet for the OCP installation.
  • Your AWS account limits allow for five more in-VPC Elastic IPs.
  • You have a domain registered and a zone set up via AWS Route 53.
  • You have Ansible v2.1.0+, Python boto, and git installed (quick verification commands follow this list).
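A few quick spot-checks for the items above (the last command assumes you also have the AWS CLI installed):

>> ssh-add -l                    # key pair is loaded into your ssh-agent
>> ansible --version             # should report 2.1.0 or newer
>> python -c 'import boto; print(boto.__version__)'
>> git --version
>> aws route53 list-hosted-zones # confirms your Route 53 zone is visible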

Create the environment

  1. Edit the export commands below, then paste and execute them in a terminal window:
>> export AWS_ACCESS_KEY_ID=[YOUR KEY ID HERE]
>> export AWS_SECRET_ACCESS_KEY=[YOUR SECRET HERE]
  2. Clone our git repository and change into the cloned directory:
>> git clone https://github.com/rht-labs/openshift-ansible

>> cd openshift-ansible
  3. Fill in the AWS- and Red Hat Subscription-related values in the command below to build your OCP cluster successfully:
  • cluster_id: an arbitrary alphanumeric identifier that associates all nodes of your cluster in AWS; pick one that is not already present in the AWS account.
  • private_subnet: the first three octets of the IP range; these should fall within the VPC subnet you chose (see the checklist above), and the subnet should be empty.
  • rhsm_username and rhsm_password: your Red Hat Subscription Manager credentials.
  • rhsm_pool: the pool ID that contains the Red Hat OCP entitlements.
>> ansible-playbook  \
-i inventory/aws/hosts \
-e \
'num_masters=1 
num_nodes=2 
cluster_id=<cluster id> 
cluster_env=dev
num_etcd=0 
num_infra=1 
deployment_type=openshift-enterprise 
private_subnet=<first three octets of subnet ###.###.###> 
rhsm_username=<RHN username> 
rhsm_password=<RHN password>
rhsm_pool=<RHSM Pool ID>
zone=<DNS zone name> 
openshift_persist_registry=true 
cli_ec2_vpc_subnet=<subnet housing private_subnet> 
cli_ec2_keypair=<ssh keypair name imported to AWS>
cli_ec2_security_groups=<security group to be created> 
cli_ec2_image=<RHEL 7 ami ID> 
cli_os_docker_vol_size=50 
cli_os_docker_vol_ephemeral=false 
cli_ec2_region=<aws region e.g. us-east-1>' \
playbooks/aws/openshift-cluster/launch.yml
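Two of the values above that take the most digging are the RHEL 7 AMI ID and the RHSM pool ID. As a hedged sketch for locating them (the AWS CLI command assumes you have it installed, and the owner account ID shown is the one commonly used for Red Hat's published AMIs; verify it for your region):

# list RHEL 7 AMIs published by Red Hat in your current region
>> aws ec2 describe-images --owners 309956199498 \
   --filters 'Name=name,Values=RHEL-7.*' \
   --query 'Images[].[ImageId,Name]' --output text
# on a registered RHEL system, look for the pool that provides the OCP entitlements
>> subscription-manager list --available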

Please note:

If the installer fails, run the terminate.yml playbook with the proper cluster_id, zone, and deployment_type before attempting to run the launch.yml playbook again (see the "Terminate the environment" section below for the command). Each launch.yml attempt should use a distinct cluster_id.

This prevents the dynamic inventory from selecting terminated EC2 instances as part of your cluster.

 

Validate the environment

Creating the OCP PaaS takes about one hour; once it completes, you will be able to log in to OCP 3.2 with:

URL: https://master-1.<cluster_id>.<zone>:8443/
username: andrew
password: r3dh4t1!
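If you prefer the command line, the same credentials work with the oc client (a sketch; substitute your own cluster_id and zone):

>> oc login https://master-1.<cluster_id>.<zone>:8443 -u andrew
>> oc whoami    # should print: andrew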

To further configure HTTP-based authentication, please follow the Red Hat OCP user guide.

If you are new to Red Hat OCP, please follow the video to validate that your new PaaS cluster is operational. The video guides you through creating a project with a sample Node.js based container and accessing it at the URL scheme:

http://<project>-<app>.apps-1.<cluster_id>.<zone>

For example, if you create a project called test and an application called myapp, your OCP cluster is called cluster-1, and the domain name is example.com, then upon successfully building your new app it will be accessible at:

http://test-myapp.apps-1.cluster-1.example.com
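The same flow can be reproduced from the command line. The following is only a rough sketch that uses the public openshift/nodejs-ex sample repository and sets the route hostname explicitly to match the URL scheme above:

>> oc new-project test
>> oc new-app https://github.com/openshift/nodejs-ex --name=myapp
>> oc expose service myapp --hostname=test-myapp.apps-1.cluster-1.example.com
>> oc get route myapp    # shows the exposed URL once the build completes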

Terminate the environment

To destroy the environment after testing:

>> ansible-playbook  \
-i inventory/aws/hosts \
-e \
'cluster_id=<cluster id>
cluster_env=dev
deployment_type=openshift-enterprise 
zone=<DNS zone name>' \
playbooks/aws/openshift-cluster/terminate.yml

This is just one piece of the work that is underway at Open Innovation Labs. Interested in getting in on the ground floor?

In subsequent posts, I’ll introduce you to the operational aspects of the Red Hat OpenShift Container Platform and discuss more advanced configuration options.

About Matyas

Matyas Danter is a software developer with over a decade of experience, the majority of it in server-side web development. He is a Senior Consultant with Red Hat's Open Innovation Labs. He has created applications using Java, Groovy, Grails, PHP, AngularJS, JavaScript, Korn Shell, Puppet, and Ansible that measurably improved major company-wide processes for his employers. He is a contributor to jBPM and OptaPlanner, and devotes his free time to maintaining the phpecc elliptic curve cryptography library for PHP.

Last updated: June 6, 2023