How to deploy an application using Red Hat OpenShift Service on AWS

Learn how to deploy an application on a cluster using Red Hat OpenShift Service on AWS.

It’s easy to look at the cost structure of Red Hat® OpenShift® Service on AWS (ROSA) and not know where to start. In this resource, we will review four techniques that you can apply to a cluster to make full use of its resources and achieve the lowest possible price point on EC2 usage.

Autoscaling

By using autoscaling in ROSA, you can optimize your instance hours by aligning the number of nodes in the cluster with your demand. 

There are two methods of autoscaling: the Cluster Autoscaler and the Horizontal Pod Autoscaler (HPA).

You can enable cluster autoscaling for your cluster in the OpenShift Cluster Manager (OCM) or by using the ROSA command line interface (CLI). You can also complete this step when you first create the cluster using customizations.
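As a rough sketch, the ROSA CLI can enable autoscaling on an existing machine pool; the cluster name and machine pool name below are hypothetical placeholders:

```shell
# Enable autoscaling on an existing machine pool so that the node count
# floats between the minimum and maximum replica counts.
# "mycluster" and "workers" are placeholder names.
rosa edit machinepool --cluster=mycluster workers \
  --enable-autoscaling --min-replicas=2 --max-replicas=6
```

With autoscaling enabled, the Cluster Autoscaler adds nodes when pods cannot be scheduled and removes nodes that sit underutilized, within the bounds you set.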

For an HPA, you have several options as well:

  • Create an HPA to scale pods you want to run on a Deployment or DeploymentConfig object using the web console. You can also define the amount of CPU or memory usage that your pods should target.
  • Create an HPA to scale an existing Deployment, DeploymentConfig, ReplicaSet, ReplicationController, or StatefulSet object using the OpenShift Container Platform (OCP) CLI. The HPA scales the pods associated with that object to maintain the average CPU or memory utilization you specify.
  • Keep in mind best practices for using a horizontal pod autoscaler with OpenShift, including resource request configurations, cooldown periods, and scaling policies.
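As an illustration of the CLI option above, a minimal HPA manifest might look like the following; the Deployment name `frontend` and the numbers are hypothetical and should reflect your own workload:

```yaml
# Illustrative HPA that keeps average CPU utilization across the pods of a
# Deployment named "frontend" near 75%, scaling between 2 and 10 replicas.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75
```

The same result can be achieved imperatively with `oc autoscale deployment/frontend --min=2 --max=10 --cpu-percent=75`.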

Rightsizing

You can also optimize pod resources by allocating the appropriate CPU and memory resources to pods.

With OpenShift, rightsizing is done by setting the compute resources (CPU and memory) allocated to the containers in your pods. Each container in a pod can have both requests and limits on the amount of CPU and memory it will use.

Take care to set requests that align as closely as possible with the actual utilization of these resources. If the value is too low, the containers may be throttled, which hurts performance. If the value is too high, resources are wasted because the unused portion remains reserved for that single container. When actual utilization is lower than the requested value, the difference is called slack cost.
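As a sketch of what that looks like in practice, here is an illustrative pod spec fragment; the names, image, and values are placeholders and should be derived from the observed utilization of your own workload:

```yaml
# Hypothetical pod showing per-container requests and limits.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    resources:
      requests:
        cpu: 250m        # reserved for scheduling; keep close to real usage
        memory: 256Mi
      limits:
        cpu: 500m        # CPU above this is throttled
        memory: 512Mi    # memory above this triggers an OOM kill
```

Setting requests near observed usage minimizes slack cost, while limits cap how far a misbehaving container can grow.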

There isn’t one single way to rightsize your application, but you can see some ideas of how to think about this technique on the Red Hat blog.
 

Idling applications

A third technique is to optimize pod hours by terminating pods that are unnecessary during nights and weekends.

There are many deployments that only need to be available during business hours. In OpenShift, cluster administrators can idle applications to reduce resource consumption. 

When you idle a service, OpenShift discovers its associated scalable resources and idles them by scaling their replicas to 0. The next time network traffic is directed to the service, the resources are activated by scaling the replicas back up, and normal operation continues.

You can use the oc idle command to idle a single service, or use the --resource-names-file option to idle multiple services.
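For example, assuming a hypothetical service name and file name, the two forms look like this:

```shell
# Idle a single service; its backing scalable resources are scaled to 0.
# "my-service" is a placeholder name.
oc idle my-service

# Idle several services at once, listed one per line in a text file.
# "services-to-idle.txt" is a placeholder file name.
oc idle --resource-names-file=services-to-idle.txt
```

Pairing this with a scheduled job for nights and weekends is a common way to cut pod hours for business-hours-only workloads.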

Purchasing options

The final technique we’ll look at is replacing On-Demand Instances with Spot Instances. Spot Instances let you use spare EC2 compute capacity at up to a 90% discount compared with On-Demand prices.

The hourly price for a Spot Instance is called a Spot price. The Spot price of each instance type in each Availability Zone is set by Amazon EC2 and is adjusted gradually based on the long-term supply of and demand for Spot Instances. Your Spot Instance runs whenever capacity is available.

Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.

Currently, only additional machine pools can be configured with Spot Instances on ROSA. The "Default" machine pool cannot be used with Spot Instances.

For ROSA, you can configure the machine pool using either the ROSA CLI or the OCM console.


Configuring a machine pool with Spot Instances via the ROSA CLI


Please note: This option is not available in older ROSA CLI versions. Download the latest version of the ROSA CLI from the ROSA mirror before running these commands.

Then, run the following commands:

rosa create machinepool --help | grep spot
  # Output (abbreviated):
  #   --spot-max-price string      Max price for spot instance. If empty use the on-demand price. (default "on-demand")
  #   --use-spot-instances         Use spot instances for the machine pool.

# Add a machine pool with Spot Instances to a cluster
rosa create machinepool -c mycluster --name=mp-1 --replicas=2 --instance-type=r5.2xlarge \
  --use-spot-instances --spot-max-price=0.5


Configuring a machine pool with Spot Instances via the OCM console


  1. Select the name of the cluster in the OCM console.
  2. Select the "Machine pools" tab and click the "Add machine pool" button.
  3. Add the machine pool with Spot Instances by checking the "Use Amazon EC2 Spot Instances" checkbox.