How to deploy an application using Red Hat OpenShift Service on AWS

Learn how to deploy an application on a cluster using Red Hat OpenShift Service on AWS.

 

Throughout this learning path, you may come across some unfamiliar Red Hat OpenShift concepts. To avoid disrupting your workflow as you practice deploying your application, we have included explanations of these basic concepts below. Feel free to return to this resource whenever you encounter one of these concepts again and need a reminder.

What will you learn?

  • Key concepts and terms of Red Hat OpenShift Service on AWS

What do you need before starting?

  • Nothing! Please use this section for reference.

What is Source-to-Image?

Source-to-Image (S2I) is a toolkit and workflow for building reproducible container images from source code. S2I produces ready-to-run images by injecting source code into a container image and letting the container prepare that source code for execution. By creating self-assembling builder images, you can version and control your build environments exactly like you use container images to version your runtime environments.

How S2I works

For a dynamic language like Ruby, the build-time and run-time environments are typically the same. Starting with a builder image that has Ruby, Bundler, Rake, Apache, GCC, and the other packages needed to set up and run a Ruby application installed, S2I performs the following steps:

  1. Starts a container from the builder image with the application source injected into a known directory.
  2. Transforms that source code into the appropriate runnable setup, in this case by installing dependencies with Bundler and moving the source code into the directory where Apache has been preconfigured to look for the Ruby config.ru file.
  3. Commits the new container and sets the image entry point to a script (provided by the builder image) that starts Apache to host the Ruby application.
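The Ruby flow above can be sketched with the standalone s2i CLI. The repository URL and image names here are illustrative, not part of this learning path; centos/ruby-25-centos7 is one published S2I Ruby builder image, but any Ruby builder behaves the same way:

```shell
# Build a ready-to-run image by injecting the source into the builder image.
# Repository and image names are examples only.
s2i build https://github.com/sclorg/ruby-ex.git centos/ruby-25-centos7 my-ruby-app

# The committed image's entry point starts the application server directly.
docker run -p 8080:8080 my-ruby-app
```

On a cluster, `oc new-app` performs the equivalent build server-side.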

For compiled languages like C, C++, Go, or Java, the dependencies necessary for compilation might dramatically outweigh the size of the actual runtime artifacts. To keep runtime images slim, S2I enables a multiple-step build process, where a binary artifact such as an executable or Java WAR file is created in the first builder image, extracted, and injected into a second runtime image that simply places the executable in the correct location for execution.

For example, to create a reproducible build pipeline for Tomcat (the popular Java web server) and Maven:

  1. Create a builder image containing OpenJDK and Tomcat that expects to have a WAR file injected.
  2. Create a second image that layers Maven and any other standard dependencies on top of the first image, and expects to have a Maven project injected.
  3. Invoke S2I using the Java application source and the Maven image to create the desired application WAR.
  4. Invoke S2I a second time, using the WAR file from the previous step and the initial Tomcat image, to create the runtime image.

By placing our build logic inside of images, and by combining the images into multiple steps, we can keep our runtime environment close to our build environment (same JDK, same Tomcat JARs) without requiring build tools to be deployed to production.
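As a sketch, the two S2I invocations might look like the following. All image and repository names are placeholders, and the --runtime-image/--runtime-artifact flags refer to the s2i CLI's extended-build support; check that your s2i version provides them before relying on this shape:

```shell
# Hypothetical names throughout. First, build the WAR inside the Maven builder:
s2i build https://github.com/example/my-webapp.git my-maven-builder my-webapp-build

# Then inject only the build artifact into the slim Tomcat runtime image,
# leaving Maven and the JDK build tooling out of the final image:
s2i build https://github.com/example/my-webapp.git my-maven-builder my-webapp \
  --runtime-image my-tomcat-runtime \
  --runtime-artifact /tmp/src/target/my-webapp.war
```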

Goals and benefits of S2I

1. Reproducibility

Allow build environments to be tightly versioned by encapsulating them within a container image and defining a simple interface (injected source code) for callers. Reproducible builds are a key requirement for enabling security updates and continuous integration in containerized infrastructure, and builder images help ensure repeatability as well as the ability to swap runtimes.

2. Flexibility

Any existing build system that can run on Linux can be run inside of a container, and each individual builder can also be part of a larger pipeline. In addition, the scripts that process the application source code can be injected into the builder image, allowing authors to adapt existing images to enable source handling.

3. Speed

Instead of building multiple layers in a single Dockerfile, S2I encourages authors to represent an application in a single image layer. This saves time during creation and deployment and allows for better control over the output of the final image.

4. Security

Dockerfiles are run without many of the normal operational controls of containers, usually running as root and having access to the container network. S2I can be used to control what permissions and privileges are available to the builder image since the build is launched in a single container. In concert with platforms like Red Hat OpenShift, S2I can enable admins to tightly control what privileges developers have at build time.

What are routes?

A route exposes a service at a host name, like www.example.com, so that external clients can reach it by name. When a route object is created on OpenShift, it gets picked up by the built-in HAProxy load balancer in order to expose the requested service and make it externally available with the given configuration. 
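As an illustration, a minimal route manifest might look like this (the service name and host are placeholders):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
spec:
  host: www.example.com   # host name external clients will use
  to:
    kind: Service
    name: my-app          # the service this route exposes
```

You can also let OpenShift generate a route for an existing service with `oc expose service my-app --hostname=www.example.com`.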

If you are familiar with the Kubernetes Ingress object, you might be wondering, "What's the difference?" Red Hat created the route concept to provide external access to services before Ingress existed, then contributed the design principles behind it to the community, which heavily influenced the Ingress design. A route offers additional features, as shown in Table 1.

| Feature | Ingress on OpenShift | Route on OpenShift |
| --- | --- | --- |
| Standard Kubernetes object | x | |
| External access to services | x | x |
| Persistent (sticky) sessions | x | x |
| Load-balancing strategies (e.g., round robin) | x | x |
| Rate limiting and throttling | x | x |
| IP whitelisting | x | x |
| TLS edge termination for improved security | x | x |
| TLS re-encryption for improved security | | x |
| TLS passthrough for improved security | | x |
| Multiple weighted backends (split traffic) | | x |
| Generated pattern-based hostnames | | x |
| Wildcard domains | | x |

Table 1: Matrix of features available on Ingress on OpenShift and/or Route on OpenShift

Note: DNS resolution for a host name is handled separately from routing. As an administrator, you may have configured a cloud domain that will always correctly resolve to the router, or if using an unrelated host name you may need to modify its DNS records independently to resolve to the router.

Note also that an individual route can override some defaults by providing specific configurations in its annotations. See route-specific annotations for more details.

What are Red Hat OpenShift image streams?

An image stream stores a mapping of tags to images, metadata overrides that are applied when images are tagged in a stream, and an optional reference to a Docker image repository on a registry.

What are the benefits?

Using an image stream makes it easy to change a tag for a container image. To change a tag you would normally need to download the whole image, change it locally, then push it all back. Promoting applications this way and then updating the deployment object entails many steps. 

With image streams, you upload a container image once and then manage its virtual tags internally in Red Hat OpenShift. In one project you might use the dev tag and only change references to it internally. In production, you might use a prod tag and also manage it internally. In both instances, you don't have to manage the registry!

You can also use image streams in conjunction with DeploymentConfigs to set a trigger that will start a deployment as soon as a new image appears or a tag changes its reference.
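For example, promoting an image from dev to prod then becomes a single tag operation (the project and image names below are placeholders):

```shell
# Point the prod tag at whatever image the dev tag currently references.
# No image data is pulled or pushed; only the internal reference changes.
oc tag my-project/my-app:dev my-project/my-app:prod
```

If a DeploymentConfig has an image-change trigger on my-app:prod, this one command also starts a new deployment.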

The next section explains image builds in more detail.

What is an image build?

A build is the process of transforming input parameters into a resulting object. Often, it is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process.

Red Hat OpenShift Container Platform leverages Kubernetes by creating Docker-formatted containers from build images and pushing them to a container image registry.

Build objects share common characteristics: inputs for a build, the need to complete a build process, logging the build process, publishing resources from successful builds, and publishing the final status of the build. Builds take advantage of resource restrictions, specifying limitations on resources such as CPU usage, memory usage, and build or pod execution time.
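A minimal S2I-style BuildConfig sketch, with placeholder repository and image names, might look like this; note the resources stanza expressing the limits mentioned above:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app
spec:
  source:
    git:
      uri: https://github.com/example/my-app.git   # placeholder repository
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: ruby:latest        # builder image the source is injected into
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest        # where the built image is pushed
  resources:
    limits:
      cpu: 500m
      memory: 512Mi
```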

See Understanding image builds for more details.

In this section, you learned some of the foundational concepts for Red Hat OpenShift. You are now ready to deploy your application on Red Hat OpenShift Service on AWS.
