Red Hat OpenShift supports two workflows for building container images for applications: the source and the binary workflows. The binary workflow is the primary focus of the Red Hat OpenShift Application Runtimes and Red Hat Fuse product documentation and training, while the source workflow is the focus of most of the Red Hat OpenShift Container Platform product documentation and training. All of the standard OpenShift Quick Application Templates are based on the source workflow.
A developer might ask, “Can I use both workflows on the same project?” or, “Is there a reason to prefer one workflow over the other?” As a member of the team that developed Red Hat certification training for OpenShift and Red Hat Fuse, I had these questions myself and I hope that this article helps you find your own answers to these questions.
Comparing the binary and source workflows
Because both workflows are based on the source-to-image (S2I) feature, it may sound strange that there is a binary option. In fact, both workflows rely on S2I builds using the Source strategy. The key difference is that the source workflow generates the deployable artifacts of your application inside OpenShift, while the binary workflow generates these binary artifacts outside OpenShift. Both of them build the application container image inside OpenShift.
In a sense, the binary workflow provides an application binary as the source for an OpenShift S2I build that generates a container image.
Take a simple application, such as the Vert.x-based "Hello, World" available on GitHub at https://github.com/flozanorht/vertx-hello.git. It can be run locally, as a standalone Java application, and it can be deployed on OpenShift using both workflows from the same sources.
Using the binary workflow
When using the binary workflow, developers would:
- Clone the project to a local folder, so they can change the code to their liking and maybe test it outside of OpenShift.
- Log in to OpenShift and create a new project.
- Use the mvn command, which in turn uses the Fabric8 Maven Plug-in (FMP), to build the container image and create the OpenShift resources that describe the application. Maven performs the following tasks:
  - Generates the application package (an executable JAR)
  - Starts a binary build that creates the application container image, and streams the application package to the build pod
  - Creates OpenShift resources to deploy the application
- Use either curl or a web browser to test the application.
Using the source workflow
When using the source workflow, developers would:
- Clone the project to a local folder, so they can change the code to their liking and maybe test it outside of OpenShift.
- Commit and push any changes to the origin git repository.
- Log in to OpenShift and create a new project.
- Use the oc new-app command to build the container image and create the OpenShift resources that describe the application:
  - The OpenShift client command creates OpenShift resources to build and deploy the application.
  - The build configuration resource starts a source build that runs Maven to generate the application package (JAR) and then creates the application container image containing that package.
- Expose the application service to the outside world.
- Use either curl or a web browser to test the application.
From an operational perspective, the main difference between the two workflows is whether you drive the build with the mvn command or the oc new-app command:
- The binary workflow relies on Maven to perform the heavy lifting, which most Java developers appreciate.
- The source workflow, on the other hand, relies on the OpenShift client (the oc command).
When to use each workflow
Before declaring that you prefer the binary workflow because you already know Maven, consider that developers use the OpenShift client to monitor and troubleshoot their applications, so maybe performing a few tasks using Maven is not that big a win.
The source workflow requires that all changes are pushed to a network-accessible git repository, while the binary workflow works with your local changes. This difference comes from the fact that the source workflow builds your Java package (JAR) inside OpenShift, while the binary workflow takes the Java package you built locally without touching your source code.
For some developers, using the binary workflow saves time, because they perform local builds anyway to run unit tests and perform other tasks. For other developers, the source workflow brings the advantage that all heavy work is done by OpenShift. This means that the developer’s workstation does not need to have Maven, a Java compiler, and everything else installed. Instead, developers could use a low-powered PC, or even a tablet, to write their code, commit, and initiate an OpenShift build to test their applications.
Nothing prevents you from using both workflows to meet different goals. For example, developers can use the binary workflow to test their changes in a local minishift instance, including running unit tests, while a QA environment uses the source workflow to build the application container image, triggered by a webhook, and performs a set of integration tests in a dedicated, multi-node OpenShift cluster.
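As a rough sketch of the webhook piece, using the source-workflow build configuration created later in this article (the name vertx-hello comes from that walkthrough, and the generic webhook URL is a placeholder; your cluster prints the real one):

```
$ # Inspect the build configuration; the output lists the "Webhook GitHub" and
$ # "Webhook Generic" trigger URLs that OpenShift generated for it.
$ oc describe bc/vertx-hello

$ # A git hosting service or CI tool can then POST to one of those URLs to start a build.
$ # Replace <generic-webhook-url> with the URL taken from the output above.
$ curl -k -X POST <generic-webhook-url>
```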
Deploying an application using the binary and source workflows
Let’s experiment using both workflows with the Vert.x-based "Hello, World" application. Don't worry if you are unfamiliar with Vert.x, because the application is very simple and ready to run. Focus on the Maven POM settings that allow the same project to work with both the binary and the source workflows.
Unlike most examples you've probably seen elsewhere, my Vert.x application is configured to use supported dependencies from Red Hat OpenShift Application Runtimes instead of the upstream community artifacts. Do you need to be a Red Hat paying customer to follow the instructions? Not at all: you just need to register at the Red Hat Developers website, and you'll get a free developer's subscription to Red Hat Enterprise Linux, Red Hat OpenShift Container Platform, Red Hat OpenShift Application Runtimes, and all of Red Hat's middleware and DevOps portfolio.
Fire up your minishift instance, and let's try both the source and binary builds on OpenShift. If you do not have a minishift instance, register with Red Hat Developers, download and install the Red Hat Container Development Kit (CDK), and follow the instructions to set up minishift, which provides a VM to run OpenShift on your local machine. Or, if you have access to a real OpenShift cluster, log in to it and follow the instructions in the next sections.
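As a minimal sketch, assuming the CDK's minishift binary is already installed and set up on your PATH, starting the local cluster and pointing your shell at its bundled oc client looks like this:

```
$ # Start the local OpenShift cluster provided by the CDK.
$ minishift start

$ # Add the bundled oc client to the current shell's PATH.
$ eval $(minishift oc-env)
```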
Vert.x "Hello, World" using the binary workflow
First, clone the sample Vert.x application. The following commands assume you are using a Linux machine, but it should not be hard to adapt them to a Windows or Mac machine. You can run the CDK in either of them.
```
$ git clone https://github.com/flozanorht/vertx-hello.git
$ cd vertx-hello
```
Because we are using supported Maven artifacts, you need to configure your Maven installation to use the Red Hat Maven Repository. The conf folder contains a sample Maven settings.xml file that you can copy to your ~/.m2 folder or use as an example of the changes to make.
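For example, a minimal sketch of copying the sample file into place (back up any existing settings first; the .bak name is just a suggestion):

```
$ # Preserve any Maven settings you already have.
$ [ -f ~/.m2/settings.xml ] && cp ~/.m2/settings.xml ~/.m2/settings.xml.bak

$ # Use the sample settings from the project, which add the Red Hat Maven Repository.
$ mkdir -p ~/.m2
$ cp conf/settings.xml ~/.m2/settings.xml
```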
Log in to OpenShift and create a test project. The following instructions assume you are using minishift and that your minishift instance is already running, but it should not be hard to adapt them to an external OpenShift cluster.
```
$ oc login -u developer -p developer
$ oc new-project binary
```
Now comes the fun part: let the Fabric8 Maven Plug-in (FMP) do all the work.
```
$ mvn -Popenshift fabric8:deploy
…
[INFO] Building Vert.x Hello, World 1.0
…
[INFO] --- fabric8-maven-plugin:3.5.38:resource (fmp) @ vertx-hello ---
[INFO] F8: Running in OpenShift mode
[INFO] F8: Using docker image name of namespace: binary
[INFO] F8: Running generator vertx
[INFO] F8: vertx: Using ImageStreamTag 'redhat-openjdk18-openshift:1.3' as builder image
…
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ vertx-hello ---
…
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0
…
[INFO] --- fabric8-maven-plugin:3.5.38:build (fmp) @ vertx-hello ---
[INFO] F8: Using OpenShift build with strategy S2I
…
[INFO] F8: Starting S2I Java Build .....
[INFO] F8: S2I binary build from fabric8-maven-plugin detected
[INFO] F8: Copying binaries from /tmp/src/maven to /deployments ...
[INFO] F8: ... done
[INFO] F8: Pushing image 172.30.1.1:5000/binary/vertx-hello:1.0 …
…
[INFO] F8: Pushed 6/6 layers, 100% complete
[INFO] F8: Push successful
[INFO] F8: Build vertx-hello-s2i-1 Complete
…
[INFO] --- fabric8-maven-plugin:3.5.38:deploy (default-cli) @ vertx-hello ---
…
[INFO] BUILD SUCCESS
…
```
The configurations for the FMP are inside the Maven profile named openshift. This profile allows you to build and run the application locally, without OpenShift. If you want to, invoke the Maven package goal and run the JAR package from the target folder using java -jar.
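A minimal sketch of such a local run follows; the exact JAR file name under target/ is an assumption based on the version shown in the build logs, so check the folder for the real name:

```
$ # Build the executable JAR locally, without the openshift profile (no FMP goals run).
$ mvn clean package

$ # Run the application as a standalone Java process (adjust the JAR name if needed).
$ java -jar target/vertx-hello-1.0.jar &

$ # Exercise the same REST endpoint used later in this article.
$ curl http://localhost:8080/api/hello/Local
```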
The build takes some time; most of it is for downloading Maven artifacts. Generating and pushing the container image to the internal registry takes from a few seconds to a few minutes.
The FMP may lose the connection to the builder pod and display warning messages such as the following, which you can just ignore:
[INFO] Current reconnect backoff is 1000 milliseconds (T0)
The following error message can also be ignored:
[ERROR] Exception in reconnect java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@741d5132 rejected from …
These issues have already been fixed in the FMP, and the fixes should land in Red Hat OpenShift Application Runtimes in the near future.
Despite these messages, your build is successful. The FMP also created a route to allow access to your application. Find the host name assigned to your route:
```
$ oc get route
NAME          HOST/PORT                                  PATH   SERVICES      PORT   TERMINATION   WILDCARD
vertx-hello   vertx-hello-binary.192.168.42.180.nip.io          vertx-hello   8080                 None
```
And then test your application using curl:
```
$ curl http://vertx-hello-binary.192.168.42.180.nip.io/api/hello/Binary
Hello Binary, from vertx-hello-binary.192.168.42.180.nip.io.
```
OpenShift resources with the binary workflow
Note that the FMP also creates a few OpenShift resources: a service, a deployment configuration, and application pods:
```
$ oc status
In project binary on server https://192.168.42.180:8443

http://vertx-hello-binary.192.168.42.180.nip.io to pod port 8080 (svc/vertx-hello)
  dc/vertx-hello deploys istag/vertx-hello:1.0 <-
    bc/vertx-hello-s2i source builds uploaded code on openshift/redhat-openjdk18-openshift:1.3
    deployment #1 deployed 8 minutes ago - 1 pod
…
```
The FMP creates OpenShift resources using internal defaults. You can provide resource fragment files, stored in the project's src/main/fabric8 folder, to override these defaults. If you want to customize a readiness probe or resource limits for your pods, you edit these files, as in the sketch below.
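The following is a minimal sketch of such a fragment, assuming the application answers HTTP requests on port 8080 and that /api/hello/probe is an acceptable URL to probe (both are assumptions, not part of the sample project); consult the FMP documentation for the exact fragment syntax and merge rules:

```
$ # Hypothetical deployment fragment; the FMP merges it into the generated deployment configuration.
$ mkdir -p src/main/fabric8
$ cat > src/main/fabric8/deployment.yml <<'EOF'
spec:
  template:
    spec:
      containers:
        - resources:
            limits:
              memory: 256Mi
          readinessProbe:
            httpGet:
              path: /api/hello/probe
              port: 8080
EOF
```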
You can peek inside the OpenShift build configuration that the FMP created for you. Note that the strategy is Source and there is a binary input.
```
$ oc get bc
NAME              TYPE      FROM      LATEST
vertx-hello-s2i   Source    Binary    1
$ oc describe bc vertx-hello-s2i
…
Strategy:       Source
From Image:     ImageStreamTag openshift/redhat-openjdk18-openshift:1.3
Output to:      ImageStreamTag vertx-hello:1.0
Binary:         provided on build
…
```
Rebuilds with the binary workflow
The fact that the build requires a binary input means that you cannot simply start a new build using the oc start-build command. You also cannot use OpenShift webhooks to start a new build. If you need to perform a new build of the application, use the mvn command again:
```
$ mvn -Popenshift fabric8:deploy
…
[INFO] Building Vert.x Hello, World 1.0
…
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0
…
[INFO] --- fabric8-maven-plugin:3.5.38:build (fmp) @ vertx-hello ---
[INFO] F8: Using OpenShift build with strategy S2I
…
[INFO] F8: Pushed 6/6 layers, 100% complete
[INFO] F8: Push successful
[INFO] F8: Build vertx-hello-s2i-2 Complete
…
[INFO] BUILD SUCCESS
…
```
In the end, you get a new container image and a new application pod. Though the build logs seem to imply that the FMP re-creates (or updates) the OpenShift resources, it actually leaves them unchanged. If you need the FMP to update the OpenShift resources, you need to invoke the fabric8:undeploy Maven goal and then invoke fabric8:deploy again.
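In practice, that means running the two goals back to back:

```
$ # Remove the OpenShift resources that the FMP created earlier.
$ mvn -Popenshift fabric8:undeploy

$ # Recreate them, picking up any changes to the FMP configuration or resource fragments.
$ mvn -Popenshift fabric8:deploy
```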
Vert.x "Hello, World" using the source workflow
First of all, clone the sample Vert.x application. The following commands assume you are using a Linux machine, but it should not be hard to adapt them to a Windows or Mac machine. You can run the CDK in either of them. If you have already cloned from the previous instructions, you can reuse the same cloned git repository and skip the next commands.
```
$ git clone https://github.com/flozanorht/vertx-hello.git
$ cd vertx-hello
```
There is no need to configure Maven to use the Red Hat Maven Repository: the OpenShift builder images come preconfigured with it, and you will not perform local Maven builds anymore.
Log in to OpenShift and create a test project. The following instructions assume you are using minishift, but it should not be hard to adapt them to an external OpenShift cluster.
```
$ oc login -u developer -p developer
$ oc new-project source
```
Now comes the fun part: let OpenShift's S2I feature do all the work.
```
$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/flozanorht/vertx-hello.git
…
--> Creating resources ...
    imagestream "vertx-hello" created
    buildconfig "vertx-hello" created
    deploymentconfig "vertx-hello" created
    service "vertx-hello" created
--> Success
    Build scheduled, use 'oc logs -f bc/vertx-hello' to track its progress.
…
```
As suggested by the oc new-app command, follow the OpenShift build logs, which include the Maven build logs:
```
$ oc logs -f bc/vertx-hello
Cloning "https://github.com/flozanorht/vertx-hello.git" ...
…
Starting S2I Java Build .....
Maven build detected
Initialising default settings /tmp/artifacts/configuration/settings.xml
Setting MAVEN_OPTS to -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:+UseParallelOldGC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:MaxMetaspaceSize=100m -XX:+ExitOnOutOfMemoryError
Found pom.xml ...
…
[INFO] Building Vert.x Hello, World 1.0
…
[INFO] BUILD SUCCESS
…
Copying Maven artifacts from /tmp/src/target to /deployments ...
…
Pushed 6/6 layers, 100% complete
Push successful
```
Note that the source build invokes Maven with arguments that prevent it from running the FMP. Also, note that the end result of the build is a container image. The build pushes that container image into the internal registry.
The build takes some time; most of it is for downloading Maven artifacts. Generating and pushing the container image takes from a few seconds to a few minutes.
After the build finishes, you need to create a route to allow access to your application.
```
$ oc expose svc vertx-hello
route "vertx-hello" exposed
```
Now find the host name assigned to your route:
```
$ oc get route
NAME          HOST/PORT                                  PATH   SERVICES      PORT       TERMINATION   WILDCARD
vertx-hello   vertx-hello-source.192.168.42.180.nip.io          vertx-hello   8080-tcp                 None
```
And then test your application using curl:
```
$ curl http://vertx-hello-source.192.168.42.180.nip.io/api/hello/Source
Hello Source, from vertx-hello-source.192.168.42.180.nip.io.
```
OpenShift resources with the source workflow
Note that the oc new-app command also creates a few OpenShift resources: a service, a deployment configuration, and application pods:
```
$ oc status
In project source on server https://192.168.42.180:8443

http://vertx-hello-source.192.168.42.180.nip.io to pod port 8080-tcp (svc/vertx-hello)
  dc/vertx-hello deploys istag/vertx-hello:latest <-
    bc/vertx-hello source builds https://github.com/flozanorht/vertx-hello.git on openshift/redhat-openjdk18-openshift:1.3
    deployment #1 deployed 3 minutes ago - 1 pod
…
```
The oc new-app command creates OpenShift resources using hard-coded defaults. If you want to customize them, for example, to specify a readiness probe or resource limits for your pods, you have two options:
- Find (or create) a suitable OpenShift template, and use this template as input for the oc new-app command. The templates in the openshift namespace are a good starting point.
- Use OpenShift client commands, such as oc set and oc edit, to change the resources in place (see the sketch after this list).
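Here is a minimal sketch of the second option, assuming the application listens on port 8080 and that /api/hello/probe is a URL you are willing to probe (both assumptions, not part of the sample project):

```
$ # Add a readiness probe to the deployment configuration created by oc new-app.
$ oc set probe dc/vertx-hello --readiness --get-url=http://:8080/api/hello/probe

$ # Set a memory limit on the application pods.
$ oc set resources dc/vertx-hello --limits=memory=256Mi
```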
You can peek inside the OpenShift build configuration that the oc new-app command created for you. Note that the strategy is Source and there is a URL input.
```
$ oc get bc
NAME          TYPE      FROM      LATEST
vertx-hello   Source    Git       1
$ oc describe bc vertx-hello
…
Strategy:       Source
URL:            https://github.com/flozanorht/vertx-hello.git
From Image:     ImageStreamTag openshift/redhat-openjdk18-openshift:1.3
Output to:      ImageStreamTag vertx-hello:latest
…
```
Rebuilds with the source workflow
If you need to perform a new build of the application, use the oc start-build command:
```
$ oc start-build vertx-hello
build "vertx-hello-2" started
```
You can use the same oc logs command to watch the OpenShift build logs. In the end, you have a new container image and a new application pod ready and running. You do not need to re-create the route and other OpenShift resources.
Note that Maven downloads all dependencies again because the build pod only uses container ephemeral storage. There is no reuse of the Maven cache between S2I builds by default. Developers performing local Maven builds rely on their local Maven cache to speed up rebuilds. OpenShift provides an incremental builds feature that allows reusing the Maven cache between S2I builds. Incremental builds will be the topic of a future post.
If you plan to use source builds, I recommend that you configure a Maven repository server such as Nexus and point your builds at it using the MAVEN_MIRROR_URL build environment variable. This recommendation applies to any organization developing Java applications, but OpenShift makes the need even more pressing.
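A minimal sketch, assuming a Nexus instance reachable at the hypothetical URL below, is to set the variable on the build configuration and start a new build:

```
$ # Point source builds at your Maven repository manager (the URL is a placeholder).
$ oc set env bc/vertx-hello MAVEN_MIRROR_URL=http://nexus.example.com/repository/maven-public/

$ # The next build downloads its dependencies through the mirror.
$ oc start-build vertx-hello
```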
Conclusion
Understanding the OpenShift binary and source workflows allows a developer to make informed decisions about when to use which one. A Continuous Integration/Continuous Delivery (CI/CD) pipeline, managed by Jenkins or any other tool, could use both of them.
The binary and the source workflows make use of the same OpenShift S2I builder images. Both build container images inside OpenShift and push them to the internal registry. For most practical purposes, the container images generated by either the binary or source workflows are equivalent.
The binary workflow may better fit current developer workflows, especially when there is heavy customization of the project's POM. The source workflow allows for cloud-based development, such as with Red Hat OpenShift.io, and relieves the developer of the need for a high-end workstation.
Red Hat Training provides two developer-oriented courses with content about OpenShift builds:
- Red Hat OpenShift Development I: Containerizing Applications (DO288)
- Red Hat OpenShift Development II: Creating Microservices with Red Hat OpenShift Application Runtimes (DO292)
You can also get the book Deploying to OpenShift by Graham Dumpleton for free.