In this article, we'll explore the built-in build capabilities available in Red Hat OpenShift 4 in a multi-arch compute environment, and how to use `nodeSelector`s to schedule builds on nodes of the architecture of our choosing.
In OpenShift, we can use the built-in Build, BuildConfig, and build strategy objects (see Understanding image builds and Understanding BuildConfigs in the OpenShift documentation) to build container images on cluster nodes.
In a multi-arch compute cluster, we can target nodes of a specific architecture with these build objects, and collect the resulting architecture-specific container images into manifest lists that can run anywhere in the cluster. These capabilities can lead to cost savings by enabling workloads to run on more cost-effective CPUs, and provide the flexibility to exploit hardware-specific features that benefit the workload at hand. They can also simplify building and running multi-architecture images by keeping everything in a single cluster.
Let's take a look at building and deploying a manifest-listed image on a multi-arch compute cluster composed of four different architectures.
We’ll accomplish this in the following steps:
- Prerequisite: a running multi-arch compute cluster. Ours has nodes of amd64, ppc64le, arm64, and s390x; we'll describe the deployment architecture we're using in more detail below.
- Create a new project.
- Create a new image stream to hold the four built images, plus the manifest list.
- Import the `ruby:2.7-ubi8` base image into the cluster registry as a manifest-listed image.
- Create and trigger four `BuildConfig` objects (one for each architecture), each of which:
  - Specifies a Docker build strategy.
  - Specifies a `nodeSelector`.
  - Specifies the manifest-listed Ruby image we imported earlier.
- Create and trigger a Job that combines the built images into a manifest list.
- Create and trigger a Deployment of the manifest-listed `ruby-hello-world` image onto compute nodes of each architecture.
We'll start with our cluster. The configuration looks like this:
$ oc get nodes --label-columns='kubernetes.io/arch'
NAME STATUS ROLES AGE VERSION ARCH
aarch64-018.example.com Ready worker 35d v1.28.9+416ecaf arm64
aarch64-019.example.com Ready worker 35d v1.28.9+416ecaf arm64
s390x-008.example.com Ready worker 5d2h v1.28.9+416ecaf s390x
ppc64le-001.example.com Ready worker 34d v1.28.9+416ecaf ppc64le
ppc64le-002.example.com Ready worker 33d v1.28.9+416ecaf ppc64le
x86-007.example.com Ready worker 27d v1.28.9+416ecaf amd64
x86-013.example.com Ready worker 18d v1.28.9+416ecaf amd64
x86-014.example.com Ready worker 18d v1.28.9+416ecaf amd64
x86-016.example.com Ready control-plane,master 36d v1.28.9+416ecaf amd64
x86-017.example.com Ready control-plane,master 36d v1.28.9+416ecaf amd64
x86-018.example.com Ready control-plane,master 36d v1.28.9+416ecaf amd64
x86-019.example.com Ready worker 36d v1.28.9+416ecaf amd64
x86-020.example.com Ready worker 36d v1.28.9+416ecaf amd64
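To double-check the inventory, we can tally that listing by architecture. The snippet below inlines the `oc get nodes` output above as sample data, so it can be tried without a cluster; against a live cluster you would pipe `oc get nodes --no-headers --label-columns='kubernetes.io/arch'` into the same `awk`/`sort`/`uniq` pipeline.

```shell
# Tally nodes per architecture; the heredoc stands in for the output of
# `oc get nodes --no-headers --label-columns='kubernetes.io/arch'`.
# tee keeps a copy of the counts for later inspection.
cat <<'NODES' | awk '{print $NF}' | sort | uniq -c | tee arch-counts.txt
aarch64-018.example.com Ready worker 35d v1.28.9+416ecaf arm64
aarch64-019.example.com Ready worker 35d v1.28.9+416ecaf arm64
s390x-008.example.com Ready worker 5d2h v1.28.9+416ecaf s390x
ppc64le-001.example.com Ready worker 34d v1.28.9+416ecaf ppc64le
ppc64le-002.example.com Ready worker 33d v1.28.9+416ecaf ppc64le
x86-007.example.com Ready worker 27d v1.28.9+416ecaf amd64
x86-013.example.com Ready worker 18d v1.28.9+416ecaf amd64
x86-014.example.com Ready worker 18d v1.28.9+416ecaf amd64
x86-016.example.com Ready control-plane,master 36d v1.28.9+416ecaf amd64
x86-017.example.com Ready control-plane,master 36d v1.28.9+416ecaf amd64
x86-018.example.com Ready control-plane,master 36d v1.28.9+416ecaf amd64
x86-019.example.com Ready worker 36d v1.28.9+416ecaf amd64
x86-020.example.com Ready worker 36d v1.28.9+416ecaf amd64
NODES
```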
We can see that there are 8 amd64 nodes (three of them control-plane), 1 s390x node, and 2 nodes each of ppc64le and arm64. We'll later specify these architectures as `nodeSelector`s inside the `BuildConfig`s to get the Builds scheduled onto them.
Create and switch to a new project:
$ oc new-project dorzel-builds-test
Then create an image stream:
$ oc create imagestream ruby-hello-world
imagestream.image.openshift.io/ruby-hello-world created
We'll be building the `ruby-hello-world` image (source available on GitHub at https://github.com/openshift/ruby-hello-world).
Looking at the Dockerfile for `ruby-hello-world`, we can see that this project depends on `ruby:2.7-ubi8`. Since we're looking to build multi-architecture container images, we need to ensure this dependency is met for every target architecture. Let's look at what is provided in the cluster image registry by default:
$ oc get imagestreamtag ruby:2.7-ubi8 -n openshift
NAME IMAGE REFERENCE UPDATED
ruby:2.7-ubi8 image-registry.openshift-image-registry.svc:5000/openshift/ruby@sha256:59f95ad7f86f04707f0046d805a5c0cab2c487720a04d03dc6149cf53e67c43c 3 weeks ago
The image we have available so far only supports one architecture:
$ oc get imagestreamtag ruby:2.7-ubi8 -n openshift -o jsonpath='{.image.dockerImageManifests}' | jq
[
  {
    "architecture": "amd64",
    "digest": "sha256:103a5e97d15bef1a566ff7f4401e2953824ddc0f75159d61aacf12936e2c2b0f",
    "manifestSize": 926,
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "os": "linux"
  }
]
We’ll need the full manifest list imported into the cluster to work with. Let’s examine the manifest itself. Since the image resides in `registry.redhat.io`, we’ll need an auth file to inspect it. We can grab this auth file from the cluster’s configuration:
$ oc get imagestreamtag ruby:2.7-ubi8 -n openshift -o jsonpath='{.tag.from.name}'
registry.redhat.io/ubi8/ruby-27:latest
$ oc get secret pull-secret -n openshift-config -o json | jq -r '.data.".dockerconfigjson"' | base64 -d > docker-config.json
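As an aside, the decoded `.dockerconfigjson` follows the standard Docker config format: a map of registries to base64-encoded `user:password` credentials. Here's a minimal, purely hypothetical sample (not a real pull secret) to illustrate the structure:

```shell
# Hypothetical sample of the decoded pull-secret layout; the "auth"
# value here is just base64("user:pass"), not a real credential.
cat > docker-config-sample.json <<'SAMPLE'
{"auths": {"registry.redhat.io": {"auth": "dXNlcjpwYXNz"}}}
SAMPLE
# Decoding an "auth" field recovers its user:password pair:
printf 'dXNlcjpwYXNz' | base64 -d
```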
$ podman manifest inspect --authfile=docker-config.json registry.redhat.io/ubi8/ruby-27:latest
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 926,
      "digest": "sha256:103a5e97d15bef1a566ff7f4401e2953824ddc0f75159d61aacf12936e2c2b0f",
      "platform": {
        "architecture": "amd64",
        "os": "linux"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 926,
      "digest": "sha256:7e53fb5d8f06831a9c235c174a11b06b640076eb7c33ff093609f2ceb295ec32",
      "platform": {
        "architecture": "arm64",
        "os": "linux"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 926,
      "digest": "sha256:a1830cfa7062bbebfd3f3aa7ccc0e045589b03652804fcdf69c63e6a94ecc7e3",
      "platform": {
        "architecture": "ppc64le",
        "os": "linux"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 926,
      "digest": "sha256:96a3b9b1f5849894f94ff12eee8da0d4387d076ad8343ce7d99564c1c6c3896a",
      "platform": {
        "architecture": "s390x",
        "os": "linux"
      }
    }
  ]
}
As we can see, there are architecture-specific images of `ruby:2.7-ubi8` available for each architecture in our cluster. Let's import them into the cluster by changing the import mode on the image stream from the default `Legacy` to `PreserveOriginal`, which preserves the manifest list as-is from the registry while importing, as opposed to discarding the full manifest and importing a subset (in our case, a single architecture):
$ oc import-image ruby:2.7-ubi8 --confirm --import-mode="PreserveOriginal" -n openshift
imagestream.image.openshift.io/ruby imported
...
Manifests: linux/amd64 sha256:103a5e97d15bef1a566ff7f4401e2953824ddc0f75159d61aacf12936e2c2b0f
linux/arm64 sha256:7e53fb5d8f06831a9c235c174a11b06b640076eb7c33ff093609f2ceb295ec32
linux/ppc64le sha256:a1830cfa7062bbebfd3f3aa7ccc0e045589b03652804fcdf69c63e6a94ecc7e3
linux/s390x sha256:96a3b9b1f5849894f94ff12eee8da0d4387d076ad8343ce7d99564c1c6c3896a
...
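As an alternative to the `--import-mode` flag, the import mode can also be set declaratively on the image stream tag itself. Here's a sketch of the relevant fragment (field names follow the `image.openshift.io/v1` API; check that your OpenShift version supports `importMode` before relying on this):

```yaml
# Fragment of an ImageStream spec; importMode: PreserveOriginal asks the
# importer to keep the full manifest list rather than a single sub-manifest.
spec:
  tags:
  - name: 2.7-ubi8
    from:
      kind: DockerImage
      name: registry.redhat.io/ubi8/ruby-27:latest
    importPolicy:
      importMode: PreserveOriginal
```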
Let’s check if that worked:
$ oc get imagestreamtag ruby:2.7-ubi8 -n openshift -o jsonpath='{.image.dockerImageManifests}' | jq
[
  {
    "architecture": "amd64",
    "digest": "sha256:103a5e97d15bef1a566ff7f4401e2953824ddc0f75159d61aacf12936e2c2b0f",
    "manifestSize": 926,
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "os": "linux"
  },
  {
    "architecture": "arm64",
    "digest": "sha256:7e53fb5d8f06831a9c235c174a11b06b640076eb7c33ff093609f2ceb295ec32",
    "manifestSize": 926,
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "os": "linux"
  },
  {
    "architecture": "ppc64le",
    "digest": "sha256:a1830cfa7062bbebfd3f3aa7ccc0e045589b03652804fcdf69c63e6a94ecc7e3",
    "manifestSize": 926,
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "os": "linux"
  },
  {
    "architecture": "s390x",
    "digest": "sha256:96a3b9b1f5849894f94ff12eee8da0d4387d076ad8343ce7d99564c1c6c3896a",
    "manifestSize": 926,
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "os": "linux"
  }
]
Looks good. Now that we have the dependencies available, we can start building our project. We can now craft our `BuildConfig` objects, one for each architecture we want to build for (amd64, ppc64le, arm64, s390x). We'll use the Docker build strategy here for the Ruby image, but this should work with any build strategy:
$ cat > buildconfig-amd64.yaml << EOF
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: ruby-hello-world-amd64
  namespace: dorzel-builds-test
  labels:
    name: ruby-hello-world-amd64
spec:
  triggers:
    - type: ConfigChange
  source:
    type: Git
    git:
      uri: https://github.com/openshift/ruby-hello-world.git
  strategy:
    type: Docker
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: ruby:2.7-ubi8
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: ruby-hello-world:amd64
  nodeSelector:
    kubernetes.io/arch: amd64
EOF
Replace amd64 with the architectures of the other nodes by using a little `sed` magic:
$ sed 's/amd64/ppc64le/g' buildconfig-amd64.yaml > buildconfig-ppc64le.yaml
$ sed 's/amd64/arm64/g' buildconfig-amd64.yaml > buildconfig-arm64.yaml
$ sed 's/amd64/s390x/g' buildconfig-amd64.yaml > buildconfig-s390x.yaml
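If you prefer, the three `sed` invocations can be folded into a loop. The sketch below creates a minimal stand-in file only when `buildconfig-amd64.yaml` is missing, so it runs on its own without clobbering the real file from the previous step:

```shell
# Loop form of the per-architecture sed step. The stand-in file is only
# created when buildconfig-amd64.yaml is absent (e.g. when trying this
# snippet outside the article's flow).
[ -f buildconfig-amd64.yaml ] || printf 'name: ruby-hello-world-amd64\nnodeSelector:\n  kubernetes.io/arch: amd64\n' > buildconfig-amd64.yaml
for arch in ppc64le arm64 s390x; do
  sed "s/amd64/${arch}/g" buildconfig-amd64.yaml > "buildconfig-${arch}.yaml"
done
ls buildconfig-*.yaml
```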
And we’ve got four YAML files to instantiate with `oc create`; thanks to the ConfigChange trigger, each BuildConfig kicks off a Build as soon as it’s created:
$ oc create -f buildconfig-amd64.yaml
buildconfig.build.openshift.io/ruby-hello-world-amd64 created
$ oc create -f buildconfig-ppc64le.yaml
buildconfig.build.openshift.io/ruby-hello-world-ppc64le created
$ oc create -f buildconfig-arm64.yaml
buildconfig.build.openshift.io/ruby-hello-world-arm64 created
$ oc create -f buildconfig-s390x.yaml
buildconfig.build.openshift.io/ruby-hello-world-s390x created
Let's confirm that the builds completed and the image stream has container images for each architecture:
$ oc get builds
NAME TYPE FROM STATUS STARTED DURATION
ruby-hello-world-amd64-1 Docker Git@1cc3463 Complete 2 minutes ago 1m28s
ruby-hello-world-arm64-1 Docker Git@1cc3463 Complete About a minute ago 27s
ruby-hello-world-ppc64le-1 Docker Git@1cc3463 Complete About a minute ago 51s
ruby-hello-world-s390x-1 Docker Git@1cc3463 Complete About a minute ago 29s
$ oc get imagestream ruby-hello-world
NAME IMAGE REPOSITORY TAGS UPDATED
ruby-hello-world image-registry.openshift-image-registry.svc:5000/dorzel-builds-test/ruby-hello-world amd64,ppc64le,s390x,arm64 About a minute ago
$ oc get imagestreamtags
NAME IMAGE REFERENCE UPDATED
ruby-hello-world:amd64 image-registry.openshift-image-registry.svc:5000/dorzel-builds-test/ruby-hello-world@sha256:f954d2e4566183ec53515cd2e33c60dffaa4e5c997d82d29bae4193422091946 About a minute ago
ruby-hello-world:arm64 image-registry.openshift-image-registry.svc:5000/dorzel-builds-test/ruby-hello-world@sha256:e5f42ffcd57a9970ce8787bb7b00eb10ee9274e8f20c335c0e74e7f60d2ce48e 2 minutes ago
ruby-hello-world:ppc64le image-registry.openshift-image-registry.svc:5000/dorzel-builds-test/ruby-hello-world@sha256:1c02da9b34dc905745e1cd5e7043cd991caba5fb4bcd174ceed9fdf4ae8da4ab About a minute ago
ruby-hello-world:s390x image-registry.openshift-image-registry.svc:5000/dorzel-builds-test/ruby-hello-world@sha256:5d38c780f4301ebebd4e5e1923c08b21736fcd6f3f6927df80b1d46dc91facae 2 minutes ago
Great! We’ll now define a Job to combine the container images we’ve built into a manifest list. It will spin up a pod and use manifest-tool to create the manifest list and push it to the cluster registry.
This pod will need authorization to access the cluster-internal registry. We can arrange that by logging into the registry with `oc` and creating an image pull secret out of the resulting auth file:
$ oc registry login --to=registry-docker-config.json
$ oc create secret docker-registry cluster-registry-dockerconfig --from-file=.dockerconfigjson=registry-docker-config.json
secret/cluster-registry-dockerconfig created
Now that we have the auth secret set up, we’ll use a pre-built image for manifest-tool based on the build instructions in the manifest-tool repo. I’ve built and pushed this image to quay.io/dorzel/manifest-tool:latest. It has been built for amd64 only, so we’ll specify a `nodeSelector` in the Job to make sure it gets scheduled correctly.
With that in place, we’ll need a `volumeMount` to make the Docker config JSON file accessible to manifest-tool. Here’s the Job definition:
$ cat > job.yaml << EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: ruby-hello-world-create-manifestlist
  namespace: dorzel-builds-test
spec:
  parallelism: 1
  completions: 1
  template:
    metadata:
      name: ruby-hello-world-create-manifestlist
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64
      containers:
        - name: ruby-hello-world-create-manifestlist
          image: quay.io/dorzel/manifest-tool:latest
          command: ['manifest-tool', '--debug', '--docker-cfg=/var/run/secrets/registry-dockerconfig/.dockerconfigjson', '--insecure=true', 'push', 'from-args']
          args:
            - '--platforms=linux/amd64,linux/arm64,linux/ppc64le,linux/s390x'
            - '--template=image-registry.openshift-image-registry.svc:5000/dorzel-builds-test/ruby-hello-world:ARCH'
            - '--target=image-registry.openshift-image-registry.svc:5000/dorzel-builds-test/ruby-hello-world:mflist'
          volumeMounts:
            - mountPath: /var/run/secrets/registry-dockerconfig/
              name: cluster-registry-dockerconfig
              readOnly: true
      restartPolicy: Never
      volumes:
        - name: cluster-registry-dockerconfig
          secret:
            secretName: cluster-registry-dockerconfig
EOF
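Before running it, note how the `--template`/`--platforms` pair works: for each platform, manifest-tool substitutes the architecture for the `ARCH` placeholder in the template to find the per-arch source tag. The loop below is just an illustrative sketch of that expansion, not manifest-tool's actual code:

```shell
# Sketch of manifest-tool's ARCH placeholder expansion (illustrative only;
# manifest-tool performs this substitution internally).
TEMPLATE='image-registry.openshift-image-registry.svc:5000/dorzel-builds-test/ruby-hello-world:ARCH'
PLATFORMS='linux/amd64,linux/arm64,linux/ppc64le,linux/s390x'
for p in $(echo "$PLATFORMS" | tr ',' ' '); do
  arch=${p#linux/}                       # strip the "linux/" OS prefix
  echo "$TEMPLATE" | sed "s/ARCH/${arch}/"
done | tee expanded-tags.txt
```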
Instantiate the Job to get the manifest list created:
$ oc create -f job.yaml
job.batch/ruby-hello-world-create-manifestlist created
Ok, let's check if the Job was successful:
$ oc get jobs
NAME COMPLETIONS DURATION AGE
ruby-hello-world-create-manifestlist 1/1 4s 21s
Now that the Job has run, the `ImageStreamTag` corresponding to the manifest list (the `mflist` tag) should exist and have four manifests:
$ oc get imagestreamtags
NAME IMAGE REFERENCE UPDATED
ruby-hello-world:amd64 image-registry.openshift-image-registry.svc:5000/dorzel-builds-test/ruby-hello-world@sha256:f954d2e4566183ec53515cd2e33c60dffaa4e5c997d82d29bae4193422091946 5 minutes ago
ruby-hello-world:arm64 image-registry.openshift-image-registry.svc:5000/dorzel-builds-test/ruby-hello-world@sha256:e5f42ffcd57a9970ce8787bb7b00eb10ee9274e8f20c335c0e74e7f60d2ce48e 6 minutes ago
ruby-hello-world:mflist image-registry.openshift-image-registry.svc:5000/dorzel-builds-test/ruby-hello-world@sha256:aa9558d566587e8f07cbd7a6c4fb09dbb1ef0215e046153b4260a02d20099f0a 37 seconds ago
ruby-hello-world:ppc64le image-registry.openshift-image-registry.svc:5000/dorzel-builds-test/ruby-hello-world@sha256:1c02da9b34dc905745e1cd5e7043cd991caba5fb4bcd174ceed9fdf4ae8da4ab 6 minutes ago
ruby-hello-world:s390x image-registry.openshift-image-registry.svc:5000/dorzel-builds-test/ruby-hello-world@sha256:5d38c780f4301ebebd4e5e1923c08b21736fcd6f3f6927df80b1d46dc91facae 6 minutes ago
$ oc get imagestreamtag ruby-hello-world:mflist -o jsonpath='{.image.dockerImageManifests}' | jq
[
  {
    "architecture": "amd64",
    "digest": "sha256:f954d2e4566183ec53515cd2e33c60dffaa4e5c997d82d29bae4193422091946",
    "manifestSize": 1413,
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "os": "linux"
  },
  {
    "architecture": "arm64",
    "digest": "sha256:e5f42ffcd57a9970ce8787bb7b00eb10ee9274e8f20c335c0e74e7f60d2ce48e",
    "manifestSize": 1413,
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "os": "linux"
  },
  {
    "architecture": "ppc64le",
    "digest": "sha256:1c02da9b34dc905745e1cd5e7043cd991caba5fb4bcd174ceed9fdf4ae8da4ab",
    "manifestSize": 1413,
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "os": "linux"
  },
  {
    "architecture": "s390x",
    "digest": "sha256:5d38c780f4301ebebd4e5e1923c08b21736fcd6f3f6927df80b1d46dc91facae",
    "manifestSize": 1413,
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "os": "linux"
  }
]
Looks good. To see this in action, let's craft a Deployment for the Ruby image by pointing to the manifest list:
$ cat > deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: dorzel-builds-test
  name: ruby-hello-world-deployment
  annotations:
    image.openshift.io/triggers: >-
      [{"from":{"kind":"ImageStreamTag","name":"ruby-hello-world:mflist","namespace":"dorzel-builds-test"},"fieldPath":"spec.template.spec.containers[?(@.name==\"ruby-hello-world-container\")].image","paused":"true"}]
spec:
  selector:
    matchLabels:
      app: ruby-hello-world-deployment
  replicas: 10
  template:
    metadata:
      labels:
        app: ruby-hello-world-deployment
    spec:
      containers:
        - name: ruby-hello-world-container
          image: 'image-registry.openshift-image-registry.svc:5000/dorzel-builds-test/ruby-hello-world:mflist'
          ports:
            - containerPort: 8080
              protocol: TCP
          env: []
      imagePullSecrets: []
  strategy:
    type: Recreate
  paused: false
EOF
We’ll omit `nodeSelector`s and scheduling gates on this object, since the manifest-listed image can be deployed on any node. The number of replicas is set to the number of worker nodes in the cluster, so we can see the pods land on all of our worker nodes. Let’s instantiate the Deployment:
$ oc create -f deployment.yaml
deployment.apps/ruby-hello-world-deployment created
Observe that it happily deploys to our worker nodes of all arches (of course, if you're doing this yourself on a multi-arch compute cluster, this will look different depending on the configuration of your cluster):
$ oc get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
ruby-hello-world-deployment 10/10 10 10 8m58s
$ oc get replicasets
NAME DESIRED CURRENT READY AGE
ruby-hello-world-deployment-b67c9bfb4 10 10 10 3m48s
$ oc get pods -l app=ruby-hello-world-deployment -o json | jq '.items[] | .metadata.name + " | " + .spec.nodeName'
"ruby-hello-world-deployment-b67c9bfb4-5kjh9 | ppc64le-002.example.com"
"ruby-hello-world-deployment-b67c9bfb4-97hlb | s390x-008.example.com"
"ruby-hello-world-deployment-b67c9bfb4-fh29s | x86-019.example.com"
"ruby-hello-world-deployment-b67c9bfb4-hg8zh | aarch64-019.example.com"
"ruby-hello-world-deployment-b67c9bfb4-j58z8 | ppc64le-001.example.com"
"ruby-hello-world-deployment-b67c9bfb4-j6b86 | x86-020.example.com"
"ruby-hello-world-deployment-b67c9bfb4-jd2bd | x86-014.example.com"
"ruby-hello-world-deployment-b67c9bfb4-jm229 | aarch64-018.example.com"
"ruby-hello-world-deployment-b67c9bfb4-m4lhg | x86-007.example.com"
"ruby-hello-world-deployment-b67c9bfb4-tdpq9 | x86-013.example.com"
$ oc logs ruby-hello-world-deployment-b67c9bfb4-5kjh9
[2024-06-26 17:17:38] INFO WEBrick 1.7.0
[2024-06-26 17:17:38] INFO ruby 2.7.8 (2023-03-30) [powerpc64le-linux]
== Sinatra (v2.1.0) has taken the stage on 8080 for production with backup from WEBrick
[2024-06-26 17:17:38] INFO WEBrick::HTTPServer#start: pid=7 port=8080
And there we have it! We've successfully created and pushed a manifest-listed image from four Builds on four different architectures in our multi-arch compute cluster, and have deployed that image to our compute nodes of each architecture.
For the next post in this series, we’ll talk about Builds for Red Hat OpenShift and how that can be used to simplify and streamline the process of doing multi-arch builds.