Cost optimization remains a paramount concern for enterprises deploying containerized workloads. While x86-based instances have long been the standard, a significant opportunity for cost savings has emerged for customers running Red Hat OpenShift on Amazon Web Services (AWS). By migrating applications to Arm-based instances powered by AWS Graviton processors, organizations can unlock substantial benefits including lower compute costs and better energy efficiency, paving the way for a more economical and sustainable cloud footprint.
Why migrate from x86 to Arm?
The following table provides a quick comparison of AWS instance types based on architecture and on-demand cost:
| Instance type | Architecture | vCPUs | Memory (GiB) | On-demand price (us-east-2, per hour) |
|---|---|---|---|---|
| m6i.xlarge | x86 | 4 | 16 | $0.192 |
| m6g.xlarge | Arm (Graviton2) | 4 | 16 | $0.145 |
Using OpenShift’s pipelines-tutorial, we can model cluster cost with a small workload that builds and deploys continuously for a week. Then, using the cost management operator on each cluster, we can compare the cost of these instance types.
Figure 1 shows the x86 total cluster cost.

Compare the x86 total cluster cost to the Arm total cluster cost, shown in Figure 2.

| Cluster node type | Architecture | Total instances | Cluster cost per week |
|---|---|---|---|
| m6i.xlarge | x86 | 6 | $164.82 |
| m6g.xlarge | Arm (Graviton2) | 6 | $101.28 |
In this case, Arm saved approximately 39%.
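The savings figure follows directly from the weekly cluster costs in the table above; a one-line check with any POSIX awk reproduces it:

```shell
# Percentage saved per week by the Arm cluster, from the costs above.
awk 'BEGIN { x86 = 164.82; arm = 101.28; printf "%.1f%%\n", (x86 - arm) / x86 * 100 }'
# prints 38.6%
```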
How to migrate an x86 workload to Arm64 on AWS
We’ll use OpenShift’s pipelines-tutorial as an example. Here are the steps:
- Assess workload compatibility.
- Enable multi-arch support in OpenShift.
- Add 64-bit Arm MachineSets.
- Rebuild and verify container images.
1. Assess workload compatibility
Before migrating, determine whether your applications can run on 64-bit Arm architecture. Most modern applications built with portable runtimes (e.g., Java, Go, Python, Node.js) can run seamlessly on 64-bit Arm with little or no modifications. Check your container images and dependencies for 64-bit Arm compatibility.
In our case, the pipelines-tutorial application has no architecture-specific dependencies and can be built on both CPU architectures.
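One quick way to check a base image is to look at its manifest list: if it publishes an arm64/linux platform, it can run on Graviton. The snippet below filters a saved manifest list (sample data; with network access and skopeo installed, `skopeo inspect --raw docker://<image>` fetches the real one):

```shell
# Sample manifest list, as a registry would return it for a multi-arch image.
cat > /tmp/manifest-list.json <<'EOF'
{"manifests":[
  {"platform":{"architecture":"amd64","os":"linux"}},
  {"platform":{"architecture":"arm64","os":"linux"}}
]}
EOF

# List the published architectures; arm64 here means the image is Graviton-ready.
grep -o '"architecture":"[a-z0-9]*"' /tmp/manifest-list.json | sort -u
```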
2. Enable multi-arch support in OpenShift
OpenShift supports multi-architecture workloads, allowing you to run both 64-bit x86 and 64-bit Arm-based nodes in the same cluster. OpenShift’s multi-architecture documentation will be your guide for this process.
3. Add 64-bit Arm MachineSets
To migrate to Graviton-based EC2 instances, ensure that the OpenShift cluster is using the multi-arch release payload:
```shell
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
{"release.openshift.io/architecture":"multi","url":"https://access.redhat.com/errata/xxx"}
```
Decide on a scheduling strategy: manual, with taints and tolerations, or via the Multiarch Tuning Operator. Because we have only one workload (our build pipeline), we’ll take the taint and toleration route. We’ve added this taint to our new Arm machine sets:
```yaml
taints:
- effect: NoSchedule
  key: newarch
  value: arm64
```
This prevents existing x86 workloads from being scheduled to the Arm nodes.
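For reference, a trimmed Arm compute machine set might look like the following sketch. The name, replica count, and AMI ID are placeholders: in practice, you copy an existing x86 MachineSet, switch the instance type to a Graviton type such as m6g.xlarge, point it at the arm64 RHCOS AMI for your region, and add the taint:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: mycluster-worker-arm64-us-east-2a   # placeholder name
  namespace: openshift-machine-api
spec:
  replicas: 3
  template:
    spec:
      taints:
      - effect: NoSchedule
        key: newarch
        value: arm64
      providerSpec:
        value:
          instanceType: m6g.xlarge          # Graviton2
          ami:
            id: ami-0123456789abcdef0       # arm64 RHCOS AMI for your region (placeholder)
```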
Reimport the necessary ImageStreams with import-mode set to PreserveOriginal:
```shell
oc import-image php -n openshift --all --confirm --import-mode='PreserveOriginal'
oc import-image python -n openshift --all --confirm --import-mode='PreserveOriginal'
```
4. Rebuild and verify container images
Note: OpenShift only supports native architecture container builds. Cross-architecture container builds are not supported.
To build 64-bit Arm compatible images, we’ve modified the openshift-pipelines tutorial to patch deployments with the Tekton TaskRun’s PodTemplate information. This allows us to pass a PodTemplate for building and deploying the newly built application on the target architecture. It also makes it easy to revert to 64-bit x86 by re-running the pipeline without the template.
Create a PodTemplate (we’ll save it as arm64.yaml) defining a toleration and a node affinity so the builds and deployments land on the Arm machines:
```yaml
tolerations:
- key: "newarch"
  value: "arm64"
  operator: "Equal"
  effect: "NoSchedule"
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "kubernetes.io/arch"
          operator: "In"
          values:
          - "arm64"
        - key: "kubernetes.io/os"
          operator: "In"
          values:
          - "linux"
```
Next, we’ll update 02_update_deployment_task.yaml, adding extra patching logic so the PodTemplate’s node affinity and tolerations are applied to the deployment:
```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: update-deployment
spec:
  params:
  - name: deployment
    description: The name of the deployment to patch with the new image
    type: string
  - name: IMAGE
    description: Location of the image to patch into the deployment
    type: string
  - name: taskrun-name
    type: string
    description: Name of the current TaskRun (injected from context)
  steps:
  - name: patch
    image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
    command: ["/bin/bash", "-c"]
    args:
    - |-
      oc patch deployment $(inputs.params.deployment) --patch='{"spec":{"template":{"spec":{
        "containers":[{
          "name": "$(inputs.params.deployment)",
          "image":"$(inputs.params.IMAGE)"
        }]
      }}}}'

      # Find my own TaskRun name
      MY_TASKRUN_NAME="$(params.taskrun-name)"
      echo "TaskRun name: $MY_TASKRUN_NAME"

      # Fetch the podTemplate from the TaskRun spec
      PODTEMPLATE_JSON=$(kubectl get taskrun "$MY_TASKRUN_NAME" -o jsonpath='{.spec.podTemplate}')
      if [ -z "$PODTEMPLATE_JSON" ]; then
        echo "No podTemplate found in TaskRun...Removing tolerations and affinity."
        oc patch deployment "$(inputs.params.deployment)" \
          --type merge \
          -p "{\"spec\": {\"template\": {\"spec\": {\"tolerations\": null, \"affinity\": null}}}}"
      else
        echo "Found podTemplate:"
        echo "$PODTEMPLATE_JSON"
        oc patch deployment "$(inputs.params.deployment)" \
          --type merge \
          -p "{\"spec\": {\"template\": {\"spec\": $PODTEMPLATE_JSON }}}"
      fi

      # issue: https://issues.redhat.com/browse/SRVKP-2387
      # Images are deployed with a tag; on rebuild the tag does not change, so a
      # redeploy is not triggered. As a workaround, update a label in the template,
      # which triggers a redeploy of the pods.
      # Target label: "spec.template.metadata.labels.patched_at"
      # NOTE: this workaround works only if the pod spec has imagePullPolicy: Always
      patched_at_timestamp=`date +%s`
      oc patch deployment $(inputs.params.deployment) --patch='{"spec":{"template":{"metadata":{
        "labels":{
          "patched_at": '\"$patched_at_timestamp\"'
        }
      }}}}'
```
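The conditional patching in the task is easier to follow in isolation. This local sketch runs the same branch logic against sample data (no cluster needed): when a podTemplate is present on the TaskRun it is spliced into the deployment’s pod spec, otherwise the tolerations and affinity are cleared:

```shell
# Sample value, as kubectl would return it from .spec.podTemplate.
PODTEMPLATE_JSON='{"tolerations":[{"key":"newarch","value":"arm64","operator":"Equal","effect":"NoSchedule"}]}'

if [ -z "$PODTEMPLATE_JSON" ]; then
  # No template on the TaskRun: reset the scheduling constraints.
  PATCH='{"spec": {"template": {"spec": {"tolerations": null, "affinity": null}}}}'
else
  # Template present: merge it into the deployment's pod spec.
  PATCH="{\"spec\": {\"template\": {\"spec\": $PODTEMPLATE_JSON }}}"
fi
echo "$PATCH"
```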
We also need to update 04_pipeline.yaml to pass the taskrun-name to the update-deployment task:

```yaml
...
- name: update-deployment
  taskRef:
    name: update-deployment
  params:
  - name: deployment
    value: $(params.deployment-name)
  - name: IMAGE
    value: $(params.IMAGE)
  - name: taskrun-name              # add these
    value: $(context.taskRun.name)  # two lines
```
Now we can redeploy the UI and API using the arm64.yaml PodTemplate. This forces every part of the build pipeline and deployment onto our tainted 64-bit Arm nodes.
```shell
tkn pipeline start build-and-deploy \
  --prefix-name build-deploy-api-pipelinerun-arm64 \
  -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/01_pipeline/03_persistent_volume_claim.yaml \
  -p deployment-name=pipelines-vote-api \
  -p git-url=https://github.com/openshift/pipelines-vote-api.git \
  -p IMAGE=image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/pipelines-vote-api-arm64 \
  --use-param-defaults \
  --pod-template arm64.yaml
```
```shell
tkn pipeline start build-and-deploy \
  --prefix-name build-deploy-ui-pipelinerun-arm64 \
  -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/01_pipeline/03_persistent_volume_claim.yaml \
  -p deployment-name=pipelines-vote-ui \
  -p git-url=https://github.com/openshift/pipelines-vote-ui.git \
  -p IMAGE=image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/pipelines-vote-ui-arm64 \
  --use-param-defaults \
  --pod-template arm64.yaml
```
Once the pods are up and running, you can safely remove the x86 worker nodes from the cluster and, if you choose, remove the taints from the Arm worker nodes.
You can also migrate the 64-bit x86 control plane to 64-bit Arm before or after you migrate your workloads. See the OpenShift documentation for more information.
Success! Cost savings achieved with OpenShift on AWS Graviton
Organizations can achieve substantial cost savings by migrating OpenShift workloads to 64-bit Arm instances on AWS. Workloads such as web services, microservices, and data processing pipelines often perform as well or better at lower cost on 64-bit Arm, without code changes.
Migrating OpenShift x86 workloads to AWS Graviton (64-bit Arm) instances can significantly cut cloud spending. With OpenShift’s multi-arch capabilities, the transition is smoother than ever. By following these steps, you can start realizing savings right away.
Are you ready to make the switch? Start by evaluating your workloads and take advantage of AWS Graviton’s cost-efficient computing today!