This blog post details how to build a container image using Buildpacks in a CI/CD flow with the Tekton Pipeline engine, a tool that automates building and deploying software.
This is part of a series on building your applications with Cloud Native Computing Foundation (CNCF) Buildpacks. Catch up on the previous articles here:
- The journey to enable UBI with the Paketo Buildpacks
- Building applications with Paketo Buildpacks and Red Hat UBI container images
- Running applications with Paketo Buildpacks and Red Hat UBI container images in OpenShift
How Tekton Pipelines work
A Tekton Pipeline is defined using Kubernetes Custom Resources, which are custom objects that extend Kubernetes to manage specific tasks. Key building blocks include Task and Pipeline, which define and run your CI/CD workflows. In short, a Task is made of individual Steps, while a Pipeline brings together multiple Tasks.
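As a minimal illustration (a hypothetical Task, not one used later in this article), a Task declares its Steps in order, and each Step runs in its own container:

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: hello
spec:
  steps:
    - name: say-hello
      image: registry.access.redhat.com/ubi9/ubi-minimal
      script: |
        #!/usr/bin/env sh
        echo "Hello from a Tekton Step"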
To do this, you will typically create a Pipeline that packages the different tasks to be executed (such as git clone, build, push, or test), along with their order and any optional conditions for executing them. The TaskRun or PipelineRun resources that accompany the CI/CD flow trigger its creation and execution on the cluster. These resources also allow you to customize the parameters of the pipeline and its individual tasks. From a technical point of view, the Tekton engine converts a task into a pod with multiple containers, one for each step. Similarly, a pipeline and its tasks are also managed as pods.
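For example, once a run has started, you can observe this mapping on the cluster using the labels Tekton adds to the pods it creates:

oc get pods -l tekton.dev/pipelineRun=<PIPELINERUN_NAME>

Each task of the pipeline appears as its own pod, and each step as a container within that pod.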
To support buildpacks technology, the Cloud Native Buildpacks project created a reference task that can perform Buildpacks builds. This can be done with or without image extensions on top of the lifecycle tool. Extensions help Buildpacks authors enhance build and run processes, for example, by applying a Dockerfile to install a specific version of the Java runtime, Node.js, and more.
This Task calls the individual lifecycle binaries (prepare, analyze, detect, restore, build or extender, export, and run), each in a separate container during the build process. For that reason, the task is called Buildpacks phases. The task also reinforces security because each phase's process is executed separately, using the UID/GID of the builder image or, when required, with additional Linux container capabilities.
The parameters that customize the task are defined as part of the task's documentation, as explained in the Task README.
Setup
Note
Before you install Tekton, we recommend reviewing the CI/CD concepts using the project documentation.
Prerequisites
Before getting started, make sure you have the oc client tool installed. You will also need access to an OpenShift cluster (version 4.16 or newer) where the OpenShift Pipelines operator (version 1.18 or newer) has been deployed and you have at least the Admin role.
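A quick way to check that the operator is available (assuming it was installed cluster-wide in the default openshift-operators namespace) is to list its ClusterServiceVersion:

oc get csv -n openshift-operators | grep -i pipelines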
Define and apply Tekton resources
The scenario that we will detail here is pretty simple and can be summarized as follows.
Pipeline:
- Task: Git clone.
- Task: Build the container image using Buildpacks, push it to a container registry, and export the digest (SHA) of the built image.
- Task: Print the image digest.
In order to set up this project, you will have to define the following Kubernetes or Tekton YAML resource files:
- PersistentVolumeClaim: Stores the cloned project and Buildpacks files.
- Secret and ServiceAccount (both optional): Provide your container registry credentials.
- Pipeline: Defines the parameters to be customized, the tasks to be used, and their order.
- PipelineRun: Triggers the Pipeline and the CI/CD flow.
Claiming a storage volume
Create a resources.yml file as follows that defines a PersistentVolumeClaim, its size, and its access mode. Tekton will mount this PVC into the different workspaces of the flow.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: buildpacks-source-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
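You can check that the claim was created; depending on the storage class binding mode, it may stay Pending until the first pod uses it:

oc get pvc buildpacks-source-pvc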
Registry authorization
Note
If you are pushing the image to a container registry that requires authentication (for example, docker.io, ghcr.io, quay.io), you must define the credentials within a Kubernetes Secret resource, as I'll explain hereafter.
Next, create a Secret containing the username, password, and container registry server:
oc create secret docker-registry registry-user-pass \
--docker-username=<USERNAME> \
--docker-password=<PASSWORD> \
--docker-server=<REGISTRY_URL, e.g. https://index.docker.io/v1/ > \
--namespace default
Create a ServiceAccount using a sa.yml file that references the Secret. Tekton will then mount the registry's credentials as a Docker configuration file from the Secret, and the ServiceAccount will be attached to the created pods and containers.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: buildpacks-service-account
secrets:
  - name: registry-user-pass
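Alternatively, if the ServiceAccount already exists, you can link the Secret to it from the command line instead of declaring it in the YAML:

oc secrets link buildpacks-service-account registry-user-pass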
Define the CI/CD flow: Pipeline
Now, create a pipeline.yml file that defines the structure of the CI/CD flow and the relevant resources. The syntax of a Pipeline is quite simple and includes the following elements:
- Params: This is where you will declare the parameters to customize the pipeline and tasks.
- Workspaces: We bind these to a PVC, ConfigMap, or Secret.
- Tasks: A task encapsulates the logic for actions or steps, and it can be referenced externally from a Git or HTTP repository, or embedded directly.
- Results: This feature exports and shares information between the steps of a task or between the tasks of a pipeline, as illustrated in the snippet below.
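A Task declares a result and one of its steps publishes it by writing to the result's path; the Pipeline then consumes it by name, as the display-results task below does with the digest produced by the buildpacks task:

echo -n "<value>" > $(results.APP_IMAGE_DIGEST.path)   # inside a Task step
$(tasks.buildpacks.results.APP_IMAGE_DIGEST)           # referenced from the Pipeline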
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: buildpacks-test-pipeline
spec:
  params:
    - name: git-url
      type: string
      description: URL of the project to git clone
    - name: source-subpath
      type: string
      description: The subpath within the git project
    - name: image
      type: string
      description: image URL to push
    - name: builder
      type: string
      description: buildpacks builder image URL
    - name: env-vars
      type: array
      description: env vars to pass to the lifecycle binaries
      default: [] # empty by default, so the PipelineRun may omit this parameter
  workspaces:
    - name: source-workspace # Directory where application source is located. (REQUIRED)
  tasks:
    - name: fetch-repository # This task fetches a repository from github, using the `git-clone` task
      taskRef:
        resolver: http
        params:
          - name: url
            value: https://raw.githubusercontent.com/tektoncd/catalog/refs/heads/main/task/git-clone/0.9/git-clone.yaml
      workspaces:
        - name: output
          workspace: source-workspace
      params:
        - name: url
          value: "$(params.git-url)"
        - name: deleteExisting
          value: "true"
    - name: buildpacks # This task uses the `buildpacks-phases` task to build the application
      taskRef:
        name: buildpacks-phases
      runAfter:
        - fetch-repository
      workspaces:
        - name: source
          workspace: source-workspace
      params:
        - name: APP_IMAGE
          value: "$(params.image)"
        - name: SOURCE_SUBPATH
          value: "$(params.source-subpath)"
        - name: CNB_BUILDER_IMAGE
          value: "$(params.builder)"
        - name: CNB_ENV_VARS
          value: "$(params.env-vars[*])"
    - name: display-results
      runAfter:
        - buildpacks
      taskSpec:
        steps:
          - name: print
            image: docker.io/library/bash:5.1.4@sha256:b208215a4655538be652b2769d82e576bc4d0a2bb132144c060efc5be8c3f5d6
            script: |
              #!/usr/bin/env bash
              set -e
              echo "Digest of created app image: $(params.DIGEST)"
        params:
          - name: DIGEST
      params:
        - name: DIGEST
          value: $(tasks.buildpacks.results.APP_IMAGE_DIGEST)
Deploy the resources
Once complete, you can deploy the YAML files you created to the cluster, within your namespace:
oc apply -f resources.yml -f sa.yml -f pipeline.yml
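Note that the buildpacks task above is referenced by name (buildpacks-phases), so that Task must already exist in the namespace. One way to install it, assuming you use the version published in the Tekton community catalog (check the task README for the canonical location and latest version):

oc apply -f https://raw.githubusercontent.com/tektoncd/catalog/refs/heads/main/task/buildpacks-phases/0.2/buildpacks-phases.yaml

You can then verify that everything is in place:

oc get pipeline,task,pvc,serviceaccount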
Create and apply the PipelineRun
Create a run.yml file that defines the PipelineRun. The structure of this resource is similar to the Pipeline, as it contains params and workspaces, but here they are used to customize the referenced Pipeline and to bind the workspace(s) to a Kubernetes resource. As mentioned earlier, we will also reference the ServiceAccount to be used as part of the taskRunTemplate section.
Note
Different parameters of the pod specification created by Tekton can be personalized using the podTemplate, as demonstrated below to specify the GID (fsGroup) of the filesystem created by the git-clone task.
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: buildpacks-test-pipeline-run
spec:
  taskRunTemplate:
    serviceAccountName: buildpacks-service-account # Only needed if you set up authorization
    podTemplate:
      securityContext:
        fsGroup: 65532
  pipelineRef:
    name: buildpacks-test-pipeline
  workspaces:
    - name: source-workspace
      subPath: source
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc
  params:
    - # The url of the git project to clone (REQUIRED).
      name: git-url
      value: https://github.com/quarkusio/quarkus-quickstarts
    - # This is the path within the git project you want to build (OPTIONAL, default: ".")
      name: source-subpath
      value: "getting-started"
    - # This is the builder image we want the task to use (REQUIRED).
      name: builder
      value: paketobuildpacks/builder-ubi8-base:0.1.42
    - name: image
      value: <REGISTRY/IMAGE NAME, eg gcr.io/test/image > # This defines the name of the output image
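If you need to pass environment variables to the lifecycle (the env-vars parameter, which defaults to an empty list in the Pipeline above), add them to the params list of the PipelineRun. The variable below is only an illustration; check the documentation of the buildpacks you use for the variables they actually support:

  - name: env-vars
    value:
      - "BP_JVM_VERSION=21"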
Deploy it:
oc apply -f run.yml
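Because a PipelineRun name must be unique, re-applying the same run.yml will not start a new build. If you want to re-run the flow, a common pattern (an alternative to editing the name each time) is to replace metadata.name with metadata.generateName and create the resource instead of applying it:

metadata:
  generateName: buildpacks-test-pipeline-run-

oc create -f run.yml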
Follow the CI/CD build
To follow the progress of the flow, execute the following command and check the events:
oc events --for="pipelinerun/buildpacks-test-pipeline-run"
...
LAST SEEN TYPE REASON OBJECT MESSAGE
5m6s (x2 over 5m6s) Normal Started PipelineRun/buildpacks-test-pipeline-run
...
5m6s Normal Running PipelineRun/buildpacks-test-pipeline-run Tasks Completed: 0 (Failed: 0, Cancelled 0), Incomplete: 3, Skipped: 0
4m52s (x2 over 4m52s) Normal Running PipelineRun/buildpacks-test-pipeline-run Tasks Completed: 1 (Failed: 0, Cancelled 0), Incomplete: 2, Skipped: 0
3m4s Normal Running PipelineRun/buildpacks-test-pipeline-run Tasks Completed: 2 (Failed: 0, Cancelled 0), Incomplete: 1, Skipped: 0
2m58s Normal Succeeded PipelineRun/buildpacks-test-pipeline-run Tasks Completed: 3 (Failed: 0, Cancelled 0), Skipped: 0
Note
As the flow is easier to follow visually, we recommend opening the OpenShift Dashboard and the Pipeline view.
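If you prefer the command line and have the Tekton CLI (tkn) installed, you can also stream the logs of the whole run:

tkn pipelinerun logs buildpacks-test-pipeline-run -f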
To get more information about what a Tekton Task or step is doing, you can retrieve the pod's logs. For instance, the Buildpacks "extender" step logs show that it installs OpenJDK 21 (according to the Dockerfile packaged within the builder image) and, ultimately, that the Maven build succeeded.
oc logs pod/buildpacks-test-pipeline-run-buildpacks-pod
Defaulted container "step-get-labels-and-env" out of: step-get-labels-and-env, step-prepare, step-analyze, step-detect, step-restore, step-extender, step-build, step-export, step-results, prepare (init), place-scripts (init)
And for the extender step only:
oc logs pod/buildpacks-test-pipeline-run-buildpacks-pod -c step-extender
...
2025-06-27T11:32:25.067007701Z time="2025-06-27T11:32:25Z" level=info msg="Performing slow lookup of group ids for root"
2025-06-27T11:32:25.067243910Z time="2025-06-27T11:32:25Z" level=info msg="Running: [/bin/sh -c echo ${build_id}]"
2025-06-27T11:32:25.095150183Z 9e447871-e415-4018-a860-d5a66d925a57
2025-06-27T11:32:25.096877516Z time="2025-06-27T11:32:25Z" level=info msg="Taking snapshot of full filesystem..."
2025-06-27T11:32:25.280396774Z time="2025-06-27T11:32:25Z" level=info msg="Pushing layer oci:/kaniko/cache/layers/cached:a035cdb3949daa8f4e7b2c523ea0d73741c7c2d5b09981c261ebae99fd2f3233 to cache now"
2025-06-27T11:32:25.280572023Z time="2025-06-27T11:32:25Z" level=info msg="RUN microdnf --setopt=install_weak_deps=0 --setopt=tsflags=nodocs install -y openssl-devel java-21-openjdk-devel nss_wrapper which && microdnf clean all"
2025-06-27T11:32:25.280577315Z time="2025-06-27T11:32:25Z" level=info msg="Cmd: /bin/sh"
2025-06-27T11:32:25.280578398Z time="2025-06-27T11:32:25Z" level=info msg="Args: [-c microdnf --setopt=install_weak_deps=0 --setopt=tsflags=nodocs install -y openssl-devel java-21-openjdk-devel nss_wrapper which && microdnf clean all]"
...
[WARNING] [io.quarkus.deployment.configuration] Configuration property 'quarkus.package.type' has been deprecated and replaced by: [quarkus.package.jar.enabled, quarkus.package.jar.type, quarkus.native.enabled, quarkus.native.sources-only]
[INFO] [io.quarkus.deployment.QuarkusAugmentor] Quarkus augmentation completed in 1650ms
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 18.011 s
[INFO] Finished at: 2025-07-07T15:41:35Z
[INFO] ------------------------------------------------------------------------
Removing source code
Restoring multiple artifacts
Paketo Buildpack for Executable JAR 6.13.2
https://github.com/paketo-buildpacks/executable-jar
Process types:
executable-jar: java -jar /workspace/source/getting-started/quarkus-run.jar (direct)
task: java -jar /workspace/source/getting-started/quarkus-run.jar (direct)
web: java -jar /workspace/source/getting-started/quarkus-run.jar (direct)
Test the image build
Once the application is built, you can pull and run it locally:
docker | podman pull <REGISTRY/IMAGE NAME>
docker | podman run -it <REGISTRY/IMAGE NAME>
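Note that -it alone does not publish the application's HTTP port. To reach the Quarkus endpoint from your host, a quick sketch (assuming the quickstart's default HTTP port 8080 and its /hello resource; run curl from a second terminal):

podman run --rm -p 8080:8080 <REGISTRY/IMAGE NAME>
curl http://localhost:8080/hello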
Wrap up
In this article, we explored how to build container images using Buildpacks within a CI/CD workflow, leveraging the Tekton Pipeline engine. By defining Kubernetes Custom Resources like Tasks and Pipelines, you can automate the entire process, from fetching your code to pushing the built image to a container registry without the need to use Containerfiles (e.g., Dockerfiles). This approach not only streamlines your application development and deployment but also reinforces security by isolating processes. By combining Tekton and Buildpacks, you have a robust framework for efficient and secure CI/CD, bringing your cloud-native applications to life.