Building container images from a standard Dockerfile typically requires root access and permissions. This can be a challenge when working with public or shared clusters: cluster administrators often don't grant permission to run this type of workload, because it might compromise the security of the entire cluster.
In these situations, many developers use a build tool such as kaniko. Kaniko can build your images without requiring root access, which makes it a practical option for building containers and images in any kind of environment; for example, standard Kubernetes clusters, Google Kubernetes Engine, and public or shared clusters. Kaniko can also automatically push your images to a specified image registry.
This article shows you how to use kaniko to build a container image in a Red Hat OpenShift cluster and push the image to a registry.
Prerequisites
To perform a kaniko build on a Red Hat OpenShift cluster, ensure that the following prerequisites are in place:
- Admin access to an active OpenShift cluster.
- Access to a source code repository, either local or hosted somewhere such as GitHub.
- A valid Dockerfile for your target source directory. The Dockerfile can exist anywhere, as long as a fully qualified URL is available.
Note: All the oc commands in this article also work with kubectl if you are working with a Kubernetes cluster rather than an OpenShift cluster.
Setup and configuration for kaniko on OpenShift
Once the prerequisites are in place, you can perform a kaniko build on an OpenShift cluster and push the resulting image to a registry.
Log in to the OpenShift cluster
To start, log in to your OpenShift cluster as follows:
$ oc login --token=token --server=server-url
Create a new project
Create your own project using:
$ oc new-project project-name
Create a secret using the credentials for your registry
To push your image to an external registry (such as Docker Hub), create a secret named regcred using the following oc command:
$ oc create secret docker-registry regcred \
--docker-server=your-registry-server \
--docker-username=your-name \
--docker-password=your-pword \
--docker-email=your-email
Replace the placeholder values in this command with the following information:
- your-registry-server: the fully qualified domain name (FQDN) of your private Docker registry (https://index.docker.io/v1/ for Docker Hub)
- your-name: your Docker username
- your-pword: your Docker password
- your-email: your Docker email address
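For example, with Docker Hub as the target (the username, password, and email below are hypothetical placeholders):
$ oc create secret docker-registry regcred \
--docker-server=https://index.docker.io/v1/ \
--docker-username=janedoe \
--docker-password=s3cr3t-p4ssw0rd \
--docker-email=janedoe@example.com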
Note: To push your image to the cluster's internal registry instead, a pod on the cluster can authenticate using a service account. For example, you can acquire login credentials for a service account, such as builder, through the cluster's console: from the list of available secrets in your namespace, pick a builder-dockercfg secret and expose its base64-encoded credentials using the Reveal Values button in the OpenShift console.
Locate the URL for your target image registry and copy the authorization token. Use these values to prepare a new config.json file, replacing image-registry-url and auth-token below. For example:
{
  "auths": {
    "image-registry-url": {
      "auth": "auth-token"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/19.03.8 (darwin)"
  },
  "experimental": "disabled"
}
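For a registry that uses basic authentication, such as Docker Hub, the auth value is typically the base64 encoding of username:password. For example, using the same hypothetical credentials as above:
$ echo -n 'janedoe:s3cr3t-p4ssw0rd' | base64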
Once the config.json file is ready, create a secret named regcred as follows:
$ oc create secret generic regcred \
--from-file=.dockerconfigjson=path/to/config.json \
--type=kubernetes.io/dockerconfigjson
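To confirm that the secret was created correctly, you can inspect it:
$ oc get secret regcred --output=yaml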
Clone a source code repository
In your local file system, git clone your source code repository. For example, in an empty directory, enter the following:
$ git clone git://github.com/openshift/golang-ex.git
Next, download the corresponding Dockerfile from its location and place it at the root of this directory, if it doesn't already exist there.
If a specific subdirectory within your cloned repo hosts the code used to build an image (as opposed to the entire cloned directory), place the Dockerfile at the root of that subdirectory. Together the directory containing your source code and Dockerfile now represent your build context.
Note: If you use the Dockerfile from the repository mentioned above, drop the line that says USER nobody to avoid permission issues.
Make sure that the paths under /kaniko/build-context given in the --dockerfile and --context parameters of the openshift-pod.yaml file (created below) accurately represent the directory structure inside kaniko-build-context.tar.gz. The paths must match exactly.
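For example, if the archive stores its files under a top-level path/to/folder prefix (which is what the tar command in the next step produces), the parameters would be:
--dockerfile=/kaniko/build-context/path/to/folder/Dockerfile
--context=/kaniko/build-context/path/to/folder
You can list the archive's contents with tar -tzf kaniko-build-context.tar.gz to confirm the layout.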
Compress the build context into a tar.gz file
Once the build context is ready, compress it into a tar.gz file as follows:
$ tar -czvf kaniko-build-context.tar.gz path/to/folder
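Note that this command stores the path/to/folder prefix inside the archive, which is why the example parameters shown earlier include it. If you prefer the Dockerfile and sources at the top level of the archive, you can instead run:
$ tar -czvf kaniko-build-context.tar.gz -C path/to/folder .
In that case, drop the path/to/folder prefix from the --dockerfile and --context parameters.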
Create an openshift-pod.yaml file with two containers
Create an openshift-pod.yaml file that defines two containers: a kaniko-init container that waits for you to copy in the build context, and the kaniko executor container that builds and pushes the image.
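The exact contents depend on your build; the following is a minimal sketch, assuming the ubuntu image for the init container and the upstream gcr.io/kaniko-project/executor image. The shared-volume paths and the /tmp/complete trigger file match the commands used later in this article:
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  initContainers:
  - name: kaniko-init
    image: ubuntu
    # Keep the init container alive until the build context has been
    # copied in and the /tmp/complete trigger file is created.
    args:
    - sh
    - -c
    - while true; do sleep 1; if [ -f /tmp/complete ]; then break; fi; done
    volumeMounts:
    - name: build-context
      mountPath: /kaniko/build-context
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    # Build from the shared build context and push to the registry.
    # Replace the destination with one of the formats shown below.
    args:
    - --dockerfile=/kaniko/build-context/path/to/folder/Dockerfile
    - --context=/kaniko/build-context/path/to/folder
    - --destination=registry/image-name:image-tag
    volumeMounts:
    - name: build-context
      mountPath: /kaniko/build-context
    # Mount the regcred credentials where kaniko looks for config.json.
    - name: kaniko-secret
      mountPath: /kaniko/.docker
  restartPolicy: Never
  volumes:
  - name: build-context
    emptyDir: {}
  - name: kaniko-secret
    secret:
      secretName: regcred
      items:
      - key: .dockerconfigjson
        path: config.json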
If you are pushing to Docker Hub, you could set the destination as follows:
--destination=docker.io/your-dockerhub-username/image-name:image-tag
If you are pushing to the internal registry, set the destination as shown here:
--destination=image-registry.openshift-image-registry.svc:5000/your-project-name/image-name:image-tag
Apply the pod to the cluster
Use the following command to apply the pod to your cluster:
$ oc apply -f path/to/openshift-pod.yaml
The command should return:
pod/kaniko created
Check the status of the cluster
To check the cluster's status, run the following command:
$ oc get pods
Here is an example of what it displays:
NAME     READY   STATUS     RESTARTS   AGE
kaniko   0/1     Init:0/1   0          50s
Copy tar.gz from the local file system to the kaniko-init container
Copy the tar.gz file that you created earlier from your local file system to the kaniko-init container currently running in the pod:
$ oc cp path/to/kaniko-build-context.tar.gz kaniko:/tmp/kaniko-build-context.tar.gz -c kaniko-init
Extract the copied tar.gz file into the shared volume
From inside your kaniko-init container, extract the copied tar.gz file into the mounted path that points to the shared volume inside the kaniko pod. This allows the build context to be accessed by other containers with access to this shared volume.
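For example, assuming the locations used in this article (the archive copied to /tmp and the shared volume mounted at /kaniko/build-context), you can run the extraction from your local machine:
$ oc exec kaniko -c kaniko-init -- tar -xzvf /tmp/kaniko-build-context.tar.gz -C /kaniko/build-context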
Check your work
Once the build finishes, you should see the pushed image reflected in your target registry. In the meantime, you can take a closer look inside the container at any time. (I found this to be quite useful while attempting to debug the process.) To begin, start a bash session inside your kaniko-init container and take a look:
$ oc exec kaniko -c kaniko-init -it -- /bin/bash
Once the extraction process is complete, shut down the init container so that the kaniko container can take over. To do this, create the file that the init container is waiting for:
$ oc exec kaniko -c kaniko-init -- touch /tmp/complete
When you run oc get pods again, the output shows whether everything is working well:
NAME     READY   STATUS    RESTARTS   AGE
kaniko   1/1     Running   0          6m57s
Next, run the following oc command to get a more detailed look at what's going on inside the kaniko container:
$ oc describe pod kaniko
Alternatively, you can look at the pod logs within the OpenShift console.
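You can also follow the build logs from the command line:
$ oc logs -f kaniko -c kaniko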
After the pod has reached a completed state, if you pushed the image to an external registry, you should be able to log in to your registry and find the newly pushed image there. If you pushed to the internal registry, you should be able to navigate to Builds > ImageStreams (within the OpenShift console's Administrator view) to find the newly pushed image there.
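If you used the internal registry, you can also confirm the push from the command line by listing the image streams in your project:
$ oc get imagestreams -n your-project-name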
You can delete the pod if needed using oc delete pod kaniko.