
This is a transcript of a session I gave at EMEA Red Hat Tech Exchange 2017, a gathering of all Red Hat solution architects and consultants across EMEA. It is about considerations and good practices when creating images that will run on OpenShift. This third part focuses on how you can make your images easier to consume by application developers or release managers.


Image versioning

When you specify an image for starting containers or creating child images, you need to provide the version you want to use. If you don't, the image tagged latest is used.

Let's have a look at the way Red Hat creates a version hierarchy. This is a good example of a strategy that you could reuse for your own images.

A very important aspect is maintaining backward compatibility within a tag for downstream consumers. The publication of a new version of the image should not break the child images.

Red Hat image versions are aligned with the version of the product contained in the image. Here is, for instance, the way rhel7/rhel is tagged:

7.4-81 / 7.4 / latest
7.3-97 / 7.3

Note that there isn't a 7 tag: compatibility is only guaranteed between minor releases, not between major releases. See the bottom of this article. The tags 7.4-81, 7.4, and latest all reference the same image. The image consumer is free to point at any of these tags, with the following consequences:

  • latest: The consumer gets a different image every time a new version is pushed.
  • 7.4: The consumer gets a different image every time a new version of the minor release 7.4 is pushed. Child images get patches automatically.
  • 7.4-81: The consumer does not get updates.

For anything that you aim to validate and run in production, you should target a stable tag, not latest. The recommendation is to point at the minor release, 7.4 in the example, so that your image is automatically updated with patches. If you go for a specific release like 7.4-81, you will need a suitable workflow in place for patching your own images.
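In a child image's Dockerfile, this comes down to the tag you reference in the FROM instruction. A sketch (the registry path is the public Red Hat registry as it was at the time of writing):

```dockerfile
# Tracks the 7.4 minor release: rebuilding this image picks up patches
FROM registry.access.redhat.com/rhel7/rhel:7.4

# Alternatives:
#   FROM registry.access.redhat.com/rhel7/rhel:latest   # moves across minor releases
#   FROM registry.access.redhat.com/rhel7/rhel:7.4-81   # frozen, patch it yourself
```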

You may use the latest tag inside the project where the image is developed, so that the latest changes are picked up automatically. More rarely, during your development phase, you may want to track the latest version of an image you consume as soon as it is made available.
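On OpenShift, you can make an image stream tag track an external tag and re-import it periodically, so that development projects pick up new versions automatically. A sketch with oc (the project and stream names are examples):

```shell
# Import the 7.4 tag into the project's image stream and re-check
# the source registry periodically for updates
oc tag registry.access.redhat.com/rhel7/rhel:7.4 mydev/rhel:7.4 --scheduled
```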


Documentation

The next aspect of making your image consumable is obviously documentation. A user guide is certainly useful, but there are also things you can do at the image or OpenShift level.

Providing a template with a quick start shows how users can run applications based on your image.
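Such a quick start can be as small as a template wrapping a parameterized deployment configuration. A minimal sketch; the names, image reference, and port are placeholders:

```yaml
apiVersion: v1
kind: Template
metadata:
  name: myimage-quickstart
  annotations:
    description: "Minimal application based on the myimage image"
parameters:
- name: APP_NAME
  value: myapp
objects:
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ${APP_NAME}
  spec:
    replicas: 1
    selector:
      app: ${APP_NAME}
    template:
      metadata:
        labels:
          app: ${APP_NAME}
      spec:
        containers:
        - name: ${APP_NAME}
          image: myregistry/myimage:1.0
          ports:
          - containerPort: 8080
```

Users can then instantiate it with `oc new-app -f quickstart.yaml -p APP_NAME=demo` and have a running example to start from.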

Using the image metadata, you can have the most important points documented with the image itself:

  • You can use labels to add a description, to provide contact information of the maintainer, the address of the authoritative source, build, release and component information, etc.
  • Exposing important ports in the Dockerfile also gives important information on how the image should be run and how your application can be connected to.
  • The same goes for exposing volumes. The image consumer is then aware of where data that may need to be persisted gets written inside the container.
  • Setting environment variables like PATH and JAVA_HOME, along with sound defaults for the configuration of your image, also helps with an easy start.
  • Finally, you must specify with CMD or ENTRYPOINT how your image process starts.
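Taken together, these points can appear in a Dockerfile roughly as follows (the labels, paths, and start script are illustrative, not taken from a real Red Hat image):

```dockerfile
FROM registry.access.redhat.com/rhel7/rhel:7.4

# Document maintainer, purpose, and release information with the image
LABEL maintainer="team@example.com" \
      description="Example service based on RHEL 7" \
      version="1.0"

# Document the port the application listens on
EXPOSE 8080

# Document where data that may need persisting is written
VOLUME /var/lib/myapp

# Sound defaults the consumer can override at runtime
ENV JAVA_HOME=/usr/lib/jvm/java-1.8.0 \
    APP_CONFIG_DIR=/opt/myapp/config

# How the image process starts
CMD ["/usr/local/bin/start-myapp.sh"]
```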

Here is an extract of the metadata available with the RHEL7 image. The complete set is available here.


    com.redhat.component rhel-server-docker
    distribution-scope public
    vendor Red Hat, Inc.
    description The Red Hat Enterprise Linux Base image...
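Labels like these can be read back directly from the image, for example with the Docker CLI (assuming the image has been pulled locally):

```shell
docker inspect --format '{{json .Config.Labels}}' \
    registry.access.redhat.com/rhel7/rhel:7.4
```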

Extension points

In part 2 we had a first glimpse at extension points. It is important to enable the image consumer to cover scenarios and configurations that either cannot be foreseen by the image creator or whose number of combinations would make them unmanageable. Extension points aim to avoid the need to rewrite layers you have created as part of your image.

Injecting environment information at runtime

This can be done in two ways: by setting environment variables and by mounting files into the container file system at startup.

Environment variables can simply be added to the deployment configuration or provided through a ConfigMap. You may, for instance, use this to specify the address of a service that is called by your application.
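In the container spec of the deployment configuration, this looks roughly as follows; the ConfigMap name, key, and variable names are examples:

```yaml
containers:
- name: myapp
  image: myregistry/myapp:1.0
  env:
  # Plain value set directly in the deployment configuration
  - name: LOG_LEVEL
    value: "info"
  # Value pulled from a ConfigMap, e.g. the address of a backend service
  - name: BACKEND_SERVICE_URL
    valueFrom:
      configMapKeyRef:
        name: myapp-config
        key: backend.url
```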

Files can be mounted into the container from a ConfigMap to provide, for instance, a logging configuration, or from a Secret to provide certificates or other credentials required by the application.
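A sketch of the corresponding pod spec fragment; the names and mount paths are examples:

```yaml
containers:
- name: myapp
  image: myregistry/myapp:1.0
  volumeMounts:
  - name: logging-config
    mountPath: /opt/myapp/config   # e.g. a logback.xml from the ConfigMap
  - name: app-certs
    mountPath: /opt/myapp/certs
    readOnly: true
volumes:
- name: logging-config
  configMap:
    name: myapp-logging
- name: app-certs
  secret:
    secretName: myapp-tls
```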

Configuration at build time

If you have created a builder image, you may also want to let the user inject build configuration. You can, for instance, allow specifying a Maven repository with an environment variable. This may, however, not be sufficient, and your builder image should allow the user to inject a complete settings.xml with the sources.
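With a source (S2I) build, such variables can be set in the strategy section of the build configuration. A sketch; MAVEN_MIRROR_URL is the variable understood by Red Hat's Java S2I images, while the other names are examples:

```yaml
strategy:
  type: Source
  sourceStrategy:
    from:
      kind: ImageStreamTag
      name: my-builder:1.0
    env:
    # Point the build at an internal Maven repository
    - name: MAVEN_MIRROR_URL
      value: http://nexus.example.com/repository/maven-public/
```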

You may have defined the compilation of the application and the configuration of the image in an assemble script. A level of flexibility in terms of libraries and drivers provisioned into the final image can be provided with image sourcing (see part 2 of this blog), but it is still a good idea to allow the image user to adapt the build process by extending or replacing some of its logic. This can be done, for instance, by making scripts that are sourced or called in the assemble script replaceable by scripts provided by the user with the application sources.
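With S2I, for instance, the user can ship a custom assemble script in .s2i/bin with the application sources that wraps the one provided by the builder image. A sketch; the /usr/libexec/s2i location is the convention used by Red Hat builder images, and the hook script name is an example:

```shell
#!/bin/bash
# .s2i/bin/assemble -- shipped with the application sources

# Run a project-specific pre-build hook if the sources provide one
if [ -x /tmp/src/.s2i/bin/pre-assemble-hook ]; then
  /tmp/src/.s2i/bin/pre-assemble-hook
fi

# Delegate to the assemble script of the builder image
exec /usr/libexec/s2i/assemble
```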

External builds

In part 2 we also had a glimpse at allowing the user to build the application externally and to only have the container image built on OpenShift. The rationale behind that is that companies may have invested in automated and integrated CI/CD pipelines and related infrastructure before the introduction of container technologies. External builds allow them to keep using this infrastructure as they move to a container-as-a-service platform.

There are two sensible approaches for that. The first consists of streaming the application artifacts into the builder image from the CI tool (Jenkins, for instance) through a binary build.
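From a CI tool, this typically comes down to two oc invocations; the names and artifact path are examples:

```shell
# One-time setup: a binary build based on the builder image
oc new-build --name=myapp --image-stream=my-builder:1.0 --binary

# On each pipeline run: stream the pre-built artifact into the build
oc start-build myapp --from-file=target/myapp.jar --follow
```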

The second approach is to download the artifacts from a company repository. This can be done with curl or wget, but for Java applications, where you may already have Maven in the builder image, the Maven dependency plugin is a convenient way.
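Inside the assemble script, fetching a pre-built artifact could look roughly like this; the coordinates and output directory are examples, and dependency:copy with the artifact property requires maven-dependency-plugin 2.8 or later:

```shell
# Resolve the artifact from the configured repository and place it
# where the image expects deployments
mvn dependency:copy \
    -Dartifact=com.example:myapp:1.0.0:jar \
    -DoutputDirectory=/deployments
```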

Last updated: April 3, 2023