Extending Container Images with Custom Configuration Using Source-to-Image

Container images usually ship with pre-defined tools or services and offer minimal or limited options for further configuration. This led us to think about how to provide images that contain reasonable default settings yet are, at the same time, easy to extend. And, to make it more fun, that should be achievable both on a single Linux host and in an orchestrated OpenShift environment.

Source-to-image (S2I) was introduced three years ago to let developers build containerized applications simply by providing source code as input. So why couldn’t we use it to take configuration files as input instead? We can, of course!

Creating an Extensible Image

Creating S2I builder images was already described in an article by Maciej Szulik, and creating images that are extensible and flexible enough to be adjusted with custom configuration is not much different. So let’s focus on the bits that are essential for making an image configurable.

Required Scripts

Every builder image must provide two scripts: assemble and run, both located in the s2i/bin/ directory.
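For illustration, the expected layout can be sketched like this (a hypothetical builder repository created in a temporary directory; paths are illustrative):

```shell
#!/bin/bash
# Sketch: the layout s2i expects in a builder image repository.
# Only assemble and run are mandatory.
set -euo pipefail
BUILDER_DIR=$(mktemp -d)
mkdir -p "$BUILDER_DIR/s2i/bin"
# Mandatory scripts: assemble (builds the app image) and run (starts the app).
touch "$BUILDER_DIR/s2i/bin/assemble" "$BUILDER_DIR/s2i/bin/run"
chmod +x "$BUILDER_DIR"/s2i/bin/*
ls "$BUILDER_DIR/s2i/bin"
```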


The assemble script defines how the application image is assembled.

Let’s look at the official Software Collections nginx S2I builder image to see its default behavior. When you open the assemble script (snippet below), you see that by default the nginx builder image looks in the nginx-cfg/ and nginx-default-cfg/ directories within the provided source code where it expects to find your configuration files used for creating a customized application image.

if [ -d ./nginx-cfg ]; then
  echo "---> Copying nginx configuration files..."
  if [ "$(ls -A ./nginx-cfg/*.conf)" ]; then
    cp -v ./nginx-cfg/*.conf "${NGINX_CONFIGURATION_PATH}"
    rm -rf ./nginx-cfg
  fi
fi

if [ -d ./nginx-default-cfg ]; then
  echo "---> Copying nginx default server configuration files..."
  if [ "$(ls -A ./nginx-default-cfg/*.conf)" ]; then
    cp -v ./nginx-default-cfg/*.conf "${NGINX_DEFAULT_CONF_PATH}"
    rm -rf ./nginx-default-cfg
  fi
fi
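The copy-and-clean-up pattern above can be tried standalone. Here is a sketch that mimics it with temporary directories (the directory name follows the snippet; the target directory stands in for ${NGINX_CONFIGURATION_PATH}):

```shell
#!/bin/bash
# Standalone demo of the assemble copy pattern: copy *.conf files from
# a config directory into a target path, then remove the source.
set -euo pipefail
WORKDIR=$(mktemp -d)
TARGET=$(mktemp -d)   # stands in for ${NGINX_CONFIGURATION_PATH}
cd "$WORKDIR"
mkdir nginx-cfg
echo 'server { listen 8080; }' > nginx-cfg/app.conf

if [ -d ./nginx-cfg ]; then
  echo "---> Copying nginx configuration files..."
  if [ "$(ls -A ./nginx-cfg/*.conf 2>/dev/null)" ]; then
    cp -v ./nginx-cfg/*.conf "$TARGET"
    rm -rf ./nginx-cfg
  fi
fi
```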


The run script is responsible for running the application container. In the nginx case, once the application container is run, the nginx server is started in the foreground.

exec /usr/sbin/nginx -g "daemon off;"


To tell s2i where it should expect the scripts, you need to define a label in the Dockerfile:

LABEL io.openshift.s2i.scripts-url="image:///usr/libexec/s2i"
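For context, a minimal builder Dockerfile could look like the following sketch. The base image and the usage CMD are illustrative assumptions, not taken from the official nginx builder:

```dockerfile
FROM registry.access.redhat.com/ubi8/ubi
# Tell s2i where the scripts live inside the image.
LABEL io.openshift.s2i.scripts-url="image:///usr/libexec/s2i"
# Copy the assemble/run (and optionally usage) scripts into place.
COPY ./s2i/bin/ /usr/libexec/s2i
# Run as a non-root user, as OpenShift expects.
USER 1001
# Print usage instructions when the image is run directly.
CMD ["/usr/libexec/s2i/usage"]
```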

Alternatively, you can specify custom assemble and run scripts by placing them alongside your source/config files; these scripts then override the ones baked into the builder image.
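For example, a custom assemble script shipped in the .s2i/bin/ directory of your source repository could wrap the default one. The /usr/libexec/s2i/assemble path assumes the scripts-url label shown above, and the surrounding setup is just a sketch:

```shell
#!/bin/bash
# Sketch: ship a custom assemble script with your source/config files.
# s2i picks up scripts from .s2i/bin/ and they override the builder's own.
set -euo pipefail
APP_DIR=$(mktemp -d)   # stands in for your source repository
mkdir -p "$APP_DIR/.s2i/bin"
cat > "$APP_DIR/.s2i/bin/assemble" <<'EOF'
#!/bin/bash
echo "---> Custom pre-assemble steps..."
# Delegate to the script baked into the builder image.
/usr/libexec/s2i/assemble
echo "---> Custom post-assemble steps..."
EOF
chmod +x "$APP_DIR/.s2i/bin/assemble"
# Syntax-check the script we just wrote.
bash -n "$APP_DIR/.s2i/bin/assemble" && echo "custom assemble: syntax OK"
```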

Optional Scripts

Since S2I provides quite a rich set of capabilities, you should always provide documentation on how users are expected to use your image. A test configuration (or application) might come in handy as well.


usage

This script outputs instructions on how to use the image when the container is run.
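A usage script can be as simple as printing instructions. This sketch writes and runs a hypothetical one; the wording is illustrative, not the official script:

```shell
#!/bin/bash
# Sketch of a usage script; in a real builder image it would live in
# s2i/bin/usage and be set as the image's default command.
set -euo pipefail
USAGE_SCRIPT=$(mktemp)
cat > "$USAGE_SCRIPT" <<'EOF'
#!/bin/bash
cat <<'USAGE'
This is an S2I builder image.
To use it, place your configuration files in your source repository
and run:
  s2i build <source> <builder-image> <application-image>
USAGE
EOF
chmod +x "$USAGE_SCRIPT"
"$USAGE_SCRIPT"
```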

test/run and test/test-app

These scripts test the builder image: test/run verifies that a build of the builder image works correctly, using the sample application source in test/test-app.

To make the creation of these images easier, you can take advantage of the S2I container image template.

Extending an Image with Custom Configuration

Now let’s have a look at how you can use such an image in the real world.

Again, we’re going to take the above nginx image to demonstrate how to adjust it to build a containerized application with custom configuration.

One of the advantages of using source-to-image for configuration is that it can be used on any standalone Linux platform as well as in an orchestrated OpenShift environment.

On Red Hat Enterprise Linux, it is as easy as running the following command:

$ s2i build --context-dir=1.12/test/test-app/ https://github.com/sclorg/nginx-container.git registry.access.redhat.com/rhscl/nginx-112-rhel7 nginx-sample-app

The s2i build command takes the configuration files in the test/test-app/ directory and injects them into the output nginx-sample-app image.

Note that by default, s2i takes a repository as input and looks for files in its root directory. In this case, the configuration files are in a subdirectory, hence the --context-dir option.

$ docker run --rm -p 8080:8080 nginx-sample-app

Running the container will then show a page confirming that your nginx server is working.

And similarly in OpenShift:

$ oc new-app registry.access.redhat.com/rhscl/nginx-112-rhel7~https://github.com/sclorg/nginx-container.git --context-dir=1.12/test/test-app/ --name nginx-test-app

The oc new-app command creates and deploys a new image nginx-test-app modified with the configuration provided in the test/test-app/ directory.

After creating a route, you should see the same message as above.
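Creating the route can be done with oc expose; the service name nginx-test-app comes from the oc new-app step above (these commands require a running OpenShift cluster):

```shell
$ oc expose service/nginx-test-app
$ oc get route nginx-test-app
```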

Advantages of Using an S2I Builder Image for Extension

To sum it up, there are a number of reasons to leverage S2I for your project.

  • Flexibility - You can customize a service to fit your needs by providing a configuration file that overrides the default values used in the container. And it doesn’t end there: do you want to install an additional plugin that is not included in your database image by default, or install and run arbitrary commands? S2I-enabled images allow for this.
  • Any platform - You can use S2I for building standalone containers on several Linux platforms that provide the s2i RPM package, including Red Hat Enterprise Linux, CentOS, and Fedora or take advantage of the s2i binary. The S2I build strategy is one of the integrated build strategies in OpenShift, so you can easily leverage it for building containers deployed in an orchestrated environment as well.
  • Separated images and service configuration - Although having a clear distinction between images and service configuration allows you to perform adjustments that are more complex, the build reproducibility remains preserved at the same time.

Pull and Run a Flexible Image Now

Several images are now available as S2I builders in the Red Hat Container Catalog and can be easily extended as demonstrated above.

More images will appear in the Catalog soon. In the meantime, you can try out their upstream counterparts.

The source-to-image project contains extensive documentation with examples, so head over to the project’s GitHub page if you’d like to learn more.


Join the Red Hat Developer Program (it’s free) and get access to related cheat sheets, books, and product downloads.

Last updated: November 1, 2023