Project Hummingbird container images are standard container images. These images fit into your existing workflow and tools with minimal disruption and act as drop-in replacements for other container images you might be used to, such as those from Docker Hub.
But Project Hummingbird container images are distroless. What does this mean, exactly? Do distroless containers provide benefits beyond standard container images based on popular Linux distributions such as Debian, Alpine, and Red Hat Enterprise Linux? Are they missing anything? As a developer, should you use Project Hummingbird containers for most tasks or only for specific use cases?
This article answers these questions using concrete examples run on Fedora Linux with the podman command in rootless mode. For readability, this article omits most command outputs, including them only to highlight specific results.
You can adapt my examples to any environment that runs Open Container Initiative (OCI)-compliant containers, including Ubuntu Linux, Windows, and macOS using Podman Desktop or Docker Community Edition.
What is Project Hummingbird?
Project Hummingbird provides a collection of minimal, hardened container images with a reduced attack surface. This focus on security combined with a highly automated update workflow aims to minimize Common Vulnerabilities and Exposures (CVE) counts, targeting near-zero vulnerabilities.
Available images include programming language runtimes (Python, Go, Node.js, Rust, PHP), databases (PostgreSQL, MariaDB), web servers (httpd, Caddy, nginx), tools (curl, git), and base runtime images.
Project Hummingbird represents the larger community effort that creates Red Hat Hardened Images. These images are freely available and redistributable just like Red Hat Universal Base Images (UBI). Red Hat Enterprise Linux and Red Hat OpenShift users receive support for Red Hat Hardened Images containers under their standard service-level agreement (SLA).
Running a database server with Project Hummingbird
Let's start by showing a few typical tasks for a developer using containers to illustrate how Project Hummingbird images fit into existing tools and workflows.
Suppose you need a relational database, so you pull the official MariaDB container image and start a container from it. You wouldn't usually expose your database to the world, so I'm not using port forwarding; I'm using a local virtual network instead.
$ podman network create network-hub
$ podman pull docker.io/library/mariadb:11.8
$ podman run --name dbserver-hub -d \
--network network-hub \
-e MARIADB_ROOT_PASSWORD=s3cret@hub \
docker.io/library/mariadb:11.8

You can then start another container to run the database client from the same container image:
$ podman run --name dbclient-hub --rm -it \
--network network-hub \
docker.io/library/mariadb:11.8 \
mariadb -hdbserver-hub -uroot -ps3cret@hub

From now on, you can use database administrator commands, such as CREATE DATABASE, to initialize your test database.
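For example, you can keep your initialization SQL in a local file and feed it to a throwaway client container over stdin. The file name and database name below are illustrative, not part of the MariaDB image:

```shell
# Write an illustrative initialization script (file and database names are hypothetical)
cat > init.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS appdb;
SHOW DATABASES;
EOF

# Feed it to the client container over stdin (assumes the dbserver-hub
# container from the previous example is still running):
#   podman run -i --rm --network network-hub docker.io/library/mariadb:11.8 \
#     mariadb -hdbserver-hub -uroot -ps3cret@hub < init.sql
```

The `-i` flag keeps stdin open so the client reads the script instead of starting an interactive session.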
You can use the corresponding Project Hummingbird image in the same way. I changed the names of the containers, local networks, local ports, and the database password so you can run both environments side by side.
$ podman network create network-bird
$ podman pull registry.access.redhat.com/hi/mariadb:11.8
$ podman run --name dbserver-bird -d \
--network network-bird \
-e MARIADB_ROOT_PASSWORD=s3cret@bird \
registry.access.redhat.com/hi/mariadb:11.8
$ # give a few moments for the database server initialization
$ podman run --name dbclient-bird --rm -it \
--network network-bird \
registry.access.redhat.com/hi/mariadb:11.8 \
mariadb -hdbserver-bird -uroot -ps3cret@bird

Project Hummingbird images might not function exactly like their counterparts on Docker Hub. There might be slight differences in naming and tagging conventions, or in usage due to image hardening.
Running a web server with Project Hummingbird
Next, suppose you need a web server container to run a client-side HTML application. Later in your workflow, you might embed application files and a web server in a single container. For now, configure your web server container to use HTML files from a local volume.
The following example uses the official NGINX image from Docker Hub.
$ mkdir html-hub
$ echo "Mock web app using image from Docker" > html-hub/index.html
$ podman pull docker.io/library/nginx:1.28
$ podman run --name webserver-hub -d \
-p 127.0.0.1:8000:80 --network network-hub \
-v $PWD/html-hub:/usr/share/nginx/html:z \
docker.io/library/nginx:1.28

I set up port forwarding so you can test the application in a local web browser. For simplicity, you can verify that your web server is up and running by using curl:
$ curl 127.0.0.1:8000
Mock web app using image from Docker

In this example, the Project Hummingbird image functions differently than the Docker Hub version. Because Project Hummingbird images do not run root processes inside the container, they cannot listen on port 80. The image exposes port 8080 instead.
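On Linux, this limit comes from the kernel's unprivileged port threshold: processes without the relevant capability cannot bind ports below it. You can inspect the threshold on your host as a quick sanity check (the value, 1024 by default, depends on your host configuration):

```shell
# The lowest port number an unprivileged process may bind (1024 by default)
cat /proc/sys/net/ipv4/ip_unprivileged_port_start
```

The same sysctl governs both IPv4 and IPv6 despite its path.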
$ mkdir html-bird
$ echo "Mock web app using image from Hummingbird" > html-bird/index.html
$ podman pull registry.access.redhat.com/hi/nginx:1.28
$ podman run --name webserver-bird -d \
-p 127.0.0.1:8001:8080 --network network-bird \
-v $PWD/html-bird:/usr/share/nginx/html:z \
registry.access.redhat.com/hi/nginx:1.28
$ curl 127.0.0.1:8001
Mock web app using image from Hummingbird

Image hardening often affects volume file permissions. Developers running rootless containers might already be familiar with these requirements.
Shells, binaries, and layers on minimized images
You cannot inspect the contents of most Project Hummingbird images by using a shell. For example, because a shell is not required to run a web server, Project Hummingbird removes it along with common user and file management tools that you would run from interactive shells and shell scripts.
$ podman exec -it webserver-bird bash -c "ls /usr/bin | wc -l"
Error: crun: executable file `bash` not found in $PATH: No such file or directory: OCI runtime attempted to invoke a command that was not found

You could try variations such as /bin/sh or /usr/bin/bash to the same effect. It is not a matter of file and path names; the image simply does not contain a shell. Consequently, you need a different strategy to explore the file contents of most Project Hummingbird container images.
You can explore the contents of a Project Hummingbird image by using the podman export command. It outputs the file contents of a container in the tar format. The following commands count the number of binaries inside the two web server containers:
$ podman export webserver-hub | tar t | grep /bin/ | wc -l
270
$ podman export webserver-bird | tar t | grep /bin/ | wc -l
31

While it might be interesting to compare metrics between Project Hummingbird and Docker Hub containers, such as the number of commands and shared libraries or even the container image size, be aware that these are not reliable indicators of the attack surface.
Several factors affect image size and file count, including compiler flags, the presence of locale and language files, and optional software features. Using alternative non-GNU shells and utilities does not necessarily improve security and might even reduce it.
You might notice that a Project Hummingbird image typically contains more layers than its Docker Hub equivalent.
$ podman history docker.io/library/nginx:1.28 | wc -l
19
$ podman history registry.access.redhat.com/hi/nginx:1.28 | wc -l
69

It is common advice to minimize the number of layers in a container image, for example by merging RUN instructions in your Containerfiles. Guess what: the number of layers is not a security attribute.
Project Hummingbird uses more layers because it rechunks containers using chunkah. This process reorganizes large image layers into smaller content-addressable layers based on related groups of files, which makes those layers shareable between multiple images.
As a result, when you pull or update a Project Hummingbird image, you download less data than you would with non-chunked container images.
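The savings come from content addressing: a layer is identified by the digest of its content, so two images that contain an identical layer share a single stored and downloaded copy. You can sketch the idea with an ordinary digest tool; the files below are just hypothetical stand-ins for layer blobs:

```shell
# Two stand-in "layer blobs" with identical content (hypothetical files)
printf 'shared runtime files\n' > layer-from-image-a
printf 'shared runtime files\n' > layer-from-image-b

# Identical content produces identical digests, so a registry and the local
# store keep one copy that both images reference
sha256sum layer-from-image-a layer-from-image-b
```

Grouping related files (for example, all files from one package) into their own layer maximizes the chance that two images produce byte-identical layers.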
How does chunkah find these groups of related files? It identifies them through their originating packages.
Distroless, but from packages
Most Project Hummingbird images do not contain package management commands, but they all contain an RPM database:
$ podman export webserver-bird | tar t | grep /bin/ | grep -E 'rpm|dnf'
$ podman export webserver-bird | tar t | grep 'rpmdb'
usr/lib/sysimage/rpm/rpmdb.sqlite

You can extract and list the contents of the package manager database to get information about the packages in a Project Hummingbird image.
$ mkdir temp-rpm
$ podman export webserver-bird | tar -x -C temp-rpm --strip-components=4 usr/lib/sysimage/rpm/rpmdb.sqlite
$ rpm -qa --nosignature --dbpath $PWD/temp-rpm/
gpg-pubkey-c6e7f081cf80e13146676e88829b606631645531-66b6dccf
hummingbird-gpg-keys-20251124-1.8.hum1.noarch
hummingbird-repos-20251124-1.8.hum1.noarch
hummingbird-release-20251124-1.8.hum1.noarch
setup-2.15.0-29.hum1.noarch
filesystem-3.18-56.hum1.x86_64
libgcc-15.2.1-7.hum1.x86_64
...
$ rpm -qa --dbpath $PWD/temp-rpm/ | wc -l
47

Notice that the previous example uses the --nosignature option of the rpm command, which turns off digital signature checks. On most Linux distributions, you cannot import the Red Hat product signing keys to turn on signature validation. The required key, RPM-GPG-KEY-redhat-release (Key 4), requires support for quantum cryptographic algorithms.
If you want to keep RPM signature checks on, you can use a Project Hummingbird build container to run the rpm command:
$ podman run --name rpm -it --rm -v ./temp-rpm:/mnt:z registry.access.redhat.com/hi/core-runtime:latest-builder rpm -qa --dbpath /mnt/ | wc -l
48

This command counts one extra line because of an error message related to the /mnt/.rpm.lock file. This is fine because we do not need RPM transactions; we are not installing or updating packages.
The Linux community has decades of experience using package managers. These tools provide a runtime-neutral abstraction to the process of installing and configuring software. Building containers without taking advantage of Linux package managers is counterproductive. Additionally, tools like security scanners rely on package manager metadata.
Project Hummingbird images are built from Hummingbird's own package repositories, which follow upstream projects very closely. At the end of the build process, package manager binaries are stripped out of the image.
Project Hummingbird uses a faster package testing and validation process than general-purpose Linux distributions. Project Hummingbird packages must work well together only in the context of each individual container that includes them. Different Project Hummingbird images can provide different versions of the same dependency, as long as all those versions are actively maintained upstream.
In contrast, a general-purpose Linux distribution must verify that all packages work together with the kernel, system services, and other components required to run a bare-metal computer or a virtual machine instance. This is one reason general-purpose distributions often struggle to support multiple versions of the same software or library.
Builder image variants and native package managers
Some Project Hummingbird container images offer a builder variant for use with multi-stage Containerfiles. A future article will focus on the recommended practices for building application images using Project Hummingbird. For now, it is sufficient to know that:
- Standard Project Hummingbird images provide the minimal dependencies required for execution and exclude anything not strictly necessary at runtime.
- A builder variant retains a shell, the RPM and DNF package managers, user and filesystem management commands, and other utilities commonly required to build and install applications. That is, it retains binaries and files that were stripped out of its standard image.
If you're building a simple application using an interpreted language runtime and it requires no additional native dependencies, you do not need a builder image variant or a multi-stage build.
On the other hand, if your application requires additional dependencies pre-packaged as RPMs in a DNF repository, then you can use a builder image to add those dependencies to your application image.
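A minimal multi-stage sketch of that pattern might look like the following. The RPM name, application files, and start command are hypothetical; only the builder stage has dnf and a shell:

```dockerfile
# Builder stage: has a shell, dnf, and build tools (RPM name is hypothetical)
FROM registry.access.redhat.com/hi/nodejs:24-builder AS build
RUN dnf -y install some-native-library && dnf clean all
WORKDIR /app
COPY . .
RUN npm ci --omit=dev

# Runtime stage: minimal image, receives only the built application
FROM registry.access.redhat.com/hi/nodejs:24
COPY --from=build /app /app
WORKDIR /app
CMD ["node", "server.js"]
```

If the RPM installed in the builder provides shared libraries that your application loads at runtime, you would also copy those libraries into the final stage.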
You can verify that a builder image contains the settings for accessing Project Hummingbird package repositories. The following example uses the Node.js builder and runtime images:
$ podman create --name node-bird registry.access.redhat.com/hi/nodejs:24
$ podman create --name node-bird-builder registry.access.redhat.com/hi/nodejs:24-builder
$ podman run --name test --rm registry.access.redhat.com/hi/nodejs:24-builder dnf repolist
repo id repo name
public-hummingbird-x86_64-rpms public-hummingbird-x86_64-rpms
$ podman run --name test --rm registry.access.redhat.com/hi/nodejs:24-builder dnf repoinfo public-hummingbird-x86_64-rpms
...
URLs :
Base URL : https://koji-s3-cache.hummingbird-project.io/packages.redhat.com/api/pulp-content/public-hummingbird/x86_64/
...

It might surprise you that standard, non-builder variants of the Project Hummingbird Node.js images include the npm package manager, even though they exclude RPM and DNF:
$ podman export node-bird | tar t | grep bin/npm
usr/bin/npm
...

You would find similar characteristics in the Project Hummingbird images for other interpreted language runtimes, such as Python and PHP.
This is an example of a case where Project Hummingbird's design goal of minimizing the attack surface conflicts with another of its design goals: compatibility with official Docker Hub images. Keeping native package managers like npm and pip enlarges the attack surface but preserves that compatibility.
Organizations requiring stricter container image hardening use multi-stage builds to remove native package managers from application images.
The beginning of a secure software supply chain
Project Hummingbird is only one part of your security journey. By itself, it does not protect your applications from risks like:
- Vulnerabilities in your code, such as buffer overflows and script injection
- Application dependencies affected by known CVEs
- Container images tampered with by malicious actors who publish altered copies in public container registries
- Supply chain attacks where dependencies are compromised at build time
Security-conscious organizations and their developers use a software development lifecycle similar to Red Hat's process for building Project Hummingbird container images: dependencies have cryptographic provenance and attestation, final application containers are digitally signed, and a digitally signed software bill of materials (SBOM) describes their components.
You can download and view the SBOM for any Project Hummingbird image. These files are not really designed for human consumption; they serve as input to other tools and provide detailed information for auditing processes. The easiest way to get the SBOM for a Project Hummingbird container image is by using the cosign tool, which you can download and install from its GitHub releases page.
The following example lists the artifacts included in the NGINX web server image using its SBOM.
$ cosign download sbom --platform linux/amd64 registry.access.redhat.com/hi/nginx:1.28 > webserver-bird.sbom
$ cat webserver-bird.sbom | jq -r '.packages[].name' | sort | uniq | wc -l
50

An SBOM is expected to contain duplicated names because it describes the complete dependency graph of all included software artifacts. The number of artifacts in the SBOM is slightly higher than the number of packages in the RPM database. This discrepancy arises because a single RPM package may provide multiple software artifacts, and container image builds might include software artifacts that were not installed from RPM packages.
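To see why names repeat, you can run the same pipeline without deduplication against a trivial, hand-made file. This file is purely illustrative and only mimics the `.packages[].name` shape used above; it is not a real SBOM:

```shell
# A hand-made, minimal SBOM-shaped file (illustrative only, not a real SBOM)
cat > demo.sbom <<'EOF'
{"packages":[{"name":"zlib"},{"name":"zlib"},{"name":"nginx"}]}
EOF

# Count occurrences per name; a count above 1 means the artifact is
# referenced by more than one node of the dependency graph
jq -r '.packages[].name' demo.sbom | sort | uniq -c | sort -rn
```

Running the same `uniq -c` pipeline against the real webserver-bird.sbom file shows which artifacts are shared by the most components.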
In the end, SBOMs provide detailed, standardized descriptions of software components to enable more reliable security scans and audits.
Next steps with Project Hummingbird
Project Hummingbird images support most developer container tasks. These images provide base images for building your applications and include common components like databases and utilities. You can also use Project Hummingbird images for CI/CD pipelines and containerized "toolbox" environments. While hardening requires minor changes to Containerfiles and workflows, it's a relatively small price to pay for peace of mind and better integration with secure development processes.
A future article will focus on using Project Hummingbird images, including builder variants, and trusted libraries from the Calunga project to improve application security.
You can review the Project Hummingbird community documentation for instructions and tools used to create and verify its SBOMs. You can also explore the Konflux project. It provides the infrastructure Project Hummingbird uses to operationalize its software development lifecycle and offers capabilities like provenance and attestation to help you avoid tampering risks and improve auditability.