This article provides tips and best practices for creating secure, highly maintainable Dockerfiles. Like code, Dockerfiles change over time, so they should be written in a way that makes them easy to update in the future. It is also important that the images they create are secure and do not contain unnecessary vulnerabilities that increase your application's attack surface. The images produced should be as small as possible, because they must be stored in a remote registry and transported over the network; they must not be bloated. Finally, the Dockerfile, like any well-written code, should be easy to understand and use.
10 Tips and best practices for Dockerfiles
The following list describes tips and best practices for creating secure Dockerfiles that are highly maintainable.
1. Use the current release base upstream image
Always use the latest release of the upstream base image to get the most recent security fixes. Red Hat recommends:
- Use the latest release of a base image. This release should contain the latest security patches available when the base image is built. When a new release of the base image is available, rebuild the application image to incorporate the base image's latest release because that release contains the latest fixes.
- Conduct vulnerability scanning. Scan a base or application image to confirm that it doesn't contain any known security vulnerabilities.
2. Use a specific image tag or version
Use a specific tag or version for your image, not "latest". This gives your image traceability. When troubleshooting the running container, the exact image will be obvious.
- Do this (pin a specific tag; the image name and tag here are illustrative):
  FROM registry.access.redhat.com/ubi8/ubi:8.6
- Don't do this:
  FROM registry.access.redhat.com/ubi8/ubi:latest
3. Run images as USER
For security purposes, always ensure that your images run as non-root by adding USER to your Dockerfile. Additionally, set ownership and permissions on the application's files and directories for that user. Because the Docker daemon runs as root, Docker images run as root by default. This means that if a process in the container goes rogue or gets hijacked and accesses the host, it will run with root access, which is certainly not secure.
However, Podman is daemonless and rootless by design and, therefore, more secure.
The following is an example.
- Skipped configurations are indicated by: ...
...
RUN chown -R 1001:0 /some/directory && \
    chmod -R g=u /some/directory
USER 1001
...
4. Choose base images without the full OS
Always choose the smallest base images that do not contain the complete or full-blown OS with system utilities installed. You can install the specific tools and utilities needed for your application in the Dockerfile build. This will reduce possible vulnerabilities and the attack surface of your image.
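As an illustration, a minimal base such as UBI Minimal lets you install only what the application needs (the package chosen here is hypothetical; UBI Minimal ships microdnf instead of the full yum/dnf stack):

```dockerfile
# Start from a minimal base image that does not contain a full OS
FROM registry.access.redhat.com/ubi8/ubi-minimal:latest

# Install only the specific utility the application needs, then clean caches
RUN microdnf install -y curl && microdnf clean all
```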
5. Use multi-stage Dockerfiles
Build images using multi-stage Dockerfiles to keep the image small. For example, for a Java application running in Open Liberty, use one stage to compile and build, and another stage to copy the binary artifact(s) and dependencies into the image, discarding all nonessential artifacts. Another example: for an Angular application, run the npm install and build in one stage, and copy the built artifacts into the image in the next stage.
- Example: Open Liberty Java application
FROM registry.access.redhat.com/ubi8/openjdk-8:latest as builder
USER 0
WORKDIR /tmp/app
COPY src/ src/
COPY pom.xml pom.xml
RUN mvn clean package
...
FROM quay.io/ohthree/open-liberty:220.127.116.11
...
COPY --from=builder /tmp/app/src/main/liberty/config/server.xml /config/
COPY --from=builder /tmp/app/target/*.war /config/apps/
RUN \
    chown -R 1001:0 /config && \
    chmod -R g=u /config
# Run as non-root user
USER 1001
EXPOSE 9081
6. Use a .dockerignore file
Use a .dockerignore file to exclude files that do not need to be added to the image.
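For example, a .dockerignore file at the root of the build context might look like this (the entries are illustrative; include whatever your project does not need inside the image):

```
# .dockerignore (illustrative entries)
.git
node_modules
target
*.log
Dockerfile
.dockerignore
```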
7. Scan for vulnerabilities
Scan your images for known vulnerabilities.
- Podman integrates with multiple open source scanning tools. For example, you can use Snyk or Trivy.
- Docker provides its own scan plugin for your local machine. Install the plugin, then run the following command:
$ docker scan myappimage:1.0
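With Trivy, for example, an equivalent scan looks like this (the image name is illustrative):

```
$ trivy image myappimage:1.0
```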
8. Automate scans
Automated scanning tools should also be implemented in the CI pipeline and on the enterprise registry. We also recommend deploying runtime scanning on applications in case a vulnerability is uncovered in the future.
9. Organize your Docker commands
Organize your Docker commands, especially the
COPY commands, so that the files that change most frequently appear toward the bottom of the Dockerfile. This takes advantage of Docker's build cache and speeds up future builds.
Each Dockerfile command creates a layer that is cached and reused in the next build. The caveat is that if a command encounters a change, it and all commands after it are re-run, creating new layers that replace the cached ones, even if those later commands did not involve any changes. Placing the most volatile
COPY statements later in the Dockerfile maximizes build caching.
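For example, for a Node.js application, copying the dependency manifest and installing dependencies before copying the frequently changing source code keeps the expensive npm install layer cached across most builds (the base image and file layout here are illustrative):

```dockerfile
FROM node:18-alpine
WORKDIR /app

# Dependencies change rarely: this layer stays cached across most builds
COPY package.json package-lock.json ./
RUN npm ci

# Source code changes often: keep this COPY near the bottom
COPY src/ src/
RUN npm run build
```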
10. Concatenate RUN commands
Concatenate RUN commands to make your Dockerfile more readable and create fewer layers. Fewer layers mean a smaller container image. As mentioned previously, each
RUN statement in the Dockerfile creates a layer that gets cached. Concatenating commands reduces the number of layers.
The following are examples of what to do and not to do.
- Don't do this:
...
RUN yum --disablerepo=* --enablerepo="rhel-7-server-rpms"
RUN yum update
RUN yum install -y httpd
...
- Do this instead:
...
RUN yum --disablerepo=* --enablerepo="rhel-7-server-rpms" && yum update && yum install -y httpd
...
- Even better, do this for readability:
...
RUN yum --disablerepo=* --enablerepo="rhel-7-server-rpms" && \
    yum update && \
    yum install -y httpd
...
Find more resources
We hope that these tips will help you build more secure Dockerfiles. Visit the Docker website for more information. See what we are doing on the Red Hat Developers Site. You can learn more about containerizing applications at Red Hat DO 180 training. If you have a question, feel free to comment below. We welcome your feedback.