As developers increasingly make use of containers, securing them becomes more and more important. Gartner named container security one of its top 10 concerns for this year in this report, which isn't surprising given containers' popularity as a way to build lightweight, reusable code and lower application development costs.
In this article, I'll look at the three basic steps involved in container security: securing the build environment, securing the underlying container hosts, and securing the content that runs inside each container. Mastering container security means paying attention to all three of these elements.
If you step back a moment, container security isn't all that different from ordinary application security. Replace a few words in the paragraph above and you could have written this post 10, 20, or even 30 years ago. But containers do have a few oddities and new twists that are worth highlighting. To get started, I suggest you listen to the recorded talk by Red Hat's Dan Walsh about general container security considerations.
1. Securing the build environment
Let’s start with securing the build environment itself. As with any app dev process, adding security at the beginning of a project makes the most sense, rather than having to bolt something on after most of the code has been written. Using the right methods from the start increases your effectiveness as a programmer and makes for a smoother application development process.
This first component has three separate pieces of its own: First, you need to understand your DevOps workflows. This includes how you construct your containers, where you obtain their code, and how often the underlying code changes. One of the attractions of containers is their “just-in-time” aspects, where you can pull code from a variety of online sources. How do you know that this code has been properly vetted for general use, and then how do you know that your own particular DevOps process isn’t introducing some specific corner case that will open up a backdoor? That should be your focus in these workflow exercises.
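To make that vetting concrete, here is a minimal Python sketch of the kind of check a build step can run before pulled code ever enters a container image: download the artifact and refuse to use it unless its SHA-256 digest matches the one its maintainers published. The URL and checksum below are placeholders, not real values.

```python
import hashlib
import sys
import urllib.request

# Hypothetical artifact and digest -- substitute the dependency your build
# actually pulls and the checksum published by its maintainers.
ARTIFACT_URL = "https://example.com/downloads/helper-lib-1.2.3.tar.gz"
EXPECTED_SHA256 = "0123456789abcdef..."  # published checksum (placeholder)


def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
    """Download an artifact and refuse to use it if the digest doesn't match."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        sys.exit(f"Digest mismatch for {url}: got {actual}")
    return data


if __name__ == "__main__":
    payload = fetch_and_verify(ARTIFACT_URL, EXPECTED_SHA256)
    print(f"Verified {len(payload)} bytes; safe to add to the build context.")
```

The same pattern applies to base images: pin them by digest rather than by mutable tag, so a rebuild can't silently pull different content than the version you vetted.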
Part of securing your workflow is being able to use registry and scanning tools (e.g., Red Hat's Quay.io) to ensure that the containers are managed securely and scale up properly. Quay automatically scans each container image for security vulnerabilities. This article walks you through how to use Quay and what to expect.
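As a rough illustration of what acting on those scan results programmatically might look like, here is a hedged Python sketch that queries Quay's security-scan endpoint for a repository manifest. The token, repository, and digest are placeholders, and you should confirm the endpoint path and response fields against the Quay API documentation for your deployment.

```python
import requests

# Assumptions: a Quay OAuth token with read access to the repository, and the
# manifest digest of the image you want to check. Endpoint path follows Quay's
# public API; verify against your Quay version's API docs.
QUAY_TOKEN = "YOUR_OAUTH_TOKEN"   # placeholder
REPO = "myorg/myapp"              # namespace/repository (placeholder)
DIGEST = "sha256:..."             # manifest digest (placeholder)

url = f"https://quay.io/api/v1/repository/{REPO}/manifest/{DIGEST}/security"
resp = requests.get(
    url,
    params={"vulnerabilities": "true"},
    headers={"Authorization": f"Bearer {QUAY_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
report = resp.json()

# Defensive parsing: the report lists features (packages) per layer, each with
# any known vulnerabilities attached.
features = report.get("data", {}).get("Layer", {}).get("Features", [])
vulns = [v for f in features for v in f.get("Vulnerabilities", [])]
print(f"Scan status: {report.get('status')}, vulnerabilities found: {len(vulns)}")
```

A check like this can gate a CI pipeline, failing the build when the count of unpatched vulnerabilities crosses whatever threshold your team sets.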
The second piece involves examining your access rules and permissions, both for users and for the apps themselves. If you track security breaches, you will recognize this as a common theme even among ordinary SaaS apps. How many unsecured cloud storage containers have leaked data online because they had no password at all, or because "access all" permissions were never locked down? Far too many, and the same can happen in the container world, where the sheer number of apps can be overwhelming.
One must-do item is to examine the level of granularity you'll need for your access controls, both to deliver the right level of security for your apps and to do the same for their end users. Do you know which portions of your code have root-level access? How about which portions actually need it? The answers could mean the difference between a more and a less secure container, and the optimal number of root-level components is as few as possible, approaching zero. If you use LDAP for your ordinary user and app access controls, you might want to review the suggestions in this article about how to validate LDAP parameters and enable LDAP authentication in Red Hat OpenShift.
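Before wiring LDAP settings into OpenShift, it helps to confirm that the bind DN, password, base DN, and user filter actually work. Here is a small Python sketch using the ldap3 library; all of the values are hypothetical, so substitute the same parameters you plan to put in the identity provider configuration.

```python
from ldap3 import ALL, Connection, Server
from ldap3.core.exceptions import LDAPException

# Hypothetical values -- use the same bind DN, password, URL, base DN, and
# filter you intend to configure in the OpenShift LDAP identity provider.
LDAP_URL = "ldaps://ldap.example.com"
BIND_DN = "cn=openshift-bind,ou=service,dc=example,dc=com"
BIND_PASSWORD = "changeme"
BASE_DN = "ou=people,dc=example,dc=com"
USER_FILTER = "(uid=jdoe)"  # a known test user

try:
    server = Server(LDAP_URL, get_info=ALL)
    conn = Connection(server, user=BIND_DN, password=BIND_PASSWORD, auto_bind=True)
    # If the bind succeeded, run the same search the identity provider would.
    if conn.search(BASE_DN, USER_FILTER, attributes=["cn", "mail"]):
        print("Bind and search succeeded:", conn.entries)
    else:
        print("Bind succeeded but the search returned no entries; check BASE_DN and the filter.")
except LDAPException as err:
    print("LDAP validation failed:", err)
```

Catching configuration mistakes here, rather than after the cluster rejects logins, saves a round of debugging in production.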
Finally, hardening your build environment includes using runtime protection. As with tools for ordinary apps, some of these focus on static scans, while others integrate continuously with your chosen development environment. The continuous approach is better, given the dynamic nature of container code, and it can also be a major time saver when you have to perform an app audit. A good runtime-protection tool for containers should be able to flag abnormal behavior, remediate potential threats, and isolate peculiar events for further forensic analysis.
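To give a sense of what "flagging abnormal behavior" looks like at its simplest, here is a Python sketch that watches the Docker event stream via the Docker SDK for Python and alerts on interactive execs into running containers. A real runtime-protection product does far more (syscall tracing, network policy, automated remediation), but the event stream is a reasonable place to start experimenting; it assumes access to a local Docker daemon.

```python
import docker

# Minimal runtime-monitoring sketch using the Docker SDK for Python (docker-py).
client = docker.from_env()

# Actions we treat as worth flagging: someone exec-ing into a running container.
SUSPICIOUS_PREFIXES = ("exec_create", "exec_start")

for event in client.events(decode=True):
    if event.get("Type") != "container":
        continue
    action = event.get("Action", "")
    name = event.get("Actor", {}).get("Attributes", {}).get("name", "<unknown>")
    if action.startswith(SUSPICIOUS_PREFIXES):
        print(f"ALERT: interactive exec detected in container {name}: {action}")
    elif action == "die":
        exit_code = event.get("Actor", {}).get("Attributes", {}).get("exitCode")
        print(f"Container {name} exited with code {exit_code}")
```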
2. Securing the underlying container hosts
Once your build environment is secure, the next step is to harden the underlying hosts that run the container servers and services. Take a closer look at what your container provider offers in the way of security. These options may be part of the native Linux container and OpenShift security features, such as policies to prevent abuse of resources, setting up access control groups, and ensuring that you remove root access everywhere, or at least where it isn’t really needed. Many are the same familiar security practices that are part of the virtual machine world, so they shouldn’t come as a surprise. They just have a slightly different context from what you might have been used to before getting involved with containers. One recommended best practice, for example, is to only run containers with read-only images.
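To illustrate several of those host-side practices at once, here is a sketch using the Docker SDK for Python that starts a container with a read-only root filesystem, a non-root user, all Linux capabilities dropped, and basic resource limits. The image name is a placeholder; the options map to the same controls you would set in a Kubernetes or OpenShift pod security context.

```python
import docker

# Hardened container launch, assuming docker-py and a placeholder image name.
client = docker.from_env()

container = client.containers.run(
    "registry.example.com/myapp:1.0",    # placeholder image
    detach=True,
    read_only=True,                      # read-only root filesystem
    user="1000:1000",                    # no root inside the container
    cap_drop=["ALL"],                    # drop every Linux capability
    security_opt=["no-new-privileges"],  # block privilege escalation via setuid binaries
    mem_limit="256m",                    # prevent resource abuse
    pids_limit=100,
    tmpfs={"/tmp": "size=64m"},          # writable scratch space only where needed
)
print("Started hardened container:", container.short_id)
```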
3. Securing the content that runs inside each container
The final step is to secure the content that runs inside the containers. This isn't really all that different from securing ordinary apps, but it does have a few oddities. You should limit the Linux OS features that are available within your container, for example. Linux also has a kernel-level system call filtering facility called seccomp that is worth reviewing. And you should enforce image source integrity protection so you can track what content has changed in your containers and know who was responsible.
I realize this is a lot of work, and it involves getting familiar with multiple tools as well as application, container, and OS constructs. A good place to start is to examine the many third-party container security tools available. Some are open source, while others come from commercial vendors that can also assist in your security journey.
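One simple way to approach image integrity is to record each image's content-addressable digest and compare it against a baseline on every build or deploy. The following Python sketch shows the idea, using the Docker SDK for Python; the image names and baseline file path are placeholders, and a real setup would also sign images and keep the baseline somewhere tamper-resistant.

```python
import json

import docker

# Image integrity tracking sketch: record each image's digest and alert when it
# changes, so you know the container's contents moved and can find out why.
BASELINE_FILE = "image-digests.json"           # hypothetical baseline location
IMAGES = ["registry.example.com/myapp:1.0"]    # placeholder image names

client = docker.from_env()
current = {}
for name in IMAGES:
    image = client.images.get(name)
    current[name] = {"id": image.id, "repo_digests": image.attrs.get("RepoDigests", [])}

try:
    with open(BASELINE_FILE) as fh:
        baseline = json.load(fh)
except FileNotFoundError:
    baseline = {}

for name, info in current.items():
    if name in baseline and baseline[name] != info:
        print(f"ALERT: {name} changed since the last baseline: {info}")

with open(BASELINE_FILE, "w") as fh:
    json.dump(current, fh, indent=2)
```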
Last updated: July 1, 2020