In any cloud-native platform, one of the most critical security tasks is managing sensitive information, such as API keys, database passwords, and tokens. Red Hat OpenShift and Kubernetes provide robust mechanisms for handling this data. Choosing the correct method is crucial to maintaining a strong security posture. This article provides a clear guide to best practices for consuming secrets in applications, making sure sensitive data remains protected.
Environment variables vs. volumes
OpenShift provides two primary methods for an application inside a pod to access data from a Secret object:
- Environment variables: The secret’s key-value pairs are injected directly into the container as environment variables accessible to the running process.
- Volume mounts: The secret is mounted as a set of files in the container’s filesystem, and the application reads the secret data from those files. This is the recommended best practice.
While both methods are officially supported, their security and operational implications are quite different.
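For contrast, injecting a credential as an environment variable typically looks like the following sketch. It assumes a Secret named my-app-credentials with a database-pass key exists; the pod and variable names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-var-pod
spec:
  containers:
  - name: my-app
    image: ubi8/ubi-minimal
    command: ["sleep", "infinity"]
    env:
    # The secret value becomes a plain environment variable in the process
    - name: DATABASE_PASS
      valueFrom:
        secretKeyRef:
          name: my-app-credentials
          key: database-pass
```

This is the pattern whose risks are examined later in the article: the value lands directly in the process environment.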
Mounting a secret as a volume
Let's look at a practical example. First, we will define a secret object containing multiple credentials.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-app-credentials
type: Opaque
stringData:
  # Keys in the secret
  database-user: "db_user"
  database-pass: "S3cr3tPassword!"
  api-key: "ABC-XYZ-123"
```
Next, we will create a pod that only uses the specific keys it needs by mounting them as files in a volume.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
  - name: my-app
    image: ubi8/ubi-minimal
    command: ["sleep", "infinity"]
    volumeMounts:
    # This is where the volume is mounted inside the container
    - name: app-secret-volume
      mountPath: "/etc/secrets"
      readOnly: true
  volumes:
  # This defines the volume and links it to our Secret object
  - name: app-secret-volume
    secret:
      secretName: my-app-credentials
      # Project only the specific keys this pod needs
      items:
      - key: database-user
        path: db_user.txt
      - key: database-pass
        path: db_pass.txt
```
When this pod starts, OpenShift creates an `/etc/secrets` directory containing two files, `db_user.txt` and `db_pass.txt`. The sensitive `api-key` is never exposed to this pod, adhering to the principle of least privilege.
Volume mounts are more secure
Mounting secrets as files into a volume is inherently more secure and operationally flexible than using environment variables. This is because files on a volume are subject to standard file system permissions and are not as easily exposed as environment variables. The risks associated with environment variables stem from their broad visibility and persistence in a running process.
The following are operational advantages of volumes:
- Automatic updates: When a secret is mounted as a volume, the files update automatically when the Secret object on the cluster changes. This allows for dynamic secret rotation, such as changing a database password, without restarting the application pod. (One caveat: a secret mounted with a subPath does not receive these updates.) In contrast, environment variables are fixed when the container starts, so you must restart the pod to pick up a new value.
- Granular control: As shown in the example, you can choose to project only specific keys from a secret into the pod, preventing the overexposure of credentials to a workload that doesn't need them.
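Because the projected secrets are ordinary files, you can also tighten their filesystem permissions. As a sketch, the volume definition from the earlier pod could restrict the projected files to read-only for the owning user (the 0400 mode shown here is illustrative):

```yaml
  volumes:
  - name: app-secret-volume
    secret:
      secretName: my-app-credentials
      # 0400: files are readable only by the owning user inside the container
      defaultMode: 0400
      items:
      - key: database-pass
        path: db_pass.txt
```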
Environment variables security risks
While injecting secrets as environment variables is a common and convenient method, it's important to understand that once the secret is injected, its value is accessible within the container’s process. If not managed properly, this information can potentially be exposed in logs, process lists, or through other means.
This issue creates several specific risks:
- Accidental exposure in logs: Many applications, logging frameworks, and debugging tools have a tendency to dump all environment variables for context when an error occurs. A single crash dump or verbose log entry could inadvertently leak credentials into a logging system, where they might be stored long-term and accessible to a wider audience than intended.
- Process inspection: Anyone who gains shell access to the running container can simply run the `env` command or inspect the `/proc/self/environ` file to view all environment variables and their secret values in plain text.
- Unintended inheritance: Any subprocess or shell script executed by the main application automatically inherits all of the parent’s environment variables. This unintentionally broadens the exposure of the secret to other processes that may not need it and might not be as secure.
- Third-party code: If the application uses third-party libraries or packages, you have no guarantee that they won’t inspect the environment and potentially exfiltrate sensitive data.
Given these risks, it's best to avoid using environment variables for sensitive data. Volume mounts are a more secure alternative you should prioritize whenever possible.
OpenShift has built-in security
It's important to note that OpenShift's layered, defense-in-depth security model provides a strong baseline of protection. For an attacker to exploit a secret in an environment variable, they would first need to compromise the pod. This initial breach is actively prevented by the following:
- Authentication and authorization: Restrict who can access resources in the first place. Only authenticated users can interact with the cluster, and role-based access control (RBAC) ensures they can only access resources they are explicitly authorized to see. A regular user cannot simply read the secrets from another project.
- Security context constraints (SCCs): Enforce strict runtime permissions on all pods, such as preventing them from running as root or accessing the host node by default.
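As an illustration of the RBAC point, a namespaced Role can limit read access to a single named secret rather than all secrets in the project. The role name and namespace below are hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-app-credentials
  namespace: my-app
rules:
# Grant read access to one specific secret only, not every secret in the namespace
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["my-app-credentials"]
  verbs: ["get"]
```

Bound to a service account via a RoleBinding, this follows the same least-privilege principle applied to key projection earlier in the article.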
These controls mean that using environment variables is not a direct flaw, but rather a practice that can turn a minor security incident into a major data breach.
A hardened approach to secrets
To ensure the platform remains as secure as possible, we recommend the following hardened approach to secret management.
- Default to volume mounts: For all applications, use volume mounts to consume secrets. This is the most secure, flexible, and operationally sound method.
- Audit existing workloads: Review applications that currently use environment variables for secrets. Assess the risk and, where possible, create a plan to refactor them to use volume mounts.
- Leverage external secret stores: For the highest level of security, move secrets out of the cluster entirely. Add-ons like the External Secrets Operator can synchronize secrets from an external provider, such as AWS Secrets Manager or HashiCorp Vault, into native Secret objects. Alternatively, the Secrets Store CSI Driver can mount secrets directly into pods as volumes, ensuring credentials are never stored within the OpenShift `etcd` database.
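As a sketch of the synchronization pattern, an ExternalSecret resource from the External Secrets Operator might look like the following. The store name and remote key path are illustrative, and the exact API version depends on the operator release installed on your cluster:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-app-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    # Hypothetical SecretStore pointing at an external provider such as Vault
    name: vault-backend
    kind: SecretStore
  target:
    # Native Secret object the operator creates and keeps in sync
    name: my-app-credentials
  data:
  - secretKey: database-pass
    remoteRef:
      key: app/database        # illustrative path in the external store
      property: password
```

The resulting native Secret can then be consumed as a volume mount exactly as shown earlier, so application manifests do not change.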
By making this a critical hardening step, you can close a potential vector for data exposure, ensuring the platform remains as secure as possible.
Final thoughts
This article explained the most security-focused approach to handling sensitive data, such as API keys and database passwords, within OpenShift and Kubernetes environments. Volume mounts offer superior security and operational benefits, including automatic updates and precise control. Conversely, environment variables introduce risks like accidental logging and process exposure. To enhance platform security, we recommend prioritizing volume mounts, reviewing existing workloads, and integrating external secret stores.