
Security is especially critical for Software-as-a-Service (SaaS) environments, where the platform is used by many different people who need the confidence that their data is stored safely and kept private from unrelated users. This article focuses on security concerns for containers on your SaaS deployment running in Kubernetes environments such as Red Hat OpenShift. The article is the fifth in a series called the SaaS architecture checklist that covers the software and deployment considerations for SaaS applications.

Security controls and practices for SaaS

Within modern enterprise environments, security needs to be built into the full life cycle of planning, development, operations, and maintenance. Good security controls and practices are critical to meeting compliance and regulatory requirements and making sure that transactions are reliable and high-performing. Security in SaaS can be broken down into five main layers: hardware, operating system, containers, Kubernetes, and networking. Figure 1 shows these layers and the security controls that address threats at each layer.

Figure 1: SaaS layers and their security features in Kubernetes and OpenShift.

Security needs to be addressed at every layer because any vulnerability in one layer could be exploited to compromise other layers. For each layer, Kubernetes and OpenShift have security controls and features that will be covered in this article. Future articles will go into more detail on specific SaaS security topics. If there are any SaaS topics for which you would like to see an article, let us know in the comments.

Security at the hardware layer

Securing a SaaS environment often starts with identifying where the application is going to run and the security concerns for that environment. A secure environment covers the physical data center as well as the hardware itself, with controls such as disk encryption, secure boot, BIOS-level passwords, and hardware security modules (HSMs). Secrets and identity management are discussed later in this article.

While a lot of attention is paid to using encryption to protect data in transit as it goes over the network, it is also critical to protect data at rest as it is stored on physical storage devices in data centers. The risks to data at rest are much higher in data centers where you lack control over access to the facility and where third-party contractors may be employed. Use disk encryption to secure data at rest by protecting the data stored on the physical server from unintended access.

An HSM is typically a physical device that generates, stores, and protects the digital keys used to encrypt sensitive data. HSMs are used to manage and safeguard security credentials, keys, certificates, and secrets both at rest and in transit. An HSM provides a higher level of protection than software-only approaches such as a secrets vault.

Cloud HSMs are available from the major cloud providers to provide increased protection in cloud environments. HSMs are recommended to manage secrets in SaaS environments.

Protect access to the server by enabling secure boot and using BIOS-level passwords. Secure boot is a firmware security feature of the Unified Extensible Firmware Interface (UEFI) that makes sure that only immutable and signed software can be run during boot.


Operating system security

Every Kubernetes cluster runs on top of some underlying operating system (OS). Security features and hardening at the OS layer help protect the overall cluster, so it is important to enable and use OS-level controls.

When it comes to security hardening at the OS level, Red Hat OpenShift has two distinct advantages. First, Security-Enhanced Linux (SELinux) is integrated and enabled out of the box. Second, OpenShift runs on Red Hat Enterprise Linux CoreOS, a container-optimized OS image designed specifically for running OpenShift clusters.

Security-Enhanced Linux

SELinux is a security architecture for Linux systems that grants administrators finer-grained control over access to system resources than is available with default Linux. SELinux defines mandatory access controls for applications, processes, and files on a system. On a Kubernetes node, SELinux adds an important layer of protection against container-breakout vulnerabilities.

Thus, one of the most effective security measures is to enable and configure SELinux, which Red Hat has made standard on all OpenShift clusters. It is considered a best practice to use SELinux in SaaS environments. In OpenShift, SELinux enhances container security by ensuring true container separation and mandatory access control.
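
As a concrete illustration, the sketch below shows how a pod can pin an SELinux multi-category security (MCS) level through its security context. This is a minimal, hypothetical example: on OpenShift, the restricted SCC normally assigns MCS categories per project automatically, so you rarely need to set them by hand. The pod name, image, and category values are placeholders.

    # Minimal sketch: explicitly setting an SELinux level (MCS categories)
    # for a pod so its processes and volumes are labeled apart from other
    # tenants' containers. Names and category values are placeholders.
    apiVersion: v1
    kind: Pod
    metadata:
      name: selinux-demo
    spec:
      securityContext:
        seLinuxOptions:
          level: "s0:c123,c456"    # example MCS categories
      containers:
      - name: app
        image: registry.access.redhat.com/ubi9/ubi-minimal
        command: ["sleep", "3600"]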


A hardened OS for containers: Red Hat Enterprise Linux CoreOS

OpenShift's operating system, Red Hat Enterprise Linux CoreOS, is based on Red Hat Enterprise Linux and uses the same kernel, code, and open source development processes. This special version ships with a specific subset of Red Hat Enterprise Linux packages, designed for use in OpenShift 4 clusters. The key features that make this operating system more secure are:

  • Based on Red Hat Enterprise Linux: The underlying OS is primarily Red Hat Enterprise Linux components, which means it has the same quality, security, control measures, and support. When a fix is pushed to Red Hat Enterprise Linux, that same fix is pushed to Red Hat Enterprise Linux CoreOS.

  • Controlled immutability: Red Hat Enterprise Linux CoreOS is managed via OpenShift APIs, which leads to more hands-off operating system management. Management is primarily performed in bulk for all nodes throughout the OpenShift cluster. The latest state of the Red Hat Enterprise Linux CoreOS system is stored on the cluster, making it easy to add new nodes or push updates to all nodes. Given the OS's centralized management and transactional nature, only a few system settings can be modified on a Red Hat Enterprise Linux CoreOS installation.

  • Command-line container tools: Red Hat Enterprise Linux CoreOS includes container tools compatible with the Open Container Initiative (OCI) specification to build, copy, and manage container images. Many container runtime administration features are available through Podman. The skopeo command copies, authenticates, and signs images. The crictl command lets you view and troubleshoot containers and pods.

  • Robust transactional updates: Red Hat Enterprise Linux CoreOS offers the rpm-ostree upgrade process, which ensures that an upgrade takes place atomically. If something goes wrong, the original OS can be restored in a single rollback.

    OpenShift handles OS upgrades through the Machine Config Operator (MCO), which performs a complete OS upgrade rather than upgrading individual packages as traditional Yum upgrades do. OpenShift also updates nodes via a rolling update to limit the upgrade's impact and maintain cluster capacity. During installation and upgrades, the latest immutable filesystem tree is pulled from a container image, written to disk, and registered with the bootloader. The machine then reboots into the new OS version, guaranteeing an atomic update.

  • Security during cluster installation: Red Hat Enterprise Linux CoreOS minimizes the security decisions that must be made during installation. Two security features are pre-first-boot decisions for cluster operations: support for FIPS cryptography and full disk encryption (FDE). After the cluster is bootstrapped, it can be further configured with other node-level changes. (A minimal install-config sketch follows this list.)
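
These pre-first-boot choices are made in the installation configuration. The excerpt below is illustrative only: fips: true is the install-config.yaml field for enabling FIPS-validated cryptography, the domain and cluster name are placeholder values, and full disk encryption is assumed to be configured separately (for example, through a MachineConfig) before first boot.

    # install-config.yaml excerpt (illustrative): FIPS mode is a day-one
    # decision and cannot be toggled on after the cluster is installed.
    apiVersion: v1
    baseDomain: example.com          # placeholder domain
    metadata:
      name: saas-cluster             # placeholder cluster name
    fips: true
    # Full disk encryption (LUKS on the root device) is assumed to be set up
    # separately, for example via a MachineConfig applied before first boot.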

Container layer

The container layer in Kubernetes and OpenShift isolates processes from one another and from the underlying OS. Instead of traditional software design, where all the components are linked, deployed together, and ultimately dependent on each other, containers are independent, which limits the impact of any single failure. If one container goes down, it can easily be replaced. If a container image is found to have a security flaw, the flaw is isolated to that image and requires updating only that image rather than the whole cluster.

Red Hat OpenShift has many features that improve container security for multitenant environments.

Container engine

A container engine provides tools for creating container images and starting containers. In OpenShift, the default container engine is CRI-O, which runs containers conforming to the Open Container Initiative (OCI) specification. The container engine focuses on just the features needed by Kubernetes's Container Runtime Interface (CRI). This pared-down container engine shrinks the surface available to a security attack, because it does not contain unneeded features such as direct command-line use or orchestration facilities.

CRI-O is also kept aligned with Kubernetes: updates to CRI-O are made to work with the current Kubernetes release.

Container security in the Linux kernel

The kernel offers features to ensure the security of containers and everything else running on the OS. First, all containers are launched inside namespaces, which create an isolated sandbox that segregates each container's file system, processes, and networking.

The next feature is control groups (cgroups), which isolate hardware resource sharing between containers and nodes of the OpenShift cluster. The use of cgroups prevents any single process or container from using up all the available resources on a host.
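
For example, a pod's CPU and memory requests and limits are translated into cgroup settings on the node. The following minimal sketch uses placeholder names and values:

    # Minimal sketch: resource requests and limits become cgroup constraints,
    # so this container cannot exhaust the CPU or memory of its host.
    apiVersion: v1
    kind: Pod
    metadata:
      name: limited-app              # placeholder name
    spec:
      containers:
      - name: app
        image: registry.access.redhat.com/ubi9/ubi-minimal
        command: ["sleep", "3600"]
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi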

Finally, as we discussed earlier, Red Hat Enterprise Linux CoreOS enables SELinux, which prevents a container from breaking its isolation and thus interfering indirectly with other containers on the same host.

Cluster security on Kubernetes and Red Hat OpenShift

The cluster layer controls how Kubernetes deploys workloads onto hosts, manages shared resources, controls intercontainer communication, manages scaling, and controls access to the cluster. An OpenShift cluster is made up of a control plane, worker nodes, and any additional resources needed. The following subsections cover some of the security concerns for the different aspects of the cluster.

Control plane isolation

It is considered a best practice to isolate the cluster's control plane nodes from the worker nodes. This is usually done using separate hardware for the control plane to mitigate the impact of any misconfiguration, resource management problems, or vulnerabilities.
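
In OpenShift 4, this separation is reinforced by a taint on the control plane nodes that keeps ordinary workloads from being scheduled there. The excerpt below is a sketch: the node name is a placeholder, and the exact taint key (node-role.kubernetes.io/master here) can vary by version.

    # Excerpt of a control plane Node object: the NoSchedule taint keeps
    # regular workloads off control plane hosts unless they tolerate it.
    apiVersion: v1
    kind: Node
    metadata:
      name: control-plane-0          # placeholder node name
      labels:
        node-role.kubernetes.io/master: ""
    spec:
      taints:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule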

Identity management

Every Kubernetes cluster needs some form of identity management. Out of the box, Red Hat OpenShift comes with a default OAuth provider, which is used for token-based authentication. This provider has a single kubeadmin user account, which you can use to configure an identity provider via a custom resource (CR). OpenShift supports standard identity providers such as OpenID Connect and LDAP. After identities are defined, use role-based access control (RBAC) to define and apply permissions.
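
As a minimal sketch of that last step, the RoleBinding below grants a group provided by the identity provider read-only access to a single namespace. The group and namespace names are placeholders; view is the built-in read-only ClusterRole.

    # Minimal sketch: bind an identity-provider group to the built-in "view"
    # role so its members get read-only access to one tenant namespace.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: tenant-a-viewers         # placeholder name
      namespace: tenant-a            # placeholder tenant namespace
    subjects:
    - kind: Group
      name: tenant-a-users           # placeholder group from the identity provider
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: view
      apiGroup: rbac.authorization.k8s.io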

Cluster access control

Before users interact with the cluster, they first must authenticate via the OAuth server. Internal connections to the API server are authenticated using X.509 certificates.

Security context constraints

Security context constraints (SCCs) are an OpenShift security feature that limits a pod's resource access and allowable actions. SCCs let administrators control much of the pod's configuration, such as the SELinux context of a container, whether a pod can run privileged containers, and the use of host directories as volumes. In OpenShift, SCCs are enabled by default and cannot be disabled. SCCs can improve isolation in SaaS deployments and reduce the impact of potential vulnerabilities.

A pod's SCC is determined by the groups the requesting user belongs to, as well as by the pod's service account, if one is specified. By default, pods running on worker nodes receive the restricted SCC, which prevents them from running as privileged and requires them to run under a UID selected at runtime from a preallocated range of UIDs.
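
To make the restricted behavior concrete, the sketch below shows a custom SCC modeled loosely on the default restricted SCC. The name is a placeholder and the field list is abbreviated; treat it as an illustration rather than a drop-in replacement for the built-in SCC.

    # Minimal sketch of a custom SCC: no privileged containers, no host
    # access, UIDs from the project's preallocated range, and SELinux
    # labels assigned per project.
    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: saas-restricted          # placeholder name
    allowPrivilegedContainer: false
    allowHostDirVolumePlugin: false
    allowHostNetwork: false
    allowHostPID: false
    allowHostIPC: false
    runAsUser:
      type: MustRunAsRange           # UID chosen from the project's range
    seLinuxContext:
      type: MustRunAs                # MCS labels assigned per project
    fsGroup:
      type: MustRunAs
    requiredDropCapabilities:
    - KILL
    - MKNOD
    - SETUID
    - SETGID
    volumes:
    - configMap
    - downwardAPI
    - emptyDir
    - persistentVolumeClaim
    - projected
    - secret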

Secrets

In SaaS deployments, the tenants need to secure their sensitive data on the cluster. This is handled with Secret objects on OpenShift. Secret objects hold sensitive information such as passwords, OpenShift Container Platform (OCP) client configuration files, and private source repository credentials. Using Secret objects this way decouples the sensitive content from the pods that consume it.

When the sensitive content is needed, it can be mounted to the container via a volume plugin, or the system can use the secrets to perform the action on behalf of the pod. Key properties of secrets include:

  • Secret data can be created by one entity, such as a configuration tool, and referred to by another, such as an application.
  • Secret data volumes are backed by temporary file-storage facilities (tmpfs) and never come to rest on a node.
  • Secret data can be shared within a namespace.
  • Secret data can optionally be encrypted at rest.
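
The following minimal sketch shows a Secret holding a placeholder password and a pod that mounts it as a tmpfs-backed volume, so the value is exposed to the container without being written to the node's disk. All names and values are placeholders.

    # Minimal sketch: a Secret consumed by a pod as a read-only volume.
    apiVersion: v1
    kind: Secret
    metadata:
      name: db-credentials           # placeholder name
    type: Opaque
    stringData:
      password: change-me            # placeholder value
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: db-client                # placeholder name
    spec:
      containers:
      - name: app
        image: registry.access.redhat.com/ubi9/ubi-minimal
        command: ["sleep", "3600"]
        volumeMounts:
        - name: creds
          mountPath: /etc/db-credentials
          readOnly: true
      volumes:
      - name: creds
        secret:
          secretName: db-credentials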

For more information, read Providing sensitive data to pods.

Red Hat Advanced Cluster Security for Kubernetes

In addition to the standard security features in Red Hat OpenShift, Red Hat offers additional products to enhance the security of the platform. One of those is Red Hat Advanced Cluster Security for Kubernetes (previously StackRox). Red Hat Advanced Cluster Security for Kubernetes protects your vital applications across the build, deploy, and runtime stages. It deploys in your infrastructure and easily integrates with DevOps tooling and workflows. This integration makes it easy to apply security and compliance policies.

Red Hat Advanced Cluster Security adds to OpenShift's built-in security by improving the following core tenets of security:

  • Improving visibility of the environment, so administrators can more easily detect issues as they happen.
  • Managing vulnerabilities once they have been identified by deploying fixes via an integrated CI/CD pipeline.
  • Ensuring compliance with industry standards and best practices.
  • Adding robust network segmentation to restrict network traffic to only the necessary uses.
  • Ranking each deployment by risk to determine the likelihood of a security issue, helping to ensure that the highest-risk deployments are remediated first.
  • Identifying misconfigurations and evaluating role-based access control (RBAC) access for users via configuration management, to ensure that the configuration meets best practices.
  • Detecting and responding to abnormal runtime behavior that could indicate a security breach or misuse of the environment.

To learn more, see A Brief Introduction to Red Hat Advanced Cluster Security for Kubernetes.

Networking layer

The networking layer is the outermost layer of a security architecture. The network is where most IT security attacks occur, due to misconfiguration and vulnerabilities. Proper planning and configuration of the network security layer components ensure that the environment is secure. Kubernetes has software-defined networking (SDN) controls that can improve network security in SaaS deployments. Red Hat OpenShift provides additional controls that build on what's available in Kubernetes.

Network policy

A network policy controls the traffic between pods by defining the permissions they need in order to communicate with other pods and network endpoints. OpenShift expands on policies by logically grouping components and rules into collections for easy management.

It is worth noting that network policies are additive. Therefore, when you create multiple policies on one or more pods, the union of all rules is applied regardless of the order in which you list them. The resulting pod behavior reflects every allow and deny rule for ingress and egress.
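
For example, the minimal policy sketched below denies all ingress to pods in a tenant namespace except traffic from pods in that same namespace. The names are placeholders, and because policies are additive, further policies could open additional paths (such as from the ingress controller).

    # Minimal sketch: restrict ingress to same-namespace traffic only.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-same-namespace     # placeholder name
      namespace: tenant-a            # placeholder tenant namespace
    spec:
      podSelector: {}                # applies to every pod in the namespace
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector: {}            # only pods in this namespace may connect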

Container network interface

In a Kubernetes cluster, by default, pods are attached to a single network and have a single network interface, which is configured through the Container Network Interface (CNI). CNI plugins manage the network connectivity of containers and release network resources when containers are deleted.

Kubernetes implements its software-defined networking through CNI plugins, which manage the network interfaces for new pods. The CNI plugins set up the proper networking constructs for pod-to-pod and pod-to-external communication and enforce network policies.

OpenShift networking security features

OpenShift offers the following additional features and components to secure networks for cloud-native deployments:

  • Network operations: OpenShift includes a set of operators that manage networking components to enforce best practices and mitigate human errors.
  • Multiple network interfaces: The Kubernetes default is for all pods to use a single network and a single primary network interface, but with OpenShift, you can configure additional network interfaces. This allows network optimization to improve performance and enhances isolation to improve security.
  • Ingress security enhancements: OpenShift exposes the cluster to external resources or clients via a route resource. Routes provide advanced features not found in a standard Kubernetes Ingress controller, including TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments (a route sketch follows this list).
  • Egress security enhancements: While the default OpenShift rule allows all egress traffic to leave the cluster with no restrictions, OpenShift has tools for fine-grained control and filtering of outbound traffic. OpenShift lets you control egress traffic via an egress firewall, egress routers, and egress static IP addresses.
  • Service mesh: Red Hat OpenShift Service Mesh, based on the Istio project, adds a transparent layer to existing application network services running in a cluster, allowing complex management and monitoring without requiring changes to the services. The service mesh does this by deploying a sidecar proxy alongside the relevant services to intercept and manage all network communications. With Red Hat OpenShift Service Mesh, you can create a network with the following services: discovery, load balancing, service-to-service authentication, failure recovery, metrics, monitoring, A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.
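
As an example of the ingress enhancements mentioned above, the sketch below defines a route that re-encrypts TLS traffic between the router and the backing service. The hostname, namespace, and service name are placeholders, and a real re-encrypt route may also need a destination CA certificate unless the service uses a cluster-issued serving certificate.

    # Minimal sketch: an OpenShift route with re-encrypt TLS termination.
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: frontend                 # placeholder name
      namespace: tenant-a            # placeholder tenant namespace
    spec:
      host: app.example.com          # placeholder external hostname
      to:
        kind: Service
        name: frontend               # placeholder service name
      port:
        targetPort: https
      tls:
        termination: reencrypt
        insecureEdgeTerminationPolicy: Redirect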

For more information, see the Red Hat OpenShift Security Guide.

Partner with Red Hat to build your SaaS

This article covered controls that can be used to improve the security of your SaaS deployment at the hardware, OS, container, Kubernetes cluster, and network levels. Future articles will go deeper into SaaS security topics.

Red Hat SaaS Foundations is a partner program designed for building enterprise-grade SaaS platforms on Red Hat OpenShift or Red Hat Enterprise Linux, and deploying them across multiple cloud and non-cloud footprints. Email us to learn more.

Last updated: October 30, 2023