Hosted control planes, available via the multi-cluster engine, have been generally available since Red Hat OpenShift 4.14. Hosted control planes reduce costs and improve productivity for organizations adopting a multi-cluster approach.
OpenShift sandboxed containers, based on Kata Containers and generally available since OpenShift 4.8, is a native Kubernetes CRI runtime that provides an additional layer of isolation for workloads through hardware virtualization.
When used together, hosted control planes and OpenShift sandboxed containers offer several benefits, such as speed, separation of concerns, and the necessary hardening to run multi-tenant workloads with stringent security constraints.
This article provides a detailed guide on how to configure and run sandboxed workloads for OpenShift clusters with hosted control planes, maximizing efficiency and workload isolation.
Prerequisites
To follow this guide, you need a running OpenShift cluster with a hosted control plane already configured. You don't need a node pool yet.
We’ll use the oc and hcp command-line tools. Access credentials must already be set up.
We recommend that you have already practiced setting up a node pool for your hosted control plane on your current cluster. Apart from that, completing these instructions should be as simple as creating a few Kubernetes resources from files.
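As a quick sanity check before starting, we can confirm that the credentials work and that the hosted cluster is visible from the management cluster. The commands below are only a sketch and assume the hosted cluster was created in the default clusters namespace used by the hcp tooling; adjust if yours differs:
# Confirm we are logged in to the management cluster
oc whoami
# List the hosted clusters; assumes the default "clusters" namespace
oc get hostedcluster -n clusters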
Create MachineConfig
The first resource we will create is a MachineConfig. All the nodes using this MachineConfig will install the sandboxed-containers Red Hat Enterprise Linux CoreOS extension. Create the file sandboxed-containers-mc.yaml with the following content:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: sandboxed-containers-mc
spec:
  extensions:
    - sandboxed-containers
To link a MachineConfig to a node pool, we need to put the MachineConfig in a ConfigMap. Here we create the ConfigMap sandboxed-containers-mc-cm out of the sandboxed-containers-mc.yaml file:
oc create configmap sandboxed-containers-mc-cm -n clusters --from-file config=sandboxed-containers-mc.yaml
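Optionally, we can verify that the ConfigMap now embeds the MachineConfig. This check is not required, just a convenience:
# Inspect the ConfigMap we just created
oc get configmap sandboxed-containers-mc-cm -n clusters -o yaml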
Create a node pool
Now it's time to create a new node pool. The new node pool must use the sandboxed-containers MachineConfig. We also want to have a dedicated label for all the nodes in the node pool.
Depending on the platform you are going to deploy on, the create nodepool command has different arguments. Use the same command and arguments that you already practiced when creating other node pools. Add the --render argument, and save the output to a file, node-pool.yaml:
hcp create nodepool [...] --render > node-pool.yaml
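For illustration only, on the AWS platform the command might look roughly like the example below. The exact flags vary by platform and hcp version, and the cluster name, node pool name, and node count shown here are placeholders:
# Hypothetical AWS example; adjust names, sizing, and platform flags to your setup
hcp create nodepool aws \
--cluster-name my-hosted-cluster \
--name kata-pool \
--node-count 2 \
--render > node-pool.yaml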
Now we have the file node-pool.yaml. We extend the file with the additional config and label, as in the snippet below:
[...]
spec:
  config:
    - name: "sandboxed-containers-mc-cm"
  nodeLabels:
    node-role.kubernetes.io/kata-oc: ""
[...]
Finally, we create the node pool by applying the file node-pool.yaml:
oc create -f node-pool.yaml
We wait for the new nodes to come up before continuing to the next section. The nodes may reboot once as part of the Red Hat Enterprise Linux CoreOS extension installation process.
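One way to follow the rollout from the management cluster is to watch the NodePool status until the current node count matches the desired count. This assumes the node pool lives in the clusters namespace:
# Watch the node pool status; press Ctrl+C once all nodes are ready
oc get nodepool -n clusters -w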
Depending on the platform you deploy on, additional steps might be necessary. For example, when using the “None” platform, you need further manual steps to deploy the workers. This is covered in the hosted control planes documentation.
Create RuntimeClass
In the previous section, we created a node pool. The MachineConfig mechanism installed the sandboxed-containers extension on the new nodes. Now, for pods to run in the sandboxed runtime, we need to create the kata RuntimeClass.
Create the file runtime-class.yaml with the following content:
apiVersion: node.k8s.io/v1
handler: kata
kind: RuntimeClass
metadata:
  name: kata
overhead:
  podFixed:
    cpu: 250m
    memory: 350Mi
scheduling:
  nodeSelector:
    node-role.kubernetes.io/kata-oc: ""
Prepare to access the hosted cluster. In the following steps, we use oc with the --kubeconfig option to make sure we’re running commands on the hosted cluster. See the following example:
oc --kubeconfig ${HOSTED_CLUSTER_NAME}.kubeconfig ...
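If you don’t have the hosted cluster kubeconfig file yet, you can typically generate it with the hcp CLI. This is a sketch; it assumes HOSTED_CLUSTER_NAME is set to the name of your hosted cluster:
# Export the hosted cluster kubeconfig to a local file
hcp create kubeconfig --name ${HOSTED_CLUSTER_NAME} > ${HOSTED_CLUSTER_NAME}.kubeconfig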
Apply the kata RuntimeClass to the hosted cluster:
oc --kubeconfig ${HOSTED_CLUSTER_NAME}.kubeconfig apply -f runtime-class.yaml
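As a quick check, confirm that the RuntimeClass now exists in the hosted cluster:
# The kata RuntimeClass should be listed
oc --kubeconfig ${HOSTED_CLUSTER_NAME}.kubeconfig get runtimeclass kata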
Run a sandboxed workload
Everything should be set. In the previous step, we created the kata RuntimeClass to select the sandboxed-containers runtime. Now we will create a simple pod that uses the kata RuntimeClass. Then we show that the pod is running in a sandbox as expected.
We start a pod with the kata RuntimeClass. For the sake of the example, a web server keeps the pod running indefinitely. Be mindful that you have to use the hosted cluster kubeconfig here, as in the previous step.
oc --kubeconfig ${HOSTED_CLUSTER_NAME}.kubeconfig \
run --image=registry.fedoraproject.org/fedora \
--overrides='{"spec":{"runtimeClassName":"kata"}}' \
test-pod-sandboxed -- python3 -m http.server
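To confirm the pod was admitted with the expected runtime class, we can optionally inspect its spec:
# Print the pod's runtimeClassName; the expected output is "kata"
oc --kubeconfig ${HOSTED_CLUSTER_NAME}.kubeconfig \
get pod test-pod-sandboxed -o jsonpath='{.spec.runtimeClassName}{"\n"}'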
We inspect the pod to see which node it’s running on:
oc --kubeconfig ${HOSTED_CLUSTER_NAME}.kubeconfig describe pod test-pod-sandboxed | grep 'Node:'
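One convenient way to get a shell on that worker node, assuming you have sufficient privileges on the hosted cluster, is oc debug. The node name below is a placeholder for the value printed by the previous command:
# Open a debug shell on the node (replace <node-name> with the name from above)
oc --kubeconfig ${HOSTED_CLUSTER_NAME}.kubeconfig debug node/<node-name>
# Inside the debug shell, chroot into the host filesystem
chroot /host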
We get a shell on the worker node where the pod is running. On the worker node, we list the qemu-kvm processes:
ps -ef | grep qemu
In the output of the previous command, you should see the process of the Kata sandbox, something like the following:
qemu-kvm -name sandbox-...
That’s it. QEMU is running your pod in a virtual machine sandbox, managed with a hosted control plane!
More Information
For more information, see the hosted control planes and OpenShift sandboxed containers documentation.
If you need further assistance, you can reach out to us through the following channels:
- OpenShift Commons Slack
- OpenShift users channel on the Kubernetes Slack
- Through your Red Hat account representative