Bridging the gap: Integrate legacy VMs into a zero trust Service Mesh

Dive into how to onboard legacy virtual machines into modernized workloads via Red Hat OpenShift Service Mesh.


Now that our environment is prepared, there are a few specialized things we need to configure for the virtual machine (VM) itself: authentication, workload definitions, and templates. Together these make the transition for our VM as smooth as possible.

In this lesson, you will:

  • Create Istio authentication for your virtual machine.
  • Create a WorkloadGroup CustomResource (CR).
  • Leverage VM templates.

Create the dedicated Service Account

Before defining the VM manifest, we must address a critical security difference: authenticating the VM to the Istio Pilot. Unlike standard Kubernetes workloads, which use the default service account token, the VM needs a token whose audience targets Istio's certificate authority rather than the Kubernetes API server. This dedicated token is necessary for the VM to securely join the mesh and receive its configuration.

We'll start by creating a dedicated Service Account (SA). This SA will serve two purposes:

  • VM Runtime identity: It will be the identity under which the OpenShift Virtualization resource runs the VM.
  • Token source: It will be the source for the specially minted JWT used for Pilot authentication.

The Service Account and our VM namespace can be created by executing the following commands:

cat <<'EOF' | oc create -f -
apiVersion: v1
kind: Namespace
metadata:
  labels:
    istio.io/rev: default
  name: vms
EOF
cat <<'EOF' | oc create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mesh
  namespace: vms
EOF

Create the Istio-specific Service Account token

We use the OpenShift (Kubernetes) API to generate a token for the ServiceAccount, ensuring we set the audience to istio-ca and store it in a secret for later usage.

Execute the following command to create a helper script that requests the token with its intended audience and renders the API response into a Secret manifest:

cat <<'EOF' > create-istio-token.sh
#!/bin/bash

DURATION=${DURATION:-"600s"}
NAMESPACE=${NAMESPACE:-"vms"}

oc -n ${NAMESPACE} create secret generic mesh-istio-token \
 --from-literal=istio-token="$(oc -n ${NAMESPACE} create token mesh \
   --audience=istio-ca --duration=${DURATION} -o jsonpath='{.status.token}')" \
 --dry-run=client -o yaml > mesh-istio-token.yml
EOF

Then make the script executable, run it, and apply the generated Secret manifest:

chmod +x create-istio-token.sh
./create-istio-token.sh
oc create -f mesh-istio-token.yml
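To sanity-check the minted token before booting the VM, you can decode its payload and confirm the audience claim. The jwt_audience helper below is purely illustrative (it is not part of oc or istioctl); it decodes the middle, payload segment of a JWT:

```shell
#!/bin/bash
# jwt_audience: print the first "aud" claim of a JWT.
# Illustrative helper only -- not part of oc or istioctl.
jwt_audience() {
  local payload padded
  # The payload is the second of the three dot-separated base64url segments.
  payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # base64url drops trailing padding; restore it before decoding.
  padded="${payload}$(printf '%*s' $(( (4 - ${#payload} % 4) % 4 )) '' | tr ' ' '=')"
  printf '%s' "$padded" | base64 -d \
    | python3 -c 'import json, sys; print(json.load(sys.stdin)["aud"][0])'
}
```

For the secret created above, `jwt_audience "$(oc -n vms get secret mesh-istio-token -o jsonpath='{.data.istio-token}' | base64 -d)"` should print istio-ca.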

Define the WorkloadGroup CustomResource (CR)

The next critical step in integrating your VM into the service mesh is creating a WorkloadGroup CR. In a standard microservice deployment, Istio's Sidecar Injector handles the bootstrapping of the Envoy proxy.

However, since we are not using automatic Sidecar injection for this VM, the WorkloadGroup acts as the blueprint for the Envoy instance running inside your legacy operating system. It defines the non-Kubernetes workload's identity and characteristics within the mesh:

  • Configuration blueprint: Tells the Istio Pilot how to provision and manage the Envoy proxy running on the external VM.
  • Identity mapping: Assigns the specific service account and service context to the VM workload.
  • Envoy bootstrapping: Holds the data the startup scripts need to start the local Envoy process and connect it to the mesh control plane.

You will need to define the WorkloadGroup YAML, referencing the service account you created (mesh) and specifying the mesh details (e.g., network and cluster name) established during the Pilot configuration phase.

The WorkloadGroup can be created by executing the following command:

cat <<'EOF' | oc create -f -
apiVersion: networking.istio.io/v1beta1
kind: WorkloadGroup
metadata:
  name: mesh
  namespace: vms
spec:
  metadata:
    labels:
      vm.kubevirt.io/name: mesh
  template:
    network: corporate
    serviceAccount: mesh
EOF
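For context on what this blueprint produces: if workload entry auto-registration is enabled in your Pilot, a WorkloadEntry is created for each VM proxy that connects and authenticates, populated from the template above. A sketch of what that generated resource might look like (the name and address are illustrative; actual values depend on your VM):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  # Auto-generated name; the suffix is derived from the VM's address/network.
  name: mesh-10.128.2.25-corporate
  namespace: vms
spec:
  address: 10.128.2.25        # illustrative VM IP
  labels:
    vm.kubevirt.io/name: mesh # copied from the WorkloadGroup template
  network: corporate
  serviceAccount: mesh
```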

Leverage VM templates

To streamline the deployment of your legacy workloads, we'll utilize OpenShift Virtualization templates. This approach allows you to define a consistent configuration, ensuring that the booting VM is ready to join the service mesh with minimal post-boot commands.

Crucially, the template and subsequent setup rely on specific disk naming conventions for successful bootstrapping:

| Disk type | Expected device name pattern | Purpose |
| --- | --- | --- |
| Boot/data disks | vd{x} (e.g., vda, vdb) | Must use virtio drivers to ensure high performance. |
| Secret/ConfigMap disks | sd{x} (e.g., sda, sdb) | Used for mounting configuration sources, such as the DataVolume containing the servicemesh-driver-disk.iso and the secret holding the Istio token. |

The template can be created by inputting the following command:

cat <<'EOF' | oc create -f -
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  annotations:
    description: RHEL mesh template
    iconClass: icon-rhel
    name.os.template.kubevirt.io/redhat: RHEL
    openshift.io/display-name: RHEL VM
  labels:
    os.template.kubevirt.io/redhat: "true"
    template.kubevirt.io/type: vm
    workload.template.kubevirt.io/server: "true"
  name: mesh
  namespace: vms
objects:
- apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    annotations:
      description: Mesh VM
    labels:
      app: ${NAME}
      os.template.kubevirt.io/redhat: "true"
      vm.kubevirt.io/template: mesh
    name: ${NAME}
  spec:
    running: false
    template:
      metadata:
        annotations:
          vm.kubevirt.io/flavor: small
          vm.kubevirt.io/os: rhel
          vm.kubevirt.io/workload: server
        labels:
          kubevirt.io/domain: ${NAME}
          kubevirt.io/size: small
          sidecar.istio.io/inject: "false"
      spec:
        domain:
          cpu:
            cores: 1
            sockets: 1
            threads: 1
          devices:
            disks:
            - disk:
                bus: virtio
              name: rootdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
            - disk:
                bus: virtio
              name: servicemesh-driver-disk
            - disk:
                bus: sata
              name: ${SERVICE_ACCOUNT_NAME}-istio-token-disk
            - disk:
                bus: sata
              name: ${SERVICE_ACCOUNT_NAME}-disk
            - disk:
                bus: sata
              name: istio-ca-disk
            interfaces:
            - masquerade: {}
              model: virtio
              name: default
            networkInterfaceMultiqueue: true
            rng: {}
          features:
            acpi: {}
            smm:
              enabled: true
          firmware:
            bootloader:
              efi: {}
          machine:
            type: q35
          memory:
            guest: 2Gi
        hostname: ${NAME}
        networks:
        - name: default
          pod: {}
        terminationGracePeriodSeconds: 180
        volumes:
        - containerDisk:
            image: registry.redhat.io/rhel9/rhel-guest-image:9.6
          name: rootdisk
        - cloudInitNoCloud:
            userData: |-
              #cloud-config
              user: cloud-user
              password: '${CLOUD_USER_PASSWORD}'
              chpasswd: { expire: False }
              bootcmd:
                - "mkdir -p /var/run/secrets/tokens /var/run/secrets/kubernetes.io/serviceaccount /etc/certs/istio-ca"
                - "mount -o uid=994,gid=994,ro /dev/sda /var/run/secrets/tokens"
                - "mount -o uid=1000 /dev/sdb /var/run/secrets/kubernetes.io/serviceaccount"
                - "mount -o uid=994,gid=994,ro /dev/sdc /etc/certs/istio-ca"
                - "mount /dev/vdc /mnt && dnf install -y /mnt/iptables* /mnt/glib* /mnt/lib* /mnt/istio-1.${ISTIOREL}.0-sidecar.rpm && umount /mnt"
                - "ln -s /etc/certs/istio-ca/root-cert.pem /etc/certs/root-cert.pem"
          name: cloudinitdisk
        - name: ${SERVICE_ACCOUNT_NAME}-istio-token-disk
          secret:
            optional: true
            secretName: ${SERVICE_ACCOUNT_NAME}-istio-token
        - name: ${SERVICE_ACCOUNT_NAME}-disk
          serviceAccount:
            serviceAccountName: ${SERVICE_ACCOUNT_NAME}
        - configMap:
            name: istio-ca-root-cert
          name: istio-ca-disk
        - name: servicemesh-driver-disk
          persistentVolumeClaim:
            claimName: servicemesh-driver-disk
parameters:
- description: Name for the new VM
  from: mesh-[a-z0-9]{16}
  generate: expression
  name: NAME
- description: Name of the Service Account to mount into the virtual machine
  name: SERVICE_ACCOUNT_NAME
  required: true
- description: The Istio release version your Pilot is based on (24, 26, ...)
  name: ISTIOREL
  required: true
- description: Randomized password for the cloud-init user
  from: '[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}'
  generate: expression
  name: CLOUD_USER_PASSWORD
EOF 
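With the template in place, a VM can be rendered from it with oc process. The parameter values below are examples: SERVICE_ACCOUNT_NAME must match the service account created earlier, and ISTIOREL must match the Istio release your Pilot is based on.

```shell
# Render the template into a VirtualMachine manifest and create it.
# NAME and CLOUD_USER_PASSWORD are generated automatically by the template.
oc process mesh -n vms \
  -p SERVICE_ACCOUNT_NAME=mesh \
  -p ISTIOREL=24 \
  | oc create -f -
```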

Finalize the VM template with Istio integration

We've finalized all the necessary components:

  • The Service Mesh Pilot configuration
  • The East-West Gateway
  • The WorkloadGroup CR
  • The custom servicemesh-driver-disk.iso
  • The specialized Istio authentication token

Now, it's time to bring the integrated VM online using our prepared template.

The bootcmd section of your VM template is the final crucial piece, ensuring the VM self-configures into the service mesh upon first boot. The startup script executes the following critical, automated steps:

  1. Istio binary deployment: Installs the Istio sidecar RPMs into the running guest at boot time (the container disk root image itself stays immutable). This ensures the necessary Envoy binary and configuration tools are available, even without Sidecar injection.
  2. Root certificate injection: Securely injects the Service Mesh's Root Certificate into the file system at /etc/certs/root-cert.pem. This is essential for the Envoy proxy to establish a secure, verified mTLS connection with the Pilot.
  3. Token injection: Places the Istio-specific authentication token (the one with the Istio audience) into the expected path at /var/run/secrets/tokens/istio-token. This token is used to authenticate the VM's identity when contacting the Pilot.
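After first boot, you can verify from inside the guest that these steps succeeded. The check_mesh_files helper below is a hypothetical convenience (not shipped with Istio or OpenShift) that reports which of the expected files are missing; the optional root argument exists only so it can be exercised outside a VM:

```shell
#!/bin/bash
# check_mesh_files: report which files the bootcmd steps should have placed
# are missing. Hypothetical helper -- not part of Istio or OpenShift.
# Usage: check_mesh_files [root]   (root defaults to "", i.e. the real /)
check_mesh_files() {
  local root=${1:-} missing=0 f
  for f in \
    /var/run/secrets/tokens/istio-token \
    /etc/certs/root-cert.pem \
    /var/run/secrets/kubernetes.io/serviceaccount/token; do
    if [ ! -e "${root}${f}" ]; then
      echo "missing: ${f}"
      missing=1
    fi
  done
  return "$missing"
}
```

Run with no arguments inside the VM; any "missing:" line points at the bootcmd step that failed.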

Note

While the generic Kubernetes ServiceAccount token isn't strictly required for joining the mesh, retaining it offers flexibility. It enables you to use the istioctl CLI tool from inside the VM for diagnostics or to perform administrative actions, provided the underlying ServiceAccount possesses the required ClusterRole/Role bindings.
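For example, assuming istioctl is installed inside the guest and the ServiceAccount has the necessary bindings, you could run:

```shell
# From inside the VM: ask the control plane for this proxy's sync state.
istioctl proxy-status
```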


Developer Preview

OpenShift Virtualization VM workloads in OpenShift Service Mesh are currently a Developer Preview feature.

To learn more about what the Developer Preview designation means and its scope of support, see: What is a developer preview feature?