Cross-cloud identity framework with SPIFFE/SPIRE on OpenShift

Address cross-cloud identity challenges with SPIFFE/SPIRE on OpenShift by deploying and working with applications in a no-cost OpenShift cluster.

Now that you have deployed SPIFFE/SPIRE to Red Hat OpenShift and created the necessary assets within the AWS cloud, the final step is to deploy a sample application to access the S3 bucket using the identity provided by SPIFFE/SPIRE.

In this lesson, you will:

  • Deploy a sample application to access the S3 bucket using the identity provided by SPIFFE/SPIRE.

Set up a namespace and permissions

Create a namespace called demo for the sample application:

oc apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: demo
EOF
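
Before proceeding, you can confirm that the namespace exists:

# Verify the namespace was created
oc get namespace demo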

The SPIFFE Workload API is made available to applications using a CSI driver. Most OpenShift environments use the restricted-v2 security context constraint (SCC) by default. When running in IBM Cloud, apply the following policies to allow the workload to use the restricted-v2-csi SCC:

oc apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:openshift:scc:restricted-v2-csi
rules:
- apiGroups:
  - security.openshift.io
  resourceNames:
  - restricted-v2-csi
  resources:
  - securitycontextconstraints
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system:openshift:scc:restricted-v2-csi
  namespace: demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:restricted-v2-csi
subjects:
- kind: ServiceAccount
  name: demo
  namespace: demo
EOF
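
As an optional check, you can verify that the demo service account is permitted to use the SCC. This assumes the restricted-v2-csi SCC already exists in the cluster:

# Should print "yes" if the role binding is in effect
oc -n demo auth can-i use securitycontextconstraints/restricted-v2-csi --as=system:serviceaccount:demo:demo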

Deploy the sample application

Note: This section requires the following environment variables, defined previously:

  • ROLE_ARN
  • S3_BUCKET
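
If these variables are no longer set in your shell, re-export them before continuing. The values below are placeholders only; substitute the role ARN and bucket name you created earlier:

# Placeholder values -- replace with your own role ARN and bucket name
export ROLE_ARN=arn:aws:iam::123456789012:role/role-my-spire-demo
export S3_BUCKET=my-spire-demo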

Finally, apply the following to create a service account called demo and deploy the sample application:

oc apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo
  namespace: demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: demo
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        identity_template: "true"
        app: demo
    spec:
      hostPID: false
      hostNetwork: false
      dnsPolicy: ClusterFirst
      serviceAccountName: demo
      containers:
        - name: demo
          image: docker.io/tsidentity/spire-demo:latest
          env:
          - name: SPIFFE_ENDPOINT_SOCKET
            value: "/spiffe-workload-api/spire-agent.sock"
          - name: AWS_ROLE_ARN
            value: "${ROLE_ARN}"
          - name: S3_AUD
            value: "mys3"
          - name: "S3_CMD"
            value: "aws s3 cp s3://${S3_BUCKET}/test -"
          - name: AWS_WEB_IDENTITY_TOKEN_FILE
            value: "/tmp/token.jwt"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            readOnlyRootFilesystem: false 
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
          volumeMounts:
          - name: spiffe-workload-api
            mountPath: /spiffe-workload-api
          - name: empty 
            mountPath: /.aws
      volumes:
      - name: spiffe-workload-api
        csi:
          driver: "csi.spiffe.io"
          readOnly: true
      - name: empty
        emptyDir: {}
EOF

Confirm that the application is running within the demo namespace.

# Obtain a list of pods in the demo namespace
oc get pods -n demo

Rerun the command until the pod reports a Running status.
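
If the pod does not reach the Running state, describing it will usually surface the cause, such as a failure to mount the csi.spiffe.io volume:

# Inspect pod events for volume mount or scheduling failures
oc -n demo describe pod -l app=demo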

The demo image contains everything you need to run the demo, including the AWS CLI, the spire-agent CLI, and a demo script that automates the steps. The image for the demo container can be found in this repository.

All of the demo variables, including the S3 bucket location and the IAM role ARN, are passed to the container as environment variables.
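
You can confirm the injected values without opening a shell in the pod:

# List the demo-related environment variables in the running container
oc -n demo exec deploy/demo -- env | grep -E 'AWS_|S3_|SPIFFE_'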

Execute the workload

Obtain a shell session in the running demo pod:

oc -n demo rsh deployment/demo

The container includes a demo script that assembles all of the required commands for you from the environment variables injected into the container. To run the script, type:

demo-s3.sh 

Move through the demo by pressing the space bar. The result should look similar to the following:

$ /opt/spire/bin/spire-agent api fetch jwt -audience mys3 -socketPath /spiffe-workload-api/spire-agent.sock
token(spiffe://mc-ztna-04-9d995c4a8c7c5f281ce13d5467ff6a94-0000.us-east.containers.appdomain.cloud/ns/demo/sa/demo):
eyJhbGciOiJSUzI1NiIsImtpZCI6ImxCRXl0d05MMkpibWxGa1JIaHUybzFoTHFxVEtnWWVDIiwidHlwIjoiSl
. . . . 
$ /opt/spire/bin/spire-agent api fetch jwt -audience mys3 -socketPath /spiffe-workload-api/spire-agent.sock | sed -n '2p' | xargs > /tmp/token.jwt
$ AWS_ROLE_ARN=arn:aws:iam::203747186855:role/role-mc-ztna-demo AWS_WEB_IDENTITY_TOKEN_FILE=/tmp/token.jwt aws s3 cp s3://mc-ztna-demo/test -
my secret message

So, what did the demo script illustrate?

  1. First, the spire-agent CLI is used to obtain a JWT from the Workload API, which is served via the CSI driver and mounted within the container at /spiffe-workload-api/spire-agent.sock.

  2. Then, two environment variables are set: AWS_ROLE_ARN, from the value injected into the container, and AWS_WEB_IDENTITY_TOKEN_FILE, pointing to the file that holds the JWT obtained in the previous step.

  3. Finally, the file stored in the AWS S3 bucket is retrieved and its contents are printed to the screen, verifying that the content was accessed using the identity provided by SPIFFE/SPIRE. (A manual equivalent of these steps is sketched below.)
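
If you would rather run the steps yourself than use the demo script, the following sketch reproduces them manually. Run it inside the pod; it relies on the AWS_ROLE_ARN, AWS_WEB_IDENTITY_TOKEN_FILE, and S3_CMD variables injected by the Deployment:

# 1. Fetch a JWT from the Workload API and keep only the token line
/opt/spire/bin/spire-agent api fetch jwt -audience mys3 -socketPath /spiffe-workload-api/spire-agent.sock | sed -n '2p' | xargs > /tmp/token.jwt
# 2. Retrieve the file from S3; the AWS CLI exchanges the token for temporary credentials
eval "$S3_CMD"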

For more information about the demo application, please refer to the following article.

Clean up

To remove the resources that were created as part of this use case, follow the steps in the following sections.

AWS clean up

This section describes the commands to clean up the resources created in AWS.

Note: This section requires the following environment variables, defined previously:

  • OIDC_SERVER
  • S3_BUCKET

aws s3 rb s3://$S3_BUCKET --force
aws iam delete-role-policy --role-name role-$S3_BUCKET --policy-name policy-$S3_BUCKET
aws iam delete-role --role-name role-$S3_BUCKET
export OIDC_ARN=$(aws iam list-open-id-connect-providers --output=json | jq -r '.OpenIDConnectProviderList[].Arn' | grep $OIDC_SERVER)
aws iam delete-open-id-connect-provider --open-id-connect-provider-arn $OIDC_ARN
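
To confirm the AWS resources were removed, both of the following commands should now fail with NoSuchBucket and NoSuchEntity errors, respectively:

# Both commands should report that the resource no longer exists
aws s3 ls s3://$S3_BUCKET
aws iam get-role --role-name role-$S3_BUCKET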

OpenShift clean up

oc -n demo delete deploy demo
oc -n demo delete sa demo
oc delete ns demo
helm --namespace spire-server uninstall spire
helm --namespace spire-server uninstall spire-crds
oc delete ns spire-server
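
As a final check, both namespaces should eventually be gone (they may show a Terminating status briefly):

# Both lookups should fail once cleanup is complete
oc get namespace demo spire-server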

Summary

This learning path demonstrated a simple use case of cross-cloud access using SPIRE. In future lessons, we will demonstrate more complex use cases, such as communication between services across multiple public cloud and on-premises platforms. These use cases leverage SPIRE capabilities such as nesting and federation to scale across clouds. We will also demonstrate how SPIRE integrates with Sigstore and provides identities to the worker nodes that are used for signing and deploying images within a trusted software supply chain framework.

Ready to learn more? Read this developer e-book: Build a Foundation of security with Zero Trust and automation
