Set up Red Hat AMQ Streams custom certificates on OpenShift

December 18, 2019
Federico Valeri

Related topics: Kubernetes
Related products: Streams for Apache Kafka, Red Hat OpenShift Container Platform

    Secure communication over a computer network is one of the most important requirements for a system, and yet it can be difficult to set up correctly. This example shows how to set up Red Hat AMQ Streams' end-to-end TLS encryption using a custom X.509 CA certificate on the Red Hat OpenShift platform.

    Prerequisites

    You need to have the following in place before you can proceed with this example:

    • An OpenShift cluster up and running with at least four CPUs and 5GB of memory.
    • A custom X.509 CA certificate in PEM format (along with its chain).
    • An active Red Hat Customer Portal account.
    • The Red Hat AMQ Streams 1.3.0 Installation and Example package.
    • An OpenShift user with the cluster-admin role.

    The procedure

    Before we start, let's define a few handy variables:

    USER="developer"
    PROJECT="streams"
    CA_USER="system:admin"
    RA_SECRET="reg-auth-secret"
    CLUSTER="my-cluster"
    

    Set up a new project

    The first step is to log in as cluster-admin and create a new project to host our cluster. We need this role because we have to install the custom resource definitions (CRDs) required by the Cluster Operator (CO). We then grant the admin role to the user so they can manage the project once it is ready:

    $ oc login -u $CA_USER
    $ oc new-project $PROJECT
    $ oc adm policy add-role-to-user admin $USER
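
    If you want to verify the grant, you can log in as the developer, confirm access to the new project, and then switch back to cluster-admin for the next steps (this assumes password-less logins, as on a local development cluster):

    $ oc login -u $USER
    $ oc project $PROJECT
    $ oc login -u $CA_USER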
    

    To be able to download images from the Red Hat Container Registry, we also need to add an authentication Secret (use your credentials here):

    $ oc create secret docker-registry $RA_SECRET \
          --docker-server=registry.redhat.io \
          --docker-username=<portal-username> \
          --docker-password=<portal-password>
    $ oc secrets link default $RA_SECRET --for=pull
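
    You can confirm that the Secret is now linked for image pulls by inspecting the default service account:

    $ oc get sa default -o jsonpath='{.imagePullSecrets[*].name}{"\n"}'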
    

    Then, unzip the Installation and Examples distribution package and replace the default namespace in the installation files with your project's name:

    $ TMP="/tmp/$PROJECT" && rm -rf $TMP && mkdir -p $TMP
    $ unzip -qq amq-streams-1.3.0-ocp-install-examples.zip -d $TMP
    $ sed -i -e "s/namespace: .*/namespace: $PROJECT/g" $TMP/install/cluster-operator/*RoleBinding*.yaml
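
    A quick grep confirms that the namespace replacement worked:

    $ grep "namespace:" $TMP/install/cluster-operator/*RoleBinding*.yaml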
    

    Now, we are ready to install all required CRDs and the Strimzi CO:

    $ oc apply -f $TMP/install/cluster-operator
    $ oc secrets link strimzi-cluster-operator $RA_SECRET --for=pull
    $ oc set env deploy/strimzi-cluster-operator STRIMZI_IMAGE_PULL_SECRETS=$RA_SECRET
    
    $ oc set env deploy/strimzi-cluster-operator STRIMZI_NAMESPACE=$PROJECT
    $ oc apply -f $TMP/install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml
    $ oc apply -f $TMP/install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml
    $ oc apply -f $TMP/install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml
    $ oc apply -f $TMP/install/strimzi-admin
    $ oc adm policy add-cluster-role-to-user strimzi-admin $USER
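
    Before moving on, it is worth waiting for the Cluster Operator deployment to finish rolling out and checking that its pod is running:

    $ oc rollout status deploy/strimzi-cluster-operator
    $ oc get pods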
    

    Configure the custom certificate

    After these commands finish, we can configure our custom X.509 CA certificate. I assume you already have the following files:

    • rootca.pem: The root Certificate Authority (CA) in our domain (optional).
    • intermca.pem: An intermediate CA used to sign the certificate in a specific context (optional).
    • myca.pem: Our custom CA certificate to use with Apache Kafka.
    • myca-prk.pem: The private key for our custom CA certificate.

    All the CAs in the chain must be flagged as a CA in the X509v3 Basic Constraints extension. This means that you cannot use a regular non-CA certificate to replace the self-generated CA (see also the additional notes at the end), because this certificate is used to sign the certificates for inter-broker communication.

    After printing out your custom certificate, you should see this property:

    $ openssl x509 -inform pem -in myca.pem -noout -text
    ...
    X509v3 Basic Constraints: 
        CA:TRUE
    

    When you have a valid CA certificate, create a bundle file like this:

    $ cat myca.pem intermca.pem rootca.pem > bundle.pem
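
    You can also verify that the chain in the bundle is consistent (drop the -untrusted option if you have no intermediate CA):

    $ openssl verify -CAfile rootca.pem -untrusted intermca.pem myca.pem
    myca.pem: OK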
    

    Then, create all the required Secrets containing our custom CA and add the labels that the Cluster Operator expects. This must be done before creating our custom cluster (next step):

    $ oc create secret generic $CLUSTER-cluster-ca-cert --from-file=ca.crt=bundle.pem
    $ oc label secret $CLUSTER-cluster-ca-cert strimzi.io/kind=Kafka strimzi.io/cluster=$CLUSTER
    
    $ oc create secret generic $CLUSTER-cluster-ca --from-file=ca.key=myca-prk.pem
    $ oc label secret $CLUSTER-cluster-ca strimzi.io/kind=Kafka strimzi.io/cluster=$CLUSTER
    
    $ oc create secret generic $CLUSTER-clients-ca-cert --from-file=ca.crt=bundle.pem
    $ oc label secret $CLUSTER-clients-ca-cert strimzi.io/kind=Kafka strimzi.io/cluster=$CLUSTER
    
    $ oc create secret generic $CLUSTER-clients-ca --from-file=ca.key=myca-prk.pem
    $ oc label secret $CLUSTER-clients-ca strimzi.io/kind=Kafka strimzi.io/cluster=$CLUSTER
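
    At this point, all four Secrets should exist and carry the Strimzi labels:

    $ oc get secrets -l strimzi.io/cluster=$CLUSTER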
    

    Finally, we can deploy our cluster definition. Note how we set generateCertificateAuthority to false to instruct the CO not to generate a self-signed CA, which would otherwise overwrite our previous configuration.

    Example: Ephemeral cluster creation (not for production)

    Here we create a small ephemeral cluster just for the sake of this example. Do not use the exact same setup for production:

    $ oc create -f - <<EOF
    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: $CLUSTER
    spec:
      kafka:
        version: "2.3.0"
        replicas: 1
        config:
          num.partitions: 1
          default.replication.factor: 1
          log.message.format.version: "2.3"
        clusterCa:
          generateCertificateAuthority: false
        clientsCa:
          generateCertificateAuthority: false
        listeners:
          plain: {}
          tls: {}
          external:
            type: route
        readinessProbe:
          initialDelaySeconds: 30
          timeoutSeconds: 10
        livenessProbe:
          initialDelaySeconds: 30
          timeoutSeconds: 10
        template:
          pod:
            terminationGracePeriodSeconds: 120
        storage:
          type: ephemeral
        resources:
          requests:
            cpu: "1000m"
            memory: "2Gi"
          limits:
            cpu: "1000m"
            memory: "2Gi"
        tlsSidecar:
          resources:
            limits:
              cpu: "100m"
              memory: "128Mi"
            requests:
              cpu: "100m"
              memory: "128Mi"
      zookeeper:
        replicas: 1
        readinessProbe:
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          initialDelaySeconds: 15
          timeoutSeconds: 5
        storage:
          type: ephemeral
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
          limits:
            cpu: "500m"
            memory: "1Gi"
        tlsSidecar:
          resources:
            limits:
              cpu: "100m"
              memory: "128Mi"
            requests:
              cpu: "100m"
              memory: "128Mi"
      entityOperator:
        topicOperator:
          resources:
            limits:
              cpu: "250m"
              memory: "256Mi"
            requests:
              cpu: "250m"
              memory: "256Mi"
        userOperator:
          resources:
            limits:
              cpu: "250m"
              memory: "256Mi"
            requests:
              cpu: "250m"
              memory: "256Mi"
        tlsSidecar:
          resources:
            limits:
              cpu: "100m"
              memory: "128Mi"
            requests:
              cpu: "100m"
              memory: "128Mi"
    EOF
    

    Once the cluster is up and running, you might want to check that the custom CA is correctly loaded:

    $ oc get pods
    $ oc logs strimzi-cluster-operator-<uuid>
    $ oc logs $CLUSTER-kafka-0 -c kafka
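
    You can also watch the cluster pods until they are all ready (the strimzi.io/cluster label is applied by the operator to every pod it manages):

    $ oc get pods -w -l strimzi.io/cluster=$CLUSTER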
    

    Set up the Java client

    Create and use a truststore in Java KeyStore (JKS) format for one-way TLS authentication:

    $ oc extract secret/$CLUSTER-cluster-ca-cert --keys=ca.crt --to=- > ca.pem
    $ keytool -import -noprompt -alias root -file ca.pem -keystore truststore.jks -storepass secret
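
    With the truststore in place, a minimal client configuration only needs the SSL protocol and the truststore details. For example, here is a quick smoke test with the Kafka command line tools (my-topic is a hypothetical topic name; use the internal bootstrap address shown here from inside the cluster, or the route address from the next section from outside):

    $ bin/kafka-console-producer.sh \
          --broker-list $CLUSTER-kafka-bootstrap:9093 \
          --topic my-topic \
          --producer-property security.protocol=SSL \
          --producer-property ssl.truststore.location=truststore.jks \
          --producer-property ssl.truststore.password=secret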
    

    If you want to access Kafka from outside OpenShift, then you also need to use this bootstrap URL:

    $ echo $(oc get routes $CLUSTER-kafka-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}'):443
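
    To check that the route is actually serving certificates signed by your custom CA, you can inspect the TLS handshake (the -servername flag matters here, because passthrough routes use SNI for routing):

    $ HOST=$(oc get routes $CLUSTER-kafka-bootstrap -o=jsonpath='{.status.ingress[0].host}')
    $ openssl s_client -connect $HOST:443 -servername $HOST -showcerts < /dev/null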
    

    Additional notes

    We already know that most security teams won't easily release CA certificates. We are working on an enhancement to provide the option to use a non-CA certificate for Kafka listeners, leaving the internal self-generated CA to secure the inter-broker communication.

    Beware that when using a custom CA as explained in this post, you are responsible for certificate renewals; this process is fully automated only when using self-generated certificates. In either case, after a renewal you will have to recreate the client truststore as described above.
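
    For example, one way to refresh a client truststore after a CA renewal is to re-extract the certificate and replace the old entry (this assumes the alias and password used earlier):

    $ oc extract secret/$CLUSTER-cluster-ca-cert --keys=ca.crt --to=- > ca.pem
    $ keytool -delete -alias root -keystore truststore.jks -storepass secret
    $ keytool -import -noprompt -alias root -file ca.pem -keystore truststore.jks -storepass secret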

    Last updated: March 29, 2023
