
Securely connect Red Hat Integration Service Registry with Red Hat AMQ Streams

April 7, 2021
Roman Martin Gil
Related topics: Security, Java, Kubernetes, Event-Driven


    Red Hat Integration Service Registry is a datastore based on the Apicurio open source project. In my previous article, I showed you how to integrate Spring Boot with Service Registry. In this article, you'll learn how to connect Service Registry to a secure Red Hat AMQ Streams cluster.

    Connecting Service Registry with AMQ Streams

    Service Registry includes a set of pluggable storage options for storing APIs, rules, and validations. The Kafka-based storage option, provided by Red Hat AMQ Streams, is suitable for production environments where persistent storage is configured for a Kafka cluster running on Red Hat OpenShift.

    Security is not optional in a production environment, and AMQ Streams must provide it for each component you connect to. Security has two parts: authentication, which ensures a secure client connection to the Kafka cluster, and authorization, which specifies which users can access which resources. I will show you how to set up both with AMQ Streams and Service Registry.

    The AMQ Streams Operators

    AMQ Streams and Service Registry provide a set of OpenShift Operators available from the OpenShift OperatorHub. Developers use these Operators to package, deploy, and manage OpenShift application components.

    The AMQ Streams Operators provide a set of custom resource definitions (CRDs) to describe the components of a Kafka deployment. These objects—namely, Zookeeper, Brokers, Users, and Connect—provide the API that we use to manage our Kafka cluster. AMQ Streams Operators manage authentication, authorization, and the user's life cycle.

    The AMQ Streams Cluster Operator manages the Kafka custom resource, which declares the Kafka topology and the features to use.

    The AMQ Streams User Operator manages the KafkaUser custom resource. This resource declares a user for an instance of AMQ Streams, including the user's authentication, authorization, and quota definitions.

    The Service Registry Operator

    The Service Registry Operator provides a set of CRDs to describe service registry deployment components such as storage, security, and replicas. Together, these objects provide the API to manage a Service Registry instance.

    The Service Registry Operator manages the service registry life cycle through the ApicurioRegistry custom resource, which declares the service registry topology and main features.

    Authentication with AMQ Streams

    Red Hat AMQ Streams supports the following authentication mechanisms:

    • SASL SCRAM-SHA-512
    • Transport Layer Security (TLS) client authentication
    • OAuth 2.0 token-based authentication

    These mechanisms are declared in the authentication block of each listener in the Kafka definition. Each listener enforces the mechanism declared for it, so client applications must authenticate using that mechanism.

    Two ways to authenticate an AMQ Streams cluster

    First, let's see how to activate each mechanism in the AMQ Streams cluster. Service Registry must then be configured to match the mechanism activated there, and it supports only SCRAM-SHA-512 and TLS. I'll show you how to configure authentication using each of these mechanisms.

    Authenticating a Kafka cluster with SCRAM-SHA-512

    The following Kafka definition declares a Kafka cluster with SCRAM-SHA-512 authentication on the tls listener (which is encrypted with the TLS protocol):

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-kafka
    spec:
      kafka:
        listeners:
          tls:
            authentication:
              type: scram-sha-512
    

    Applying this configuration creates a set of secrets storing the TLS certificates. The secret we need for secured connections is named my-kafka-cluster-ca-cert, which holds the cluster CA certificate. We will need this value later on.

    The following KafkaUser definition declares a Kafka user with SCRAM-SHA-512 authentication:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaUser
    metadata:
      name: service-registry-scram
      labels:
        strimzi.io/cluster: my-kafka
    spec:
      authentication:
        type: scram-sha-512
    

    Applying this configuration creates a secret, named after the user, where the user credentials are stored. This secret contains the generated password used to authenticate to the Kafka cluster:

    $ oc get secrets
    NAME                    TYPE      DATA   AGE
    service-registry-scram  Opaque    1      4s
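To inspect these credentials, the secret values can be decoded locally. A minimal sketch, assuming an active `oc` session against the cluster (the helper function and its name are illustrative, not part of the product):

```shell
# Decode a single key from a Kubernetes secret to stdout.
# Example usage (run against a real cluster):
#   decode_secret_key service-registry-scram password
#   decode_secret_key my-kafka-cluster-ca-cert 'ca\.crt' > ca.crt
decode_secret_key() {
  oc get secret "$1" -o jsonpath="{.data.$2}" | base64 -d
}
```

The same helper works for the cluster CA certificate secret created earlier, since all secret values are base64-encoded in the same way.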
    

    Authenticating a Kafka cluster with TLS

    The following Kafka definition declares a Kafka cluster with TLS client authentication on the tls listener (which is encrypted with the TLS protocol):

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-kafka
    spec:
      kafka:
        listeners:
          tls:
            authentication:
              type: tls
    

    Applying this configuration creates a set of secrets storing the TLS certificates. The secret we need for secured connections is named my-kafka-cluster-ca-cert, which holds the cluster CA certificate. We will need this value later on.

    The following KafkaUser definition declares a Kafka user with TLS authentication:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaUser
    metadata:
      name: service-registry-tls
      labels:
        strimzi.io/cluster: my-kafka
    spec:
      authentication:
        type: tls
    

    Applying this configuration creates a secret, named after the user, where the user credentials are stored. This secret contains the client certificates used to authenticate to the Kafka cluster:

    $ oc get secrets
    NAME                    TYPE      DATA   AGE
    service-registry-tls    Opaque    1      4s
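The client certificate and key inside this secret can also be extracted for use by other Kafka clients. A sketch, assuming an active `oc` session and `openssl` installed locally (the function name and output file names are illustrative):

```shell
# Extract the client certificate and key from the KafkaUser secret,
# then print the certificate subject with openssl as a sanity check.
extract_tls_user_creds() {
  oc get secret service-registry-tls -o jsonpath='{.data.user\.crt}' | base64 -d > user.crt
  oc get secret service-registry-tls -o jsonpath='{.data.user\.key}' | base64 -d > user.key
  openssl x509 -in user.crt -noout -subject
}
```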
    

    Service Registry authentication

    To match the authentication mechanism activated in the AMQ Streams cluster, we need to deploy Service Registry accordingly, using the ApicurioRegistry definition:

    apiVersion: apicur.io/v1alpha1
    kind: ApicurioRegistry
    metadata:
      name: service-registry
    spec:
      configuration:
        persistence: "streams"
        streams:
          bootstrapServers: "my-kafka-kafka-bootstrap:9093"
    

    Note: At the time of this writing, Service Registry can only connect to the AMQ Streams TLS listener (normally on port 9093) when the authentication mechanism is activated in that listener. The ApicurioRegistry definition's bootstrapServers property must point to that listener's port.

    Service Registry authentication using SCRAM-SHA-512

    The following ApicurioRegistry definition declares a secured connection using a user with SCRAM-SHA-512 authentication:

    apiVersion: apicur.io/v1alpha1
    kind: ApicurioRegistry
    metadata:
      name: service-registry
    spec:
      configuration:
        persistence: "streams"
        streams:
          bootstrapServers: "my-kafka-kafka-bootstrap:9093"
          security:
            scram:
              user: service-registry-scram
              passwordSecretName: service-registry-scram
              truststoreSecretName: my-kafka-cluster-ca-cert

    We need to identify the following values in this object:

    • user: The username for the secured connection.
    • passwordSecretName: The name of the secret where the user's password is stored.
    • truststoreSecretName: The name of the secret containing the certificate authority (CA) certificates of the deployed Kafka cluster.
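Other Kafka clients can reuse the same SCRAM user. A hypothetical client.properties sketch for a generic Kafka client: the <SCRAM_PASSWORD> placeholder stands for the value stored in the service-registry-scram secret, and the truststore is assumed to have been built from the CA certificate in my-kafka-cluster-ca-cert (file paths and placeholders are illustrative):

```properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="service-registry-scram" \
  password="<SCRAM_PASSWORD>";
ssl.truststore.location=/path/to/truststore.p12
ssl.truststore.type=PKCS12
ssl.truststore.password=<TRUSTSTORE_PASSWORD>
```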

    Service Registry authentication using TLS

    The following ApicurioRegistry definition declares a secured connection using a user with TLS authentication:

    apiVersion: apicur.io/v1alpha1
    kind: ApicurioRegistry
    metadata:
      name: service-registry
    spec:
      configuration:
        persistence: "streams"
        streams:
          bootstrapServers: "my-kafka-kafka-bootstrap:9093"
          security:
            tls:
              keystoreSecretName: service-registry-tls
              truststoreSecretName: my-kafka-cluster-ca-cert

    The values that we need to identify in this object are:

    • keystoreSecretName: The name of the user secret containing the client certificates.
    • truststoreSecretName: The name of the secret containing the CA certificates of the deployed Kafka cluster.
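As with SCRAM, other Kafka clients can authenticate with the same TLS user. A hypothetical client.properties sketch for mutual TLS: the keystore is assumed to have been built from the client certificate and key in the service-registry-tls secret, and the truststore from my-kafka-cluster-ca-cert (file paths and placeholders are illustrative):

```properties
security.protocol=SSL
ssl.keystore.location=/path/to/user.p12
ssl.keystore.type=PKCS12
ssl.keystore.password=<KEYSTORE_PASSWORD>
ssl.truststore.location=/path/to/truststore.p12
ssl.truststore.type=PKCS12
ssl.truststore.password=<TRUSTSTORE_PASSWORD>
```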

    Authorization with AMQ Streams

    AMQ Streams supports authorization using the SimpleAclAuthorizer, applied globally to all listeners used for client connections. This mechanism uses access control lists (ACLs) to define which users have access to which resources.

    When authorization is enabled in the Kafka cluster, access is denied by default. Rules must be declared for each user that needs to operate within the Kafka cluster.

    The Kafka definition

    The following Kafka definition activates authorization in the Kafka cluster:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-kafka
    spec:
      kafka:
        authorization:
          type: simple

    The KafkaUser definition

    An ACL is declared for each user in the KafkaUser definition. The acls section (see below) contains a list of rules, where each rule grants an operation on a resource:

    • Resource type: Identifies the type of object managed in Kafka; objects include topics, consumer groups, clusters, transaction IDs, and delegation tokens.
    • Resource name: Identifies the resource where the rule is applied. The resource name could be defined as a literal, to identify one resource, or as a prefix pattern, to identify a list of resources.
    • Operation: Declares the kind of operations allowed. Refer to the AMQ Streams documentation for the full list of operations available for each resource type.

    For a Service Registry user to work successfully with our secured AMQ Streams cluster, we must declare the following rules specifying what the user is allowed to do:

    • Read its own consumer group.
    • Create, read, write, and describe on a global ID topic (global-id-topic).
    • Create, read, write, and describe on a storage topic (storage-topic).
    • Create, read, write, and describe on its own local changelog topics.
    • Describe and write on its own transactional IDs (prefixed with its name).
    • Read on a consumer offset topic (__consumer_offsets).
    • Read on a transaction state topic (__transaction_state).
    • Write idempotently on a cluster.
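The rules above live in the acls list nested under spec.authorization in the KafkaUser definition. A minimal skeleton showing where the list sits, using the SCRAM user from earlier (only the first rule is shown):

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: service-registry-scram
  labels:
    strimzi.io/cluster: my-kafka
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      - resource:
          type: group
          name: service-registry
        operation: Read
      # ...remaining rules follow the same pattern
```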

    The ACL definition

    Here is an example ACL definition:

        acls:
          # Group ID used to consume from the topics used by the Service Registry.
          # The name equals the metadata.name property in the ApicurioRegistry object
          - resource:
              type: group
              name: service-registry
            operation: Read
          # Rules for the Global global-id-topic
          - resource:
              type: topic
              name: global-id-topic
            operation: Read
          - resource:
              type: topic
              name: global-id-topic
            operation: Describe
          - resource:
              type: topic
              name: global-id-topic
            operation: Write
          - resource:
              type: topic
              name: global-id-topic
            operation: Create
          # Rules for the Global storage-topic
          - resource:
              type: topic
              name: storage-topic
            operation: Read
          - resource:
              type: topic
              name: storage-topic
            operation: Describe
          - resource:
              type: topic
              name: storage-topic
            operation: Write
          - resource:
              type: topic
              name: storage-topic
            operation: Create
          # Rules for the local topics created by our Service Registry instance
          # The prefix value equals the metadata.name property in the ApicurioRegistry object
          - resource:
              type: topic
              name: service-registry-
              patternType: prefix
            operation: Read
          - resource:
              type: topic
              name: service-registry-
              patternType: prefix
            operation: Describe
          - resource:
              type: topic
              name: service-registry-
              patternType: prefix
            operation: Write
          - resource:
              type: topic
              name: service-registry-
              patternType: prefix
            operation: Create
          # Rules for the local transactionalIds created by our Service Registry instance
          # The prefix equals the metadata.name property in the ApicurioRegistry object
          - resource:
              type: transactionalId
              name: service-registry-
              patternType: prefix
            operation: Describe
          - resource:
              type: transactionalId
              name: service-registry-
              patternType: prefix
            operation: Write
          # Rules for internal Apache Kafka topics
          - resource:
              type: topic
              name: __consumer_offsets
            operation: Read
          - resource:
              type: topic
              name: __transaction_state
            operation: Read
          # Rules for Cluster objects
          - resource:
              type: cluster
            operation: IdempotentWrite

    Note that activating authorization in AMQ Streams does not affect the ApicurioRegistry definition; authorization is governed entirely by the ACL definitions in the KafkaUser objects.
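Once everything is applied, a quick smoke test confirms the registry is reachable. A sketch, assuming an active `oc` session and that a route named service-registry was exposed for the instance (the function name and route name are illustrative; /api/artifacts is the Apicurio Registry v1 REST path):

```shell
# Look up the Service Registry route and list the artifacts it stores.
check_registry() {
  local host
  host=$(oc get route service-registry -o jsonpath='{.spec.host}')
  curl -s "http://${host}/api/artifacts"
}
```

An empty JSON array in the response is expected on a fresh instance with no registered artifacts.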

    Summary

    Connecting Service Registry to a secured AMQ Streams cluster ensures that your production environment meets your security requirements. This article introduced the Service Registry and AMQ Streams components concerned with security and showed you how to configure them successfully.

    For a deeper understanding and analysis, please refer to the following references:

    • Using AMQ Streams on OpenShift, Chapter 12: Security
    • Using the User Operator of AMQ Streams (Red Hat Integration 2020-Q2 documentation)
    • Getting started with Service Registry (Red Hat Integration 2020-Q2 documentation)
    Last updated: April 5, 2021
