Red Hat Integration Service Registry is a datastore based on the Apicurio open source project. In my previous article, I showed you how to integrate Spring Boot with Service Registry. In this article, you'll learn how to connect Service Registry to a secure Red Hat AMQ Streams cluster.
Connecting Service Registry with AMQ Streams
Service Registry includes a set of pluggable storage options for storing APIs, rules, and validations. The Kafka-based storage option, provided by Red Hat AMQ Streams, is suitable for production environments where persistent storage is configured for a Kafka cluster running on Red Hat OpenShift.
Security is not optional in a production environment, and AMQ Streams must provide it for each component you connect to. Security has two parts: authentication, which ensures a secure client connection to the Kafka cluster, and authorization, which specifies which users can access which resources. I will show you how to set up authentication and authorization with AMQ Streams and Service Registry.
The AMQ Streams Operators
AMQ Streams and Service Registry provide a set of OpenShift Operators available from the OpenShift OperatorHub. Developers use these Operators to package, deploy, and manage OpenShift application components.
The AMQ Streams Operators provide a set of custom resource definitions (CRDs) to describe the components of a Kafka deployment. These objects—namely, Zookeeper, Brokers, Users, and Connect—provide the API that we use to manage our Kafka cluster. AMQ Streams Operators manage authentication, authorization, and the user's life cycle.
The AMQ Streams Cluster Operator manages the Kafka schema reference resource, which declares the Kafka topology and features to use.
The AMQ Streams User Operator manages the KafkaUser schema reference resource. This resource declares a user for an instance of AMQ Streams, including the user's authentication, authorization, and quota definitions.
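If you want to confirm that both Operators and their CRDs are available in your cluster before creating any resources, a quick check like the following can help. This is just a sketch; the exact CRD names depend on the Operator versions installed, but the AMQ Streams CRDs belong to the kafka.strimzi.io group and the ApicurioRegistry CRD to the apicur.io group used later in this article.

# List the CRDs installed by the AMQ Streams and Service Registry Operators
$ oc get crd | grep -E 'strimzi|apicur'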
The Service Registry Operator
The Service Registry Operator provides a set of CRDs to describe service registry deployment components such as storage, security, and replicas. Together, these objects provide the API to manage a Service Registry instance.
The Service Registry Operator uses the ApicurioRegistry schema reference to manage the service registry life cycle. The ApicurioRegistry resource declares the service registry topology and main features, and the Apicurio Operator manages the ApicurioRegistry object.
Authentication with AMQ Streams
Red Hat AMQ Streams supports the following authentication mechanisms:
- SASL SCRAM-SHA-512
- Transport Layer Security (TLS) client authentication
- OAuth 2.0 token-based authentication
These mechanisms are declared in the authentication block of each listener in the Kafka definition. Each listener enforces the authentication mechanism it declares, so client applications must authenticate using that mechanism.
Two ways to authenticate an AMQ Streams cluster
First, let's see how to activate each mechanism in the AMQ Streams cluster. Service Registry must then be configured to match the authentication mechanism activated in the AMQ Streams cluster, and it supports only SCRAM-SHA-512 and TLS. I'll show you how to configure authentication using each of these mechanisms.
Authenticating a Kafka cluster with SCRAM-SHA-512
The following Kafka definition declares a Kafka cluster with SCRAM-SHA-512 authentication on the secured listener (the listener encrypted with the TLS protocol):
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-kafka
spec:
  kafka:
    listeners:
      tls:
        authentication:
          type: scram-sha-512
Applying this configuration creates a set of secrets where the TLS certificates are stored. The secret we need for secured connections is my-kafka-cluster-ca-cert, which holds the cluster's certificate authority (CA) certificate. Note that we will need this value later on.
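If you want to verify that this secret exists and inspect the CA certificate it holds, something like the following should work. This is a sketch; it assumes the certificate is stored under the ca.crt key, which is the key AMQ Streams normally uses for the cluster CA secret.

# Confirm that the cluster CA certificate secret was created by the Cluster Operator
$ oc get secret my-kafka-cluster-ca-cert

# Extract the CA certificate to a local file and inspect it
$ oc get secret my-kafka-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
$ openssl x509 -in ca.crt -noout -subject -dates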
The following KafkaUser definition declares a Kafka user with SCRAM-SHA-512 authentication:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: service-registry-scram
  labels:
    strimzi.io/cluster: my-kafka
spec:
  authentication:
    type: scram-sha-512
Applying this configuration creates a secret, named after the user, where the user credentials are stored. This secret contains the generated password for authenticating to the Kafka cluster:
$ oc get secrets
NAME                     TYPE     DATA   AGE
service-registry-scram   Opaque   1      4s
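If you need to check the generated credentials, for example while debugging a failed connection, you can decode the password from the user secret. This is a sketch; it assumes the password is stored under the password key, which is the key the User Operator normally uses for SCRAM-SHA-512 users.

# Decode the SCRAM-SHA-512 password generated for the service-registry-scram user
$ oc get secret service-registry-scram -o jsonpath='{.data.password}' | base64 -d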
Authenticating a Kafka cluster with TLS
The following Kafka definition declares a Kafka cluster with TLS client authentication on the secured listener (the listener encrypted with the TLS protocol):
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-kafka
spec:
  kafka:
    listeners:
      tls:
        authentication:
          type: tls
Applying this configuration creates a set of secrets where the TLS certificates are stored. The secret we need for secured connections is my-kafka-cluster-ca-cert, which holds the cluster's CA certificate. Note that we will need this value later on.
The following KafkaUser definition declares a Kafka user with TLS authentication:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: service-registry-tls
  labels:
    strimzi.io/cluster: my-kafka
spec:
  authentication:
    type: tls
Applying this configuration creates a secret, named after the user, where the user credentials are stored. This secret contains the client certificates for authenticating to the Kafka cluster:
$ oc get secrets
NAME                   TYPE     DATA   AGE
service-registry-tls   Opaque   1      4s
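If you want to see which certificates were generated for the user, you can list the keys stored in the secret and extract the client certificate. This is a sketch; it assumes the client certificate and key are stored under the user.crt and user.key keys, which is how the User Operator normally names them for TLS users.

# List the data keys stored in the TLS user secret
$ oc describe secret service-registry-tls

# Extract the client certificate and inspect its subject
$ oc get secret service-registry-tls -o jsonpath='{.data.user\.crt}' | base64 -d | openssl x509 -noout -subject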
Service Registry authentication
To connect using the authentication mechanism activated in the AMQ Streams cluster, we need to deploy Service Registry accordingly, using the ApicurioRegistry definition:
apiVersion: apicur.io/v1alpha1
kind: ApicurioRegistry
metadata:
  name: service-registry
spec:
  configuration:
    persistence: "streams"
    streams:
      bootstrapServers: "my-kafka-kafka-bootstrap:9093"
Note: At the time of this writing, Service Registry can only connect to the AMQ Streams TLS listener (normally on port 9093) when the authentication mechanism is activated in that listener. The ApicurioRegistry definition's bootstrapServers property must point to that listener port.
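The bootstrap address follows the <cluster-name>-kafka-bootstrap pattern that the Cluster Operator uses for its bootstrap service, so you can confirm the service and the TLS listener port before deploying the registry. A minimal check, as a sketch:

# Confirm the bootstrap service and the ports it exposes (the TLS listener is normally 9093)
$ oc get service my-kafka-kafka-bootstrap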
Service Registry authentication using SCRAM-SHA-512
The following ApicurioRegistry definition declares a secured connection using a user with SCRAM-SHA-512 authentication:
apiVersion: apicur.io/v1alpha1
kind: ApicurioRegistry
metadata:
  name: service-registry
spec:
  configuration:
    persistence: "streams"
    streams:
      bootstrapServers: "my-kafka-kafka-bootstrap:9093"
      security:
        scram:
          user: service-registry-scram
          passwordSecretName: service-registry-scram
          truststoreSecretName: my-kafka-cluster-ca-cert
We need to identify the following values in this object:
- user: The username for the secured connection.
- passwordSecretName: The name of the secret where the user's password is stored.
- truststoreSecretName: The name of the secret with the certificate authority (CA) certificates for the deployed Kafka cluster.
Service Registry authentication using TLS
The following ApicurioRegistry definition declares a secured connection using a user with TLS authentication:
apiVersion: apicur.io/v1alpha1
kind: ApicurioRegistry
metadata:
  name: service-registry
spec:
  configuration:
    persistence: "streams"
    streams:
      bootstrapServers: "my-kafka-kafka-bootstrap:9093"
      security:
        tls:
          keystoreSecretName: service-registry-tls
          truststoreSecretName: my-kafka-cluster-ca-cert
The values that we need to identify in this object are:
- keystoreSecretName: The name of the user secret with the client certificates.
- truststoreSecretName: The name of the secret with the CA certificates for the deployed Kafka cluster.
Authorization with AMQ Streams
AMQ Streams supports authorization using SimpleAclAuthorizer globally for all listeners used for client connections. This mechanism uses access control lists (ACLs) to define which users have access to which resources.
When authorization is enabled in the Kafka cluster, access is denied by default. Rules must therefore be declared for each user that wishes to operate within the Kafka cluster.
The Kafka definition
The following Kafka definition activates authorization in the Kafka cluster:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-kafka
spec:
  kafka:
    authorization:
      type: simple
The KafkaUser definition
An ACL is declared for each user in the KafkaUser definition. The acls section (see below) includes a list of resources, where each resource is declared as a new rule:
- Resource type: Identifies the type of object managed in Kafka; objects include topics, consumer groups, clusters, transaction IDs, and delegation tokens.
- Resource name: Identifies the resource where the rule is applied. The resource name could be defined as a literal, to identify one resource, or as a prefix pattern, to identify a list of resources.
- Operation: Declares the kind of operations allowed. A full list of the operations available for each resource type can be found in the AMQ Streams documentation. The sketch after this list shows how these fields combine in a single rule.
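As a minimal illustration of how these fields fit together, here is a sketch of a single rule that allows a user to read one topic identified by its literal name. The topic name my-topic is just a placeholder; the field names follow the KafkaUser acls syntax used in the full example later in this article.

acls:
  # Allow the user to read the topic named exactly "my-topic"
  - resource:
      type: topic
      name: my-topic
      patternType: literal
    operation: Read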
For a Service Registry user to work successfully with our secured AMQ Streams cluster, we must declare the following rules specifying what the user is allowed to do:
- Read its own consumer group.
- Create, read, write, and describe on a global ID topic (global-id-topic).
- Create, read, write, and describe on a storage topic (storage-topic).
- Create, read, write, and describe on its own local changelog topics.
- Describe and write transactional IDs on its own local group.
- Read on a consumer offset topic (__consumer_offsets).
- Read on a transaction state topic (__transaction_state).
- Write idempotently on a cluster.
The ACL definition
Here is an example ACL definition:
acls:
  # Group Id to consume information for the different topics used by the Service Registry.
  # Name equals to metadata.name property in ApicurioRegistry object
  - resource:
      type: group
      name: service-registry
    operation: Read
  # Rules for the global global-id-topic
  - resource:
      type: topic
      name: global-id-topic
    operation: Read
  - resource:
      type: topic
      name: global-id-topic
    operation: Describe
  - resource:
      type: topic
      name: global-id-topic
    operation: Write
  - resource:
      type: topic
      name: global-id-topic
    operation: Create
  # Rules for the global storage-topic
  - resource:
      type: topic
      name: storage-topic
    operation: Read
  - resource:
      type: topic
      name: storage-topic
    operation: Describe
  - resource:
      type: topic
      name: storage-topic
    operation: Write
  - resource:
      type: topic
      name: storage-topic
    operation: Create
  # Rules for the local topics created by our Service Registry instance
  # Prefix value equals to metadata.name property in ApicurioRegistry object
  - resource:
      type: topic
      name: service-registry-
      patternType: prefix
    operation: Read
  - resource:
      type: topic
      name: service-registry-
      patternType: prefix
    operation: Describe
  - resource:
      type: topic
      name: service-registry-
      patternType: prefix
    operation: Write
  - resource:
      type: topic
      name: service-registry-
      patternType: prefix
    operation: Create
  # Rules for the local transactionalIds created by our Service Registry instance
  # Prefix equals to metadata.name property in ApicurioRegistry object
  - resource:
      type: transactionalId
      name: service-registry-
      patternType: prefix
    operation: Describe
  - resource:
      type: transactionalId
      name: service-registry-
      patternType: prefix
    operation: Write
  # Rules for internal Apache Kafka topics
  - resource:
      type: topic
      name: __consumer_offsets
    operation: Read
  - resource:
      type: topic
      name: __transaction_state
    operation: Read
  # Rules for Cluster objects
  - resource:
      type: cluster
    operation: IdempotentWrite
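To put the pieces together, the acls list above goes under spec.authorization with type: simple in the KafkaUser definition. The following sketch shows the overall shape for the SCRAM user, with only the first rule repeated for brevity; the full list from the previous example would go in its place.

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: service-registry-scram
  labels:
    strimzi.io/cluster: my-kafka
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      # Group Id used by the Service Registry instance
      - resource:
          type: group
          name: service-registry
        operation: Read
      # ... the remaining rules from the example above go here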
Note that activating authorization in AMQ Streams does not affect the ApicurioRegistry definition; it only requires the correct ACL definitions in the KafkaUser objects.
Summary
Connecting Service Registry to a secured AMQ Streams cluster lets your production environment meet its security requirements. This article introduced the Service Registry and AMQ Streams components involved in meeting those requirements and showed you how to configure them successfully.
For a deeper understanding and analysis, please refer to the following references:
- Using AMQ Streams on OpenShift, Chapter 12: Security
- Using the User Operator of AMQ Streams (Red Hat Integration 2020-Q2 documentation)
- Getting started with Service Registry (Red Hat Integration 2020-Q2 documentation)