Red Hat AMQ Streams

Red Hat AMQ Streams is an enterprise-grade Apache Kafka (event streaming) solution that enables systems to exchange data at high throughput and low latency. AMQ Streams is available as part of the Red Hat AMQ offering in two flavors: one for Red Hat Enterprise Linux and one for the OpenShift Container Platform. In this three-part article series, we will cover AMQ Streams on the OpenShift Container Platform.

To get the most out of these articles, it will help to be familiar with messaging concepts, Red Hat OpenShift, and Kubernetes.

Running Apache Kafka in containers poses its own operational challenges (see this talk by Sean Glover), such as:

  • Upgrading Kafka
  • Initial deployment
  • Managing ZooKeeper
  • Replacing brokers
  • Rebalancing topic partitions
  • Decommissioning or adding brokers

These challenges are addressed using the Operator pattern, which AMQ Streams adopts from the upstream Strimzi project.

Now that we have a basic background for Red Hat AMQ Streams, let's dive into how it all works.

Red Hat AMQ Streams deep dive

AMQ Streams provides multiple Operators and components, which help solve the challenges of running Kafka in the container world:

  • Cluster Operator: Deploys and manages Kafka clusters on OpenShift.
  • Entity Operator: Manages users and topics using two sub-operators: the Topic Operator manages Kafka topics in your Kafka cluster, and the User Operator manages Kafka users on your Kafka cluster (see the sketch after this list).
  • Kafka Connect: Connects external systems to the Kafka cluster.
  • Kafka Mirror Maker: Replicates data between Kafka clusters.
  • Kafka Bridge: Acts as a bridge between different protocols and Kafka clusters. Currently supports HTTP 1.1 and AMQP 1.0.
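
For example, the Topic Operator and User Operator let you manage topics and users declaratively through custom resources. The following is a minimal sketch of a KafkaTopic and a KafkaUser; the names, partition counts, and authentication type are illustrative, and the strimzi.io/cluster label must match the name of your Kafka resource (simple-cluster, which we create later in this article):

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: redhat-demo-topics
  labels:
    strimzi.io/cluster: simple-cluster   # must match the Kafka resource name
spec:
  partitions: 3
  replicas: 3
---
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: demo-user                        # illustrative user name
  labels:
    strimzi.io/cluster: simple-cluster
spec:
  authentication:
    type: tls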

Figure 1 shows a bird's-eye view of Red Hat AMQ Streams on Red Hat OpenShift:

Figure 1: How Red Hat AMQ Streams and Red Hat OpenShift interact.

Now let's create a "hello world" program for all of these components. Due to the size of this walk-through, we will cover this topic in three articles, as follows:

  • Part 1: Setting up ZooKeeper, Kafka, and the Entity Operator
  • Part 2: Kafka Connect, Kafka Bridge, and Mirror Maker
  • Part 3: Monitoring and administration

Setting up ZooKeeper, Kafka, and the Entity Operator

Before starting, you will need an OpenShift Container Platform (OCP) cluster, a Red Hat subscription to access the Red Hat container images, and cluster-admin access. This walk-through uses Red Hat AMQ Streams 1.3.0:

  1. Download Red Hat AMQ Streams 1.3.0 (the OpenShift Container Platform installation and example files) from the Red Hat AMQ Streams 1.3.0 download page, then extract the archive:
$ unzip amq-streams-1.3.0-ocp-install-examples.zip

There will be two folders: examples and install.

  2. Log in and create a new project (namespace) for AMQ Streams (see Figure 2):
$ oc login -u admin_user -p admin_password https://redhat.ocp.cluster.url.com
$ oc new-project amq-streams
Figure 2: AMQ Streams now shows up as a project.
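If you want to confirm that the new project is active before continuing, oc can report the current project, which should be amq-streams:
$ oc project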
  3. From the directory where you extracted the archive, modify the RoleBinding YAML files in install/cluster-operator to use amq-streams as their namespace:
$ sed -i 's/namespace: .*/namespace: amq-streams/' install/cluster-operator/*RoleBinding*.yaml

For macOS:

$ sed -i '' 's/namespace: .*/namespace: amq-streams/' install/cluster-operator/*RoleBinding*.yaml
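
As an optional sanity check, grep the files to confirm that every RoleBinding now references the amq-streams namespace:

$ grep 'namespace:' install/cluster-operator/*RoleBinding*.yaml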
  4. Create the Cluster Operator (see Figure 3):
$ oc apply -f install/cluster-operator
serviceaccount/strimzi-cluster-operator created
clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-namespaced created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator created
clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-global created
clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator created
clusterrole.rbac.authorization.k8s.io/strimzi-kafka-broker created
clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-kafka-broker-delegation created
clusterrole.rbac.authorization.k8s.io/strimzi-entity-operator created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-entity-operator-delegation created
clusterrole.rbac.authorization.k8s.io/strimzi-topic-operator created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-topic-operator-delegation created
customresourcedefinition.apiextensions.k8s.io/kafkas.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkaconnects.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkaconnects2is.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkatopics.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkausers.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkamirrormakers.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkabridges.kafka.strimzi.io created
deployment.apps/strimzi-cluster-operator created
Figure 3: The strimzi-cluster-operator was created.
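Before moving on, you can confirm that the Cluster Operator deployment rolled out; the operator pod name carries a generated suffix, so grep for its prefix:
$ oc get deployment strimzi-cluster-operator
$ oc get pods | grep strimzi-cluster-operator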
  5. Ensure you have eight available persistent volumes (PVs): one for each of the five Kafka brokers and three ZooKeeper nodes configured in the next step. For this walk-through, we are using 5 GiB persistent volumes (if your cluster does not provision storage dynamically, see the sketch after this listing):
$ oc get pv | grep Available
NAME   CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS     CLAIM  STORAGECLASS   REASON    AGE
pv14   5Gi       RWO           Recycle         Available                                  34m
pv19   5Gi       RWO           Recycle         Available                                  34m
pv20   5Gi       RWO           Recycle         Available                                  34m
pv21   5Gi       RWO           Recycle         Available                                  34m
pv23   5Gi       RWO           Recycle         Available                                  34m
pv3    5Gi       RWO           Recycle         Available                                  34m
pv5    5Gi       RWO           Recycle         Available                                  34m
pv9    5Gi       RWO           Recycle         Available                                  34m
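
If your cluster does not provision storage dynamically, you can create the persistent volumes by hand. The following is a minimal sketch of one such PersistentVolume; the name and hostPath are placeholders (hostPath is suitable only for single-node lab setups), and you would create eight of these with unique names and paths:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo                  # placeholder; give each of the eight PVs a unique name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/pv-demo           # placeholder path; hostPath is for lab use only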
  6. Create the persistent cluster configuration, amq-kafka-cluster.yml. The example file at examples/kafka/kafka-persistent.yml was used as a reference for this config; note that the five Kafka replicas and three ZooKeeper replicas below account for the eight persistent volumes:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: simple-cluster
spec:
  kafka:
    version: 2.3.0
    replicas: 5
    listeners:
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 5
      transaction.state.log.replication.factor: 5
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.3"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 5Gi
        deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 5Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
  7. Create the AMQ Streams cluster (see Figure 4):
$ oc apply -f amq-kafka-cluster.yml
$ oc get pv | grep Bound
pv12   5Gi   RWO   Recycle   Bound   ocplab/mongodb                                 38m
pv14   5Gi   RWO   Recycle   Bound   amq-streams/data-0-simple-cluster-kafka-4      38m
pv19   5Gi   RWO   Recycle   Bound   amq-streams/data-0-simple-cluster-kafka-3      38m
pv20   5Gi   RWO   Recycle   Bound   amq-streams/data-0-simple-cluster-kafka-2      38m
pv21   5Gi   RWO   Recycle   Bound   amq-streams/data-0-simple-cluster-kafka-1      38m
pv23   5Gi   RWO   Recycle   Bound   amq-streams/data-0-simple-cluster-kafka-0      38m
pv3    5Gi   RWO   Recycle   Bound   amq-streams/data-simple-cluster-zookeeper-2    38m
pv5    5Gi   RWO   Recycle   Bound   amq-streams/data-simple-cluster-zookeeper-0    38m
pv9    5Gi   RWO   Recycle   Bound   amq-streams/data-simple-cluster-zookeeper-1    38m
Figure 4: ZooKeeper, Kafka, and the Entity Operator now appear.
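
You can also verify the rollout from the command line. Based on the configuration above, expect three ZooKeeper pods, five Kafka broker pods, and one Entity Operator pod, plus the simple-cluster-kafka-bootstrap service that clients use in the next step:
$ oc get pods
$ oc get services | grep bootstrap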

To test your cluster:

  1. Log in to the OCP cluster, create a sample producer app, and push a few messages:
$ oc run kafka-producer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-23:1.3.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list simple-cluster-kafka-bootstrap:9092 --topic redhat-demo-topics
If you don't see a command prompt, try pressing enter.
>hello world
>from pramod

Ignore the warning for now; the console producer typically warns that topic metadata is unavailable because the topic does not exist yet and is auto-created on first use.

  2. Open another terminal and create a sample consumer app to listen for the messages:
$ oc run kafka-consumer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-23:1.3.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server simple-cluster-kafka-bootstrap:9092 --topic redhat-demo-topics --from-beginning

You should see the two messages that were published from the producer terminal, as shown in Figure 5:

Figure 5: The consumer terminal listening to the producer's messages.
  3. Exit from both the producer and the consumer connections using Ctrl+C.

Conclusion

In this article, we explored the basics of Red Hat AMQ Streams and its components. We also showed how to create a basic Red Hat AMQ Streams cluster on Red Hat OpenShift. In the next article, we will cover Kafka Connect, the Kafka Bridge, and Mirror Maker.
