
Apache Kafka message delivery is greatly enhanced by the Knative broker, which reduces delivery overhead. Red Hat OpenShift has its own Knative implementation, introduced as a Technology Preview earlier this year. That implementation has now reached General Availability with the 1.25 release of Red Hat OpenShift Serverless.

How Knative enhances Kafka message delivery

Knative integrates with Apache Kafka as an optimized, native broker that stores and routes events to interested consumers. Incoming CloudEvents are stored directly on a configurable Kafka topic and read back from that same topic when they are routed to a registered subscriber through the Trigger API. Integrating the broker and its Trigger API directly with Apache Kafka has enormous benefits over other, channel-based broker types: Knative reduces network traffic because no additional HTTP communication to and from an underlying channel is required.

Other benefits of the Kafka broker implementation include:

  • At-least-once delivery guarantees
  • Ordered delivery of events based on the CloudEvents partitioning extension (see the sketch after this list)
  • High availability for the control plane
  • A horizontally scalable data plane
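
For example, delivery order can be configured per Trigger. The following is a minimal sketch, assuming the kafka.eventing.knative.dev/delivery.order annotation documented for the Kafka broker; events that share the same partitionkey CloudEvents attribute land on the same Kafka partition and are delivered in order. The names here are illustrative:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: ordered-trigger
  annotations:
    # Dispatch events from each partition one at a time, in offset order
    kafka.eventing.knative.dev/delivery.order: ordered
spec:
  broker: example-kafka-broker
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: ordered-consumer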

How to use the Knative Kafka broker

You can use Knative on OpenShift Serverless by getting a Red Hat AMQ Streams instance up and running and configuring it for proper access in the OpenShift Serverless Operator. For details about the configuration, please refer to the documentation.
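
For reference, here is a minimal sketch of the kafka-broker-config ConfigMap that the broker shown below points at. The keys follow the upstream Knative Kafka broker defaults; the partition, replication, and bootstrap server values are illustrative and depend on your AMQ Streams cluster:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-broker-config
  namespace: knative-eventing
data:
  # Settings for the Kafka topic that backs each broker
  default.topic.partitions: "10"
  default.topic.replication.factor: "3"
  # Address of your AMQ Streams (Kafka) cluster; illustrative value
  bootstrap.servers: "my-cluster-kafka-bootstrap.kafka:9092"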

After you have completed the configuration, you can use the Kafka broker by setting the eventing.knative.dev/broker.class annotation on the Broker object's metadata field, as follows:

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  annotations:
    # Select the Kafka-native broker implementation
    eventing.knative.dev/broker.class: Kafka
  name: example-kafka-broker
spec:
  # Reference the ConfigMap that holds the Kafka connection and topic settings
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config
    namespace: knative-eventing
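
Once the broker becomes ready, its status exposes an addressable HTTP endpoint; CloudEvents posted to that address are persisted on the backing Kafka topic.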

Now you can wire your Knative event sources to the Broker object and use the Trigger API to route CloudEvents to their final destinations. For example:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: simple-trigger
spec:
  # The broker to subscribe to
  broker: example-kafka-broker
  # Destination that receives the routed CloudEvents
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: receiversender
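
To wire an event source to the broker, point the source's sink at the Broker object. Here is a minimal sketch using a PingSource; the name, schedule, and payload are illustrative:

apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-every-minute
spec:
  # Emit a CloudEvent once per minute
  schedule: "* * * * *"
  contentType: "application/json"
  data: '{"message": "Hello, Kafka broker!"}'
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: example-kafka-broker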

Components of the OpenShift Kafka-based Knative broker

In the Kafka-based Knative broker on OpenShift, the data plane consists of two generic components:

  1. A receiver for the ingress of CloudEvents into the broker.
  2. A dispatcher for the egress of CloudEvents out of the broker, using the Trigger API.

The control plane, represented by the Kafka-controller and its webhook controller, manages the data plane.

It is important to note that the dispatcher and receiver components are reused in other Kafka-based Knative parts, including our KafkaChannel (receiver and dispatcher), KafkaSource (dispatcher), and KafkaSink (receiver). This reuse improves the quality of all of these components and ensures a common code path for the ingress and egress of CloudEvents.

What's Next?

Now that Knative is conveniently integrated with OpenShift, you can speed up your data processing and analytics with a simple configuration. Please comment below if you have any questions; we welcome your feedback. In the meantime, you can practice in the Developer Sandbox for Red Hat OpenShift for free.

Last updated: September 20, 2023