A guided workshop for Kafka Streams

Kafka Streams provides a domain-specific language (DSL) that lets developers create scalable data stream processing pipelines with minimal code. This guided workshop will show you how to develop an application that uses the Kafka Streams DSL to produce new OpenShift Streams for Apache Kafka topics containing aggregated and filtered data.


What is Kafka Streams?

Kafka Streams is a client library that enables developers to create streaming applications, using Java or Scala, that produce data to and consume data from Kafka clusters. Kafka Streams combines the simplicity of writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka's server-side cluster technology.

The Kafka Streams library provides a domain-specific language (DSL) for creating pipelines that process data streams scalably with minimal code. The DSL supports stateless operations, such as filtering an existing topic to create a new topic that contains only the messages matching specific criteria. Stateful operations are also possible, such as producing aggregates and joining messages from multiple input streams (topics).
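To make this concrete, here is a minimal sketch of a topology that uses both kinds of operations: a stateless filter that routes matching messages to a new topic, and a stateful count aggregation. The topic names (`orders`, `urgent-orders`, `order-counts`), the bootstrap server address, and the filter condition are illustrative assumptions, not part of the workshop; the code requires the `org.apache.kafka:kafka-streams` dependency.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class StreamsDslSketch {
    public static void main(String[] args) {
        // Basic configuration; the bootstrap server address is an assumption
        // and would point at your OpenShift Streams for Apache Kafka instance.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-dsl-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("orders");

        // Stateless operation: filter messages into a new topic.
        source.filter((key, value) -> value.contains("URGENT"))
              .to("urgent-orders", Produced.with(Serdes.String(), Serdes.String()));

        // Stateful operation: count messages per key and emit the running
        // aggregate to another topic.
        KTable<String, Long> counts = source.groupByKey().count();
        counts.toStream()
              .to("order-counts", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Note that the DSL calls only describe the topology; no data flows until `streams.start()` connects the application to the cluster, at which point Kafka Streams manages partition assignment and state stores for the aggregation automatically.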


You can try the Kafka service today by going to console.redhat.com and searching for Red Hat OpenShift Streams for Apache Kafka. Setting up a Kafka instance takes only a few minutes. You will need to create a Red Hat account to gain access to the trial.