The Apache Kafka project includes a Streams Domain-Specific Language (DSL) built on top of the lower-level Stream Processor API. This DSL provides developers with simple abstractions for performing data processing operations. However, it is not always obvious how to build a Kafka-based stream processing pipeline in a containerized environment. This two-part article series describes the steps required to build your own Apache Kafka Streams application using Red Hat AMQ Streams.
In this first part of the series, we provide a simple example to use as a building block for part two. First things first: to set up AMQ Streams, follow the instructions in the AMQ Streams getting started documentation to deploy a Kafka cluster. The rest of this article assumes you have followed those steps.
Create a producer
To get started, we need a flow of data streaming into Kafka. This data can come from a variety of sources, but for the purposes of this example, let's generate sample data using Strings sent with a delay.
Next, we need to configure the Kafka producer so that it talks to the Kafka brokers (see this article for a more in-depth explanation), as well as provide the topic name to write to and other information. Because our application will be containerized, we can abstract this configuration away from the internal logic and read it from environment variables. This reading could be done in a separate configuration class, but for this simple example, the following is sufficient:
private static final String DEFAULT_BOOTSTRAP_SERVERS = "localhost:9092";
private static final int DEFAULT_DELAY = 1000;
private static final String DEFAULT_TOPIC = "source-topic";

private static final String BOOTSTRAP_SERVERS = "BOOTSTRAP_SERVERS";
private static final String DELAY = "DELAY";
private static final String TOPIC = "TOPIC";
private static final String ACKS = "1";

String bootstrapServers = System.getenv().getOrDefault(BOOTSTRAP_SERVERS, DEFAULT_BOOTSTRAP_SERVERS);
long delay = Long.parseLong(System.getenv().getOrDefault(DELAY, String.valueOf(DEFAULT_DELAY)));
String topic = System.getenv().getOrDefault(TOPIC, DEFAULT_TOPIC);
We can now create appropriate properties for our Kafka producer using the environment variables (or the defaults we set):
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ProducerConfig.ACKS_CONFIG, ACKS);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
Next, we create the producer with the configuration properties:
KafkaProducer<String,String> producer = new KafkaProducer<>(props);
We can now stream the data to the provided topic, with the required delay between each message:
int i = 0;
while (true) {
    String value = String.format("hello world %d", i);
    ProducerRecord<String, String> record = new ProducerRecord<>(topic, value);
    log.info("Sending record {}", record);
    producer.send(record);
    i++;
    try {
        Thread.sleep(delay);
    } catch (InterruptedException e) {
        // sleep interrupted, continue
    }
}
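Note that this loop runs until the process is stopped, so the producer is never closed explicitly. If you want buffered records to be flushed when the container shuts down, you can register a shutdown hook before entering the loop. A minimal sketch (an optional addition, not part of the original example) might look like this:

Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    // flush any buffered records and release the producer's resources
    producer.flush();
    producer.close();
}));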
Now that our application is complete, we need to package it into a Docker image and push it to Docker Hub. We can then create a new deployment in our Kubernetes cluster using this image and pass the environment variables via YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: basic-example
  name: basic-producer
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: basic-producer
    spec:
      containers:
        - name: basic-producer
          image: <docker-user>/basic-producer:latest
          env:
            - name: BOOTSTRAP_SERVERS
              value: my-cluster-kafka-bootstrap:9092
            - name: DELAY
              value: "1000"
            - name: TOPIC
              value: source-topic
We can check that data is arriving at our Kafka topic by consuming from it:
$ oc run kafka-consumer -ti --image=registry.access.redhat.com/amq7/amq-streams-kafka:1.1.0-kafka-2.1.1 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic source-topic --from-beginning
The output should look like this:
hello world 0 hello world 1 hello world 2 ...
Create a Streams application
A Kafka Streams application typically reads (sources) data from one or more input topics and writes (sends) data to one or more output topics, acting as a stream processor. We can set up the properties and configuration the same way as before, but this time we need to specify a SOURCE_TOPIC and a SINK_TOPIC.
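The article does not show this configuration in full, so here is a minimal sketch of what it might look like, using StreamsConfig and Serdes from the Kafka Streams API. The environment variable names, default values, and application ID below are assumptions for illustration rather than part of the original code:

// Sketch only: variable names and defaults are illustrative assumptions.
String bootstrapServers = System.getenv().getOrDefault("BOOTSTRAP_SERVERS", "localhost:9092");
String sourceTopic = System.getenv().getOrDefault("SOURCE_TOPIC", "source-topic");
String sinkTopic = System.getenv().getOrDefault("SINK_TOPIC", "sink-topic");

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "basic-streams-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
// use String serdes for keys and values, matching the producer's serializers
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());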
To start, we create the source stream:
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> source = builder.stream(sourceTopic);
We can now perform an operation on this source stream. For example, we can create a new stream, called filtered, which contains only the records with an even-numbered index:
KStream<String, String> filtered = source
    .filter((key, value) -> {
        int i = Integer.parseInt(value.split(" ")[2]);
        return (i % 2) == 0;
    });
This new stream can then be output to the sinkTopic:
filtered.to(sinkTopic);
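The filter above is just one of the DSL's abstractions. As an aside (not part of the original application), further streams could be derived with operations such as mapValues:

// Illustrative only: derive another stream by upper-casing each filtered value
KStream<String, String> upperCased = filtered.mapValues(value -> value.toUpperCase());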
We have now created the topology defining our stream application's operations, but we do not have it running yet. Getting it running requires creating the streams object, setting up a shutdown handler, and starting the stream:
final KafkaStreams streams = new KafkaStreams(builder.build(), props);
final CountDownLatch latch = new CountDownLatch(1);

// attach shutdown handler to catch control-c
Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
    @Override
    public void run() {
        streams.close();
        latch.countDown();
    }
});

try {
    streams.start();
    latch.await();
} catch (Throwable e) {
    System.exit(1);
}
System.exit(0);
There you go. It's as simple as that to get your Streams application running. Build the application into a Docker image, deploy it in a similar way to the producer, and you can watch the SINK_TOPIC for the output (for example, by running the same console-consumer command as before against the sink topic).
Read more
Building Apache Kafka Streams applications using Red Hat AMQ Streams: Part 2