Scalability is a key concern for many growing organizations. That’s why so many of them turn to Apache Kafka, a popular messaging and streaming platform that is horizontally scalable, cloud-native, and versatile. It can serve as a traditional publish-and-subscribe messaging system, as a streaming platform, or as a distributed state store. Companies around the world use Apache Kafka to build real-time streaming applications, streaming data pipelines, and event-driven architectures.
Intro to Apache Kafka: Cloud Stream Processing with Jakub Scholz
https://www.youtube.com/watch?v=CZhOJ_ysIiI
Jakub Scholz is the Strimzi project lead and can be followed at @JakubScholz
In this session, we’ll cover these topics:
- How to start a local Kafka cluster, and its basic concepts such as topics and partitions.
- Example applications for sending and receiving messages in Java™, Python, and JavaScript (a minimal Java sketch follows this list).
- How Kafka scales to handle large amounts of data with Java, Python, and JavaScript clients.
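As a taste of the send-and-receive examples the session walks through, here is a minimal Java sketch of a producer and a consumer built on the standard Apache Kafka client library (kafka-clients). The broker address (localhost:9092), topic name (my-topic), and consumer group are assumptions for a local test cluster, not details taken from the talk.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class KafkaHelloWorld {
    private static final String TOPIC = "my-topic";            // assumed topic name
    private static final String BOOTSTRAP = "localhost:9092";  // assumed local broker

    public static void main(String[] args) {
        // --- Producer: send a single message to the topic ---
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", BOOTSTRAP);
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>(TOPIC, "greeting", "Hello, Kafka!"));
        } // closing the producer flushes any buffered records

        // --- Consumer: read messages back from the same topic ---
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", BOOTSTRAP);
        consumerProps.put("group.id", "demo-group");            // assumed consumer group
        consumerProps.put("auto.offset.reset", "earliest");     // start from the beginning of the topic
        consumerProps.put("key.deserializer", StringDeserializer.class.getName());
        consumerProps.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singletonList(TOPIC));
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
            }
        }
    }
}
```

Equivalent client libraries exist for Python and JavaScript; the concepts the session covers, producers, consumers, topics, and partitions, are the same in each language.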
Kafka Streams for Event-Driven Microservices with Marius Bogoevici
https://www.youtube.com/watch?v=x3QCrb6zCKA
Marius Bogoevici is a Red Hat engineer and can be followed at @MariusBogoevici
Kafka Streams makes it easy to build Java™ or Scala applications that interact with Kafka clusters, providing, as part of standalone applications, features that have traditionally been available only in dedicated streaming platforms. Enterprises around the world use it to build solutions for data streaming, real-time analytics, and event-driven architectures.
In this session, we’ll introduce the Kafka Streams application programming interface (API) and the Kafka Streams processing engine. We’ll show how to easily write and deploy Kafka Streams applications and how to take advantage of the enterprise Kubernetes platform, OpenShift®, for deploying microservice-based event-driven and data streaming solutions.
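To give a feel for the API before watching the session, here is a minimal Kafka Streams sketch in Java that reads records from one topic, uppercases each value, and writes the result to another topic. The application ID, broker address, and topic names are illustrative assumptions for a local setup, not details from the talk.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class UppercaseStreamsApp {
    public static void main(String[] args) {
        // Assumed configuration for a local broker; adjust for your environment
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Topology: read from an input topic, transform each value, write to an output topic.
        // The topic names "input-topic" and "output-topic" are assumptions for this sketch.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("input-topic");
        source.mapValues(value -> value.toUpperCase())
              .to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Close the streams application cleanly when the JVM shuts down
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Because the Kafka Streams processing engine is embedded in the application itself, this is a plain Java program that can be packaged in a container image and deployed to OpenShift like any other microservice.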
Last updated: August 21, 2019