Gunnar Morling, Burr Sutter
February 8, 2019

Apache Kafka and Debezium | DevNation Tech Talk

Apache Kafka has become the de facto standard for asynchronous event propagation between microservices. Things get challenging, though, when adding a service’s database to the picture: How can you avoid inconsistencies between Kafka and the database?

Enter change data capture (CDC) and Debezium. By capturing changes from the database's transaction logs, Debezium gives you both reliable and consistent inter-service messaging via Kafka and instant read-your-own-writes semantics for the services themselves.

Join this webinar to learn how to use CDC for reliable microservices integration and for solving typical challenges such as gradually extracting microservices from existing monoliths, maintaining different read models in CQRS-style architectures, and updating caches as well as full-text indexes. We’ll cover:

How Debezium streams all the changes from datastores such as MySQL, PostgreSQL, SQL Server, and MongoDB into Kafka, how you can react to change events in near real time (see the consumer sketch below), and how Debezium is designed not to compromise on data correctness and completeness if things go wrong.

A live demo using AMQ Streams to run Apache Kafka on Red Hat OpenShift, showing how to set up a change data stream out of your application's database without any code changes (see the connector registration sketch below), and how to consume the change events in other services, update search indexes, and much more.
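To give a rough idea of the "no code changes" aspect: a Debezium connector is typically registered with Kafka Connect through its REST API rather than by modifying the application itself. The sketch below is illustrative only; the Connect endpoint (localhost:8083), database hostname, credentials, and the logical server name "dbserver1" are placeholder assumptions, not values from the talk.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterConnector {

    public static void main(String[] args) throws Exception {
        // Hypothetical Debezium MySQL connector configuration; all hostnames,
        // credentials, and names below are placeholders.
        String connectorJson =
            "{ \"name\": \"inventory-connector\", \"config\": {"
            + " \"connector.class\": \"io.debezium.connector.mysql.MySqlConnector\","
            + " \"database.hostname\": \"mysql\","
            + " \"database.port\": \"3306\","
            + " \"database.user\": \"debezium\","
            + " \"database.password\": \"dbz\","
            + " \"database.server.id\": \"184054\","
            + " \"database.server.name\": \"dbserver1\","
            + " \"database.whitelist\": \"inventory\","
            + " \"database.history.kafka.bootstrap.servers\": \"kafka:9092\","
            + " \"database.history.kafka.topic\": \"schema-changes.inventory\""
            + " } }";

        // Register the connector via Kafka Connect's REST API
        // (assumed to be reachable at localhost:8083).
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8083/connectors"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(connectorJson))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode() + " " + response.body());
    }
}
```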
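Once the connector is running, other services react to the change events like any other Kafka messages. Below is a minimal consumer sketch, assuming the topic name dbserver1.inventory.customers produced by the placeholder configuration above; a real consumer would deserialize Debezium's change-event envelope (operation type, before/after row state, source metadata) rather than print raw JSON.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ChangeEventConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");      // placeholder broker address
        props.put("group.id", "customer-cache-updater");   // hypothetical consumer group
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Debezium writes one topic per captured table: <server.name>.<database>.<table>
            consumer.subscribe(Collections.singletonList("dbserver1.inventory.customers"));

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // The value is the change-event envelope (JSON by default):
                    // "op" (c/u/d), "before"/"after" row state, and source metadata.
                    System.out.println("Change event: " + record.value());
                }
            }
        }
    }
}
```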