Build a simple cloud-native change data capture pipeline

Change data capture (CDC) is a well-established software design pattern for a system that monitors and captures data changes so that other software can respond to those events. Using Kafka Connect, along with Debezium connectors and the Apache Camel Kafka Connector, we can build a configuration-driven data pipeline to bridge traditional data stores and new event-driven architectures.

This article walks through a simple example.
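
To give a sense of how configuration-driven such a pipeline is, here is a minimal sketch that registers a Debezium MySQL connector through the Kafka Connect REST API. The connector name, hostnames, credentials, and database are placeholders, and the property names follow the Debezium 1.x MySQL connector; adjust them to your own environment.

```python
import json

import requests

# Kafka Connect REST endpoint; placeholder address for a local Connect cluster.
CONNECT_URL = "http://localhost:8083/connectors"

# Hypothetical connector registration: every aspect of the pipeline is plain configuration.
connector = {
    "name": "inventory-connector",  # placeholder connector name
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "tasks.max": "1",
        "database.hostname": "mysql",              # placeholder database host
        "database.port": "3306",
        "database.user": "debezium",               # placeholder credentials
        "database.password": "dbz",
        "database.server.id": "184054",            # unique ID Debezium uses as a MySQL replication client
        "database.server.name": "dbserver1",       # logical name; becomes the Kafka topic prefix
        "database.include.list": "inventory",      # only capture changes from this database
        "database.history.kafka.bootstrap.servers": "kafka:9092",
        "database.history.kafka.topic": "schema-changes.inventory",
    },
}

# POST the configuration; Kafka Connect creates and starts the connector tasks.
response = requests.post(
    CONNECT_URL,
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector),
)
response.raise_for_status()
print(response.json())
```

The Apache Camel Kafka Connector side of the pipeline is configured in the same declarative fashion, so no bespoke integration code is needed to move the resulting change events onward.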

Continue reading “Build a simple cloud-native change data capture pipeline”

Change data capture for microservices without writing any code

Want to smoothly modernize your legacy, monolithic applications into microservices or cloud-native applications without writing any code? This demonstration shows how to achieve the following change data capture scenario between two microservices on Red Hat OpenShift, using a combination of Syndesis, Strimzi, and Debezium.

Figure: Architecture diagram of the change data capture scenario.

Continue reading “Change data capture for microservices without writing any code”

Change data capture with Debezium: A simple how-to, Part 1

One question always comes up as organizations move toward being cloud-native, twelve-factor, and stateless: how do you get an organization’s data to these new applications? There are many patterns out there, but the one we will look at today is change data capture. This post is a simple how-to for building out a change data capture solution using Debezium within an OpenShift environment. Future posts will build on this one and add further capabilities.

Continue reading “Change data capture with Debezium: A simple how-to, Part 1”

Capture database changes with Debezium Apache Kafka connectors

Change data capture, or CDC, is a well-established software design pattern for a system that monitors and captures the changes in data so that other software can respond to those changes. CDC captures row-level changes to database tables and passes corresponding change events to a data streaming bus. Applications can read these change event streams and access these change events in the order in which they occurred.

Thus, change data capture helps to bridge traditional data stores and new cloud-native, event-driven architectures. Debezium is a set of distributed services that captures row-level changes in databases so that applications can see and respond to those changes. This general availability (GA) release from Red Hat Integration includes the following Debezium connectors for Apache Kafka: MySQL Connector, PostgreSQL Connector, MongoDB Connector, and SQL Server Connector.
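
To give a feel for what these change event streams look like to a consumer, here is a minimal sketch that reads Debezium events from one table’s Kafka topic and unpacks the standard envelope. The topic name and broker address are placeholders, and the snippet assumes the JSON converter and the kafka-python client.

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Subscribe to a hypothetical <server>.<database>.<table> change event topic.
consumer = KafkaConsumer(
    "dbserver1.inventory.customers",       # placeholder topic name
    bootstrap_servers="localhost:9092",    # placeholder broker address
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
)

# Change events arrive in the order the row-level changes occurred (per topic partition).
for message in consumer:
    event = message.value
    if event is None:
        continue  # tombstone record that follows a delete; nothing to unpack
    payload = event.get("payload", event)  # envelope may embed the schema, depending on converter settings
    op = payload["op"]                     # "c" = create, "u" = update, "d" = delete, "r" = snapshot read
    before = payload.get("before")         # row state before the change (None for inserts)
    after = payload.get("after")           # row state after the change (None for deletes)
    print(f"offset={message.offset} op={op} before={before} after={after}")
```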

Continue reading “Capture database changes with Debezium Apache Kafka connectors”

Low-code microservices orchestration with Syndesis

Recently I wrote about decoupling infrastructure code from microservices. I found that Apache Camel and Debezium provided the middleware I needed for that project, with minimal coding on my end. After that successful experiment, I wondered whether it would be possible to orchestrate two or more similarly decoupled microservices into a new service, and whether I could do it without writing any code at all. I decided to find out.

This article is a quick dive into orchestrating microservices without writing any code. We will use Syndesis (an open source integration platform) as our orchestration platform. Note that the examples assume that you are familiar with Debezium and Kafka.

Continue reading “Low-code microservices orchestration with Syndesis”

Red Hat advances Debezium CDC connectors for Apache Kafka support to Technical Preview

After a couple of months in Developer Preview, the Debezium Apache Kafka connectors for change data capture (CDC) are now available as a Technical Preview as part of the Q4 release of Red Hat Integration. Technical Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process.

Continue reading “Red Hat advances Debezium CDC connectors for Apache Kafka support to Technical Preview”

Decoupling microservices with Apache Camel and Debezium

The rise of microservices-oriented architecture brought us new development paradigms and mantras about independent development and decoupling. In such a scenario, we aim for independence, yet we still need to react to state changes in different enterprise domains.

I’ll use a simple, typical example to show what we’re talking about. Imagine the development of two independent microservices: Order and User. We designed them to expose a REST interface and to each use a separate database, as shown in Figure 1:

Figure 1: Order and User microservices.
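
The article pairs Apache Camel with Debezium to propagate those state changes between the services. As a rough plain-Python illustration of the same idea (not the Camel route the article builds), the sketch below lets the Order service maintain a local replica of User data by applying Debezium change events; the topic name, broker address, and column names are assumptions.

```python
import json
import sqlite3

from kafka import KafkaConsumer  # pip install kafka-python

# Local read model inside the Order service; the schema is an assumption for illustration.
db = sqlite3.connect("order_service.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS users_replica (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
)

# Consume change events emitted by Debezium for the User service's table (placeholder topic).
consumer = KafkaConsumer(
    "dbserver1.userdb.users",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
)

for message in consumer:
    if message.value is None:
        continue  # tombstone record after a delete; nothing to apply
    payload = message.value.get("payload", message.value)
    op = payload["op"]
    before, after = payload.get("before"), payload.get("after")
    if op in ("c", "u", "r"):  # create, update, or snapshot read: upsert the local copy
        db.execute(
            "INSERT OR REPLACE INTO users_replica (id, name, email) VALUES (?, ?, ?)",
            (after["id"], after.get("name"), after.get("email")),
        )
    elif op == "d":            # delete: drop the local copy
        db.execute("DELETE FROM users_replica WHERE id = ?", (before["id"],))
    db.commit()
```

The Order service can then answer its own queries against users_replica without calling the User service synchronously, preserving the independence we are after while still reacting to state changes in the User domain.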

Continue reading “Decoupling microservices with Apache Camel and Debezium”

Developer preview of Debezium Apache Kafka connectors for Change Data Capture (CDC)

With the release of Red Hat AMQ Streams 1.2, Red Hat Integration now includes a developer preview of Change Data Capture (CDC) capabilities to enable data integration for modern cloud-native microservices-based applications. CDC features are based on the upstream project Debezium and are natively integrated with Apache Kafka and Strimzi to run on top of Red Hat OpenShift Container Platform, the enterprise Kubernetes, as part of the AMQ Streams release.

Continue reading “Developer preview of Debezium Apache Kafka connectors for Change Data Capture (CDC)”

Red Hat Sessions at Devoxx 2017

The 2017 edition of the legendary Devoxx conference is over, and as always, it has been a fantastic week.

Hosted in Antwerp, Belgium, and sold out months in advance, it’s one of the top events of the Java community. Five days fully packed with workshops, regular conference sessions, BOFs, ignite sessions, and even quickie talks during the lunch breaks: there was something for everyone.

The super-comfortable cinema seats at the Devoxx venue are legendary, but even if you couldn’t attend, you didn’t miss a thing, as the sessions were live streamed. It gets even better: all the recordings are already freely available on YouTube.

Red Hat was present with more than ten speakers, so Devoxx was a great opportunity for us to show our latest projects. Our sessions covered the full range of software development, from a new garbage collector, through Java coding patterns and updates on popular libraries such as Hibernate, to several talks on microservices, including how to test, secure, and deploy them on Kubernetes and OpenShift.

Continue reading “Red Hat Sessions at Devoxx 2017”
