Stream Processing

Build a simple cloud-native change data capture pipeline

Change data capture (CDC) is a well-established software design pattern for a system that monitors and captures data changes so that other software can respond to those events. Using Kafka Connect, along with Debezium connectors and the Apache Camel Kafka Connector, we can build a configuration-driven data pipeline to bridge traditional data stores and new event-driven architectures.

This article walks through a simple example.
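
To give a flavor of how configuration-driven the approach is, here is a minimal sketch (not the article's exact setup) of registering a Debezium MySQL connector with a running Kafka Connect cluster through its REST API, using only the JDK's built-in HTTP client. The host names, credentials, and topic names below are illustrative placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterConnector {
    public static void main(String[] args) throws Exception {
        // Connector definition sent to Kafka Connect; all values are placeholders.
        String connectorConfig = "{"
            + "\"name\": \"inventory-connector\","
            + "\"config\": {"
            + "  \"connector.class\": \"io.debezium.connector.mysql.MySqlConnector\","
            + "  \"database.hostname\": \"mysql\","
            + "  \"database.port\": \"3306\","
            + "  \"database.user\": \"debezium\","
            + "  \"database.password\": \"dbz\","
            + "  \"database.server.id\": \"184054\","
            + "  \"database.server.name\": \"dbserver1\","
            + "  \"database.history.kafka.bootstrap.servers\": \"my-cluster-kafka-bootstrap:9092\","
            + "  \"database.history.kafka.topic\": \"schema-changes.inventory\""
            + "}}";

        // POST the configuration to the Kafka Connect REST API (port 8083 by default).
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://my-connect-cluster-connect-api:8083/connectors"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(connectorConfig))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}

A Camel Kafka Connector sink can be registered the same way, pointed at the resulting change event topics, so the whole pipeline stays declarative.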

Continue reading “Build a simple cloud-native change data capture pipeline”

Tracking COVID-19 using Quarkus, AMQ Streams, and Camel K on OpenShift

In just a matter of weeks, the world that we knew changed forever. The COVID-19 pandemic came swiftly and caused massive disruption to our healthcare systems and local businesses, throwing the world’s economies into chaos. The coronavirus quickly became a crisis that affected everyone. As researchers and scientists rushed to make sense of it, and find ways to eliminate or slow the rate of infection, countries started gathering statistics such as the number of confirmed cases, reported deaths, and so on. Johns Hopkins University researchers have since aggregated the statistics from many countries and made them available.

In this article, we demonstrate how to build a website that shows a series of COVID-19 graphs. These graphs reflect the accumulated number of cases and deaths over a given time period for each country. We use the Red Hat build of Quarkus, Apache Camel K, and Red Hat AMQ Streams to get the Johns Hopkins University data and populate a MongoDB database with it. The deployment is built on the Red Hat OpenShift Container Platform (OCP).
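
As a rough sketch of the ingestion side (the feed URL, topic name, and broker address are placeholders, not the article's actual configuration), a Camel K integration written in Java could poll the aggregated CSV data and publish one Kafka record per line:

import org.apache.camel.builder.RouteBuilder;

public class CovidFeedRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Once a day, fetch the aggregated CSV feed and publish each line to an AMQ Streams topic.
        from("timer:covid?period=86400000")
            .to("https://example.com/covid-19/time_series_covid19_confirmed_global.csv")
            .split(body().tokenize("\n"))
            .to("kafka:covid-cases?brokers=my-cluster-kafka-bootstrap:9092");
    }
}

Running kamel run CovidFeedRoute.java deploys the integration to OpenShift; a second integration (or a Quarkus service) can then consume the topic and insert the parsed records into MongoDB.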

Continue reading “Tracking COVID-19 using Quarkus, AMQ Streams, and Camel K on OpenShift”

Event streaming and data federation: A citizen integrator’s story

Businesses are seeking to benefit from every customer interaction by delivering real-time, personalized experiences. Targeting each customer with relevant offers can greatly improve customer loyalty, but we must first understand the customer. We have to be able to draw on data and other resources from diverse systems, such as marketing, customer service, fraud, and business operations. With the advent of modern technologies and agile methodologies, we also want to empower citizen integrators (typically business users who understand business and client needs) to create custom software. What we need is a single functional domain where the information is harmonized in a homogeneous way.

Continue reading “Event streaming and data federation: A citizen integrator’s story”

Red Hat build of Eclipse Vert.x 3.9 brings Fluent API Query

Red Hat Runtimes provides a comprehensive set of frameworks, runtimes, and programming languages for developers, architects, and IT leaders with cloud-native application development needs. The latest update to Red Hat Runtimes has arrived with Red Hat’s build of Eclipse Vert.x version 3.9. Red Hat Runtimes gives application developers a variety of application runtimes and lets them run on Red Hat OpenShift Container Platform.

A fluent API is a common pattern throughout Vert.x: it lets multiple method calls be chained together. For example:

request.response().putHeader("Content-Type", "text/plain").write("some text").end();

Chaining calls like this also allows you to write code that’s a bit less verbose.

With 3.9, you can now create prepared statements and collector queries, thanks to the inclusion of Query in the fluent API. If you are familiar with JDBC, this works much like PreparedStatement, which lets you create and execute parameterized statements. Moreover, you can run multi-step interactions, such as cursor or stream operations.
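
As a minimal sketch (the table, column names, and connection options are made-up placeholders), here is what a prepared query looks like with the reactive PostgreSQL client in 3.9:

import io.vertx.core.Vertx;
import io.vertx.pgclient.PgConnectOptions;
import io.vertx.pgclient.PgPool;
import io.vertx.sqlclient.PoolOptions;
import io.vertx.sqlclient.Row;
import io.vertx.sqlclient.Tuple;

public class FluentQueryExample {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        PgConnectOptions connectOptions = new PgConnectOptions()
            .setHost("localhost").setPort(5432)
            .setDatabase("mydb").setUser("user").setPassword("secret");
        PgPool client = PgPool.pool(vertx, connectOptions, new PoolOptions().setMaxSize(5));

        // Prepared query: the statement is parameterized and executed fluently.
        client
            .preparedQuery("SELECT name FROM users WHERE id = $1")
            .execute(Tuple.of(42), ar -> {
                if (ar.succeeded()) {
                    for (Row row : ar.result()) {
                        System.out.println(row.getString("name"));
                    }
                } else {
                    System.out.println("Query failed: " + ar.cause().getMessage());
                }
                client.close();
                vertx.close();
            });
    }
}

Collector queries follow the same shape: calling collecting(...) with a java.util.stream.Collector before execute() folds the returned rows directly into a Java collection.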

Continue reading “Red Hat build of Eclipse Vert.x 3.9 brings Fluent API Query”

Extending Kafka connectivity with Apache Camel Kafka connectors

Apache Kafka is one of the most widely used pieces of software in modern application development because of its distributed nature, high throughput, and horizontal scalability. Every day, more organizations adopt Kafka as the central event bus for their event-driven architectures. As a result, more and more data flows through the cluster, and connectivity requirements rise in priority on every backlog. For this reason, the Apache Camel community released the first iteration of Kafka Connect connectors to ease the burden on development teams.

Continue reading “Extending Kafka connectivity with Apache Camel Kafka connectors”

Capture database changes with Debezium Apache Kafka connectors

Change data capture, or CDC, is a well-established software design pattern for a system that monitors and captures the changes in data so that other software can respond to those changes. CDC captures row-level changes to database tables and passes corresponding change events to a data streaming bus. Applications can read these change event streams and access these change events in the order in which they occurred.

Thus, change data capture helps to bridge traditional data stores and new cloud-native event-driven architectures. Debezium is a set of distributed services that captures row-level changes in databases so that applications can see and respond to those changes. This general availability (GA) release from Red Hat Integration includes the following Debezium connectors for Apache Kafka: MySQL Connector, PostgreSQL Connector, MongoDB Connector, and SQL Server Connector.
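
Once a connector is running, any Kafka consumer can follow the per-table change event topics. The sketch below uses placeholder topic and broker names (borrowed from the usual Debezium tutorial naming, not anything specific to this release) and simply prints each change event:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ChangeEventReader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "cdc-reader");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Debezium writes one topic per captured table, e.g. <server>.<schema>.<table>.
            consumer.subscribe(Collections.singletonList("dbserver1.inventory.customers"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Each value is a JSON change event with "before", "after", and "op" fields.
                    System.out.printf("key=%s value=%s%n", record.key(), record.value());
                }
            }
        }
    }
}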

Continue reading “Capture database changes with Debezium Apache Kafka connectors”

Running an event-driven health management business process through end user scenarios: Part 2

If you read the first article in this series, you have already set up the example application you’ll need for this article. If you have not set up the population health management application, you should do that before continuing. In this article, we’ll run a few business processes through our event- and business-process-driven application to test it out.

Continue reading “Running an event-driven health management business process through end user scenarios: Part 2”

Running an event-driven health management business process through a few scenarios: Part 1

In the previous series of articles, Designing an event-driven business process at scale: A health management example (which you need to read to fully understand this one), you designed and implemented an event-driven scalable business process for the population health management use case. Now, you will run this process through a few scenarios. In this way, you will:

Continue reading “Running an event-driven health management business process through a few scenarios: Part 1”

Low-code microservices orchestration with Syndesis

Recently I wrote about decoupling infrastructure code from microservices. I found that Apache Camel and Debezium provided the middleware I needed for that project, with minimal coding on my end. After my successful experiment, I wondered if it would be possible to orchestrate two or more similarly decoupled microservices into a new service, and whether I could do it without writing any code at all. I decided to find out.

This article is a quick dive into orchestrating microservices without writing any code. We will use Syndesis (an open source integration platform) as our orchestration platform. Note that the examples assume that you are familiar with Debezium and Kafka.

Continue reading “Low-code microservices orchestration with Syndesis”

How to use the new Kafka Client API for Kafka-specific message properties in Open Liberty 20.0.0.3

In Open Liberty 20.0.0.3, you can now access Kafka-specific properties such as the message key and message headers, rather than just the message payload, as was the case with the basic MicroProfile Reactive Messaging Message API. Also, you can now set the SameSite attribute on the session cookie, on LTPA and JWT cookies, and on application-defined cookies.
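
For context, here is a minimal MicroProfile Reactive Messaging bean using the basic Message API; the channel name is a placeholder that would need to be wired to a Kafka topic in the server configuration. The new Kafka Client API builds on this shape by also exposing the record key and headers.

import java.util.concurrent.CompletionStage;
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Message;

@ApplicationScoped
public class OrderConsumer {

    @Incoming("orders")
    public CompletionStage<Void> receive(Message<String> message) {
        // The basic Message API exposes only the payload; the 20.0.0.3 Kafka-specific
        // types additionally give access to the record key and headers.
        System.out.println("Received payload: " + message.getPayload());
        return message.ack();
    }
}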

Continue reading “How to use the new Kafka Client API for Kafka-specific message properties in Open Liberty 20.0.0.3”
