Stream Processing

Red Hat build of Eclipse Vert.x 3.9 brings Fluent API Query

Red Hat Runtimes provides a comprehensive set of frameworks, runtimes, and programming languages for developers, architects, and IT leaders with cloud-native application development needs, and it lets developers run those runtimes on the Red Hat OpenShift Container Platform. The latest update to Red Hat Runtimes arrives with Red Hat’s build of Eclipse Vert.x version 3.9.

A fluent API is a common pattern throughout Vert.x; it lets multiple method calls be chained together. For example:

request.response().putHeader("Content-Type", "text/plain").write("some text").end();

Chaining calls like this also allows you to write code that’s a bit less verbose.

With 3.9, the addition of Query to the fluent API lets you create prepared statements and collector queries. If you are familiar with JDBC, PreparedStatement lets you create and execute statements; you can also run multiple interactions, such as cursor or stream operations.
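
To give a flavor of the new API, here is a minimal sketch using the reactive PostgreSQL client; the connection details, table, and column names are made up for illustration:

import io.vertx.pgclient.PgConnectOptions;
import io.vertx.pgclient.PgPool;
import io.vertx.sqlclient.PoolOptions;
import io.vertx.sqlclient.Row;
import io.vertx.sqlclient.Tuple;

public class QueryExample {
  public static void main(String[] args) {
    // Hypothetical connection details, for illustration only
    PgConnectOptions connectOptions = new PgConnectOptions()
        .setHost("localhost").setPort(5432)
        .setDatabase("mydb").setUser("user").setPassword("secret");
    PgPool pool = PgPool.pool(connectOptions, new PoolOptions().setMaxSize(5));

    // The fluent Query API: preparedQuery(...).execute(...) in one chain
    pool.preparedQuery("SELECT name FROM users WHERE id = $1")
        .execute(Tuple.of(42), ar -> {
          if (ar.succeeded()) {
            for (Row row : ar.result()) {
              System.out.println(row.getString("name"));
            }
          } else {
            ar.cause().printStackTrace();
          }
          pool.close();
        });
  }
}

Plain statements follow the same pattern through client.query(...), so prepared and unprepared queries read the same way in application code.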

Continue reading “Red Hat build of Eclipse Vert.x 3.9 brings Fluent API Query”

Extending Kafka connectivity with Apache Camel Kafka connectors

Apache Kafka is one of the most widely used pieces of software in modern application development because of its distributed nature, high throughput, and horizontal scalability. Every day, more organizations adopt Kafka as the central event bus for their event-driven architecture. As a result, more data flows through the cluster, and connectivity requirements climb up the priority list of any backlog. For this reason, the Apache Camel community released the first iteration of Kafka Connect connectors to ease this burden on development teams.
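
As a rough sketch of what using one of these connectors involves, the following Kafka Connect configuration wires the generic Camel sink connector to forward records from a topic to a Camel endpoint. The connector class, the camel.sink.url option, and the log:info endpoint reflect the early camel-kafka-connector releases and should be verified against the version you use:

name=camel-log-sink
connector.class=org.apache.camel.kafkaconnector.CamelSinkConnector
tasks.max=1
topics=my-topic
# Camel endpoint URI the records are delivered to (illustrative)
camel.sink.url=log:info
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter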

Continue reading “Extending Kafka connectivity with Apache Camel Kafka connectors”

Capture database changes with Debezium Apache Kafka connectors

Change data capture, or CDC, is a well-established software design pattern for a system that monitors and captures the changes in data so that other software can respond to those changes. CDC captures row-level changes to database tables and passes corresponding change events to a data streaming bus. Applications can read these change event streams and access these change events in the order in which they occurred.

Change data capture thus helps to bridge traditional data stores and new cloud-native, event-driven architectures. Debezium is a set of distributed services that captures row-level changes in databases so that applications can see and respond to those changes. This general availability (GA) release from Red Hat Integration includes the following Debezium connectors for Apache Kafka: MySQL Connector, PostgreSQL Connector, MongoDB Connector, and SQL Server Connector.
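
To make the "applications can read these change event streams" part concrete, here is a minimal sketch of a plain Kafka consumer reading Debezium change events as JSON strings. The topic name follows Debezium's usual server.schema.table convention but is made up for this example:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ChangeEventConsumer {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "change-event-reader");
    props.put("key.deserializer", StringDeserializer.class.getName());
    props.put("value.deserializer", StringDeserializer.class.getName());

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      // Hypothetical Debezium topic: <server>.<schema>.<table>
      consumer.subscribe(Collections.singletonList("dbserver1.inventory.customers"));
      while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
        for (ConsumerRecord<String, String> record : records) {
          // Each value is a JSON change event carrying before/after state and source metadata
          System.out.println(record.key() + " -> " + record.value());
        }
      }
    }
  }
}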

Continue reading “Capture database changes with Debezium Apache Kafka connectors”

Running an event-driven health management business process through end user scenarios: Part 2

If you read the first article in this series, then you already set up the example application you’ll need for this article. If you have not set up the population health management application, you should do that before continuing. In this article, we’ll run a few business processes through our event- and business-process-driven application to test it out.

Continue reading “Running an event-driven health management business process through end user scenarios: Part 2”

Running an event-driven health management business process through a few scenarios: Part 1

In the previous series of articles, Designing an event-driven business process at scale: A health management example (which you need to read to fully understand this one), you designed and implemented an event-driven scalable business process for the population health management use case. Now, you will run this process through a few scenarios. In this way, you will:

Continue reading “Running an event-driven health management business process through a few scenarios: Part 1”

Low-code microservices orchestration with Syndesis

Recently I wrote about decoupling infrastructure code from microservices. I found that Apache Camel and Debezium provided the middleware I needed for that project, with minimal coding on my end. After that successful experiment, I wondered whether it would be possible to orchestrate two or more similarly decoupled microservices into a new service, and whether I could do it without writing any code at all. I decided to find out.

This article is a quick dive into orchestrating microservices without writing any code. We will use Syndesis (an open source integration platform) as our orchestration platform. Note that the examples assume that you are familiar with Debezium and Kafka.

Continue reading “Low-code microservices orchestration with Syndesis”

How to use the new Kafka Client API for Kafka-specific message properties in Open Liberty 20.0.0.3

In Open Liberty 20.0.0.3, you can now access Kafka-specific properties such as the message key and message headers, rather than just the message payload, as was the case with the basic MicroProfile Reactive Messaging Message API. You can also now set the SameSite attribute on the session cookie, the LTPA and JWT cookies, and application-defined cookies.
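
Here is a minimal sketch of what consuming those Kafka-specific properties might look like with MicroProfile Reactive Messaging, assuming the Liberty Kafka connector delivers the underlying ConsumerRecord as the message payload; the channel name and configuration are placeholders, so check the Open Liberty 20.0.0.3 documentation for the exact types:

import javax.enterprise.context.ApplicationScoped;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.eclipse.microprofile.reactive.messaging.Incoming;

@ApplicationScoped
public class OrderListener {

  // Assumption: the "orders" channel is bound to a Kafka topic via
  // mp.messaging.incoming.orders.* properties in the server configuration
  @Incoming("orders")
  public void receive(ConsumerRecord<String, String> record) {
    // Kafka-specific properties beyond the payload itself
    String key = record.key();
    String payload = record.value();
    record.headers().forEach(h ->
        System.out.println("header " + h.key() + " = " + new String(h.value())));
    System.out.println("key=" + key + " payload=" + payload);
  }
}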

Continue reading How to use the new Kafka Client API for Kafka-specific message properties in Open Liberty 20.0.0.3

Dynamic case management in the event-driven era

Case management applications are designed to handle a complex combination of human and automated tasks. All case updates and case data are captured in a case file, which acts as the pivot for managing the case and serves as a system of record for future audits and tracking. The key characteristic of these workflows is that they are ad hoc in nature: there is no single resolution path, and one size rarely fits all.

Case management does not have structured time bounds; cases typically do not all resolve at the same time. Consider examples like client onboarding, dispute resolution, and fraud investigations, which by their nature call for customized handling of each specific case. With the advent of modern technological frameworks and practices like microservices and event-driven processing, the potential of case management solutions opens up even further. This article describes how you can use case management for dynamic workflow processing in this modern era, with components such as Red Hat OpenShift, Red Hat AMQ Streams, Red Hat Fuse, and Red Hat Process Automation Manager.

Continue reading “Dynamic case management in the event-driven era”

Building Apache Kafka Streams applications using Red Hat AMQ Streams: Part 2

The Apache Kafka project includes a Streams Domain-Specific Language (DSL) built on top of the lower-level Stream Processor API. This DSL provides developers with simple abstractions for performing data processing operations. However, how one builds a stream processing pipeline in a containerized environment with Kafka isn’t clear. This second article in a two-part series uses the basics from the previous article to build an example application using Red Hat AMQ Streams.
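
For a flavor of the kind of application the series builds, here is a minimal Streams DSL sketch. The my-cluster-kafka-bootstrap address is the bootstrap service an AMQ Streams (Strimzi) cluster typically exposes, and the topic names are placeholders; adjust both to your environment:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseApp {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-app");
    // Assumed AMQ Streams bootstrap service; change to match your cluster
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    // DSL abstractions: read a stream, transform each value, write it back out
    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, String> source = builder.stream("input-topic");
    source.mapValues(value -> value.toUpperCase())
          .to("output-topic");

    KafkaStreams streams = new KafkaStreams(builder.build(), props);
    streams.start();
    Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
  }
}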

Continue reading “Building Apache Kafka Streams applications using Red Hat AMQ Streams: Part 2”

Building Apache Kafka Streams applications using Red Hat AMQ Streams: Part 1

The Apache Kafka project includes a Streams Domain-Specific Language (DSL) built on top of the lower-level Stream Processor API. This DSL provides developers with simple abstractions for performing data processing operations. However, how to build a stream processing pipeline in a containerized environment with Kafka isn’t clear. This two-part article series describes the steps required to build your own Apache Kafka Streams application using Red Hat AMQ Streams.
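
To illustrate the relationship between the two APIs mentioned above, here is a rough sketch of the same kind of stateless transformation written directly against the lower-level Processor API, in the classic pre-Kafka 2.7 style; the broker address and topic names are placeholders:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.AbstractProcessor;

public class ProcessorApiApp {

  // A low-level processor that uppercases each value and forwards it downstream
  static class UppercaseProcessor extends AbstractProcessor<String, String> {
    @Override
    public void process(String key, String value) {
      context().forward(key, value.toUpperCase());
    }
  }

  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-processor-app");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    // Wire the topology by hand: source topic -> processor -> sink topic
    Topology topology = new Topology();
    topology.addSource("source", "input-topic");
    topology.addProcessor("uppercase", UppercaseProcessor::new, "source");
    topology.addSink("sink", "output-topic", "uppercase");

    KafkaStreams streams = new KafkaStreams(topology, props);
    streams.start();
    Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
  }
}

The Streams DSL sketch earlier in this list compiles down to a topology of exactly this kind, which is what "built on top of the lower-level Stream Processor API" means in practice.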

Continue reading “Building Apache Kafka Streams applications using Red Hat AMQ Streams: Part 1”
