Apache Kafka

Consuming messages from closest replicas in Apache Kafka 2.4.0 and AMQ Streams

Thanks to changes in Apache Kafka 2.4.0, consumers are no longer required to connect to a leader replica to consume messages. In this article, I introduce you to Apache Kafka’s new ReplicaSelector plug-in interface and its built-in RackAwareReplicaSelector implementation. I’ll briefly explain the benefits of the new rack-aware selector, then show you how to use it to balance load more efficiently across Amazon Web Services (AWS) availability zones.

For this example, we’ll use Red Hat AMQ Streams with Red Hat OpenShift Container Platform 4.3, running on AWS.
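
To give a sense of what’s involved before diving in, here is a minimal sketch of the two sides of the configuration: each broker declares its location and opts in to the rack-aware selector, and each consumer declares its own location through client.rack. The bootstrap address, group ID, and zone names below are placeholders, not values from this article.

    // Broker side (server.properties), opting in to rack-aware fetching:
    //   broker.rack=eu-west-1a
    //   replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class RackAwareConsumerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // Declare this consumer's availability zone; when it matches a follower's
            // broker.rack, the broker directs fetches to that in-zone follower.
            props.put(ConsumerConfig.CLIENT_RACK_CONFIG, "eu-west-1a");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            // ... subscribe and poll as usual, then close.
            consumer.close();
        }
    }

With matching rack IDs, a consumer in eu-west-1a fetches from a replica in its own zone instead of a leader elsewhere, which is what saves cross-zone traffic on AWS.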

Continue reading “Consuming messages from closest replicas in Apache Kafka 2.4.0 and AMQ Streams”

Capture database changes with Debezium Apache Kafka connectors

Change data capture, or CDC, is a well-established software design pattern for monitoring and capturing changes in data so that other software can respond to them. CDC captures row-level changes to database tables and passes the corresponding change events to a data streaming bus. Applications can read these change event streams and access the events in the order in which they occurred.

Change data capture thus helps to bridge traditional data stores and new cloud-native, event-driven architectures. Debezium is a set of distributed services that captures row-level changes in databases so that applications can see and respond to those changes. This general availability (GA) release from Red Hat Integration includes the following Debezium connectors for Apache Kafka: MySQL Connector, PostgreSQL Connector, MongoDB Connector, and SQL Server Connector.
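
Because Debezium writes each captured table’s change events to its own Kafka topic, a downstream application consumes them like any other stream. As a rough sketch, a plain Kafka consumer is enough to read the events; the topic name below follows Debezium’s server.database.table naming convention but is hypothetical, as are the connection details.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ChangeEventReader {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "change-event-reader");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Debezium emits one topic per captured table: <server>.<database>.<table>.
                consumer.subscribe(Collections.singletonList("dbserver1.inventory.customers"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        // Each value is a change event describing the row before and after the change.
                        System.out.printf("key=%s value=%s%n", record.key(), record.value());
                    }
                }
            }
        }
    }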

Continue reading “Capture database changes with Debezium Apache Kafka connectors”

Running an event-driven health management business process through end user scenarios: Part 2

If you worked through the first article in this series, you have already set up the example application you’ll need for this one. If you have not yet set up the population health management application, do that before continuing. In this article, we’ll run a few business processes through our event- and business-process-driven application to test it out.

Continue reading “Running an event-driven health management business process through end user scenarios: Part 2”

How to use the new Kafka Client API for Kafka-specific message properties in Open Liberty 20.0.0.3

In Open Liberty 20.0.0.3, you can now access Kafka-specific properties, such as the message key and message headers, rather than just the message payload as with the basic MicroProfile Reactive Messaging Message API. You can also now set the SameSite attribute in the session cookie, in LTPA and JWT cookies, and in application-defined cookies.
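
As a rough sketch of what this looks like in application code, the example below unwraps the incoming Message to the underlying Kafka ConsumerRecord to reach the key and headers. The channel and class names are illustrative, and the unwrap call is an assumption based on this release’s description rather than a definitive API reference.

    import java.util.concurrent.CompletionStage;
    import javax.enterprise.context.ApplicationScoped;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.eclipse.microprofile.reactive.messaging.Incoming;
    import org.eclipse.microprofile.reactive.messaging.Message;

    @ApplicationScoped
    public class OrderListener {

        @Incoming("orders") // hypothetical channel name
        public CompletionStage<Void> receive(Message<String> message) {
            // Unwrap to the underlying Kafka ConsumerRecord, which exposes the
            // key and headers as well as the payload (assumed API).
            ConsumerRecord<String, String> record = message.unwrap(ConsumerRecord.class);
            System.out.println("key=" + record.key() + " payload=" + record.value());
            record.headers().forEach(h -> System.out.println("header: " + h.key()));
            return message.ack();
        }
    }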

Continue reading “How to use the new Kafka Client API for Kafka-specific message properties in Open Liberty 20.0.0.3”

Getting started with Red Hat Integration service registry

New projects require some help. Imagine you are getting ready to start that new feature your business has been asking about for the last couple of months. Your team is ready to start coding the awesome new capability that will change your business.

To achieve this, the team will need to interact with your organization’s existing software components. Your developers will need to work with the API services and event endpoints already available in your architecture, and before they can send and process information, they need to know the structure, or schema, that those services expect.

Red Hat announced the Technical Preview of the Red Hat Integration service registry to help teams govern their service schemas. The service registry is a store for schema (and API design) artifacts that provides a REST API and a set of optional rules for enforcing content validity and evolution. Teams can now use the service registry to query for the schemas required by each service endpoint, or to register and store new structures for future use.
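
As a quick illustration of that workflow, the sketch below registers an Avro schema over the registry’s REST API and then fetches it back. The registry URL, /api/artifacts path, X-Registry-ArtifactId header, and artifact ID are assumptions about the Technical Preview REST API and may differ in your deployment.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RegistryExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical registry URL and artifact ID; adjust for your deployment.
            String artifactsUrl = "http://my-registry:8080/api/artifacts";
            String avroSchema = "{\"type\":\"record\",\"name\":\"Payment\","
                    + "\"fields\":[{\"name\":\"amount\",\"type\":\"double\"}]}";

            HttpClient client = HttpClient.newHttpClient();

            // Register a new schema artifact under the given artifact ID.
            HttpRequest register = HttpRequest.newBuilder()
                    .uri(URI.create(artifactsUrl))
                    .header("Content-Type", "application/json")
                    .header("X-Registry-ArtifactId", "payment-value")
                    .POST(HttpRequest.BodyPublishers.ofString(avroSchema))
                    .build();
            System.out.println(client.send(register, HttpResponse.BodyHandlers.ofString()).body());

            // Fetch the latest version of the schema back by its artifact ID.
            HttpRequest fetch = HttpRequest.newBuilder()
                    .uri(URI.create(artifactsUrl + "/payment-value"))
                    .GET()
                    .build();
            System.out.println(client.send(fetch, HttpResponse.BodyHandlers.ofString()).body());
        }
    }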

Continue reading “Getting started with Red Hat Integration service registry”

Understanding Red Hat AMQ Streams components for OpenShift and Kubernetes: Part 3

In the previous articles in this series, we first covered the basics of Red Hat AMQ Streams on OpenShift and then showed how to set up Kafka Connect, a Kafka Bridge, and Kafka Mirror Maker. Here are a few key points to keep in mind before we proceed:

  • AMQ Streams is based on Apache Kafka.
  • AMQ Streams for the Red Hat OpenShift Container Platform is based on the Strimzi project.
  • AMQ Streams on containers has multiple components, such as the Cluster Operator, Entity Operator, Mirror Maker, Kafka Connect, and Kafka Bridge.

Now that we have everything set up (or so we think), let’s look at monitoring and alerting for our new environment.

Continue reading “Understanding Red Hat AMQ Streams components for OpenShift and Kubernetes: Part 3”

Understanding Red Hat AMQ Streams components for OpenShift and Kubernetes: Part 1

Red Hat AMQ Streams is an enterprise-grade Apache Kafka (event streaming) solution, which enables systems to exchange data at high throughput and low latency. AMQ Streams is available as part of the Red Hat AMQ offering in two different flavors: one on the Red Hat Enterprise Linux platform and another on the OpenShift Container Platform. In this three-part article series, we will cover AMQ Streams on the OpenShift Container Platform.

To get the most out of these articles, it will help to be familiar with messaging concepts, Red Hat OpenShift, and Kubernetes.

Continue reading “Understanding Red Hat AMQ Streams components for OpenShift and Kubernetes: Part 1”

Red Hat simplifies transition to open source Kafka with new service registry and HTTP bridge

Red Hat continues to expand the features available to users looking to implement a 100% open source, event-driven architecture (EDA) by running Apache Kafka on Red Hat OpenShift and Red Hat Enterprise Linux. The Red Hat Integration Q4 release provides new features and capabilities, including several aimed at simplifying usage and deployment of the AMQ Streams distribution of Apache Kafka.

Continue reading “Red Hat simplifies transition to open source Kafka with new service registry and HTTP bridge”

Red Hat advances Debezium CDC connectors for Apache Kafka support to Technical Preview

After a couple of months in Developer Preview, the Debezium Apache Kafka connectors for change data capture (CDC) are now available as a Technical Preview as part of the Q4 release of Red Hat Integration. Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process.

Continue reading “Red Hat advances Debezium CDC connectors for Apache Kafka support to Technical Preview”
