Apache Kafka

Tracking COVID-19 using Quarkus, AMQ Streams, and Camel K on OpenShift

In just a matter of weeks, the world that we knew changed forever. The COVID-19 pandemic came swiftly and caused massive disruption to our healthcare systems and local businesses, throwing the world’s economies into chaos. The coronavirus quickly became a crisis that affected everyone. As researchers and scientists rushed to make sense of it and find ways to eliminate or slow the rate of infection, countries started gathering statistics such as the number of confirmed cases, reported deaths, and so on. Johns Hopkins University researchers have since aggregated the statistics from many countries and made them available.

In this article, we demonstrate how to build a website that shows a series of COVID-19 graphs. These graphs reflect the cumulative number of cases and deaths over a given time period for each country. We use the Red Hat build of Quarkus, Apache Camel K, and Red Hat AMQ Streams to retrieve the Johns Hopkins University data and populate a MongoDB database with it. The deployment is built on the Red Hat OpenShift Container Platform (OCP).
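
The ingestion side of such a pipeline can be sketched as a small Camel K integration. The following is a minimal sketch only: the data URL, topic name, and bootstrap address are hypothetical placeholders, not the values used in the article.

```java
// CovidFeedRoute.java: a minimal Camel K route sketch (placeholder names throughout).
// The data URL, Kafka topic, and broker address below are illustrative assumptions.
import org.apache.camel.builder.RouteBuilder;

public class CovidFeedRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Poll a (hypothetical) COVID-19 statistics endpoint once a day
        // and publish the raw payload to a Kafka topic managed by AMQ Streams.
        from("timer:covid-poll?period=86400000")
            .to("https://example.com/covid19/daily-report.csv")                      // placeholder URL
            .to("kafka:covid-reports?brokers=my-cluster-kafka-bootstrap:9092");      // placeholder topic/broker
    }
}
```

With the Camel K operator installed, a route class like this can be pushed to the cluster with the kamel CLI, for example `kamel run CovidFeedRoute.java`.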

Continue reading “Tracking COVID-19 using Quarkus, AMQ Streams, and Camel K on OpenShift”

Extending Kafka connectivity with Apache Camel Kafka connectors

Apache Kafka is one of the most widely used technologies in modern application development because of its distributed nature, high throughput, and horizontal scalability. More and more organizations are adopting Kafka as the central event bus for their event-driven architectures. As a result, more data flows through the cluster, and connectivity requirements climb the priority list of any backlog. For this reason, the Apache Camel community released the first iteration of Kafka Connect connectors to ease the burden on development teams.
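
To give a feel for what these connectors look like in practice, here is a hedged sketch of a generic Camel sink connector configuration, expressed as java.util.Properties (in practice this would live in a properties or JSON file handed to Kafka Connect). The connector class and property names such as camel.sink.url reflect that first iteration and may differ in later releases; the connector name, topic, and Telegram endpoint are placeholders.

```java
// CamelSinkConnectorConfig.java: an illustrative configuration for the generic Camel sink
// connector from the early camel-kafka-connector releases. All concrete values are placeholders.
import java.util.Properties;

public class CamelSinkConnectorConfig {
    public static Properties build() {
        Properties config = new Properties();
        config.put("name", "telegram-sink");                                              // placeholder name
        config.put("connector.class", "org.apache.camel.kafkaconnector.CamelSinkConnector");
        config.put("tasks.max", "1");
        config.put("topics", "notifications");                                            // placeholder topic
        // Any Camel endpoint URI can act as the sink; this Telegram URI is illustrative only.
        config.put("camel.sink.url", "telegram:bots?authorizationToken=REPLACE_ME");
        config.put("key.converter", "org.apache.kafka.connect.storage.StringConverter");
        config.put("value.converter", "org.apache.kafka.connect.storage.StringConverter");
        return config;
    }
}
```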

Continue reading “Extending Kafka connectivity with Apache Camel Kafka connectors”

Change data capture for microservices without writing any code

Want to smoothly modernize your legacy, monolithic applications into microservices or cloud-native services without writing any code? In this demonstration, we show you how to implement the following change data capture scenario between two microservices on Red Hat OpenShift using a combination of Syndesis, Strimzi, and Debezium.

[Figure: architecture diagram of the change data capture scenario]

Continue reading “Change data capture for microservices without writing any code”

Change data capture with Debezium: A simple how-to, Part 1

One question always comes up as organizations move toward being cloud-native, twelve-factor, and stateless: How do you get an organization’s data to these new applications? There are many different patterns out there, but the pattern we will look at today is change data capture. This post is a simple guide to building out a change data capture solution using Debezium within an OpenShift environment. Future posts will build on this foundation and add further capabilities.
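
As a taste of what such a solution involves, one common step is registering a Debezium connector through Kafka Connect’s REST API. The sketch below does this for a MySQL database; it is illustrative only, and the Connect host, database credentials, and server/topic names are placeholders rather than values from this post.

```java
// RegisterDebeziumConnector.java: a minimal sketch of registering a Debezium MySQL connector
// via the Kafka Connect REST API. Host names, credentials, and server/topic names are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterDebeziumConnector {
    public static void main(String[] args) throws Exception {
        // Standard Debezium MySQL connector settings; the concrete values are illustrative only.
        String connectorJson = """
            {
              "name": "inventory-connector",
              "config": {
                "connector.class": "io.debezium.connector.mysql.MySqlConnector",
                "database.hostname": "mysql",
                "database.port": "3306",
                "database.user": "debezium",
                "database.password": "dbz",
                "database.server.id": "184054",
                "database.server.name": "dbserver1",
                "database.history.kafka.bootstrap.servers": "my-cluster-kafka-bootstrap:9092",
                "database.history.kafka.topic": "schema-changes.inventory"
              }
            }""";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://my-connect-cluster-connect-api:8083/connectors"))  // placeholder host
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(connectorJson))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```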

Continue reading “Change data capture with Debezium: A simple how-to, Part 1”

Consuming messages from closest replicas in Apache Kafka 2.4.0 and AMQ Streams

Thanks to changes in Apache Kafka 2.4.0, consumers are no longer required to connect to a leader replica to consume messages. In this article, I introduce you to Apache Kafka’s new ReplicaSelector interface and its built-in RackAwareReplicaSelector implementation. I’ll briefly explain the benefits of the new rack-aware selector, then show you how to use it to balance load more efficiently across Amazon Web Services (AWS) availability zones.

For this example, we’ll use Red Hat AMQ Streams with Red Hat OpenShift Container Platform 4.3, running on AWS.
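
As a preview, here is a minimal consumer sketch that advertises its availability zone through the client.rack setting; with replica.selector.class set to the RackAwareReplicaSelector on the brokers, fetches can then be served by a follower replica in the same zone instead of the partition leader. The bootstrap address, topic, group, and zone names are placeholders.

```java
// RackAwareConsumer.java: a minimal sketch of a rack-aware consumer (placeholder names throughout).
//
// Broker side (server.properties, or the Kafka custom resource config in AMQ Streams):
//   replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector
//   broker.rack=us-east-1a   (each broker reports its own availability zone)
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RackAwareConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "rack-aware-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Tell the cluster which "rack" (availability zone) this consumer runs in, so the
        // rack-aware selector can pick the closest replica for fetch requests.
        props.put(ConsumerConfig.CLIENT_RACK_CONFIG, "us-east-1a");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
            }
        }
    }
}
```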

Continue reading “Consuming messages from closest replicas in Apache Kafka 2.4.0 and AMQ Streams”

Capture database changes with Debezium Apache Kafka connectors

Change data capture, or CDC, is a well-established software design pattern for a system that monitors and captures data changes so that other software can respond to them. CDC captures row-level changes to database tables and passes the corresponding change events to a data streaming bus. Applications can read these change event streams and access the events in the order in which they occurred.

Thus, change data capture helps to bridge traditional data stores and new cloud-native, event-driven architectures. Debezium is a set of distributed services that captures row-level changes in databases so that applications can see and respond to those changes. This general availability (GA) release from Red Hat Integration includes the following Debezium connectors for Apache Kafka: MySQL Connector, PostgreSQL Connector, MongoDB Connector, and SQL Server Connector.
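
For a sense of what applications see, the sketch below reads Debezium change events from a per-table topic and prints each record; the event envelope carries standard fields such as "op", "before", and "after". The topic name and bootstrap address are placeholders, and a real application would parse the JSON payload rather than print raw strings.

```java
// ChangeEventReader.java: a minimal sketch of consuming Debezium change events from Kafka.
// Topic and broker names are placeholders.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ChangeEventReader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "cdc-reader");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Debezium writes one topic per captured table, e.g. <server>.<database>.<table>.
            consumer.subscribe(List.of("dbserver1.inventory.customers"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    // Each value is a change event envelope with "op" (c/u/d/r) plus
                    // "before" and "after" row images, delivered in the order the changes occurred.
                    System.out.printf("key=%s%nvalue=%s%n%n", record.key(), record.value());
                }
            }
        }
    }
}
```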

Continue reading “Capture database changes with Debezium Apache Kafka connectors”

Running an event-driven health management business process through end user scenarios: Part 2

If you read the first article in this series, then you have already set up the example application you’ll need for this article. If you have not set up the population health management application, you should do that before continuing. In this article, we’ll run a few business processes through our event- and business-process-driven application to test it out.

Continue reading “Running an event-driven health management business process through end user scenarios: Part 2”

How to use the new Kafka Client API for Kafka-specific message properties in Open Liberty 20.0.0.3

In Open Liberty 20.0.0.3, you can now access Kafka-specific properties such as the message key and message headers, rather than just the message payload, as was the case with the basic MicroProfile Reactive Messaging Message API. You can also now set the SameSite attribute in the session cookie, the LTPA and JWT cookies, and application-defined cookies.
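
To illustrate the kind of record-level metadata involved, here is a sketch using the plain Apache Kafka clients API (not the Open Liberty API itself) that sets a key and a header on a record; these are the Kafka-specific properties that reactive messaging methods can now read and write. The broker address, topic, and header name are placeholders.

```java
// RecordMetadataExample.java: a sketch of Kafka record key and headers using the plain Kafka
// clients API. This is not the Open Liberty API; all concrete values are placeholders.
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class RecordMetadataExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A Kafka record carries more than its payload: a key (used for partitioning)
            // and arbitrary headers travel with it and can be read on the consuming side.
            ProducerRecord<String, String> record =
                new ProducerRecord<>("orders", "order-42", "{\"status\":\"CREATED\"}");
            record.headers().add("trace-id", "abc123".getBytes(StandardCharsets.UTF_8));
            producer.send(record);
        }
    }
}
```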

Continue reading “How to use the new Kafka Client API for Kafka-specific message properties in Open Liberty 20.0.0.3”

Getting started with Red Hat Integration service registry

New projects require some help. Imagine you are getting ready to start that new feature your business has been asking about for the last couple of months. Your team is ready to start coding to implement the awesome new thing that will change your business.

To achieve it, the team will need to interact with your organization’s existing software components. Your developers will need to work with the API services and event endpoints already available in your architecture. Before they can send and process information, developers need to be aware of the structure, or schema, expected by those services.

Red Hat announced the Technical Preview of the Red Hat Integration service registry to help teams govern their service schemas. The service registry is a store for schema (and API design) artifacts that provides a REST API and a set of optional rules for enforcing content validity and evolution. Teams can now use the service registry to query for the schemas required by each service endpoint, or to register and store new structures for future use.
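
As a rough illustration, the sketch below registers an Avro schema over HTTP, assuming an Apicurio Registry-style /api/artifacts endpoint; the registry URL, path, header names, and schema content are assumptions for illustration rather than documented specifics of this release.

```java
// RegisterSchema.java: a minimal sketch of registering an Avro schema with a service registry.
// The registry URL, endpoint path, and X-Registry-* headers are assumptions, not confirmed API details.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterSchema {
    public static void main(String[] args) throws Exception {
        String avroSchema = """
            {
              "type": "record",
              "name": "Order",
              "fields": [
                {"name": "id", "type": "string"},
                {"name": "amount", "type": "double"}
              ]
            }""";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://service-registry:8080/api/artifacts"))  // placeholder URL and path
            .header("Content-Type", "application/json")
            .header("X-Registry-ArtifactId", "order-value")                 // assumed header name
            .header("X-Registry-ArtifactType", "AVRO")                      // assumed header name
            .POST(HttpRequest.BodyPublishers.ofString(avroSchema))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```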

Continue reading “Getting started with Red Hat Integration service registry”

Understanding Red Hat AMQ Streams components for OpenShift and Kubernetes: Part 3

In the previous articles in this series, we first covered the basics of Red Hat AMQ Streams on OpenShift and then showed how to set up Kafka Connect, a Kafka Bridge, and Kafka Mirror Maker. Here are a few key points to keep in mind before we proceed:

  • AMQ Streams is based on Apache Kafka.
  • AMQ Streams for the Red Hat OpenShift Container Platform is based on the Strimzi project.
  • AMQ Streams on containers has multiple components, such as the Cluster Operator, Entity Operator, Mirror Maker, Kafka Connect, and Kafka Bridge.

Now that we have everything set up (or so we think), let’s look at monitoring and alerting for our new environment.

Continue reading “Understanding Red Hat AMQ Streams components for OpenShift and Kubernetes: Part 3”
