Stream Processing

Debezium serialization with Apache Avro and Apicurio Registry

In this article, you will learn how to use Debezium with Apache Avro and Apicurio Registry to efficiently monitor change events in a MySQL database. We will set up and run a demonstration that replaces Debezium’s default JSON converter with Apache Avro, using the Apicurio service registry to externalize the event data schema and reduce the payload of captured events.
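
To make that swap concrete, here is a minimal sketch of a Kafka Connect connector registration that uses Apicurio’s Avro converter in place of the default JSON converter. This is an illustration rather than the article’s exact configuration: the hostnames, credentials, topic names, and registry URL are placeholders, and the converter property names vary across Debezium and Apicurio Registry releases.

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.name": "dbserver1",
    "database.include.list": "inventory",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory",
    "key.converter": "io.apicurio.registry.utils.converter.AvroConverter",
    "key.converter.apicurio.registry.url": "http://apicurio:8080/api",
    "value.converter": "io.apicurio.registry.utils.converter.AvroConverter",
    "value.converter.apicurio.registry.url": "http://apicurio:8080/api"
  }
}
```

Because each serialized event then carries only a schema identifier instead of the full schema, the Avro payloads come out much smaller than their JSON equivalents.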

Continue reading Debezium serialization with Apache Avro and Apicurio Registry

New features and storage options in Red Hat Integration Service Registry 1.1 GA

This article introduces new storage installation options and features in the Red Hat Integration service registry. The service registry component is based on Apicurio. You can use it to store and retrieve service artifacts such as OpenAPI specifications and AsyncAPI definitions, as well as schemas such as Apache Avro, JSON Schema, and Google Protobuf. We’ve provided Red Hat Integration’s Service Registry 1.1 component as a general availability (GA) release in Red Hat Integration 2020-Q4.
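
To give a flavor of the registry’s REST API, here is a hedged sketch of registering a small Avro schema, assuming the 1.x core API is served under /api; the registry hostname and the artifact ID are placeholders of my choosing.

```bash
# Store an Avro schema in the registry under the artifact ID "order-value".
curl -X POST http://my-registry:8080/api/artifacts \
  -H "Content-Type: application/json" \
  -H "X-Registry-ArtifactType: AVRO" \
  -H "X-Registry-ArtifactId: order-value" \
  -d '{"type": "record", "name": "Order", "fields": [{"name": "id", "type": "long"}]}'
```

A GET against the same artifact path (for example, /api/artifacts/order-value) then retrieves the stored schema.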

Continue reading New features and storage options in Red Hat Integration Service Registry 1.1 GA

Red Hat Process Automation Manager 7.9 brings Apache Kafka integration and more

Red Hat Process Automation Manager 7.9 brings bug fixes, performance improvements, and new features for process and case management, business and decision automation, and business optimization. This article introduces you to Process Automation Manager’s out-of-the-box integration with Apache Kafka, revamped business automation management capabilities, and support for multiple decision requirements diagrams (DRDs). I will also guide you through setting up and using the new drools-metric module for analyzing business rule performance, and I’ll briefly touch on Spring Boot integration in Process Automation Manager 7.9.
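
As a taste of that setup, here is a minimal sketch of switching drools-metric on, assuming the drools-metric artifact is already on the classpath next to the Drools runtime; the property names follow the upstream Drools documentation, and the threshold unit (microseconds) is an assumption worth verifying for your version.

```java
public class MetricDemo {
    public static void main(String[] args) {
        // Enable the metric logger; without this flag the module stays inactive.
        System.setProperty("drools.metric.logger.enabled", "true");

        // Assumed to be in microseconds: only rule-network nodes slower than
        // this threshold are reported, keeping the output focused on hot spots.
        System.setProperty("drools.metric.logger.threshold", "500");

        // ...build your KieContainer and fire rules as usual; node-level timing
        // then shows up in the drools-metric log output.
    }
}
```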

Continue reading Red Hat Process Automation Manager 7.9 brings Apache Kafka integration and more

Capture IBM Db2 data changes with Debezium Db2 connector

This article introduces the new Debezium Db2 connector for change data capture, now available as a technical preview from Red Hat Integration. Get a quick overview of using Debezium in a Red Hat AMQ Streams Kafka cluster, then find out how to use the new Db2 connector to capture row-level changes in your Db2 database tables.

Note: Change data capture, or CDC, is a well-established software design pattern for monitoring and capturing data changes in a database. CDC captures row-level changes to database tables and passes corresponding change events to a data streaming bus. Applications can read the change-event streams and access change events in the order that they happened.
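
For orientation, a registration request for the Db2 connector might look roughly like the sketch below; every hostname, credential, and table name is a placeholder, the option names can differ between Debezium releases (earlier versions used table.whitelist, for instance), and the source tables must already have Db2’s capture (CDC) mode enabled.

```json
{
  "name": "db2-inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.db2.Db2Connector",
    "database.hostname": "db2server",
    "database.port": "50000",
    "database.user": "db2inst1",
    "database.password": "secret",
    "database.dbname": "TESTDB",
    "database.server.name": "db2server",
    "table.include.list": "DB2INST1.CUSTOMERS",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.testdb"
  }
}
```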

Continue reading Capture IBM Db2 data changes with Debezium Db2 connector

Build a data streaming pipeline using Kafka Streams and Quarkus

In typical data warehousing systems, data is first accumulated and then processed. But with the advent of new technologies, it is now possible to process data as it arrives; we call this real-time data processing. In real-time processing, data streams through pipelines, moving from one system to another. Data is generated from static sources (like databases) or real-time systems (like transactional applications), then filtered, transformed, and finally stored in a database or pushed to several other systems for further processing. Those systems can then follow the same cycle: filter, transform, and store or push to other systems.
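
To sketch what one such pipeline stage looks like with Kafka Streams on Quarkus, the topology below reads from one topic, filters and transforms the records, and pushes them to another topic for the next system in the chain. The topic names and the transformation itself are placeholders; the Quarkus kafka-streams extension picks up a CDI-produced Topology like this one and runs it.

```java
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

@ApplicationScoped
public class PipelineTopologyProducer {

    @Produces
    public Topology buildPipeline() {
        StreamsBuilder builder = new StreamsBuilder();

        builder.stream("raw-events", Consumed.with(Serdes.String(), Serdes.String()))
               // Filter: drop empty records before they travel downstream.
               .filter((key, value) -> value != null && !value.isBlank())
               // Transform: a stand-in for real enrichment or normalization.
               .mapValues(value -> value.trim().toUpperCase())
               // Store/push: hand the result to the next system via another topic.
               .to("processed-events", Produced.with(Serdes.String(), Serdes.String()));

        return builder.build();
    }
}
```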

Continue reading Build a data streaming pipeline using Kafka Streams and Quarkus

New language support features in Apache Camel VS Code extension 0.0.27

In this article, I share several new language support features in the recently released Language Support for Apache Camel VS Code extension 0.0.27. Before I discuss these improvements, note that they are also available in other IDEs that support the Camel Language Server, including Eclipse IDE, Eclipse Che, and more. It is simply easier to focus on one IDE for my demonstrations, so I’ve chosen VS Code.

Note: Apache Camel is a versatile open source integration framework based on known enterprise integration patterns.
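
For context, the sort of code the extension assists with is an ordinary Camel route like the hypothetical one below; as you type the endpoint URIs (timer:, log:), the language support can offer completion and validation for components and their options.

```java
import org.apache.camel.builder.RouteBuilder;

public class GreetingRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("timer:tick?period=5000")              // fire every five seconds
            .setBody(constant("Hello from Camel"))  // set a static message body
            .to("log:demo");                        // write the body to the log
    }
}
```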

Continue reading New language support features in Apache Camel VS Code extension 0.0.27

Kubernetes-native Apache Kafka with Strimzi, Debezium, and Apache Camel (Kafka Summit 2020)

Apache Kafka has become the leading platform for building real-time data pipelines. Today, Kafka is heavily used for developing event-driven applications, where it lets services communicate with each other through events. Using Kubernetes for this type of workload requires adding specialized components such as Kubernetes Operators and connectors to bridge the rest of your systems and applications to the Kafka ecosystem.

In this article, we’ll look at how the open source projects Strimzi, Debezium, and Apache Camel integrate with Kafka to speed up critical areas of Kubernetes-native development.
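
As one concrete point of integration, Strimzi’s operator can manage a Debezium connector declaratively through a KafkaConnector custom resource. The sketch below is illustrative only: the names and credentials are placeholders, the apiVersion differs between Strimzi releases, and in a real deployment the password would come from a Kubernetes Secret rather than appearing inline.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector
  labels:
    strimzi.io/cluster: my-connect   # the Strimzi-managed Kafka Connect cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.hostname: mysql
    database.port: 3306
    database.user: debezium
    database.password: dbz           # placeholder; use a Secret in practice
    database.server.name: dbserver1
    database.include.list: inventory
```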

Note: Red Hat is sponsoring the Kafka Summit 2020 virtual conference from August 24-25, 2020. See the end of this article for details.

Continue reading Kubernetes-native Apache Kafka with Strimzi, Debezium, and Apache Camel (Kafka Summit 2020)

Introduction to Strimzi: Apache Kafka on Kubernetes (KubeCon Europe 2020)

Apache Kafka has emerged as the leading platform for building real-time data pipelines. Born as a messaging system, mainly for the publish/subscribe pattern, Kafka has established itself as a data-streaming platform for processing data in real time. Today, Kafka is also heavily used for developing event-driven applications, enabling the services in your infrastructure to communicate with each other through events with Apache Kafka as the backbone. Meanwhile, cloud-native application development is gaining traction thanks to Kubernetes.

Thanks to the abstraction layer Kubernetes provides, it’s easy to move your applications from bare metal to any cloud provider (AWS, Azure, GCP, IBM, and so on), enabling hybrid-cloud scenarios as well. But how do you move your Apache Kafka workloads to the cloud? It’s possible, but it’s not simple: you could learn Apache Kafka’s cluster-management tooling well enough to run your Kafka workloads on Kubernetes yourself, or you could leverage the Kubernetes knowledge you already have by using Strimzi.
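
With Strimzi, a whole Kafka cluster becomes a declarative Kubernetes resource that the operator reconciles for you. A minimal sketch, assuming a recent Strimzi release (the apiVersion and listener syntax have changed over time), might look like this:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral        # fine for a demo; use persistent-claim in production
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
```

Applying this manifest is all it takes; the Strimzi operator creates and manages the underlying pods, services, and configuration.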

Note: Strimzi will be represented at the virtual KubeCon Europe 2020 conference from August 17-20, 2020. See the end of the article for details.

Continue reading Introduction to Strimzi: Apache Kafka on Kubernetes (KubeCon Europe 2020)
