AMQ Streams

Integrating Spring Boot with Red Hat Integration Service Registry

Most new cloud-native application and microservice designs are based on event-driven architecture (EDA), reacting to real-time information by sending and receiving data about individual events. This kind of architecture relies on asynchronous, non-blocking communication between event producers and consumers through an event streaming backbone such as Red Hat AMQ Streams running on top of Red Hat OpenShift. In scenarios where many different events are being managed, it is critical to define a governance model in which each event is defined as an API. That way, producers and consumers can produce and consume checked and validated events. We can use a service registry as a datastore for events defined as APIs.

From my field experience working with many clients, I’ve found the most typical architecture consists of the following components:

In this article, you will learn how to easily integrate your Spring Boot applications with Red Hat Integration Service Registry, which is based on the open source Apicurio Registry.
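To give a feel for what that integration looks like in code, here is a minimal sketch of a Spring Boot producer factory that delegates value serialization to the Apicurio Registry Avro serializer, so every event is validated against a schema stored in the registry. The serializer class name, the apicurio.registry.url property, and the bootstrap and registry URLs are assumptions based on the Apicurio Registry serde library, not code taken from the article.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class RegistryProducerConfig {

    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        // Kafka bootstrap address (placeholder value).
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // Apicurio Registry Avro serializer (assumed class name): resolves and validates
        // the event schema against the registry before producing.
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "io.apicurio.registry.utils.serde.AvroKafkaSerializer");
        // Location of the Service Registry API (assumed URL).
        props.put("apicurio.registry.url", "http://service-registry:8080/api");
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, Object> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
```

A consumer would be wired symmetrically, pointing the matching Apicurio Avro deserializer at the same registry URL.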

Continue reading “Integrating Spring Boot with Red Hat Integration Service Registry”

HTTP-based Kafka messaging with Red Hat AMQ Streams

Apache Kafka is a rock-solid, super-fast event streaming backbone that is not only for microservices. It’s an enabler for many use cases, including activity tracking, log aggregation, stream processing, change-data capture, Internet of Things (IoT) telemetry, and more.

Red Hat AMQ Streams makes it easy to run and manage Kafka natively on Red Hat OpenShift. AMQ Streams’ upstream project, Strimzi, does the same thing for Kubernetes.

Setting up a Kafka cluster on a developer’s laptop is fast and easy, but in some environments, the client setup is harder. Kafka uses its own TCP/IP-based binary protocol and has clients available for many different programming languages, but only the JVM client is maintained in Kafka’s main codebase.
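One common answer to that gap is the AMQ Streams (Strimzi) Kafka Bridge, which exposes producing and consuming over a REST API so that any language with an HTTP client can talk to Kafka. As a hedged sketch only, here is what producing a record over HTTP can look like in plain Java; the bridge URL and topic name are assumed placeholders, while the content type and records payload format follow the Strimzi bridge API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BridgeProducer {

    public static void main(String[] args) throws Exception {
        // JSON payload in the Kafka Bridge record format: a list of records with optional keys.
        String body = "{\"records\":[{\"key\":\"order-1\",\"value\":{\"amount\":42}}]}";

        HttpRequest request = HttpRequest.newBuilder()
                // Bridge route/service URL and topic name are assumed values.
                .uri(URI.create("http://my-bridge-route/topics/orders"))
                .header("Content-Type", "application/vnd.kafka.json.v2+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The bridge replies with the partition and offset assigned to each record.
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```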

Continue reading “HTTP-based Kafka messaging with Red Hat AMQ Streams”

Consuming messages from closest replicas in Apache Kafka 2.4.0 and AMQ Streams

Thanks to changes in Apache Kafka 2.4.0, consumers are no longer required to connect to a leader replica to consume messages. In this article, I introduce you to Apache Kafka’s new, pluggable ReplicaSelector interface and its built-in RackAwareReplicaSelector implementation. I’ll briefly explain the benefits of the new rack-aware selector, then show you how to use it to more efficiently balance load across Amazon Web Services (AWS) availability zones.

For this example, we’ll use Red Hat AMQ Streams with Red Hat OpenShift Container Platform 4.3, running on AWS.
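To illustrate the consumer side: once the brokers are configured with replica.selector.class set to org.apache.kafka.common.replica.RackAwareReplicaSelector and a broker.rack matching their availability zone, a consumer only has to advertise its own zone through the standard client.rack setting. The bootstrap address, group, topic, and zone in this sketch are placeholder values.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RackAwareConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Tell the brokers which "rack" (here, an AWS availability zone) this consumer runs in,
        // so fetches can be served by the closest in-sync replica instead of always the leader.
        props.put(ConsumerConfig.CLIENT_RACK_CONFIG, "us-east-1a");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
            }
        }
    }
}
```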

Continue reading “Consuming messages from closest replicas in Apache Kafka 2.4.0 and AMQ Streams”

Set up Red Hat AMQ Streams custom certificates on OpenShift (update)

As anticipated in the “Additional notes” section of my previous article, starting with Red Hat AMQ Streams 1.4 it is finally possible to use your own custom certificate for encrypting communication between Kafka clients and brokers, without being required to provide a CA certificate. The auto-generated and auto-managed internal CAs still remain, but only to protect internal cluster communication.

The user-provided certificate can be used with all listeners that have TLS encryption enabled, such as the route, load balancer, ingress, and NodePort types. In this complete example, we will enable an external route listener for one-way TLS authentication.
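On the client side, one-way TLS through the external route only requires trusting the custom certificate (or the CA that issued it). The following is a minimal sketch of the corresponding Kafka client properties; the route hostname, truststore path, and password are placeholder values, not details from the article.

```java
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SslConfigs;

public class TlsClientProps {

    public static Properties tlsProperties() {
        Properties props = new Properties();
        // Bootstrap address of the external OpenShift route (placeholder host).
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG,
                "my-cluster-kafka-bootstrap-myproject.apps.example.com:443");
        // One-way TLS: the client only verifies the broker certificate; no client keystore needed.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        // Truststore containing the custom certificate (or the CA that signed it).
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/truststore.p12");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");
        props.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "PKCS12");
        return props;
    }
}
```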

Prerequisites

You need to have the following in place before you can proceed:

Continue reading “Set up Red Hat AMQ Streams custom certificates on OpenShift (update)”

Using secrets in Kafka Connect configuration

Kafka Connect is an integration framework that is part of the Apache Kafka project. On Kubernetes and Red Hat OpenShift, you can deploy Kafka Connect using the Strimzi and Red Hat AMQ Streams Operators. Kafka Connect lets you run source and sink connectors: source connectors load data from an external system into Kafka, while sink connectors work the other way around, loading data from Kafka into another external system. In most cases, the connectors need to authenticate when connecting to those other systems, so you will need to provide credentials as part of the connector’s configuration. This article shows you how to use Kubernetes secrets to store the credentials and then reference them in the connector’s configuration.
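To illustrate the general mechanism: Kafka Connect can resolve configuration values at runtime through org.apache.kafka.common.config.provider.FileConfigProvider (enabled in the worker configuration with config.providers=file and config.providers.file.class), so a connector can reference a credentials file mounted from a Kubernetes secret instead of embedding the password in plain text. The sketch below registers such a connector through the Connect REST API; the connector class, topic, mount path, and endpoint are hypothetical, and the exact approach used with the AMQ Streams Operators in the article may differ.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterConnector {

    public static void main(String[] args) throws Exception {
        // Connector configuration referencing a credentials file mounted from a Kubernetes secret.
        // The ${file:...} placeholders are resolved at runtime by Kafka's FileConfigProvider,
        // so the plain-text credentials never appear in the connector configuration itself.
        String connector = """
                {
                  "name": "my-sink-connector",
                  "config": {
                    "connector.class": "org.example.MySinkConnector",
                    "topics": "my-topic",
                    "database.user": "${file:/opt/kafka/external-configuration/connector-credentials/credentials.properties:username}",
                    "database.password": "${file:/opt/kafka/external-configuration/connector-credentials/credentials.properties:password}"
                  }
                }
                """;

        HttpRequest request = HttpRequest.newBuilder()
                // Kafka Connect REST API endpoint (placeholder host).
                .uri(URI.create("http://my-connect-cluster-connect-api:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(connector))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```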

Continue reading “Using secrets in Kafka Connect configuration”
