Apache Kafka

Building resilient event-driven architectures with Apache Kafka

Even though cloud-native computing has been around for some time (the Cloud Native Computing Foundation was started in 2015, an eon in computer time), not every developer has experienced the, uh, “joy” of dealing with distributed systems. The old patterns of thinking and architecting systems have given way to new ideas and new problems. For example, it’s not always possible (or advisable) to connect to a database and run transactions. Databases themselves are giving way to events, Command Query Responsibility Segregation (CQRS), and eventual consistency. Two-phase commits are being replaced with queues and sagas, while monoliths are being replaced with microservices, containers, and Kubernetes. “Small and local” thinking rules the day.
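
To make the shift concrete, here is a minimal sketch of “recording facts as events” with the plain Kafka Java producer client. The orders topic, the broker address, and the JSON payload are placeholders rather than anything from the article; the idea is that downstream services build their own read models from the event stream (the CQRS half of the story).

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderPlacedPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all"); // wait for all in-sync replicas: durability over latency

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Instead of updating a shared database inside a transaction, the service
            // records the fact as an event; consumers build their own views from it.
            ProducerRecord<String, String> event = new ProducerRecord<>(
                    "orders", "order-1001", "{\"status\":\"PLACED\",\"total\":42.50}");
            producer.send(event, (metadata, error) -> {
                if (error != null) {
                    error.printStackTrace();
                } else {
                    System.out.printf("Event stored at %s-%d@%d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
            producer.flush();
        }
    }
}
```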

Continue reading Building resilient event-driven architectures with Apache Kafka

Event-driven APIs and schema governance for Apache Kafka: Get ready for Kafka Summit Europe 2021

As a developer, I’m always excited to attend the Kafka Summit, happening this year from May 11 to 12. There are so many great sessions addressing critical challenges in the Apache Kafka ecosystem. One example is how changes to event-driven APIs are leading developers to focus on contract-first development for Kafka.

Continue reading Event-driven APIs and schema governance for Apache Kafka: Get ready for Kafka Summit Europe 2021

New features and storage options in Red Hat Integration Service Registry 1.1 GA

This article introduces new storage installation options and features in the Red Hat Integration service registry. The service registry component is based on Apicurio. You can use it to store and retrieve service artifacts such as OpenAPI specifications and AsyncAPI definitions, as well as schemas such as Apache Avro, JSON Schema, and Google Protobuf. We’ve provided Red Hat Integration’s Service Registry 1.1 component as a general availability (GA) release in Red Hat Integration 2020-Q4.
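
As a rough illustration of what storing a service artifact looks like in practice, the sketch below registers an Avro schema over plain HTTP. The registry hostname is a placeholder, and the /api/artifacts path and X-Registry-* headers follow the Apicurio Registry 1.x REST API as I understand it, so verify them against the Service Registry documentation for your version.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterAvroSchema {
    public static void main(String[] args) throws Exception {
        // Avro schema for the value side of an "orders" topic (illustrative only).
        String avroSchema = "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
                + "{\"name\":\"id\",\"type\":\"string\"},"
                + "{\"name\":\"total\",\"type\":\"double\"}]}";

        // Registry host and API path are assumptions; adjust them to your deployment.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://service-registry.example.com/api/artifacts"))
                .header("Content-Type", "application/json")
                .header("X-Registry-ArtifactType", "AVRO")      // assumption: Apicurio 1.x header names
                .header("X-Registry-ArtifactId", "orders-value")
                .POST(HttpRequest.BodyPublishers.ofString(avroSchema))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```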

Continue reading New features and storage options in Red Hat Integration Service Registry 1.1 GA

Red Hat Process Automation Manager 7.9 brings Apache Kafka integration and more

Red Hat Process Automation Manager 7.9 brings bug fixes, performance improvements, and new features for process and case management, business and decision automation, and business optimization. This article introduces you to Process Automation Manager’s out-of-the-box integration with Apache Kafka, revamped business automation management capabilities, and support for multiple decision requirements diagrams (DRDs). I will also guide you through setting up and using the new drools-metric module for analyzing business rules performance, and I’ll briefly touch on Spring Boot integration in Process Automation Manager 7.9.
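
As a taste of the drools-metric setup covered in the article, here is a hedged sketch. The property names below come from the upstream drools-metric module and are assumptions on my part, so check them against the Process Automation Manager 7.9 release notes; the drools-metric JAR also needs to be on the classpath next to your rules.

```java
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class MetricEnabledSession {
    public static void main(String[] args) {
        // Assumed property names from the upstream drools-metric module; verify for PAM 7.9.
        System.setProperty("drools.metric.logger.enabled", "true");
        // Only report nodes whose evaluation cost exceeds this threshold.
        System.setProperty("drools.metric.logger.threshold", "500");

        KieServices ks = KieServices.Factory.get();
        KieContainer container = ks.getKieClasspathContainer();
        // Assumes a default KieSession is defined in kmodule.xml.
        KieSession session = container.newKieSession();

        // Insert facts and fire rules as usual; metric output goes to the logger.
        session.fireAllRules();
        session.dispose();
    }
}
```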

Continue reading Red Hat Process Automation Manager 7.9 brings Apache Kafka integration and more

OpenShift 4.5: Bringing developers joy with Kubernetes 1.18 and so much more

Since the first Kubernetes-based Red Hat OpenShift release in 2015, Red Hat has put out numerous releases. Five years later, Kubernetes is celebrating its sixth birthday, and last month, we announced the general availability of Red Hat OpenShift Container Platform 4.5. In this article, I offer a high-level view of the latest OpenShift release and its technology and feature updates based on Kubernetes 1.18.

Continue reading OpenShift 4.5: Bringing developers joy with Kubernetes 1.18 and so much more

Introduction to Strimzi: Apache Kafka on Kubernetes (KubeCon Europe 2020)

Apache Kafka has emerged as the leading platform for building real-time data pipelines. Born as a messaging system, mainly for the publish/subscribe pattern, Kafka has established itself as a data-streaming platform for processing data in real time. Today, Kafka is also heavily used for developing event-driven applications, enabling the services in your infrastructure to communicate with each other through events, with Kafka as the backbone. Meanwhile, cloud-native application development is gaining more traction thanks to Kubernetes.

Thanks to the abstraction layer Kubernetes provides, it’s easy to move your applications from bare metal to any cloud provider (AWS, Azure, GCP, IBM, and so on), enabling hybrid-cloud scenarios as well. But how do you move your Apache Kafka workloads to the cloud? It’s possible, but it’s not simple. You could learn all of the Apache Kafka tools for handling a cluster well enough to move your Kafka workloads to Kubernetes, or you could leverage the Kubernetes knowledge you already have by using Strimzi.
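
Part of Strimzi’s appeal is that application code does not have to change when the cluster moves onto Kubernetes. The sketch below is a plain Kafka consumer whose only Kubernetes-specific detail is the bootstrap address, which follows Strimzi’s <cluster-name>-kafka-bootstrap service naming convention; the cluster name, topic, and consumer group are placeholders.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class InClusterConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Strimzi exposes the cluster through a <cluster-name>-kafka-bootstrap service;
        // "my-cluster" is just the example cluster name used here.
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9092");
        props.put("group.id", "demo-group");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s [%d] %s%n",
                            record.topic(), record.offset(), record.value());
                }
            }
        }
    }
}
```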

Note: Strimzi will be represented at the virtual KubeCon Europe 2020 conference, August 17-20, 2020. See the end of the article for details.

Continue reading Introduction to Strimzi: Apache Kafka on Kubernetes (KubeCon Europe 2020)

HTTP-based Kafka messaging with Red Hat AMQ Streams

Apache Kafka is a rock-solid, super-fast, event streaming backbone that is not only for microservices. It’s an enabler for many use cases, including activity tracking, log aggregation, stream processing, change-data capture, Internet of Things (IoT) telemetry, and more.

Red Hat AMQ Streams makes it easy to run and manage Kafka natively on Red Hat OpenShift. AMQ Streams’ upstream project, Strimzi, does the same thing for Kubernetes.

Setting up a Kafka cluster on a developer’s laptop is fast and easy, but in some environments, the client setup is harder. Kafka uses a custom binary protocol over TCP/IP and has clients available for many different programming languages. Only the JVM client is maintained in Kafka’s main codebase, however.
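
That is where HTTP-based messaging helps: any language with an HTTP client can produce and consume without a native Kafka library. Below is a rough sketch of sending one record through the bridge; the hostname is a placeholder for the bridge Service or Route, and the /topics/{name} path and application/vnd.kafka.json.v2+json content type follow the Kafka Bridge HTTP API as I recall it, so double-check them against the AMQ Streams documentation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BridgeProducer {
    public static void main(String[] args) throws Exception {
        // One JSON-encoded record for the "orders" topic; the bridge translates the
        // HTTP request into the native Kafka protocol on the client's behalf.
        String body = "{\"records\":[{\"key\":\"order-1001\",\"value\":{\"status\":\"PLACED\"}}]}";

        // Host and port are placeholders for the bridge exposed on OpenShift; the path
        // and content type are assumptions to verify against your bridge version.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://my-bridge.example.com/topics/orders"))
                .header("Content-Type", "application/vnd.kafka.json.v2+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```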

Continue reading HTTP-based Kafka messaging with Red Hat AMQ Streams

Tracking COVID-19 using Quarkus, AMQ Streams, and Camel K on OpenShift

In just a matter of weeks, the world that we knew changed forever. The COVID-19 pandemic came swiftly and caused massive disruption to our healthcare systems and local businesses, throwing the world’s economies into chaos. The coronavirus quickly became a crisis that affected everyone. As researchers and scientists rushed to make sense of it, and find ways to eliminate or slow the rate of infection, countries started gathering statistics such as the number of confirmed cases, reported deaths, and so on. Johns Hopkins University researchers have since aggregated the statistics from many countries and made them available.

In this article, we demonstrate how to build a website that shows a series of COVID-19 graphs. These graphs reflect the cumulative number of cases and deaths over a given time period for each country. We use the Red Hat build of Quarkus, Apache Camel K, and Red Hat AMQ Streams to fetch the Johns Hopkins University data and populate a MongoDB database with it. The deployment is built on the Red Hat OpenShift Container Platform (OCP).
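
To give a feel for the integration layer, here is a minimal Camel route of the kind Camel K can run (for example with kamel run CovidRoute.java). The topic, broker address, database, and collection names are placeholders, and the route assumes a MongoClient bean named mongoClient is bound in the registry; the routes in the article itself will differ.

```java
import org.apache.camel.builder.RouteBuilder;

public class CovidRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Consume aggregated report messages from a Kafka topic (placeholder names)
        // and insert each one into a MongoDB collection for the website to query.
        from("kafka:jhu-covid-daily?brokers=my-cluster-kafka-bootstrap:9092")
            .log("Received report: ${body}")
            .to("mongodb:mongoClient?database=covid&collection=reports&operation=insert");
    }
}
```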

Continue reading Tracking COVID-19 using Quarkus, AMQ Streams, and Camel K on OpenShift
