Stream Processing

Low-code microservices orchestration with Syndesis

Recently I wrote about decoupling infrastructure code from microservices. I found that Apache Camel and Debezium provided the middleware I needed for that project, with minimal coding on my end. After that successful experiment, I wondered whether it would be possible to orchestrate two or more similarly decoupled microservices into a new service, and whether I could do it without writing any code at all. I decided to find out.

This article is a quick dive into orchestrating microservices without writing any code. We will use Syndesis (an open source integration platform) as our orchestration platform. Note that the examples assume that you are familiar with Debezium and Kafka.

Continue reading “Low-code microservices orchestration with Syndesis”

How to use the new Kafka Client API for Kafka-specific message properties in Open Liberty 20.0.0.3

In Open Liberty 20.0.0.3, you can now access Kafka-specific properties, such as the message key and message headers, rather than just the message payload, which was all the basic MicroProfile Reactive Messaging Message API exposed. You can also now set the SameSite attribute on the session cookie, on LTPA and JWT cookies, and on application-defined cookies.
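
As a minimal sketch of what this enables, assuming Open Liberty lets you unwrap the incoming Message to the underlying Kafka ConsumerRecord (the channel name, topic binding, and the trace-id header below are illustrative, not part of any real configuration):

```java
import java.util.concurrent.CompletionStage;

import javax.enterprise.context.ApplicationScoped;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Message;

@ApplicationScoped
public class OrderConsumer {

    // The "orders" channel would be bound to a Kafka topic in
    // microprofile-config.properties (illustrative channel name).
    @Incoming("orders")
    public CompletionStage<Void> consume(Message<String> message) {
        // Unwrap the message to the underlying Kafka ConsumerRecord to
        // reach the key and headers as well as the payload.
        ConsumerRecord<String, String> record = message.unwrap(ConsumerRecord.class);

        String key = record.key();                                     // Kafka message key
        Header trace = record.headers().lastHeader("trace-id");        // hypothetical header

        System.out.println("key=" + key
                + " trace=" + (trace == null ? "n/a" : new String(trace.value()))
                + " payload=" + message.getPayload());

        return message.ack();
    }
}
```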

Continue reading “How to use the new Kafka Client API for Kafka-specific message properties in Open Liberty 20.0.0.3”

Dynamic case management in the event-driven era

Case management applications are designed to handle a complex combination of human and automated tasks. All case updates and case data are captured in a case file, which acts as the pivot for managing the case and later serves as a system of record for audits and tracking. The key characteristic of these workflows is that they are ad hoc in nature: there is no single path to resolution, and one size rarely fits all.

Case management does not have structured time bounds, and cases do not all resolve at the same time. Consider examples like client onboarding, dispute resolution, and fraud investigations, which by their nature require customized handling of each specific case. With the advent of modern frameworks and practices like microservices and event-driven processing, the potential of case management solutions opens up even further. This article describes how you can use case management for dynamic workflow processing in this modern era, with components such as Red Hat OpenShift, Red Hat AMQ Streams, Red Hat Fuse, and Red Hat Process Automation Manager.

Continue reading “Dynamic case management in the event-driven era”

Building Apache Kafka Streams applications using Red Hat AMQ Streams: Part 2

The Apache Kafka project includes a Streams Domain-Specific Language (DSL) built on top of the lower-level Stream Processor API. This DSL provides developers with simple abstractions for performing data processing operations. However, how one builds a stream processing pipeline in a containerized environment with Kafka isn’t clear. This second article in a two-part series uses the basics from the previous article to build an example application using Red Hat AMQ Streams.
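
For readers who have not seen the Streams DSL, a minimal sketch of the kind of abstraction it provides looks like this; the broker address and topic names are illustrative:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-app");     // consumer group and state-store prefix
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative broker address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> lines = builder.stream("lines-in"); // source topic (illustrative)
        lines.mapValues(v -> v.toUpperCase())                       // one-line processing step
             .to("lines-out");                                      // sink topic (illustrative)

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

The DSL hides the lower-level Processor API details: the builder turns this three-line pipeline into a full topology with consumers, producers, and (where needed) state stores.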

Continue reading “Building Apache Kafka Streams applications using Red Hat AMQ Streams: Part 2”

Building Apache Kafka Streams applications using Red Hat AMQ Streams: Part 1

The Apache Kafka project includes a Streams Domain-Specific Language (DSL) built on top of the lower-level Stream Processor API. This DSL provides developers with simple abstractions for performing data processing operations. However, how to build a stream processing pipeline in a containerized environment with Kafka isn’t clear. This two-part article series describes the steps required to build your own Apache Kafka Streams application using Red Hat AMQ Streams.

Continue reading “Building Apache Kafka Streams applications using Red Hat AMQ Streams: Part 1”

Processing CloudEvents with Eclipse Vert.x

Our connected world is full of events that are triggered or received by different software services. One of the big issues is that event publishers tend to describe events differently and in ways that are mostly incompatible with each other.

To address this, the Serverless Working Group from the Cloud Native Computing Foundation (CNCF) recently announced version 0.2 of the CloudEvents specification. The specification aims to describe event data in a common, standardized way. To some degree, a CloudEvent is an abstract envelope with some specified attributes that describe a concrete event and its data.

Working with CloudEvents is simple. This article shows how to use the powerful JVM toolkit provided by Eclipse Vert.x to generate CloudEvents and to receive and process them.
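
As a rough sketch of the receiving side, assuming CloudEvents 0.2 binary content mode, in which the event attributes travel as ce-* HTTP headers alongside the data in the body (the port and the printed attributes are illustrative):

```java
import io.vertx.core.Vertx;

public class CloudEventReceiver {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        vertx.createHttpServer().requestHandler(req -> {
            // In binary content mode, CloudEvent attributes arrive as ce-* headers.
            String type   = req.getHeader("ce-type");
            String source = req.getHeader("ce-source");
            String id     = req.getHeader("ce-id");

            // The event data is the HTTP body itself.
            req.bodyHandler(body -> {
                System.out.printf("event id=%s type=%s source=%s data=%s%n",
                        id, type, source, body.toString());
                req.response().setStatusCode(202).end();
            });
        }).listen(8080); // illustrative port
    }
}
```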

Continue reading “Processing CloudEvents with Eclipse Vert.x”

Intro to Apache Kafka and Kafka Streams for Event-Driven Microservices on DevNation Live

Scalability is a key concern for many growing organizations. That’s why many of them use Apache Kafka, a popular messaging and streaming platform. It is horizontally scalable, cloud-native, and versatile. It can serve as a traditional publish-and-subscribe messaging system, as a streaming platform, or as a distributed state store. Companies around the world use Apache Kafka to build real-time streaming applications, streaming data pipelines, and event-driven architectures.
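
To make the publish-and-subscribe role concrete, here is a minimal producer sketch; the broker address and topic name are illustrative:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class HelloProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish one record; any number of consumer groups can
            // subscribe to the topic and read it independently.
            producer.send(new ProducerRecord<>("greetings", "hello", "Hello, Kafka!"));
        }
    }
}
```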

Continue reading “Intro to Apache Kafka and Kafka Streams for Event-Driven Microservices on DevNation Live”

How to run Kafka on OpenShift, the enterprise Kubernetes, with AMQ Streams

On October 25th, Red Hat announced the general availability of its AMQ Streams Kubernetes Operator for Apache Kafka. Red Hat AMQ Streams focuses on running Apache Kafka on OpenShift, providing a massively scalable, distributed, and high-performance data streaming platform. AMQ Streams, based on the Apache Kafka and Strimzi projects, offers a distributed backbone that allows microservices and other applications to share data with extremely high throughput. This backbone enables:

  • Publish and subscribe: Many-to-many dissemination in a fault-tolerant, durable manner.
  • Replayable events: Serves as a repository from which microservices can build in-memory copies of source data, up to any point in time.
  • Long-term data retention: Efficiently stores data for immediate access, limited only by disk space.
  • Message partitioning for horizontal scalability: Organizes messages across partitions to maximize concurrent access.

One of the most frequent requests from developers and architects is a simple deployment option for getting started and testing. In this guide, we will use the Red Hat Container Development Kit, based on minishift, to start an Apache Kafka cluster on Kubernetes.
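
Once the cluster is running, a quick way to exercise the partitioning point above is to create a partitioned topic with the standard Kafka AdminClient; the bootstrap address, topic name, and counts here are illustrative:

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Illustrative bootstrap service address for a cluster on Kubernetes.
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 12 partitions for concurrent consumption, replication factor 3 for durability.
            NewTopic topic = new NewTopic("orders", 12, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```

With AMQ Streams itself, you would more typically declare a KafkaTopic custom resource and let the operator create the topic, but the AdminClient route works against any reachable cluster.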

Continue reading “How to run Kafka on OpenShift, the enterprise Kubernetes, with AMQ Streams”

Welcome Apache Kafka to the Kubernetes Era!

We have pretty exciting news this week, as Red Hat is announcing the general availability of its Apache Kafka Kubernetes Operator. Red Hat AMQ Streams delivers the mechanisms for managing Apache Kafka on top of OpenShift, our enterprise distribution of Kubernetes.

Everything started in May 2018, when David Ingham (@dingha) unveiled the Developer Preview as a new addition to the Red Hat AMQ offering. Red Hat AMQ Streams focuses on running Apache Kafka on OpenShift. In the microservices world, where components need a high-throughput communication mechanism, Apache Kafka has made a name for itself as a leading real-time, distributed messaging platform for building data pipelines and streaming applications.

Continue reading “Welcome Apache Kafka to the Kubernetes Era!”

EventFlow: Event-driven microservices on OpenShift (Part 1)

This post is the first in a series of three related posts describing EventFlow, a lightweight, cloud-native, distributed microservices framework we have created. EventFlow can be used to develop streaming applications that process CloudEvents, an effort to standardize on a data format for exchanging information about events generated by cloud platforms.

The EventFlow platform specifically targets Kubernetes and OpenShift, and it models event-processing applications as a connected flow or stream of components. These components can be developed with a simple SDK library, or they can be created as Docker images that are configured through environment variables to attach to Kafka topics and process event data directly.
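
That environment-variable pattern for a standalone component might look like the following sketch; the variable names and defaults are hypothetical, not EventFlow's actual contract:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FlowComponent {
    public static void main(String[] args) {
        // Hypothetical env-var contract: where to find Kafka and which topic to attach to.
        String servers = System.getenv().getOrDefault("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092");
        String topic   = System.getenv().getOrDefault("INPUT_TOPIC", "events");

        Properties props = new Properties();
        props.put("bootstrap.servers", servers);
        props.put("group.id", "flow-component");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton(topic));
            while (true) {
                // Poll the attached topic and process each event's data directly.
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    System.out.println("processing event: " + record.value());
                }
            }
        }
    }
}
```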

Continue reading “EventFlow: Event-driven microservices on OpenShift (Part 1)”
