Our connected world is full of events that are triggered or received by different software services. One persistent problem is that event publishers tend to describe events differently, in formats that are largely incompatible with one another.
To address this, the Serverless Working Group from the Cloud Native Computing Foundation (CNCF) recently announced version 0.2 of the CloudEvents specification. The specification aims to describe event data in a common, standardized way. In essence, a CloudEvent is an abstract envelope with a set of specified attributes that describe a concrete event and its data.
Working with CloudEvents is simple. This article shows how to use the powerful JVM toolkit provided by Vert.x to either generate or receive and process CloudEvents.
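As a taste of what the article covers, here is a minimal sketch (not code from the article) of a Vert.x HTTP server receiving a CloudEvent in the spec's binary content mode, where the event attributes travel as ce-* HTTP headers; the port and log output are arbitrary choices.

```java
import io.vertx.core.Vertx;

public class CloudEventReceiver {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.createHttpServer()
            .requestHandler(req -> {
                // Binary content mode: CloudEvents attributes are carried as ce-* headers
                String type = req.getHeader("ce-type");
                String source = req.getHeader("ce-source");
                String id = req.getHeader("ce-id");
                req.bodyHandler(body -> {
                    System.out.printf("Received CloudEvent %s of type %s from %s: %s%n",
                            id, type, source, body.toString());
                    req.response().setStatusCode(202).end();
                });
            })
            .listen(8080);
    }
}
```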
Continue reading “Processing CloudEvents with Eclipse Vert.x”
Scalability is often a key issue for growing organizations, which is one reason so many of them use Apache Kafka, a popular messaging and streaming platform. Kafka is horizontally scalable, cloud-native, and versatile: it can serve as a traditional publish-and-subscribe messaging system, as a streaming platform, or as a distributed state store. Companies around the world use Apache Kafka to build real-time streaming applications, streaming data pipelines, and event-driven architectures.
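To illustrate the Kafka Streams side of the session, here is a hedged sketch of a small streams topology; the application id, broker address, and topic names (orders, created-orders) are hypothetical and not taken from the talk.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class OrderStreamApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-stream-app");   // hypothetical name
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed local broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Consume a topic as a stream, filter it, and publish the result to another topic
        KStream<String, String> orders = builder.stream("orders");
        orders.filter((key, value) -> value.contains("CREATED"))
              .to("created-orders");

        new KafkaStreams(builder.build(), props).start();
    }
}
```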
Continue reading “Intro to Apache Kafka and Kafka Streams for Event-Driven Microservices on DevNation Live”
This post is the first in a series of three related posts describing EventFlow, a lightweight, cloud-native, distributed microservices framework we have created. EventFlow can be used to develop streaming applications that process CloudEvents, an effort to standardize on a data format for exchanging information about events generated by cloud platforms.
The EventFlow platform specifically targets Kubernetes and OpenShift, and it models event-processing applications as a connected flow, or stream, of components. These components can be developed with a simple SDK library, or they can be created as Docker images that are configured through environment variables to attach to Kafka topics and process event data directly, as sketched below.
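The following sketch illustrates that environment-variable-driven style of component using the plain Kafka consumer API rather than the EventFlow SDK itself; the variable names INPUT_TOPIC and KAFKA_BOOTSTRAP_SERVERS and the consumer group id are assumptions for illustration, not EventFlow's actual conventions.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FlowComponent {
    public static void main(String[] args) {
        // Hypothetical variable names; read the topic and broker list from the container environment
        String topic = System.getenv().getOrDefault("INPUT_TOPIC", "events");
        String brokers = System.getenv().getOrDefault("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092");

        Properties props = new Properties();
        props.put("bootstrap.servers", brokers);
        props.put("group.id", "flow-component");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList(topic));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    // Placeholder for the component's event-processing logic
                    System.out.println("Processing event: " + record.value());
                }
            }
        }
    }
}
```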
Continue reading “EventFlow: Event-driven microservices on OpenShift (Part 1)”
Red Hat Decision Manager provides a vast array of decision management functionality, from the Decision Tables feature in the new Decision Model and Notation (DMN) v1.1 implementation, which supports the full FEEL language at Compliance Level 3 of the DMN specification, to Predictive Model Markup Language (PMML).
Another powerful feature is the Complex Event Processing (CEP) engine, which provides the ability to detect, correlate, abstract, aggregate or compose, and react to events. In other words, the technology provides techniques to infer complex events from simple ones, react to the events of interest, and take action. The main difference between CEP and normal rules execution is the notion of time: whereas standard rules execution in Decision Manager deals with facts and reasoning over those facts, the CEP engine focuses on events. An event represents a significant change of state at a particular point in time or over an interval.
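As a rough illustration of how events reach the CEP engine, here is a minimal Java sketch that inserts timestamped transaction events into a KIE session entry point; the session name, entry point name, and Transaction class are hypothetical, and stream mode would normally be configured in kmodule.xml rather than in code.

```java
import java.util.Date;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class FraudDetection {

    // A simple event fact; CEP rules reason over its occurrence in time
    public static class Transaction {
        private final String cardNumber;
        private final double amount;
        private final Date timestamp;

        public Transaction(String cardNumber, double amount, Date timestamp) {
            this.cardNumber = cardNumber;
            this.amount = amount;
            this.timestamp = timestamp;
        }
        public String getCardNumber() { return cardNumber; }
        public double getAmount() { return amount; }
        public Date getTimestamp() { return timestamp; }
    }

    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        KieContainer container = ks.getKieClasspathContainer();
        // "fraud-session" is a hypothetical session name defined in kmodule.xml
        KieSession session = container.newKieSession("fraud-session");

        // Insert an event into an entry point and let the rules correlate events over time
        session.getEntryPoint("transactions")
               .insert(new Transaction("4111-0000-0000-0000", 250.0, new Date()));
        session.fireAllRules();
        session.dispose();
    }
}
```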
Recently, I was asked to demonstrate how Decision Manager's CEP engine can be used in a real-time credit card fraud detection system. One of the requirements I was presented with led to an interesting rule implementation that forms the basis of this article. The requirement was defined as follows:
Continue reading “Detecting credit card fraud with Red Hat Decision Manager 7”