Integration

Troubleshooting user task errors in Red Hat Process Automation Manager and Red Hat JBoss BPM Suite

I’ve been around Red Hat JBoss BPM Suite (jBPM) and Red Hat Process Automation Manager (RHPAM) for many years. Over that time, I’ve learned a lot about the lesser-known aspects of this business process management engine.

If you are like most people, you might believe that user tasks are trivial and that learning about their details is unnecessary. Then, one day, you will find yourself troubleshooting an error like this one:

User '[User:'admin']' was unable to execution operation 'Start' on task id 287271 due to a no 'current status' match.

Receiving one too many of these error messages led me to learn everything that I know about user tasks, and I have decided to share my experience.

User tasks are a vital part of any business process management engine, jBPM included. Their behavior is defined by the OASIS Web Services Human Task (WS-HumanTask) specification, which has been fully adopted by Business Process Model and Notation (BPMN) 2.0, the standard for business process diagrams. The spec defines two exceptionally important things that I will discuss in this article: the user task lifecycle and task access control. Without further ado, let’s jump right in.
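To make the lifecycle concrete, here is a minimal sketch of the happy path through a user task using the jBPM TaskService API. This is an illustration, not code from the article: the task ID and user ID are placeholders, and the TaskService instance is assumed to come from your runtime engine. Skipping the claim step on a task that is still in the Ready state is one common way to hit the "no 'current status' match" error quoted above.

```java
import java.util.HashMap;
import java.util.Map;

import org.kie.api.task.TaskService;

// Hypothetical helper showing the WS-HumanTask happy path:
// Ready -> Reserved -> InProgress -> Completed.
public class TaskLifecycleExample {

    private final TaskService taskService;

    public TaskLifecycleExample(TaskService taskService) {
        this.taskService = taskService;
    }

    public void workOnTask(long taskId, String userId) {
        // A task in Ready status must be claimed (Ready -> Reserved)
        // before it can be started; calling start() from the wrong
        // status triggers the "no 'current status' match" error.
        taskService.claim(taskId, userId);

        // Reserved -> InProgress.
        taskService.start(taskId, userId);

        // InProgress -> Completed, passing the task's output data.
        Map<String, Object> results = new HashMap<>();
        results.put("approved", Boolean.TRUE);
        taskService.complete(taskId, userId, results);
    }
}
```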

Note: These troubleshooting tips are applicable to Red Hat JBoss BPM Suite 6.2 and above and Red Hat Process Automation Manager 7.

Continue reading “Troubleshooting user task errors in Red Hat Process Automation Manager and Red Hat JBoss BPM Suite”

Kubernetes-native Apache Kafka with Strimzi, Debezium, and Apache Camel (Kafka Summit 2020)

Apache Kafka has become the leading platform for building real-time data pipelines. Today, Kafka is heavily used for developing event-driven applications, where it lets services communicate with each other through events. Using Kubernetes for this type of workload requires adding specialized components such as Kubernetes Operators and connectors to bridge the rest of your systems and applications to the Kafka ecosystem.
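As a quick illustration of services communicating through events, here is a minimal Kafka producer sketch. The topic name, record key, and bootstrap address are assumptions for the example; inside Kubernetes, a Strimzi-managed cluster typically exposes a bootstrap Service with a name like the one below.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed in-cluster bootstrap address for a Strimzi-managed
        // Kafka cluster named "my-cluster"; adjust for your environment.
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish an event; any interested service can consume it
            // independently, without knowing about the producer.
            producer.send(new ProducerRecord<>(
                    "orders", "order-1001", "{\"status\":\"CREATED\"}"));
        }
    }
}
```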

In this article, we’ll look at how the open source projects Strimzi, Debezium, and Apache Camel integrate with Kafka to speed up critical areas of Kubernetes-native development.

Note: Red Hat is sponsoring the Kafka Summit 2020 virtual conference from August 24-25, 2020. See the end of this article for details.

Continue reading “Kubernetes-native Apache Kafka with Strimzi, Debezium, and Apache Camel (Kafka Summit 2020)”

Introduction to Strimzi: Apache Kafka on Kubernetes (KubeCon Europe 2020)

Apache Kafka has emerged as the leading platform for building real-time data pipelines. Born as a messaging system, mainly for the publish/subscribe pattern, Kafka has established itself as a data-streaming platform for processing data in real time. Today, Kafka is also heavily used for developing event-driven applications, enabling the services in your infrastructure to communicate with each other through events, with Apache Kafka as the backbone. Meanwhile, cloud-native application development is gaining more traction thanks to Kubernetes.

Thanks to the abstraction layer Kubernetes provides, it’s easy to move your applications from running on bare metal to any cloud provider (AWS, Azure, GCP, IBM, and so on), enabling hybrid-cloud scenarios as well. But how do you move your Apache Kafka workloads to the cloud? It’s possible, but it’s not simple. You could learn the Apache Kafka cluster-management tools well enough to move your Kafka workloads to Kubernetes yourself, or you could leverage the Kubernetes knowledge you already have by using Strimzi.
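From the application side, a Strimzi-managed cluster looks like any other Kafka cluster. As a hedged sketch, here is a consumer that assumes a cluster named my-cluster, which Strimzi exposes inside Kubernetes through a bootstrap Service named my-cluster-kafka-bootstrap; the topic and group ID are placeholders.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OrderEventConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Strimzi exposes the cluster through a bootstrap Service named
        // <cluster-name>-kafka-bootstrap; "my-cluster" is an assumed name.
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9092");
        props.put("group.id", "order-service");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                // Poll for new events and react to each one as it arrives.
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("key=%s value=%s%n",
                            record.key(), record.value());
                }
            }
        }
    }
}
```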

Note: Strimzi will be represented at the virtual KubeCon Europe 2020 conference from August 17-20, 2020. See the end of the article for details.

Continue reading “Introduction to Strimzi: Apache Kafka on Kubernetes (KubeCon Europe 2020)”

HTTP-based Kafka messaging with Red Hat AMQ Streams

Apache Kafka is a rock-solid, super-fast event-streaming backbone that is not only for microservices. It’s an enabler for many use cases, including activity tracking, log aggregation, stream processing, change-data capture, Internet of Things (IoT) telemetry, and more.

Red Hat AMQ Streams makes it easy to run and manage Kafka natively on Red Hat OpenShift. AMQ Streams’ upstream project, Strimzi, does the same thing for Kubernetes.

Setting up a Kafka cluster on a developer’s laptop is fast and easy, but in some environments, the client setup is harder. Kafka uses its own binary protocol over TCP/IP and has clients available for many different programming languages. Only the JVM client is maintained in Kafka’s main codebase, however.
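One way around the client problem, as the article's title suggests, is HTTP. As a hedged sketch, here is how a client could produce a record through the Strimzi Kafka Bridge (the upstream of the AMQ Streams HTTP bridge): records are POSTed as a JSON envelope to /topics/{topic}. The bridge address and topic name below are assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BridgeProducerExample {

    public static void main(String[] args) throws Exception {
        // The bridge expects a JSON envelope with a "records" array.
        String body =
                "{\"records\":[{\"key\":\"sensor-1\",\"value\":{\"temperature\":21}}]}";

        HttpRequest request = HttpRequest.newBuilder()
                // "my-bridge:8080" is an assumed bridge address;
                // "telemetry" is an assumed topic name.
                .uri(URI.create("http://my-bridge:8080/topics/telemetry"))
                .header("Content-Type", "application/vnd.kafka.json.v2+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // A successful response includes per-record partition and offset metadata.
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```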

Continue reading “HTTP-based Kafka messaging with Red Hat AMQ Streams”

Choosing the right asynchronous-messaging infrastructure for the job

The term asynchronous means “not occurring at the same time.” In the context of distributed systems and messaging, this term implies that request processing will occur at an arbitrary point in time. Asynchronous interactions hold many advantages over synchronous ones, but they also introduce new challenges. In this article, we will focus on specific considerations for choosing the asynchronous-messaging infrastructure for your event-driven systems.
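As a small illustration of processing "at an arbitrary point in time," here is a sketch of an asynchronous send with a Kafka producer: the call returns immediately, and a callback fires whenever the broker acknowledges the write. The broker address and topic are assumptions for the example.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AsyncSendExample {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // send() does not block: the record is handed off, and the
            // callback runs later, whenever the broker acknowledges it.
            producer.send(new ProducerRecord<>("requests", "req-42", "payload"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            System.err.println("Delivery failed: "
                                    + exception.getMessage());
                        } else {
                            System.out.printf("Stored at %s-%d@%d%n",
                                    metadata.topic(), metadata.partition(),
                                    metadata.offset());
                        }
                    });
        }
    }
}
```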

Continue reading “Choosing the right asynchronous-messaging infrastructure for the job”

Creating event sources in the OpenShift 4.5 web console

Red Hat OpenShift 4.5 makes it easier than ever to deploy and run event-driven applications that react to real-time information via event notifications. Empowered by OpenShift Serverless, applications come to life through events, scaling up resources as needed (or up to a pre-configured limit), and then scaling back to zero after the resource burst is over.

Continue reading “Creating event sources in the OpenShift 4.5 web console”

Build a simple cloud-native change data capture pipeline

Change data capture (CDC) is a well-established software design pattern for a system that monitors and captures data changes so that other software can respond to those events. Using Kafka Connect, along with Debezium connectors and the Apache Camel Kafka Connector, we can build a configuration-driven data pipeline to bridge traditional data stores and new event-driven architectures.

This article walks through a simple example.
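As a hedged sketch of what "configuration-driven" means here, the following shows how a Debezium MySQL source connector might be registered through Kafka Connect's REST API (which listens on port 8083 by default). All hostnames, credentials, topic names, and table names are placeholders for illustration only.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterDebeziumConnector {

    public static void main(String[] args) throws Exception {
        // A Debezium MySQL source connector definition; every value below
        // is a placeholder, not taken from the article.
        String connector = """
                {
                  "name": "inventory-connector",
                  "config": {
                    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
                    "database.hostname": "mysql",
                    "database.port": "3306",
                    "database.user": "debezium",
                    "database.password": "dbz",
                    "database.server.id": "184054",
                    "database.server.name": "inventory",
                    "table.include.list": "inventory.orders",
                    "database.history.kafka.bootstrap.servers": "my-cluster-kafka-bootstrap:9092",
                    "database.history.kafka.topic": "schema-changes.inventory"
                  }
                }""";

        HttpRequest request = HttpRequest.newBuilder()
                // Assumed Kafka Connect service address; 8083 is the default port.
                .uri(URI.create("http://connect:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(connector))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

Once the connector is registered, Kafka Connect streams row-level changes from the configured tables into Kafka topics, where downstream consumers (or Camel Kafka Connector sinks) can react to them; no custom capture code is required.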

Continue reading “Build a simple cloud-native change data capture pipeline”
