C# 8 asynchronous streams

.NET Core 3.1 (December 2019) includes support for C# 8, a new major version of the C# programming language. In this series of articles, we’ll look at the new features in .NET’s main programming language. This first article looks at asynchronous streams, a feature that makes it easy to create and consume asynchronous enumerables. Before getting into the new feature, though, we first need to understand the IEnumerable<T> interface.

Note: C# 8 can be used with the .NET Core 3.1 SDK, which is available on Red Hat Enterprise Linux, Fedora, Windows, macOS, and other Linux distributions.

A brief history of IEnumerable

The classic IEnumerable<T> has been around since .NET Framework 2.0 (2005). This interface provides us with a type-safe way to iterate over any collection.

The iteration is based on the IEnumerator<T> type:
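
As a quick refresher, a foreach loop over an IEnumerable<T> is essentially compiler shorthand for driving an IEnumerator<T> by hand, along the lines of this minimal sketch:

    using System;
    using System.Collections.Generic;

    class Program
    {
        static void Main()
        {
            IEnumerable<int> numbers = new List<int> { 1, 2, 3 };

            // foreach (int n in numbers) { ... } expands to roughly this:
            using (IEnumerator<int> enumerator = numbers.GetEnumerator())
            {
                // MoveNext() advances to the next element and returns false
                // once the sequence is exhausted; Current exposes the element
                // at the current position.
                while (enumerator.MoveNext())
                {
                    Console.WriteLine(enumerator.Current);
                }
            }
        }
    }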

Continue reading “C# 8 asynchronous streams”

Designing an event-driven process at scale: Part 3

In the first article in this series, Designing an event-driven business process at scale: A health management example, Part 1, you saw the business use case and data model for a concrete example from the health management industry. You then began implementing the example in jBPM (an open source business automation suite) by creating the Trigger process.

In the second article, you implemented the Task subprocess and, among other things, configured the call parameters for the Reminder and Escalation subprocesses within it. Now you will implement these subprocesses.

Continue reading “Designing an event-driven process at scale: Part 3”

Designing an event-driven process at scale: Part 2

In the first article in this series, Designing an event-driven business process at scale: A health management example, Part 1, we began by defining the business use case and data model for a concrete example from the health management industry. We then started implementing the example in jBPM (an open source business automation suite) by creating our Trigger process.

Now, in the second article in this series, we will focus on creating the Task subprocess and its many components. In our case, these are:

  • The Expired? gate
  • The Suppressed? gate
  • The human task
  • The Reminder subprocess
  • The “What type of close?” gate
  • The Hard Close embedded subprocess
  • The Escalation subprocess

Continue reading “Designing an event-driven process at scale: Part 2”

Designing an event-driven business process at scale: A health management example, Part 1

The concept of a business process (BP), or workflow (WF), and the discipline and practice of business process management (BPM) have been around since the early 90s. Since then, WF/BPM tools have evolved considerably. More recently, a convergence of different tools has taken place, adding decision management (DM) and case management (CM) to the mix. The ascendance of data science, machine learning, and artificial intelligence in the last few years has further complicated the picture. The mature field of BPM has been subsumed into the hyped pseudo-novelties of digital business automation, digital reinvention, digital everything, etc., with the addition of “low code” and robotic process automation (RPA).

A common requirement of business applications today is to be event-driven; that is, specific events should trigger a workflow or decision in real-time. This requirement leads to a fundamental problem. In realistic situations, there are many different types of events, each one requiring specific handling. An event-driven business application may have hundreds of qualitatively different workflows or processes. As new types of events arise in today’s ever-changing business conditions, new processes have to be designed and deployed as quickly as possible.

Continue reading “Designing an event-driven business process at scale: A health management example, Part 1”

Metrics and traces correlation in Kiali

Metrics, traces, and logs might be the Three Pillars of Observability, as you’ve certainly already heard. This mantra helps us focus our mindset around observability, but it is not a religion. “There is so much more data that can help us have insight into our running systems,” said Frederic Branczyk at KubeCon last year.

These three kinds of signals have their specificities, but they also share common denominators that we can generalize. They can all appear on a virtual timeline, and they all originate from a workload, so they are timed and sourced, which is a good start for enabling correlation. If there’s anything as important as knowing the signals that a system can emit, it’s knowing the relationships between those signals and being able to correlate one with another, even when they’re not strictly of the same nature. Ultimately, we can postulate that any sort of signal that is timed and sourced is a good candidate for correlation, even if we don’t have hard links between them.
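
To make the idea concrete, here is a minimal, hypothetical sketch (the Signal type and the sample data are illustrative inventions, not Kiali’s actual model) of correlating heterogeneous signals using nothing but their timestamp and source:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical common denominator: every signal is timed and sourced.
    class Signal
    {
        public DateTime Timestamp;
        public string Source;
        public string Kind; // "metric", "trace", or "log"
    }

    class Correlation
    {
        static void Main()
        {
            var t0 = new DateTime(2020, 1, 14, 10, 0, 0, DateTimeKind.Utc);
            var signals = new List<Signal>
            {
                new Signal { Timestamp = t0.AddSeconds(2),  Source = "reviews-v2", Kind = "metric" },
                new Signal { Timestamp = t0.AddSeconds(3),  Source = "reviews-v2", Kind = "trace"  },
                new Signal { Timestamp = t0.AddSeconds(40), Source = "ratings-v1", Kind = "log"    },
            };

            // Two signals of different kinds correlate if they share a source
            // and fall within a short time window of each other.
            var window = TimeSpan.FromSeconds(5);
            var pairs =
                from a in signals
                from b in signals
                where string.CompareOrdinal(a.Kind, b.Kind) < 0 // each unordered pair once
                      && a.Source == b.Source
                      && (b.Timestamp - a.Timestamp).Duration() <= window
                select new { a, b };

            foreach (var p in pairs)
                Console.WriteLine($"{p.a.Kind} <-> {p.b.Kind} on {p.a.Source}");
        }
    }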

Continue reading “Metrics and traces correlation in Kiali”

Continuous integration with GDB Buildbot

Continuous integration is a hot topic these days, and the GNU Project Debugger is keeping up with the trend. Who better to serve as a role model for tracking and exterminating bugs than a debugger?

The GDB Buildbot started as a pet project back in 2014 but is now an integral part of the development process. It provides an infrastructure to test new commits pushed to the official repository, as well as a service (which we call try builds) for developers to submit their proposed changes. In this article, I share the story of our Buildbot instance, where we are right now in terms of functionality, and the plans (and challenges) for the future.

Continue reading “Continuous integration with GDB Buildbot”

Using secrets in Kafka Connect configuration

Kafka Connect is an integration framework that is part of the Apache Kafka project. On Kubernetes and Red Hat OpenShift, you can deploy Kafka Connect using the Strimzi and Red Hat AMQ Streams Operators. Kafka Connect lets users run source and sink connectors: source connectors load data from an external system into Kafka, while sink connectors work the other way around and load data from Kafka into another external system. In most cases, the connectors need to authenticate when connecting to these other systems, so you will need to provide credentials as part of the connector’s configuration. This article shows you how to use Kubernetes secrets to store those credentials and then use them in the connector’s configuration.
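
As a rough sketch of the approach, assuming a Strimzi-managed Kafka Connect cluster (all names and credentials below are illustrative): the credentials live in a Secret, the Secret is mounted into the Connect pods, and Kafka’s FileConfigProvider resolves placeholders in the connector configuration.

    # A Secret holding the credentials as a properties file:
    apiVersion: v1
    kind: Secret
    metadata:
      name: my-connector-credentials
    type: Opaque
    stringData:
      connector.properties: |-
        dbUsername=my-user
        dbPassword=my-password
    ---
    # The Strimzi KafkaConnect resource mounts the Secret and enables
    # Kafka's FileConfigProvider so connector configs can reference it:
    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnect
    metadata:
      name: my-connect-cluster
    spec:
      bootstrapServers: my-cluster-kafka-bootstrap:9092
      config:
        config.providers: file
        config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
      externalConfiguration:
        volumes:
          - name: connector-config
            secret:
              secretName: my-connector-credentials

A connector’s configuration can then reference the mounted file instead of embedding the plain-text credential, with a placeholder such as "${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbUsername}".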

Continue reading “Using secrets in Kafka Connect configuration”

OpenShift Actions: Deploy to Red Hat OpenShift directly from your GitHub repository

Here is a common situation: You write your code, everything is on GitHub, and you are ready to publish it. But you know that your job is not finished yet: you still need to deploy your code, and this process can be a nightmare at times.

Ideally, you should be able to do this whole process in one place, but until now, you always had to set up external services and integrate them with GitHub (or add post-commit hooks). What if, instead, you could replace all of these extras and run everything directly from your GitHub repository with just a few lines of YAML? Well, this is exactly what GitHub Actions are for.
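
For a flavor of what those few lines of YAML can look like, here is a minimal, hypothetical workflow sketch (the secret names and manifest path are illustrative, and it assumes the oc CLI has been made available on the runner, for example by an earlier setup step):

    # .github/workflows/deploy.yml
    name: Deploy to OpenShift
    on:
      push:
        branches:
          - master

    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2

          # OPENSHIFT_SERVER_URL and OPENSHIFT_TOKEN are repository secrets.
          - name: Log in to the cluster
            run: oc login ${{ secrets.OPENSHIFT_SERVER_URL }} --token=${{ secrets.OPENSHIFT_TOKEN }}

          # Apply the manifests kept alongside the code.
          - name: Deploy
            run: oc apply -f openshift/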

Continue reading “OpenShift Actions: Deploy to Red Hat OpenShift directly from your GitHub repository”

Toward _FORTIFY_SOURCE parity between Clang and GCC

GCC combined with glibc can detect instances of buffer overflow in calls to standard C library functions. When a user passes the -D_FORTIFY_SOURCE={1,2} preprocessor flag and an optimization level greater than or equal to -O1, an alternate, fortified implementation of the function is used when calling, say, strcpy. Depending on the function and its inputs, this behavior may result in a compile-time error, or a runtime error triggered upon execution. (For more information on this feature, there’s an excellent blog post on the subject.)
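
As a tiny illustration (the file and buffer names here are made up), consider a copy into a buffer whose size the compiler knows:

    /* overflow.c */
    #include <string.h>

    int main(void)
    {
        char buf[4];

        /* The compiler knows buf holds 4 bytes, so the fortified strcpy
         * can diagnose this call at compile time; if the sizes were only
         * known at runtime, the fortified check would abort execution
         * instead. */
        strcpy(buf, "this string is far too long");
        return 0;
    }

Building with something like gcc -O2 -D_FORTIFY_SOURCE=2 overflow.c warns about writing past the end of buf, and running the resulting binary aborts with a "buffer overflow detected" error.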

Continue reading “Toward _FORTIFY_SOURCE parity between Clang and GCC”
