Integration

Open Virtual Network unidling

Open Virtual Network (OVN) is a project that began as a sub-component of Open vSwitch (OVS), a performant, programmable, multi-platform virtual switch. OVN lets OVS users natively create overlay networks by introducing virtual network abstractions such as virtual switches and routers. It also provides methods for setting up Access Control Lists (ACLs) and network services such as DHCP. Many Red Hat products, including Red Hat OpenStack Platform, Red Hat Virtualization, and Red Hat OpenShift Container Platform, rely on OVN to configure their networking functionality.

In this article, I will cover the OVN unidling issue and how the proposed solution can be used to forward events to a Cloud Management System (CMS) such as OpenStack or OpenShift.

Continue reading “Open Virtual Network unidling”

Architecting messaging solutions with Apache ActiveMQ Artemis

As an architect on the Red Hat Consulting team, I’ve helped countless customers with their integration challenges over the last six years. Recently, I had a few consulting gigs around Red Hat AMQ 7 Broker (the enterprise version of Apache ActiveMQ Artemis) where the requirements and outcomes were similar. That similarity made me think that the whole requirement identification process can be made more structured and repeatable.

This guide shares what I learned from those engagements, in an attempt to make the AMQ Broker architecting process, the resulting deployment topologies, and the expected effort more predictable, at least for the common use cases. As such, what follows should be useful for messaging and integration consultants and architects tasked with creating a messaging architecture for Apache Artemis, and for other messaging solutions in general. This article focuses on Apache Artemis; it doesn’t cover Apache Kafka, Strimzi, Apache Qpid, EnMasse, or the EAP messaging system, which are all components of our Red Hat AMQ 7 product offering.

Continue reading “Architecting messaging solutions with Apache ActiveMQ Artemis”

Dynamic case management in the event-driven era

Case management applications are designed to handle a complex combination of human and automated tasks. All case updates and case data are captured in a case file, which acts as the pivot for managing the case and later serves as a system of record for audits and tracking. The key characteristic of these workflows is that they are ad hoc in nature: there is no single resolution, and often, one size doesn’t fit all.

Case management does not have structured time bounds; cases typically don’t all resolve at the same time. Consider examples like client onboarding, dispute resolution, and fraud investigations, which by their nature call for customized handling of each specific case. With the advent of modern technological frameworks and practices like microservices and event-driven processing, the potential of case management solutions opens up even further. This article describes how you can use case management for dynamic workflow processing in this modern era, with components such as Red Hat OpenShift, Red Hat AMQ Streams, Red Hat Fuse, and Red Hat Process Automation Manager.

Continue reading “Dynamic case management in the event-driven era”

Set up Red Hat AMQ Streams custom certificates on OpenShift

Secure communication over a computer network is one of the most important requirements for a system, and yet it can be difficult to set up correctly. This example shows how to set up end-to-end TLS encryption for Red Hat AMQ Streams using a custom X.509 CA certificate on the Red Hat OpenShift platform.
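
As a hedged illustration of the client side of such a setup, the following sketch configures a plain Java Kafka producer to connect over TLS using a truststore that contains the custom CA certificate. The bootstrap address, truststore path, password, and topic name are placeholders for illustration, not values from this example’s actual configuration.

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TlsProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder bootstrap address for the cluster's external TLS listener.
            props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:443");
            // Encrypt client-to-broker traffic.
            props.put("security.protocol", "SSL");
            // Truststore holding the custom X.509 CA certificate (path and password are placeholders).
            props.put("ssl.truststore.location", "/tmp/truststore.jks");
            props.put("ssl.truststore.password", "changeit");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Send a single record over the TLS-encrypted connection.
                producer.send(new ProducerRecord<>("my-topic", "hello over TLS"));
            }
        }
    }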

Prerequisites

You need to have the following in place before you can proceed with this example:

Continue reading “Set up Red Hat AMQ Streams custom certificates on OpenShift”

Replacing Confluent Schema Registry with Red Hat integration service registry

With the latest release of Red Hat Integration now available, we’ve introduced some exciting new capabilities. Along with the enhancements for Apache Kafka-based environments, Red Hat announced the Technical Preview of the Red Hat Integration service registry to help teams govern their service schemas. Developers can now use the registry to query for the schemas and artifacts required by each service endpoint, or to register and store new structures for future use.
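
Conceptually, for an existing Kafka producer the swap can be as small as pointing the Avro serializer at the new registry. The sketch below assumes the service registry exposes a Confluent-compatible endpoint; the registry URL and path, the bootstrap address, the topic, and the schema are illustrative placeholders rather than documented values.

    import java.util.Properties;

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class RegistrySwapSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // If the registry offers a Confluent-compatible endpoint (assumed here),
            // the existing Confluent Avro serializer can stay in place...
            props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
            // ...and only the registry URL changes. The path below is an assumption.
            props.put("schema.registry.url", "http://my-registry:8080/api/ccompat");

            // A minimal Avro schema used purely for illustration.
            Schema schema = new Schema.Parser().parse(
                    "{\"type\":\"record\",\"name\":\"Greeting\","
                    + "\"fields\":[{\"name\":\"message\",\"type\":\"string\"}]}");
            GenericRecord record = new GenericData.Record(schema);
            record.put("message", "hello");

            try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
                // The serializer looks up or registers the schema in the registry before sending.
                producer.send(new ProducerRecord<>("greetings", record));
            }
        }
    }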

Continue reading “Replacing Confluent Schema Registry with Red Hat integration service registry”

VS Code Language support for Apache Camel 0.0.20 release

Over the past months, several notable new features have been added to improve the developer experience for applications based on Apache Camel. These updates are available in the 0.0.20 release of the Visual Studio (VS) Code extension.

Before going through the list of updates in detail, I want to note that the title mentions the VS Code extension release because the VS Code extension covers the broadest set of new features. Don’t worry if you are using another IDE, though: most features are also available in other IDEs that support the Camel Language Server (Eclipse Desktop, Eclipse Che, and more).
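
For readers new to Camel, this is the kind of code the language support targets: a minimal Java DSL route (the endpoint URIs here are arbitrary examples) whose component URIs and options benefit from the completion and validation features.

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class HelloCamel {
        public static void main(String[] args) throws Exception {
            DefaultCamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // A trivial route: fire a timer every second and log a message.
                    // Endpoint URIs like these are what the Camel URI completion assists with.
                    from("timer:tick?period=1000")
                        .setBody(constant("Hello from Camel"))
                        .to("log:example");
                }
            });
            context.start();
            Thread.sleep(5000); // let the route run briefly before shutting down
            context.stop();
        }
    }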

Continue reading “VS Code Language support for Apache Camel 0.0.20 release”

Getting started with Red Hat Integration service registry

New projects require some help. Imagine you are getting ready to start that new feature your business has been requesting for the last couple of months. Your team is ready to start coding and implement the awesome new thing that will change your business.

To achieve it, the team will need to interact with the existing software components of your organization. Your developers will need to interact with API services and event endpoints already available in your architecture, and before they can send and process information, they need to know the structure, or schema, expected by those services.

Red Hat announced the Technical Preview of the Red Hat Integration service registry to help teams govern their service schemas. The service registry is a store for schema (and API design) artifacts that provides a REST API and a set of optional rules for enforcing content validity and evolution. Teams can now use the service registry to query for the schemas required by each service endpoint, or to register and store new structures for future use.
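
As a small, hedged illustration of that REST API, the sketch below fetches the latest version of a stored schema with the JDK’s built-in HTTP client. The registry URL, the artifacts path, and the artifact ID are assumptions made for illustration; check the registry documentation for the exact endpoints.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class FetchSchemaSketch {
        public static void main(String[] args) throws Exception {
            // Assumed registry URL and artifact ID, used here only for illustration.
            URI artifactUri = URI.create("http://my-registry:8080/api/artifacts/greeting-value");

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(artifactUri).GET().build();

            // The response body is the latest stored version of the schema (for example, an Avro schema).
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }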

Continue reading “Getting started with Red Hat Integration service registry”

LoRaWAN setup at the EclipseCon IoT playground

At the recent EclipseCon Europe in Ludwigsburg, Germany, we had a big dashboard in the IoT playground area showing graphs of the number of WiFi devices, the temperature, and the air quality, all transmitted via LoRaWAN. We worked on this project during the community day and kept the setup running throughout the conference, where we showed it off and experimented with it further. This article describes the architecture of the setup and gives pointers for replicating it.

Continue reading “LoRaWAN setup at the EclipseCon IoT playground”

Understanding Red Hat AMQ Streams components for OpenShift and Kubernetes: Part 3

In the previous articles in this series, we first covered the basics of Red Hat AMQ Streams on OpenShift and then showed how to set up Kafka Connect, a Kafka Bridge, and Kafka Mirror Maker. Here are a few key points to keep in mind before we proceed:

  • AMQ Streams is based on Apache Kafka.
  • AMQ Streams for the Red Hat OpenShift Container Platform is based on the Strimzi project.
  • AMQ Streams on containers has multiple components, such as the Cluster Operator, Entity Operator, Mirror Maker, Kafka Connect, and Kafka Bridge.

Now that we have everything set up (or so we think), let’s look at monitoring and alerting for our new environment.
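
Before diving into cluster-level monitoring, it can help to remember that the Kafka clients themselves expose metrics programmatically. The sketch below, which is not the monitoring setup covered in this article and uses a placeholder bootstrap address, simply prints a consumer’s built-in metrics as a quick sanity check.

    import java.util.Map;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.Metric;
    import org.apache.kafka.common.MetricName;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ClientMetricsSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9092"); // placeholder address
            props.put("group.id", "metrics-demo");
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Every Kafka client exposes internal metrics (request rates, I/O times, and so on).
                Map<MetricName, ? extends Metric> metrics = consumer.metrics();
                metrics.forEach((name, metric) ->
                        System.out.println(name.group() + "/" + name.name() + " = " + metric.metricValue()));
            }
        }
    }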

Continue reading “Understanding Red Hat AMQ Streams components for OpenShift and Kubernetes: Part 3”
