Call an existing REST service with Apache Camel K

With the release of Apache Camel K, it is possible to create and deploy integrations with existing applications more quickly and with a lighter footprint than ever before. In many cases, calling an existing REST endpoint is the best way to connect a new system to an existing one. Take the example of a cafe serving coffee. What happens when the cafe wants to let customers order through a delivery service like GrubHub? You would only need to introduce a single Camel K integration to connect the cafe and GrubHub systems.

In this article, I will show you how to create a Camel K integration that calls an existing REST service and reuses its existing data format. For the data format, I have a Maven project configured with Java objects. Ideally, you would have this packaged and available in a Nexus repository. For this demonstration, I used JitPack, which builds the dependency directly from my GitHub code and serves it from a Maven-compatible repository. See the GitHub repository associated with this demo for the data format code and directions for getting it into JitPack.
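
To make that concrete, here is a minimal sketch, in Camel's Java DSL, of what such an integration could look like. It is not the demo code: the endpoint URL is a placeholder, and Order stands in for a class pulled from the shared data-format dependency.

```java
// DeliveryOrders.java -- a hypothetical Camel K integration sketch.
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jackson.JacksonDataFormat;

public class DeliveryOrders extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Bind the REST service's JSON payload to the shared Java type
        // (Order would come from the data-format dependency on JitPack or Nexus).
        JacksonDataFormat orderFormat = new JacksonDataFormat(Order.class);

        from("timer:poll-orders?period=10000")      // fire every 10 seconds
            .to("https://example.com/cafe/orders")  // call the existing REST service
            .unmarshal(orderFormat)                 // reuse the existing data format
            .log("New order received: ${body}");
    }
}
```

Camel K can then build and deploy the route with a single kamel run command, with the data-format artifact declared as an additional Maven dependency.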

Continue reading “Call an existing REST service with Apache Camel K”

Build a data streaming pipeline using Kafka Streams and Quarkus

In typical data warehousing systems, data is first accumulated and then processed. With the advent of new technologies, however, it is now possible to process data as it arrives; we call this real-time data processing. In real-time processing, data streams through pipelines; that is, it moves from one system to another. Data is generated by static sources (like databases) or real-time systems (like transactional applications), then filtered, transformed, and finally either stored in a database or pushed to other systems for further processing. Those systems can then repeat the same cycle: filter, transform, and store or push onward.
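
As a rough illustration of that filter-transform-forward cycle, here is a minimal Kafka Streams sketch. The article builds its topology with Quarkus; the plain main method, topic names, and string-valued records below are assumptions for illustration only.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class PipelineSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "pipeline-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("incoming-events");

        source
            .filter((key, value) -> value != null && !value.isEmpty()) // drop empty records
            .mapValues(value -> value.toUpperCase())                   // stand-in "transform" step
            .to("processed-events");                                   // push to the next system

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```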

Continue reading “Build a data streaming pipeline using Kafka Streams and Quarkus”

Rootless containers with Podman: The basics

As a developer, you have probably heard a lot about containers. A container is a unit of software that packages application code together with all of its dependencies, making application builds fast and reliable. An easy way to experiment with containers is the Pod Manager tool (Podman), a daemonless, open source, Linux-native tool that provides a command-line interface (CLI) similar to the Docker container engine.

In this article, I will explain the benefits of using containers and Podman, introduce rootless containers and why they are important, and then walk through an example of using rootless containers with Podman. Before we dive into the implementation, let’s review the basics.

Continue reading “Rootless containers with Podman: The basics”

New C++ features in GCC 10

The GNU Compiler Collection (GCC) 10.1 was released in May 2020. Like every other GCC release, this version brought many additions, improvements, bug fixes, and new features. Fedora 32 already ships GCC 10 as the system compiler, but it’s also possible to try GCC 10 on other platforms (see godbolt.org, for example). Red Hat Enterprise Linux (RHEL) users will get GCC 10 in the Red Hat Developer Toolset (RHEL 7) or the Red Hat GCC Toolset (RHEL 8).

Continue reading “New C++ features in GCC 10”

Set up continuous integration for .NET Core with OpenShift Pipelines

Have you ever wanted to set up continuous integration (CI) for .NET Core in a cloud-native way, but you didn’t know where to start? This article provides an overview, examples, and suggestions for developers who want to get started setting up a functioning cloud-native CI system for .NET Core.

We will use the new Red Hat OpenShift Pipelines feature to implement .NET Core CI. OpenShift Pipelines is based on the open source Tekton project and provides a cloud-native way to define a pipeline that builds, tests, deploys, and rolls out your applications in a continuous integration workflow.

In this article, you will learn how to:

  1. Set up a simple .NET Core application.
  2. Install OpenShift Pipelines on Red Hat OpenShift.
  3. Create a simple pipeline manually.
  4. Create a Source-to-Image (S2I)-based pipeline.

Continue reading “Set up continuous integration for .NET Core with OpenShift Pipelines”

Kubernetes: The evolution of distributed systems

DevNation Tech Talks are hosted by the Red Hat technologists who create our products. These sessions include real solutions plus code and sample projects to help you get started. In this talk, you’ll learn about Kubernetes and distributed systems from Bilgin Ibryam and Burr Sutter.

Cloud-native applications of the future will consist of hybrid workloads: stateful applications, batch jobs, stateless microservices, and functions (plus maybe something else), all wrapped as Linux containers and deployed via Kubernetes on any cloud. Functions and the so-called serverless computing model are the latest evolution of what started as service-oriented architecture years ago. But is this the last step in the evolution of application architecture, and is it here to stay?

Continue reading “Kubernetes: The evolution of distributed systems”

Troubleshooting user task errors in Red Hat Process Automation Manager and Red Hat JBoss BPM Suite

I’ve been around Red Hat JBoss BPM Suite (jBPM) and Red Hat Process Automation Manager (RHPAM) for many years. Over that time, I’ve learned a lot about the lesser-known aspects of this business process management engine.

If you are like most people, you might believe that user tasks are trivial and that learning about their details is unnecessary. Then, one day, you will find yourself troubleshooting an error like this one:

User '[User:'admin']' was unable to execution operation 'Start' on task id 287271 due to a no 'current status' match.

Receiving one too many similar error messages led me to learn everything that I know about user tasks, and I have decided to share my experience.

User tasks are a vital part of any business process management engine, jBPM included. Their behavior is defined by the OASIS Web Services Human Task specification, which has been fully adopted by Business Process Model and Notation (BPMN) 2.0, the standard for business process diagrams. The spec defines two exceptionally important things that I will discuss in this article: the user task lifecycle and task access control. Without further ado, let’s jump right in.
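
As a preview, here is a hedged sketch of those lifecycle transitions using the kie-api TaskService; the task id, the user, and the way the service is obtained are assumptions for illustration. Invoking an operation such as Start on a task whose current status does not allow it is exactly what produces errors like the one quoted above.

```java
import java.util.Collections;
import org.kie.api.task.TaskService;

public class UserTaskLifecycle {
    // Illustrative only: taskId and userId are placeholders, and the TaskService
    // would normally come from the engine's runtime environment.
    public void workOnTask(TaskService taskService, long taskId, String userId) {
        taskService.claim(taskId, userId);    // Ready -> Reserved: a potential owner claims the task
        taskService.start(taskId, userId);    // Reserved -> InProgress: the actual owner starts work
        taskService.complete(taskId, userId,
                Collections.emptyMap());      // InProgress -> Completed, with the task output data
    }
}
```

Skipping the claim, or starting a task that is already in progress, is one common way to end up with no matching 'current status' for the requested operation.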

Note: These troubleshooting tips are applicable to Red Hat JBoss BPM Suite 6.2 and above and Red Hat Process Automation Manager 7.

Continue reading “Troubleshooting user task errors in Red Hat Process Automation Manager and Red Hat JBoss BPM Suite”

Migrating from Fabric8 Maven Plugin to Eclipse JKube 1.0.0

The recent release of Eclipse JKube 1.0.0 means that the Fabric8 Maven Plugin is no longer supported. If you are currently using the Fabric8 Maven Plugin, this article provides instructions for migrating to JKube instead. I will also explain the relationship between Eclipse JKube and the Fabric8 Maven Plugin (they’re the same thing) and introduce the highlights of the new Eclipse JKube 1.0.0 release. These migration instructions are for developers working on the Kubernetes and Red Hat OpenShift platforms.

Eclipse JKube is the Fabric8 Maven Plugin

Eclipse JKube and the Fabric8 Maven Plugin are one and the same. Eclipse JKube was first released in 2014 under the name Fabric8 Maven Plugin. We changed the name when we pre-released Eclipse JKube 0.1.0 in December 2019. For more about the name change, see my recent introduction to Eclipse JKube. This article focuses on the migration path to JKube 1.0.0.

Continue reading “Migrating from Fabric8 Maven Plugin to Eclipse JKube 1.0.0”

New language support features in Apache Camel VS Code extension 0.0.27

In this article, I share several new language support features in the recently released Language Support for Apache Camel VS Code extension 0.0.27. Before I discuss these improvements, please note that they are also available in other IDEs that support the Camel Language Server, including Eclipse IDE, Eclipse Che, and more. It is simply easier to focus on one IDE for my demonstrations, so I’ve chosen VS Code.

Note: Apache Camel is a versatile open source integration framework based on known enterprise integration patterns.

Continue reading “New language support features in Apache Camel VS Code extension 0.0.27”
