Kubernetes

Low-code microservices orchestration with Syndesis

Recently I wrote about decoupling infrastructure code from microservices. I found that Apache Camel and Debezium provided the middleware I needed for that project, with minimal coding on my end. After my successful experiment, I wondered if it would be possible to orchestrate two or more similarly decoupled microservices into a new service, and whether I could do it without writing any code at all. I decided to find out.

This article is a quick dive into orchestrating microservices without writing any code. We will use Syndesis (an open source integration platform) as our orchestration platform. Note that the examples assume that you are familiar with Debezium and Kafka.

Continue reading “Low-code microservices orchestration with Syndesis”

Kogito 0.8.0 features online editors and cloud-native business automation

Kogito is a cloud-native business automation solution that offers a powerful, developer-friendly experience. Based on production-tested open source projects Drools and jBPM, Kogito has business rules and processes down to a science. Kogito also aligns with popular lightweight runtimes such as Quarkus and Spring Boot to support developers building business-driven applications.

This article is an overview of the new enhancements for Kogito 0.8.0, which was released on March 10, 2020.

Continue reading “Kogito 0.8.0 features online editors and cloud-native business automation”

Testing memory-based horizontal pod autoscaling on OpenShift

Red Hat OpenShift offers horizontal pod autoscaling (HPA) primarily for CPUs, but it can also perform memory-based HPA, which is useful for applications that are more memory-intensive than CPU-intensive. In this article, I demonstrate how to use OpenShift’s memory-based horizontal pod autoscaling feature (tech preview) to autoscale your pods if the demands on memory increase. The tests performed in this article do not necessarily reflect a real application; they only aim to demonstrate memory-based HPA in the simplest way possible.
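
To give a concrete sense of what memory-based autoscaling looks like, here is a minimal sketch of a HorizontalPodAutoscaler that targets memory utilization. The deployment name, replica counts, and the 80% threshold are illustrative assumptions, not values taken from the article:

```yaml
# Minimal sketch of a memory-based HPA using the autoscaling/v2beta2 schema.
# The target Deployment name and the numbers below are illustrative assumptions.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: memory-demo-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: memory-demo            # hypothetical workload to scale
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: memory             # scale on memory instead of the default CPU metric
        target:
          type: Utilization
          averageUtilization: 80 # scale out when average memory utilization exceeds 80%
```

Utilization here is measured against the memory requests declared on the pods, so the target workload needs resource requests set for the autoscaler to have a baseline.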

Continue reading “Testing memory-based horizontal pod autoscaling on OpenShift”

How to customize Fedora CoreOS for dedicated workloads with OSTree

In part one of this series, I introduced Fedora CoreOS (and Red Hat CoreOS) and explained why its immutable and atomic nature is important for running containers. I then walked you through getting Fedora CoreOS, creating an Ignition file, booting Fedora CoreOS, logging in, and running a test container. In this article, I will walk you through customizing Fedora CoreOS and making use of its immutable and atomic nature.
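
As a reminder of the kind of Ignition input covered in part one, here is a minimal sketch of a Fedora CoreOS Config, the YAML that fcct transpiles into an Ignition JSON file. The spec version and SSH key are placeholders, not values from the series:

```yaml
# Minimal Fedora CoreOS Config sketch; fcct transpiles this YAML into Ignition JSON.
# The spec version and the SSH key are placeholders for illustration only.
variant: fcos
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...example-key user@example.com   # replace with your own public key
```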

Continue reading “How to customize Fedora CoreOS for dedicated workloads with OSTree”

How to run containerized workloads securely and at scale with Fedora CoreOS

The history of container-optimized operating systems is short but filled with a variety of proposals with different degrees of success. Along with CoreOS Container Linux, Red Hat sponsored the Project Atomic community, which is today the umbrella that holds many projects, from Fedora/CentOS/Red Hat Enterprise Linux Atomic Host to container tools (Buildah, skopeo, and others) and Fedora Silverblue, an immutable OS for the desktop (more on the “immutable” term in the next sections).

When Red Hat acquired the San Francisco-based company CoreOS in January 2018, new perspectives opened. Red Hat Enterprise Linux CoreOS (RHCOS) was one of the first products of this merger, becoming the base operating system in OpenShift 4. Since Red Hat is focused on open source software, always striving to create and feed upstream communities, the Fedora ecosystem was the natural environment for the RHCOS-related upstream, Fedora CoreOS. Fedora CoreOS is based on the best parts of CoreOS Container Linux and Atomic Host, merging features and tools from both.

In this first article, I introduce Fedora CoreOS and explain why it is so important to developers and DevOps professionals. Throughout the rest of this series, I will dive into the details of setting up, using, and managing Fedora CoreOS.

Continue reading “How to run containerized workloads securely and at scale with Fedora CoreOS”

Speed up Maven builds in Tekton Pipelines

Tekton is an open source project that provides standard Kubernetes-style resources and building blocks for creating CI/CD pipelines that can run on any Kubernetes cluster. Tekton does this by introducing a number of custom resource definitions (CRDs), such as Pipeline, Task, and ClusterTask, to provide a language and structure for defining delivery pipelines, as shown in Figure 1. Tekton also provides a set of controllers that are responsible for running pipelines in pods on demand whenever a user creates one of those resources.

Figure 1: A Tekton pipeline contains a sequence of tasks.

The use of Tekton has grown rapidly over the last year. One of the most frequently requested features is the ability to share artifacts between tasks in order to cache dependencies for build tools such as Maven and NPM. Although it was previously possible to use volumes in tasks, the release of Tekton 0.10 adds support for workspaces, which makes it easier for tasks within a pipeline to share artifacts using a persistent volume.

In this article, we look at how workspaces can be used to cache Maven dependencies in Java builds in order to remove the need to download dependencies for each build.
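
To illustrate the idea, here is a minimal sketch of a Task and Pipeline that share a workspace holding the local Maven repository. The resource names, builder image, and API version are illustrative assumptions; the article’s own example may differ:

```yaml
# Sketch of a Maven build Task that keeps its local repository on a shared workspace.
# Names, image, and apiVersion are assumptions; source fetching is omitted for brevity.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: maven-build
spec:
  workspaces:
    - name: maven-repo           # persistent location for the local Maven repository
  steps:
    - name: package
      image: maven:3.6-jdk-11    # hypothetical builder image
      command: ["mvn"]
      args:
        - -Dmaven.repo.local=$(workspaces.maven-repo.path)   # point Maven at the workspace
        - package
---
# Pipeline that wires the same workspace into the Task, so a PipelineRun can back it
# with a PersistentVolumeClaim and dependencies survive between builds.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: maven-pipeline
spec:
  workspaces:
    - name: shared-maven-repo
  tasks:
    - name: build
      taskRef:
        name: maven-build
      workspaces:
        - name: maven-repo
          workspace: shared-maven-repo
```

At run time, the PipelineRun binds shared-maven-repo to a PersistentVolumeClaim, so each build reuses the dependencies downloaded by previous ones instead of fetching them again.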

Continue reading “Speed up Maven builds in Tekton Pipelines”

Metrics and traces correlation in Kiali

Metrics, traces, and logs might be the Three Pillars of Observability, as you’ve certainly already heard. This mantra helps us focus our mindset around observability, but it is not a religion. “There is so much more data that can help us have insight into our running systems,” said Frederic Branczyk at KubeCon last year.

These three kinds of signals do have their specificities, but they also have common denominators that we can generalize. They could all appear on a virtual timeline and they all originate from a workload, so they are timed and sourced, which is a good start for enabling correlation. If there’s anything as important as knowing the signals that a system can emit, it’s knowing the relationships between those signals and being able to correlate one with another, even when they’re not strictly of the same nature. Ultimately, we can postulate that any sort of signal that is timed and sourced is a good candidate for correlation as well, even if we don’t have hard links between them.

Continue reading “Metrics and traces correlation in Kiali”

Using secrets in Kafka Connect configuration

Kafka Connect is an integration framework that is part of the Apache Kafka project. On Kubernetes and Red Hat OpenShift, you can deploy Kafka Connect using the Strimzi and Red Hat AMQ Streams Operators. Kafka Connect lets users run sink and source connectors. Source connectors are used to load data from an external system into Kafka. Sink connectors work the other way around and let you load data from Kafka into another external system. In most cases, the connectors need to authenticate when connecting to the other systems, so you will need to provide credentials as part of the connector’s configuration. This article shows you how you can use Kubernetes secrets to store the credentials and then use them in the connector’s configuration.
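
As a rough sketch of the pattern (the names and values here are illustrative assumptions, not the article’s exact example), the credentials live in a Kubernetes Secret, Strimzi mounts that Secret into the Kafka Connect pods through externalConfiguration, and Kafka’s FileConfigProvider resolves the values at runtime:

```yaml
# Secret holding the connector credentials as a properties file (illustrative names).
apiVersion: v1
kind: Secret
metadata:
  name: my-sql-credentials
type: Opaque
stringData:
  credentials.properties: |
    username=dbuser
    password=dbpassword
---
# Strimzi KafkaConnect sketch: mount the Secret as a volume and enable FileConfigProvider
# so connector configurations can reference the mounted file instead of plain-text values.
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  bootstrapServers: my-cluster-kafka-bootstrap:9092   # hypothetical Kafka bootstrap address
  config:
    config.providers: file
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
  externalConfiguration:
    volumes:
      - name: connector-credentials
        secret:
          secretName: my-sql-credentials
```

The connector configuration can then reference the mounted values with placeholders such as ${file:/opt/kafka/external-configuration/connector-credentials/credentials.properties:password}, so the actual password never appears in the connector definition itself.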

Continue reading “Using secrets in Kafka Connect configuration”

Installing Kubeflow v0.7 on OpenShift 4.2

As part of the Open Data Hub project, we see potential and value in the Kubeflow project, so we dedicated our efforts to enabling Kubeflow on Red Hat OpenShift. We decided to use Kubeflow 0.7, as that was the latest released version at the time this work began. The work included adding new installation scripts that apply all of the necessary changes, such as the permissions service accounts need to run on OpenShift.

Continue reading “Installing Kubeflow v0.7 on OpenShift 4.2”

How to use third-party APIs in Operator SDK projects

The Operator Framework is an open source toolkit for managing Kubernetes-native applications. The framework and its features make it possible to develop tools that simplify complex tasks such as installing, configuring, managing, and packaging applications on Kubernetes and Red Hat OpenShift. In this article, we show how to use third-party APIs in Operator SDK projects.

In projects built with the Operator SDK, only the Kubernetes API schemas are added by default. However, you might need to create, read, update, or delete a resource that comes from another API, even one that you created yourself in another Operator project.

Let’s check out an example scenario: creating a Route resource from the OpenShift API in an Operator SDK project.

Continue reading “How to use third-party APIs in Operator SDK projects”
