Operators are one of the ways to package, deploy, and manage applications on Red Hat OpenShift. After a developer creates an Operator, the next step is to publish it on OperatorHub.io, which allows users to install and deploy the Operator in their OpenShift clusters. Installation, updates, and the rest of the management lifecycle are then handled by the Operator Lifecycle Manager (OLM).
In this article, we explore the steps required to test OLM integration for the Operator. For demonstration, we use a simple Operator that prints a test message to the shell. The Operator is packaged in the recently introduced Bundle Format.
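For readers new to it, the Bundle Format packages an Operator as an image containing a manifests/ directory (the ClusterServiceVersion and CRDs) and a metadata/ directory describing the bundle to OLM. A minimal sketch of the metadata file follows; the package and channel names are hypothetical, so adjust them for your Operator:

```yaml
# metadata/annotations.yaml -- tells OLM how to interpret this bundle
annotations:
  operators.operatorframework.io.bundle.mediatype.v1: registry+v1
  operators.operatorframework.io.bundle.manifests.v1: manifests/
  operators.operatorframework.io.bundle.metadata.v1: metadata/
  operators.operatorframework.io.bundle.package.v1: hello-operator  # hypothetical package name
  operators.operatorframework.io.bundle.channels.v1: alpha          # hypothetical channel
```

The manifests/ directory holds the ClusterServiceVersion and CRD manifests that OLM installs when a user subscribes to the Operator.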
Continue reading “Operator integration testing for Operator Lifecycle Manager”
When debugging an application within a Red Hat OpenShift container, it is important to keep in mind that the Linux environment within the container is subject to various constraints. Because of these constraints, the full functionality of debugging tools might not be available:
- An unprivileged OpenShift container is restricted from accessing kernel interfaces that are required by some low-level debugging tools.
Note: Almost all applications on OpenShift run in unprivileged containers. Unprivileged containers allow the use of standard debugging tools such as strace. Examples of debugging tools that cannot be used in unprivileged containers include perf, which requires access to the kernel's perf_events interface, and SystemTap, which depends on the kernel's module-loading functionality.
- Debug information for system packages within OpenShift containers is not accessible. There is ongoing work (as part of the elfutils project) to develop a file server for debug information (debuginfod), which would make such access possible.
- The set of packages in an OpenShift container is fixed ahead of time, when the corresponding container image is built. Once a container is running, no additional packages can be installed. A few debugging tools are preinstalled in commonly used container base images, but any other tools must be added when the container image build process is configured.
To successfully debug a containerized application, it is necessary to understand these constraints and how they determine which debugging tools can be used.
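A quick way to probe the first constraint from inside a running container is to inspect the kernel interfaces a tool would need. This sketch only reads files under /proc and needs no special privileges:

```shell
# Check whether the kernel's perf_events interface is likely usable:
# a perf_event_paranoid value of 2 or higher (typical for unprivileged
# containers without extra capabilities) blocks most 'perf' functionality.
if [ -r /proc/sys/kernel/perf_event_paranoid ]; then
    echo "perf_event_paranoid=$(cat /proc/sys/kernel/perf_event_paranoid)"
else
    echo "perf_events interface not visible"
fi

# SystemTap needs the kernel's module-loading functionality, which an
# unprivileged container cannot use even when /proc/modules is visible.
[ -e /proc/modules ] && echo "module list visible (loading may still be denied)"
```

The numeric value alone does not tell the whole story; seccomp profiles and missing capabilities in unprivileged containers impose further restrictions.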
Continue reading “Debugging applications within Red Hat OpenShift containers”
In the past few years, the popularity and adoption of containers have skyrocketed, and the Kubernetes container orchestration platform has seen broad adoption as well. With these changes, a new set of challenges has emerged for applications deployed on Kubernetes clusters in the real world. One challenge is how to handle communication between multiple clusters that might be on different networks (even private ones), behind firewalls, and so on.
One possible solution to this problem is to use a Virtual Application Network (VAN), which is sometimes referred to as a Layer 7 network. In a nutshell, a VAN is a logical network that is deployed at the application level and introduces a new layer of addressing for fine-grained application components with no constraints on the network topology. For a much more in-depth explanation, please read this excellent article.
So, what is Skupper? In the project’s own words:
Skupper is a layer seven service interconnect. It enables secure communication across Kubernetes clusters with no VPNs or special firewall rules.
Continue reading “Skupper.io: Let your services communicate across Kubernetes clusters”
In the past few years, developers have addressed the challenge of evolving from monolithic systems to a microservices architecture. These days, we hear about the adoption of serverless systems.
Like many trends in software, there’s no one clear view of how to define serverless or how this approach offers added value for our software architecture. The perfect place to start with serverless systems and discover serverless capabilities is through a use case.
Continue reading “Move your APIs into the serverless era with Camel K and Knative”
Secure communication over a computer network is one of the most important requirements for a system, and yet it can be difficult to set up correctly. This example shows how to set up Red Hat AMQ Streams' end-to-end TLS encryption using a custom X.509 CA certificate on the Red Hat OpenShift platform.
You need to have the following in place before you can proceed with this example:
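As a sketch of the starting point, a custom CA key and self-signed certificate can be generated with openssl. The subject name and cluster name below are placeholders, and the secret naming mentioned in the comments follows Strimzi conventions; verify both against your AMQ Streams documentation:

```shell
# Generate a private key and a self-signed X.509 CA certificate (1-year validity).
# The CN is a placeholder -- use your organization's naming.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -sha256 -days 365 \
    -subj "/CN=my-custom-ca" -out ca.crt

# Sanity check: the certificate should verify against itself as a CA.
openssl verify -CAfile ca.crt ca.crt

# On OpenShift, the key and certificate would then be loaded into the
# secrets the Cluster Operator reads for a cluster named my-cluster, e.g.:
#   oc create secret generic my-cluster-cluster-ca-cert --from-file=ca.crt=ca.crt
```

From there, the Cluster Operator can issue listener certificates chained to your custom CA rather than a generated one.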
Continue reading “Set up Red Hat AMQ Streams custom certificates on OpenShift”
In an earlier article, Aaron Merey introduced the new elfutils debuginfod server. With this software now integrated and released in elfutils 0.178 and coming to distros near you, it's time to consider why and how to set up such a service for yourself and your team.
debuginfod exists to distribute ELF or DWARF debugging information, plus associated source code, for a collection of binaries. If you need to run a debugger like gdb, a trace or probe tool like systemtap, binary analysis tools like pahole, or binary rewriting libraries like dyninst, you will eventually need debuginfo that matches your binaries. The debuginfod client support in these tools enables a fast, transparent way of fetching this data on the fly, without ever having to stop, change to root, run all of the right yum debuginfo-install commands, and try again. Debuginfo lets you debug anywhere, anytime.
We hope this opening addresses the “why.” Now, onto the “how.”
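To make the "how" concrete, here is a minimal client-side sketch. The server URL is a placeholder for your own deployment, and the server invocation in the comment assumes debuginfod's plain-files (-F) scan mode:

```shell
# Point debuginfod-aware tools (gdb, systemtap, the eu-* utilities) at a
# server by setting one environment variable. The URL is a placeholder.
export DEBUGINFOD_URLS="http://debuginfod.example.com:8002"
echo "clients will query: $DEBUGINFOD_URLS"

# On the server side, debuginfod is started against a directory tree of
# binaries and debuginfo files (-F), or RPM archives (-R), e.g.:
#   debuginfod -p 8002 -F /usr/lib/debug
```

With the variable set, a tool such as gdb fetches matching debuginfo and source on demand the first time it needs them, caching the results locally.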
Continue reading “Deploying debuginfod servers for your developers”
Workflows and pipelines are an integral part of optimizing a production-level artificial intelligence/machine learning (AI/ML) process. Pipelines are used to create workflows that are repeatable, automated, customizable, and intelligent.
An example AI/ML pipeline is presented in Figure 1, where functionalities such as data extract, transform, and load (ETL), model training, model evaluation, and model serving are automated as part of the pipeline.
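The stages named above can be summarized in an illustrative, tool-agnostic pipeline definition; this is purely a sketch of the stage ordering, not a literal Kubeflow or Open Data Hub manifest:

```yaml
# Illustrative only -- stage names and structure are hypothetical.
pipeline:
  name: example-ai-ml-pipeline
  steps:
    - name: etl        # extract, transform, and load the training data
    - name: train      # fit the model on the prepared data
    - name: evaluate   # score the model against held-out data
    - name: serve      # deploy the approved model for inference
```

In a real Kubeflow deployment, each step becomes a containerized task with its inputs and outputs wired together by the pipeline engine.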
Continue reading “AI/ML pipelines using Open Data Hub and Kubeflow on Red Hat OpenShift”
In the previous articles in this series, we first covered the basics of Red Hat AMQ Streams on OpenShift and then showed how to set up Kafka Connect, a Kafka Bridge, and Kafka Mirror Maker. Here are a few key points to keep in mind before we proceed:
- AMQ Streams is based on Apache Kafka.
- AMQ Streams for the Red Hat OpenShift Container Platform is based on the Strimzi project.
- AMQ Streams on containers has multiple components, such as the Cluster Operator, Entity Operator, Mirror Maker, Kafka Connect, and Kafka Bridge.
Now that we have everything set up (or so we think), let’s look at monitoring and alerting for our new environment.
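As one concrete example of where monitoring hooks in, a metrics section in the Kafka custom resource exposes broker JMX metrics in Prometheus format for scraping. The cluster name below is hypothetical, and the exact field names (metrics vs. the later metricsConfig) depend on your AMQ Streams/Strimzi version, so treat this as a sketch:

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster            # hypothetical cluster name
spec:
  kafka:
    # JMX Prometheus exporter rules: map Kafka MBeans to Prometheus metrics.
    metrics:
      lowercaseOutputName: true
      rules:
        - pattern: "kafka.server<type=(.+), name=(.+)><>Value"
          name: "kafka_server_$1_$2"
```

Prometheus can then scrape these metrics, and Alertmanager rules can be layered on top for alerting.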
Continue reading “Understanding Red Hat AMQ Streams components for OpenShift and Kubernetes: Part 3”
Red Hat AMQ Streams is an enterprise-grade Apache Kafka (event streaming) solution, which enables systems to exchange data at high throughput and low latency. AMQ Streams is available as part of the Red Hat AMQ offering in two different flavors: one on the Red Hat Enterprise Linux platform and another on the OpenShift Container Platform. In this three-part article series, we will cover AMQ Streams on the OpenShift Container Platform.
To get the most out of these articles, it will help to be familiar with messaging concepts, Red Hat OpenShift, and Kubernetes.
Continue reading “Understanding Red Hat AMQ Streams components for OpenShift and Kubernetes: Part 1”