Operator

Operator integration testing for Operator Lifecycle Manager

Operators are one of the ways to package, deploy, and manage applications on Red Hat OpenShift. After a developer creates an Operator, the next step is to get the Operator published on OperatorHub.io, which allows users to install and deploy it in their OpenShift clusters. From there, the Operator Lifecycle Manager (OLM) handles installing, updating, and managing the Operator’s lifecycle.

In this article, we explore the steps required to test OLM integration for the Operator. For demonstration, we use a simple Operator that prints a test message to the shell. The Operator is packaged in the recently introduced Bundle Format.

Continue reading “Operator integration testing for Operator Lifecycle Manager”

Create a Kubernetes Operator in Golang to automatically manage a simple, stateful application

A Kubernetes Operator acts as an automated site reliability engineer for its application, encoding the skills of an expert administrator in software. For example, an Operator that manages a cluster of database servers can configure and manage its application, installing a database cluster at a declared software version with a designated number of members.
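To make that idea concrete, here is a minimal Go sketch of the kind of custom resource such an Operator might watch. It is an illustration only: the DatabaseCluster type, its API group, and its fields are hypothetical and are not taken from the article.

    // Hypothetical custom resource types for a database Operator: the user
    // declares a software version and a member count, and the Operator's
    // reconcile loop drives the running cluster toward that declared state.
    package v1alpha1

    import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    // DatabaseClusterSpec is the declared (desired) state.
    type DatabaseClusterSpec struct {
        // Version is the database software version to install.
        Version string `json:"version"`
        // Members is the designated number of database servers.
        Members int32 `json:"members"`
    }

    // DatabaseClusterStatus reports the observed state.
    type DatabaseClusterStatus struct {
        ReadyMembers int32 `json:"readyMembers"`
    }

    // DatabaseCluster is the custom resource the Operator manages.
    type DatabaseCluster struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        Spec   DatabaseClusterSpec   `json:"spec,omitempty"`
        Status DatabaseClusterStatus `json:"status,omitempty"`
    }

A user would then declare, say, a specific version with three members in a single YAML object, and the Operator's job is to keep the running cluster at that version and size.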

Continue reading “Create a Kubernetes Operator in Golang to automatically manage a simple, stateful application”

New custom metrics and air gapped installation in Red Hat 3scale API Management 2.9

We continue to update the Red Hat Integration product portfolio to provide a better operational and development experience for modern cloud- and container-native applications. The Red Hat Integration 2020-Q3 release includes Red Hat 3scale API Management 2.9, which provides new features and capabilities for 3scale. Among other features, we have updated the 3scale API Management and Gateway Operators.

This article introduces the Red Hat 3scale API Management 2.9 release highlights, including air-gapped installation for 3scale on Red Hat OpenShift and new APIcast policies for custom metrics and upstream mutual Transport Layer Security (TLS).

Continue reading “New custom metrics and air gapped installation in Red Hat 3scale API Management 2.9”

Kubernetes integration and more in odo 2.0

Odo is a developer-focused command-line interface (CLI) for OpenShift and Kubernetes. This article introduces highlights of the odo 2.0 release, which now integrates odo with Kubernetes. Additional highlights include the new default deployment method in odo 2.0, which uses devfiles for rapid, iterative development. We’ve also moved Operator deployment out of experimental mode, so you can easily deploy Operator-backed services from the odo command line.

Continue reading “Kubernetes integration and more in odo 2.0”

Command-line cluster management with Red Hat OpenShift’s new web terminal (tech preview)

Red Hat OpenShift’s web console simplifies many development and deployment chores to just a few clicks, but sometimes you need a command-line interface (CLI) to get things done on a cluster. Whether you’re learning by cut-and-paste in a tutorial or troubleshooting a deep bug in production (also often done by cut-and-paste), you’ll likely need to enter at least a line or two at a command prompt.

Starting with version 4.5.3, OpenShift users can try out a tech preview of the new Web Terminal Operator. The new OpenShift web terminal brings indispensable command-line tools right to the web console, and its Linux environment runs in a pod deployed on your OpenShift cluster. The web terminal eliminates the need to install software and configure connections and authentication for your local terminal. It also makes it easier to use OpenShift on devices like tablets and mobile phones, which might lack a native terminal.

Continue reading “Command-line cluster management with Red Hat OpenShift’s new web terminal (tech preview)”

Call an existing REST service with Apache Camel K

With the release of Apache Camel K, creating and deploying integrations with existing applications is quicker and more lightweight than ever. In many cases, calling an existing REST endpoint is the best way to connect a new system to an existing one. Take the example of a cafe serving coffee. What happens when the cafe wants to allow customers to use a delivery service like GrubHub? You would only need to introduce a single Camel K integration to connect the cafe and GrubHub systems.

In this article, I will show you how to create a Camel K integration that calls an existing REST service and uses its existing data format. For the data format, I have a Maven project configured with Java objects. Ideally, you would have this packaged and available in a Nexus repository. For this demonstration, I used JitPack, which makes my dependency available in a repository built directly from my GitHub code. See the GitHub repository associated with this demo for the data format code and directions for getting it into JitPack.

Continue reading “Call an existing REST service with Apache Camel K”

Kubeflow 1.0 monitoring and enhanced JupyterHub builds in Open Data Hub 0.8

The new Open Data Hub (ODH) 0.8 release includes many new features, continuous integration (CI) additions, and documentation updates. For this release, we focused on enhancing JupyterHub image builds, enabling more mixing of Open Data Hub and Kubeflow components, and designing our comprehensive end-to-end continuous integration and continuous delivery (CI/CD) process. In this article, we introduce the highlights of this newest release.

Note: Open Data Hub is an open source project and a community Operator for building an AI-as-a-Service (AIaaS) platform on Red Hat OpenShift.

Continue reading “Kubeflow 1.0 monitoring and enhanced JupyterHub builds in Open Data Hub 0.8”

How to switch Red Hat OpenShift Virtualization from hardware virtualization to software emulation

DISCLAIMER: The following setup is not supported by Red Hat, even for dev/test/sandbox environments. It is only meant to demonstrate the technical possibilities. See Configuring your cluster for OpenShift Virtualization for information. In addition, Tiny Code Generator (TCG) is not supported or tested by Red Hat.

OpenShift Virtualization is a feature of Red Hat OpenShift Container Platform (OCP) and OpenShift Kubernetes Engine that allows you to run and manage virtual machine workloads alongside container workloads. Based on the open source KubeVirt project, OpenShift Virtualization aims to help enterprises move from a VM-based infrastructure to a Kubernetes-and-container-based stack, one application at a time.

In my previous article, I showed you how to set up and enable OpenShift Virtualization running on Amazon Web Services Elastic Compute Cloud (AWS EC2). In that article, I noted that OpenShift Virtualization looks for hardware virtualization by default, which requires a bare-metal server instance. If you are running OpenShift on AWS EC2, as I do, you have to enable software emulation in place of the default hardware virtualization; otherwise, you need a bare-metal instance from the public cloud provider or a pure bare-metal solution.

In this article, I show you how to switch OpenShift Virtualization from its default of hardware virtualization to QEMU-based software emulation. You will then be able to start and operate a virtual machine through OpenShift Virtualization, even on a non-bare-metal instance such as AWS EC2.
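For readers who prefer to script the change, the following Go sketch patches the KubeVirt configuration with client-go. It is only a sketch under stated assumptions: the ConfigMap name (kubevirt-config), the namespace (openshift-cnv), and the debug.useEmulation key reflect how KubeVirt documented software emulation at the time, may differ in your cluster, and are not taken from this article, so follow the article's own steps as the authoritative path.

    // Hedged sketch: enable QEMU software emulation by merge-patching the
    // KubeVirt configuration ConfigMap. The ConfigMap name, namespace, and
    // "debug.useEmulation" key are assumptions based on KubeVirt docs of
    // this era, not on the article itself.
    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the local kubeconfig (the same credentials oc uses).
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Merge-patch the ConfigMap so KubeVirt falls back to software (TCG)
        // emulation instead of requiring hardware virtualization.
        patch := []byte(`{"data":{"debug.useEmulation":"true"}}`)
        _, err = clientset.CoreV1().ConfigMaps("openshift-cnv").Patch(
            context.TODO(), "kubevirt-config", types.MergePatchType, patch,
            metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("kubevirt-config patched: debug.useEmulation=true")
    }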

Continue reading “How to switch Red Hat OpenShift Virtualization from hardware virtualization to software emulation”

5 tips for developing Kubernetes Operators with the new Operator SDK

Kubernetes Operators are all the rage this season, and the fame is well deserved. Operators are evolving from being used primarily by technical-infrastructure gurus to becoming more mainstream, Kubernetes-native tools for managing complex applications. Kubernetes Operators today are important for cluster administrators and ISV providers, and also for custom applications developed in house. They provide the base for a standardized operational model that is similar to what cloud providers offer. Operators also open the door to fully portable workloads and services on Kubernetes.

The new Kubernetes Operator Framework is an open source toolkit that lets you manage Kubernetes Operators in an effective, automated, and scalable way. The Operator Framework consists of three components: the Operator SDK, the Operator Lifecycle Manager, and OperatorHub. In this article, I introduce tips and tricks for working with the Operator SDK. The Operator SDK 1.0.0 release shipped in mid-August, so it’s a great time to have a look at it.
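For orientation, here is a hedged sketch of the reconciler scaffolding that the 1.0-era SDK (operator-sdk init plus operator-sdk create api, built on controller-runtime v0.6.x) generates; the Memcached type and the module import path are placeholders borrowed from the SDK tutorials, not code from this article.

    // Hedged sketch of SDK 1.0-era scaffolding (controller-runtime v0.6.x).
    // The cachev1alpha1 import path and Memcached type are placeholders.
    package controllers

    import (
        "github.com/go-logr/logr"
        "k8s.io/apimachinery/pkg/runtime"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"

        cachev1alpha1 "example.com/memcached-operator/api/v1alpha1"
    )

    // MemcachedReconciler reconciles a Memcached object.
    type MemcachedReconciler struct {
        client.Client
        Log    logr.Logger
        Scheme *runtime.Scheme
    }

    // Reconcile compares the declared spec with what is running and converges
    // the two; in controller-runtime v0.6 the request carries no context.
    func (r *MemcachedReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
        log := r.Log.WithValues("memcached", req.NamespacedName)
        log.Info("reconciling")
        // ...fetch the Memcached CR, create or scale its Deployment, update status...
        return ctrl.Result{}, nil
    }

    // SetupWithManager registers this reconciler with the manager.
    func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
        return ctrl.NewControllerManagedBy(mgr).
            For(&cachev1alpha1.Memcached{}).
            Complete(r)
    }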

Continue reading “5 tips for developing Kubernetes Operators with the new Operator SDK”
