Operator

Kubernetes integration and more in odo 2.0

Odo is a developer-focused command-line interface (CLI) for OpenShift and Kubernetes. This article introduces the highlights of the odo 2.0 release, which now integrates with Kubernetes. Highlights include the new default deployment method, which uses devfiles for rapid, iterative development. We’ve also moved Operator deployment out of experimental mode, so you can easily deploy Operator-backed services from the odo command line.
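
If you want a feel for the new devfile-based flow, here is a minimal sketch using the odo 2.x CLI; the component type, names, and port below are placeholders, and the exact arguments for Operator-backed services depend on what is installed on your cluster:

```bash
# Create a component from a devfile (the default in odo 2.0);
# "nodejs" and "my-component" are example names
odo create nodejs my-component

# Push the source to the cluster; rerun after each change for
# rapid, iterative development
odo push

# Expose the component with a URL (port is a placeholder)
odo url create --port 8080
odo push

# List Operator-backed services available on the cluster; a matching
# "odo service create ..." then deploys one (arguments vary by Operator)
odo catalog list services
```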

Continue reading “Kubernetes integration and more in odo 2.0”

Command-line cluster management with Red Hat OpenShift’s new web terminal (tech preview)

Red Hat OpenShift’s web console simplifies many development and deployment chores to just a few clicks, but sometimes you need a command-line interface (CLI) to get things done on a cluster. Whether you’re learning by cut-and-paste in a tutorial or troubleshooting a deep bug in production (also often done by cut-and-paste), you’ll likely need to enter at least a line or two at a command prompt.

Starting with version 4.5.3, OpenShift users can try out a tech preview of the new Web Terminal Operator. The new OpenShift web terminal brings indispensable command-line tools right to the web console, and its Linux environment runs in a pod deployed on your OpenShift cluster. The web terminal eliminates the need to install software and configure connections and authentication for your local terminal. It also makes it easier to use OpenShift on devices like tablets and mobile phones, which might lack a native terminal.
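
If you would rather install it from the command line than from the console’s OperatorHub page, an ordinary OLM Subscription should do it. This is a sketch only: the package name, channel, and catalog source below are assumptions, so check OperatorHub for the exact values.

```bash
# Subscribe to the Web Terminal Operator through OLM; the package name,
# channel, and catalog source are assumptions, not confirmed values
cat <<'EOF' | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: web-terminal
  namespace: openshift-operators
spec:
  name: web-terminal
  channel: fast
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
```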

Continue reading “Command-line cluster management with Red Hat OpenShift’s new web terminal (tech preview)”

Call an existing REST service with Apache Camel K

With the release of Apache Camel K, it is possible to create and deploy integrations with existing applications that are quicker to build and more lightweight than ever. In many cases, calling an existing REST endpoint is the best way to connect a new system to an existing one. Take the example of a cafe serving coffee. What happens when the cafe wants to allow customers to use a delivery service like GrubHub? You would only need to introduce a single Camel K integration to connect the cafe and GrubHub systems.

In this article, I will show you how to create a Camel K integration that calls an existing REST service and uses its existing data format. For the data format, I have a Maven project configured with Java objects. Ideally, you would have this packaged and available in a Nexus repository. For this demonstration, I used JitPack, which serves my dependency from a repository built directly from my GitHub code. See the GitHub repository associated with this demo for the data format code and directions for getting it into JitPack.
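
As a rough sketch of what such an integration looks like, the following Java route polls an existing REST endpoint; the service URL, timer period, and Maven coordinates are placeholders rather than the demo’s actual values:

```bash
# Write a minimal Camel K integration that calls an existing REST
# endpoint; the URL and polling period are placeholders
cat > CafeOrders.java <<'EOF'
import org.apache.camel.builder.RouteBuilder;

public class CafeOrders extends RouteBuilder {
    @Override
    public void configure() {
        from("timer:orders?period=10000")              // poll every 10 seconds
            .setHeader("CamelHttpMethod", constant("GET"))
            .to("http://cafe-service/api/orders")      // existing REST endpoint
            .log("Received orders: ${body}");
    }
}
EOF

# Run it on the cluster, pulling the shared data-format dependency from
# JitPack; the Maven coordinates here are an example
kamel run CafeOrders.java --dependency mvn:com.github.example:cafe-model:1.0.0
```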

Continue reading “Call an existing REST service with Apache Camel K”

Kubeflow 1.0 monitoring and enhanced JupyterHub builds in Open Data Hub 0.8

The new Open Data Hub (ODH) version 0.8 release includes many new features, continuous integration (CI) additions, and documentation updates. For this release, we focused on enhancing JupyterHub image builds, enabling more mixing of Open Data Hub and Kubeflow components, and designing our comprehensive end-to-end continuous integration and continuous delivery (CI/CD) process. In this article, we introduce the highlights of this newest release.

Note: Open Data Hub is an open source project and a community Operator for building an AI-as-a-Service (AIaaS) platform on Red Hat OpenShift.

Continue reading “Kubeflow 1.0 monitoring and enhanced JupyterHub builds in Open Data Hub 0.8”

How to switch Red Hat OpenShift Virtualization from hardware virtualization to software emulation

OpenShift Virtualization is a feature of Red Hat OpenShift Container Platform (OCP) and OpenShift Kubernetes Engine that allows you to run and manage virtual machine workloads alongside container workloads. Based on the open source project KubeVirt, the goal of OpenShift Virtualization is to help enterprises move from a VM-based infrastructure to a Kubernetes-and-container-based stack, one application at a time.

In my previous article, I showed you how to set up and enable OpenShift Virtualization running on Amazon Web Services Elastic Compute Cloud (AWS EC2). In that article, I noted that OpenShift Virtualization looks for hardware virtualization by default, which requires a bare-metal server instance. If you are running OpenShift on AWS EC2, as I do, then you have to enable software emulation in place of the default hardware virtualization. Otherwise, you need a bare-metal instance from the public cloud provider or a pure bare-metal solution.

In this article, I show you how to switch OpenShift Virtualization from its default of hardware virtualization to QEMU-based software emulation. You will then be able to start and operate a virtual machine through OpenShift Virtualization, even on a non-bare-metal instance such as AWS EC2.
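
As a preview of the approach, upstream KubeVirt toggles emulation through the kubevirt-config ConfigMap, so the switch amounts to a patch along these lines; the openshift-cnv namespace assumes a default OpenShift Virtualization install:

```bash
# Enable QEMU software emulation by setting debug.useEmulation;
# upstream KubeVirt uses the "kubevirt" namespace instead
oc patch configmap kubevirt-config -n openshift-cnv \
  --type merge \
  -p '{"data": {"debug.useEmulation": "true"}}'
```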

Continue reading “How to switch Red Hat OpenShift Virtualization from hardware virtualization to software emulation”

5 tips for developing Kubernetes Operators with the new Operator SDK

Kubernetes Operators are all the rage this season, and the acclaim is well deserved. Operators are evolving from being used primarily by technical-infrastructure gurus to becoming more mainstream, Kubernetes-native tools for managing complex applications. Kubernetes Operators today are important for cluster administrators and ISV providers, and also for custom applications developed in-house. They provide the base for a standardized operational model that is similar to what cloud providers offer. Operators also open the door to fully portable workloads and services on Kubernetes.

The new Kubernetes Operator Framework is an open source toolkit that lets you manage Kubernetes Operators in an effective, automated, and scalable way. The Operator Framework consists of three components: the Operator SDK, the Operator Lifecycle Manager, and OperatorHub. In this article, I introduce tips and tricks for working with the Operator SDK. The Operator SDK 1.0.0 release shipped in mid-August, so it’s a great time to have a look at it.
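
For orientation, this is roughly what the Go-based scaffolding workflow looks like in SDK 1.0; the domain, repository, and image names are placeholders:

```bash
# Scaffold a new Go-based Operator project
mkdir memcached-operator && cd memcached-operator
operator-sdk init --domain example.com --repo github.com/example/memcached-operator

# Add an API (CRD) and a controller for a custom resource
operator-sdk create api --group cache --version v1alpha1 --kind Memcached \
  --resource --controller

# Build and push the Operator image, then deploy it to the cluster
make docker-build docker-push IMG=quay.io/example/memcached-operator:v0.0.1
make deploy IMG=quay.io/example/memcached-operator:v0.0.1
```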

Continue reading “5 tips for developing Kubernetes Operators with the new Operator SDK”

How to install the CouchbaseDB Operator for Red Hat OpenShift on your laptop using Red Hat CodeReady Containers and Red Hat Marketplace

Red Hat Marketplace is an online store of sorts, where you can choose the software that you want to install and run on your Red Hat OpenShift cluster. Think of a phone app store: you select an app, and it’s automagically installed on your phone. With Marketplace, you simply register your cluster(s), select the software that you want, and it is installed for you. It could not be easier.

In this article, I show you how to install Couchbase Server Enterprise Edition on an OpenShift cluster. In my case, the cluster is running on Fedora 32 using Red Hat CodeReady Containers (CRC). Couchbase Server Enterprise Edition is currently available as a free trial, and CRC is also available at zero cost. This setup offers a no-risk way to try containers, Kubernetes, OpenShift, and, in this case, Couchbase. This is definitely “developers playing around with the software”-level stuff.
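
Before registering anything with Marketplace, you need a running CRC cluster. A minimal sketch, assuming CRC is already downloaded and you have a pull secret from Red Hat (the file path is a placeholder):

```bash
# Set up and start a local OpenShift cluster with CodeReady Containers
crc setup
crc start --pull-secret-file ~/pull-secret.txt

# Point oc at the CRC cluster and log in as the generated admin user;
# `crc console --credentials` prints the actual password
eval $(crc oc-env)
oc login -u kubeadmin https://api.crc.testing:6443
```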

Continue reading “How to install the CouchbaseDB Operator for Red Hat OpenShift on your laptop using Red Hat CodeReady Containers and Red Hat Marketplace”

Install Red Hat OpenShift Operators on your laptop using Red Hat CodeReady Containers and Red Hat Marketplace

Red Hat CodeReady Containers (CRC) is the quickest way for developers to get started with clusters on Red Hat OpenShift 4.1 or newer. CodeReady Containers is designed to run on a local computer. It simplifies setup and testing by emulating the cloud development environment locally with all of the tools that you need to develop container-based applications.

Red Hat Marketplace is an open cloud marketplace that makes it easy to discover and purchase the certified, containerized tools you need to build enterprise-first applications. It was created to help developers using OpenShift build applications and deploy them across a hybrid cloud. Red Hat Marketplace works on any developer workstation that is running CodeReady Containers.

This article guides you through the steps of setting up Red Hat Marketplace and installing containerized products in your local CodeReady Containers-based OpenShift clusters.
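
Once Marketplace is wired up, its catalog surfaces through the standard Operator Lifecycle Manager APIs, so you can sanity-check the setup with ordinary oc queries; a sketch, assuming the default openshift-marketplace namespace:

```bash
# List the catalog sources feeding the cluster, the operators they
# offer, and any subscriptions (installed operators) already present
oc get catalogsources -n openshift-marketplace
oc get packagemanifests -n openshift-marketplace
oc get subscriptions --all-namespaces
```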

Continue reading “Install Red Hat OpenShift Operators on your laptop using Red Hat CodeReady Containers and Red Hat Marketplace”

The present and future of CI/CD with GitOps on Red Hat OpenShift

The need to deliver applications faster is near-universal, even in organizations that traditionally are perceived as risk-averse. As the foundations of DevOps, continuous integration (CI) and continuous delivery (CD) are essential to application delivery in most organizations. Together, CI/CD tools and processes automate building and testing applications on every code or configuration change, then trigger a sequence of workflows that deliver the application to production.

Continue reading “The present and future of CI/CD with GitOps on Red Hat OpenShift”

Introduction to Tekton and Argo CD for multicluster development

Over the last two years, my coworkers and I have worked on developing a multicluster project for Kubernetes and Red Hat OpenShift. We needed a way to efficiently deploy applications, oversee access and authorization, and manage application placement across clusters. This need led us to develop with Argo CD and GitOps.

Recently, I switched to another team that also focuses on multicluster development. During my interviews, I promised to help create a catalog of our projects and develop a process to deploy them rapidly. Together, the catalog and process would let the team focus on building, rather than on figuring out how to get projects operational. However, I quickly hit a wall: with Argo CD, I couldn’t control when, and in what order, cluster objects were deployed onto new or existing clusters. Eventually, I discovered Tekton, a powerful addition to my development toolset.

In this article, I briefly describe how I developed the catalog and its deployment process. I’ll introduce the components involved, explain a little about how Tekton Pipelines works, and leave you with a tool that you can share with your organization and teams.
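
To make the Tekton side concrete, here is a minimal Task and a command to run it; the Task name and step image are placeholders, just to show the building block that Tekton Pipelines composes into pipelines:

```bash
# Define and apply a one-step Tekton Task
cat <<'EOF' | oc apply -f -
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: say-hello
spec:
  steps:
    - name: echo
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        #!/bin/sh
        echo "Hello from Tekton"
EOF

# Run the Task with the tkn CLI and stream its logs
tkn task start say-hello --showlog
```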

Continue reading “Introduction to Tekton and Argo CD for multicluster development”
