Kourier: A lightweight Knative Serving ingress

Until recently, Knative Serving used Istio as its default networking component for handling external cluster traffic and service-to-service communication. Istio is a great service mesh solution, but it can add unwanted complexity and resource use to your cluster if you don’t need it.

That’s why we created Kourier: to simplify the ingress side of Knative Serving. Knative recently adopted Kourier, so it is now part of the Knative family! This article introduces Kourier and gets you started with using it as a simpler, more lightweight way to expose Knative applications to an external network.
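
As a taste of what that looks like in practice, here is a minimal client-go sketch that switches Knative Serving’s ingress to Kourier. It assumes Knative Serving lives in the knative-serving namespace and that Kourier is already deployed; the same merge patch can be applied with a single kubectl patch command.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (assumes ~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Merge-patch Knative Serving's config-network ConfigMap so that
	// Kourier, rather than Istio, handles ingress for Knative Services.
	patch := []byte(`{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}`)
	_, err = client.CoreV1().ConfigMaps("knative-serving").Patch(
		context.TODO(), "config-network", types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}
```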

Let’s begin with a brief overview of Knative and Knative Serving.

Migrating a namespace-scoped Operator to a cluster-scoped Operator

Within the context of Kubernetes, a namespace divides resources, policies, and authorization, and provides a boundary for cluster objects. In this article, we cover two different types of Operators: namespace-scoped and cluster-scoped. We then walk through an example of migrating from one to the other, which illustrates the difference between the two.

Namespace-scoped and cluster-scoped

A namespace-scoped Operator is defined within the boundary of a namespace and has the flexibility to handle upgrades without impacting other namespaces. It watches objects within that namespace and maintains a Role and RoleBinding for the role-based access control (RBAC) policies that govern access to those objects.

Meanwhile, a cluster-scoped Operator promotes reusability and manages defined resources across the cluster. It watches all namespaces in a cluster and maintains a ClusterRole and ClusterRoleBinding for the RBAC policies that authorize cluster objects. Two examples of cluster-scoped Operators are istio-operator and cert-manager. The istio-operator can be deployed as a cluster-scoped Operator to manage the service mesh for an entire cluster, while cert-manager issues certificates for an entire cluster.
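
In code, the difference often comes down to a single manager option. Here is a minimal sketch, assuming a controller-runtime version from this era in which manager.Options still exposes a Namespace field (newer releases configure this through the cache options instead):

```go
package main

import (
	"log"
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
	// By operator-sdk convention, WATCH_NAMESPACE names the namespace to
	// watch; an empty value means "watch the entire cluster".
	watchNamespace := os.Getenv("WATCH_NAMESPACE")

	// Namespace "" yields a cluster-scoped Operator whose caches and
	// watches span all namespaces; a concrete value such as "my-project"
	// restricts the Operator to that single namespace.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), manager.Options{
		Namespace: watchNamespace,
	})
	if err != nil {
		log.Fatalf("unable to create manager: %v", err)
	}

	// Controllers would be registered with mgr here before starting it.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		log.Fatalf("manager exited: %v", err)
	}
}
```

Either way, the Operator’s RBAC manifests should match the chosen scope: a Role and RoleBinding for a single namespace, or a ClusterRole and ClusterRoleBinding for the whole cluster.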

Many Operators support both installation modes, so you can choose based on your requirements. Note that upgrading a cluster-scoped Operator can impact the resources it manages across the entire cluster, whereas a namespace-scoped Operator is easier to upgrade because it only affects resources within its own scope.

A development roadmap for Open Data Hub

Open Data Hub (ODH) is a blueprint for building an AI-as-a-Service (AIaaS) platform on Red Hat’s Kubernetes-based OpenShift 4.x. The Open Data Hub team recently released Open Data Hub 0.6.0, followed by a smaller update, Open Data Hub 0.6.1.

We recently got together and discussed our plans and timeline for the next two releases. Our plans are based on the roadmap slide deck that we put together and presented during the Open Data Hub community meeting on April 6.

In this article, we present our roadmap for the next several Open Data Hub releases. We want to emphasize that the target dates are optimistic; they describe what we hope to achieve. With the current state of the world and vacation time coming up, these dates might change.

Camel K 1.0: The serverless integration platform goes GA

After many months of waiting, Apache Camel K 1.0 is finally here! This groundbreaking project introduces developers to cloud-native application development and automated cloud configurations without breaking a sweat. With the 1.0 general availability (GA) release, Apache Camel K is more stable than ever, with performance improvements that developers will appreciate.

How to install CodeReady Workspaces in a restricted OpenShift 4 environment

It’s your first day as a Java programmer, right out of college. You have received your badge, a shiny new laptop, and all of your software requests have been approved. Everything seems to be going well.

You install Eclipse and set up the required Java Development Kit (JDK) in your new development environment. You clone a project from the company’s GitHub repository, modify the code, and make your first commit. You are excited to be working on your first project.

But then, a few hours later, a senior programmer asks what version of the JDK you used. It seems that the pipeline is reporting a project failure. All you did was commit Java source code, not binary, and it worked perfectly on your local machine. What could possibly have gone wrong?

Coding in a restricted environment

The issue I described is well known among programmers as the “it works on my computer, and I don’t know why it doesn’t work on your computer” problem. Fortunately, this is the type of problem Red Hat CodeReady Workspaces (CRW) can help you solve. CodeReady Workspaces is a cloud-based IDE built on Eclipse Che. Whereas Che is an open source project, CRW is an enterprise-ready development environment that provides the security, stability, and consistency that many corporations require. All you have to do is open the CRW link in a web browser, sign in with your user credentials, and code inside the browser.

In this article, I show you how to install CodeReady Workspaces in a restricted Red Hat OpenShift 4 environment.

First look at the new Apicurio Registry UI and Operator

Last year, the Apicurio developer community launched the new Apicurio Registry project, which is an API and schema registry for microservices. You can use the Apicurio Registry to store and retrieve service artifacts such as OpenAPI specifications and AsyncAPI definitions, as well as schemas such as Apache Avro, JSON, and Google Protocol Buffers.
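
To give you a sense of the registry API, here is a small Go sketch that registers a hypothetical Avro schema over HTTP. The endpoint and X-Registry-* headers follow my reading of the registry’s 1.x REST API, and the URL and artifact ID are placeholders, so check them against your own deployment:

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// A hypothetical Avro schema to register.
	schema := []byte(`{"type":"record","name":"Greeting","fields":[{"name":"message","type":"string"}]}`)

	// POST it to a registry instance; the URL and artifact ID are placeholders.
	req, err := http.NewRequest(http.MethodPost,
		"http://localhost:8080/api/artifacts", bytes.NewReader(schema))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-Registry-ArtifactType", "AVRO")
	req.Header.Set("X-Registry-ArtifactId", "greeting-value")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The registry echoes back metadata about the stored artifact.
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```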

Because the registry also works as a catalog where you can browse artifacts, adding a new web-based user interface (UI) was a priority for the current Apicurio Registry 1.2.2 release. With this release, the Apicurio community has made the Apicurio Registry available as a binary download or as container images. To make it easier to set up and manage your Apicurio Registry deployment, the community has also created a new Kubernetes Operator for the Apicurio Registry.

This article is a quick introduction to the new Apicurio Registry UI and Apicurio Registry Operator. I’ll show you how to access these new features in Apicurio 1.2.2 and describe a few highlights of using them. For a more detailed demonstration, check out my video tutorial introducing the new UI and Kubernetes Operator.

Open Data Hub 0.6.1: Bug fix release to smooth out redesign regressions

It has been just a few short weeks since we released Open Data Hub (ODH) 0.6.0, which brought many changes to the underlying architecture along with some new features. We found a few issues in this new version with the Kubeflow Operator, plus a few regressions introduced by the new JupyterHub updates. To make sure your experience with ODH 0.6 does not suffer because we chose to release early, we offer a new (mostly) bug fix release: Open Data Hub 0.6.1.

Deploy and bind enterprise-grade microservices with Kubernetes Operators

Deploying enterprise-grade runtime components into Kubernetes can be daunting. You might wonder:

  • How do I fetch a certificate for my app?
  • What’s the syntax for autoscaling resources with the Horizontal Pod Autoscaler?
  • How do I link my container with a database and with a Kafka cluster?
  • Are my metrics going to Prometheus?
  • Also, how do I scale to zero with Knative?

Operators can help with all of those needs and more. In this article, I introduce three Operators—Runtime Component Operator, Service Binding Operator, and Open Liberty Operator—that work together to help you deploy containers like a pro.
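
To give a flavor of that deployment experience, the sketch below creates a minimal RuntimeComponent custom resource with the Kubernetes dynamic client. The group/version (app.stacks/v1beta1), the resource name, and the spec fields shown are assumptions based on the Runtime Component Operator’s CRD at the time, so verify them against the CRD installed in your cluster; the image and names are placeholders.

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// A minimal RuntimeComponent: one image plus autoscaling bounds, all
	// declared in a single custom resource that the Operator expands into
	// Deployments, Services, HorizontalPodAutoscalers, and so on.
	cr := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "app.stacks/v1beta1", // assumed group/version; check your CRD
		"kind":       "RuntimeComponent",
		"metadata":   map[string]interface{}{"name": "my-app"},
		"spec": map[string]interface{}{
			"applicationImage": "quay.io/example/my-app:1.0", // placeholder image
			"autoscaling": map[string]interface{}{
				"minReplicas": int64(1),
				"maxReplicas": int64(5),
			},
		},
	}}

	gvr := schema.GroupVersionResource{
		Group: "app.stacks", Version: "v1beta1", Resource: "runtimecomponents",
	}
	if _, err := client.Resource(gvr).Namespace("default").
		Create(context.TODO(), cr, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```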

Open Data Hub 0.6 brings component updates and Kubeflow architecture

Open Data Hub (ODH) is a blueprint for building an AI-as-a-service platform on Red Hat’s Kubernetes-based OpenShift 4.x. Version 0.6 of Open Data Hub comes with significant changes to the overall architecture as well as component updates and additions. In this article, we explore these changes.

From Ansible Operator to Kustomize

If you follow the Open Data Hub project closely, you might be aware that we have been working on a major design change for a few weeks now. As we started working more closely with the Kubeflow community to get Kubeflow running on OpenShift, we decided to make Kubeflow the Open Data Hub upstream and adopt its deployment tools, namely KfDef manifests and Kustomize, for deployment manifest customization.
