Apache Camel K development inside Eclipse Che: Iteration 1

The Eclipse Che 7.6.0 release provides a new stack for Apache Camel K integration development. This release is a first iteration that previews what is possible. If you like what you see, shout it out, and more will surely come.

This article details how to test this release on a local instance deployed on minikube. Compared with a hosted instance, this avoids the prerequisites of installing Camel K in the cluster and granting specific rights to the user.
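
The workflow centers on Camel K integration files. As a hedged taste of what you would edit inside the Che workspace, here is a minimal integration written with the Camel Java DSL; the class name and route are illustrative, not taken from the article:

```java
// Hello.java -- a minimal Camel K integration (illustrative example).
// Camel K builds and runs this single file directly; no project scaffolding needed.
import org.apache.camel.builder.RouteBuilder;

public class Hello extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Fire every three seconds and log a greeting.
        from("timer:tick?period=3000")
            .setBody().constant("Hello from Camel K")
            .to("log:info");
    }
}
```

With the kamel CLI, `kamel run Hello.java` deploys such a route to the cluster; the new Che stack aims to bring this kind of tooling into the workspace.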

Continue reading “Apache Camel K development inside Eclipse Che: Iteration 1”

Editing, debugging, and GitHub in Red Hat CodeReady Workspaces 2

In a previous article, I showed how to get Red Hat CodeReady Workspaces 2.0 (CRW) up and running with a workspace available for use. This time, we will go through the edit-debug-push (to GitHub) cycle. This walk-through will simulate a real-life development effort.

To start, you’ll need to fork a GitHub repository. The Quote Of The Day repo contains a microservice written in Go that we’ll use for this article. Don’t worry if you’ve never worked with Go. This is a simple program and we’ll only change one line of code.

After you fork the repo, make note of (or copy) your fork’s URL. We’ll be using that information in a moment.

Continue reading “Editing, debugging, and GitHub in Red Hat CodeReady Workspaces 2”

How to maintain stable build and deployment performance on Red Hat OpenShift

In this article, I will introduce helpful, common tips for managing reliable builds and deployments on Red Hat OpenShift. If you have experienced sudden performance degradation of builds and deployments on OpenShift, these tips should help you troubleshoot your cluster. We will start by reviewing the whole process, from build to deployment, and then cover each aspect in more detail. We will use Red Hat OpenShift 4.2 (Kubernetes 1.14) throughout.

Continue reading “How to maintain stable build and deployment performance on Red Hat OpenShift”

Using Kubernetes ConfigMaps to define your Quarkus application’s properties

So, you wrote your Quarkus application, and now you want to deploy it to a Kubernetes cluster. Good news: Deploying a Quarkus application to a Kubernetes cluster is easy. Before you do, though, you need to straighten out your application's properties. After all, your app probably has to connect to a database, call other services, and so on. These settings are already defined in your application.properties file, but the values match your local environment and won't work once the app is deployed to your cluster.
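
To make the problem concrete, here is a hedged sketch (the property name and endpoint are made up for illustration): a value injected with MicroProfile Config, whose application.properties default suits only local development.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.config.inject.ConfigProperty;

@Path("/greeting")
public class GreetingResource {

    // Resolved from application.properties when running locally; on the
    // cluster, we want this value to come from a Kubernetes ConfigMap instead.
    @ConfigProperty(name = "greeting.message")
    String message;

    @GET
    public String greet() {
        return message;
    }
}
```

The question is where the cluster-appropriate value for `greeting.message` should come from once the app runs on Kubernetes.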

So, how do you easily solve this problem? Let’s walk through an example.

Continue reading “Using Kubernetes ConfigMaps to define your Quarkus application’s properties”

Operator pattern: REST API for Kubernetes and Red Hat OpenShift

In this article, we will look at the pattern that writing a REST API in any well-known framework shares with writing an Operator using Kubernetes' client libraries. The idea behind this article is not to explain how to write a REST API, but to explain the internals of Kubernetes by working with an analogy.

Local setup

To follow along, you will need the following installed:

As a developer, if you have built REST APIs with frameworks like Quarkus or Spring (Java), Express (Node.js), Ruby on Rails, Flask (Python), or Go (mux), understanding and writing an Operator will be easier for you. We will use this experience with other languages and frameworks to build our understanding.
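
As a hedged illustration of the analogy (the JAX-RS annotations are standard; the concept mapping in the comments is my framing, not necessarily the article's): a REST framework routes an incoming HTTP request to a handler you registered, while Kubernetes routes a watch event about a resource to a reconcile function you registered.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// Rough analogy:
//   route registration       ->  watching a resource kind
//   HTTP request for /foos/x ->  event for a Foo object named "x"
//   handler method           ->  reconcile function
//   response                 ->  updated cluster state / resource status
@Path("/foos")
public class FooResource {

    @GET
    @Path("/{name}")
    public String get(@PathParam("name") String name) {
        // An Operator's "handler" likewise receives the resource's name and
        // namespace, then drives the cluster toward the desired state.
        return "foo " + name;
    }
}
```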

Continue reading “Operator pattern: REST API for Kubernetes and Red Hat OpenShift”

Why not couple an Operator’s logic to a specific Kubernetes platform?

You might find yourself in situations where you believe that some logic should run only when your Operator is deployed on a specific Kubernetes platform, so you probably want to know how to get the cluster vendor from the Operator. In this article, we will discuss why relying on the vendor is not a good idea, and we will show how to handle this kind of scenario instead.
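
As a sketch of the alternative (assuming the fabric8 kubernetes-client; the article's own solution may differ): instead of asking "which vendor is this?", check whether the specific API your logic depends on is actually served by the cluster.

```java
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class CapabilityCheck {
    public static void main(String[] args) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // Look for the API group we need (here: OpenShift Routes) instead
            // of hard-coding "this cluster is OpenShift".
            boolean hasRoutes = client.getApiGroups().getGroups().stream()
                    .anyMatch(g -> "route.openshift.io".equals(g.getName()));
            System.out.println("route.openshift.io served: " + hasRoutes);
        }
    }
}
```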

Continue reading “Why not couple an Operator’s logic to a specific Kubernetes platform?”

First steps with the data virtualization Operator for Red Hat OpenShift

The Red Hat Integration Q4 release adds many new features and capabilities, with an increasing focus on cloud-native data integration. The features I'm most excited about are the introduction of the schema registry, the advancement of the Debezium-based change data capture capabilities to technical preview, and the data virtualization (technical preview) capabilities.

Data integration is a topic that has not received much attention from the cloud-native community so far, and we will cover it in more detail in future posts. Here, we jump straight into demonstrating the latest release of data virtualization (DV) capabilities on Red Hat OpenShift 4. This is a step-by-step visual tutorial describing how to create a simple virtual database using Red Hat Integration’s data virtualization Operator. By the end of the tutorial, you will learn:

  • How to deploy the DV Operator.
  • How to create a virtual database.
  • How to access the virtual database (a JDBC sketch follows below).

The steps throughout this article work on any OpenShift 4.x environment with Operator support, even in time- and resource-constrained environments such as the Red Hat OpenShift Interactive Learning Portal.
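
For a taste of the access step, here is a hedged sketch: data virtualization is built on Teiid, whose JDBC URLs take the form `jdbc:teiid:<vdb-name>@mm://host:port`. The VDB name, service host, credentials, and table below are placeholders, not values from the tutorial.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QueryVirtualDatabase {
    public static void main(String[] args) throws Exception {
        // Requires the Teiid JDBC driver on the classpath.
        // Placeholder VDB name, host, port, and credentials.
        String url = "jdbc:teiid:myvdb@mm://myvdb-service:31000";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM customers")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```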

Continue reading “First steps with the data virtualization Operator for Red Hat OpenShift”

MIR: A lightweight JIT compiler project

For the past three years, I've been participating in adding just-in-time compilation (JIT) to CRuby. Now, CRuby has the method-based just-in-time compiler (MJIT), which improves performance for programs that are not I/O-bound.

The most popular approach to implementing a JIT is to use the LLVM or GCC JIT interfaces, such as ORC or libgccjit. GCC and LLVM developers have spent a huge amount of effort making their optimizations reliable and effective across a large number of targets. By building a JIT on LLVM or GCC, we get those optimizations for free. Using the existing compilers was also the only way to deliver a JIT for CRuby in the short time before the Ruby 3.0 release, which has the goal of improving CRuby performance threefold.

So, CRuby MJIT utilizes GCC or LLVM, but what is unique about this JIT?

Continue reading “MIR: A lightweight JIT compiler project”

Deploying applications in the OpenShift 4.3 Developer perspective

In this article, we take a look at user flow improvements for deploying applications in Red Hat OpenShift 4.3's Developer perspective. You can learn more about all of the developer-focused console improvements in the OpenShift 4.3 release article. Since the initial launch of the Developer perspective in the OpenShift 4.2 release, we've held frequent feedback sessions with developers, developer advocates, stakeholders, and other community members to better understand how the experience meets their needs. While the user interface has been well received overall, we continue to gather feedback and use it to enhance our flows.

Continue reading “Deploying applications in the OpenShift 4.3 Developer perspective”
