Kubernetes integration and more in odo 2.0

Odo is a developer-focused command-line interface (CLI) for OpenShift and Kubernetes. This article introduces highlights of the odo 2.0 release, which now integrates with Kubernetes. Highlights include the new default deployment method, which uses devfiles for rapid, iterative development. We’ve also moved Operator deployment out of experimental mode, so you can easily deploy Operator-backed services from the odo command line.

Customizing and tuning the Kuryr SDN for Red Hat OpenShift 3.11 on Red Hat OpenStack 13

In a previous article, I showed you how to customize Red Hat OpenShift software-defined networking (SDN) for your organization’s requirements and restrictions. In this article, we’ll look at using the Kuryr SDN instead. Using Kuryr with OpenShift 3.11 on Red Hat OpenStack 13 changes the customization requirements because Kuryr works directly with OpenStack Neutron and Octavia.

Note: This article builds on the discussion and examples from my previous article, so I recommend reading that one first.

Background

Traditional OpenShift installations use openshift-sdn, which is specific to OpenShift. With openshift-sdn, your containers run on a network within a network. This setup, known as double encapsulation, adds a layer of complexity that becomes apparent when you troubleshoot network issues. Double encapsulation also degrades network performance because of the extra overhead it introduces.

Command-line cluster management with Red Hat OpenShift’s new web terminal (tech preview)

Red Hat OpenShift’s web console simplifies many development and deployment chores to just a few clicks, but sometimes you need a command-line interface (CLI) to get things done on a cluster. Whether you’re learning by cut-and-paste in a tutorial or troubleshooting a deep bug in production (also often done by cut-and-paste), you’ll likely need to enter at least a line or two at a command prompt.

Starting with version 4.5.3, OpenShift users can try out a tech preview of the new Web Terminal Operator. The new OpenShift web terminal brings indispensable command-line tools right to the web console, and its Linux environment runs in a pod deployed on your OpenShift cluster. The web terminal eliminates the need to install software and configure connections and authentication for your local terminal. It also makes it easier to use OpenShift on devices like tablets and mobile phones, which might lack a native terminal.

Building modern CI/CD workflows for serverless applications with Red Hat OpenShift Pipelines and Argo CD, Part 1

A recent article, The present and future of CI/CD with GitOps on Red Hat OpenShift, proposed Tekton as a framework for cloud-native CI/CD pipelines, and Argo CD as its perfect partner for GitOps. GitOps practices support continuous delivery in hybrid, multi-cluster Kubernetes environments.

AI software stack inspection with Thoth and TensorFlow

Project Thoth develops open source tools that enhance the day-to-day life of developers and data scientists. Thoth uses machine-generated knowledge and artificial intelligence (AI), applied through reinforcement learning (RL), to boost the performance, security, and quality of your applications. This machine-learning approach is implemented in Thoth adviser, which Thoth integrations use to recommend a software stack based on user inputs.

Call an existing REST service with Apache Camel K

With the release of Apache Camel K, you can create and deploy quicker, more lightweight integrations with existing applications than ever before. In many cases, calling an existing REST endpoint is the best way to connect a new system to an existing one. Take the example of a cafe serving coffee. What happens when the cafe wants to let customers order through a delivery service like GrubHub? You would only need to introduce a single Camel K integration to connect the cafe and GrubHub systems.

In this article, I will show you how to create a Camel K integration that calls an existing REST service and uses its existing data format. For the data format, I have a Maven project configured with Java objects. Ideally, you would package this project and publish it to a Nexus repository. For this demonstration, I used JitPack, which serves the dependency as a repository built directly from my GitHub code. See the GitHub repository associated with this demo for the data format code and directions for getting it into JitPack.
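
To make the approach concrete, here is a minimal sketch of what such an integration can look like, written as a Camel route in Java, a style Camel K can run directly. The endpoint URL, timer period, and class name are hypothetical placeholders, not the demo’s actual code.

```java
// RestCallRoute.java -- a hypothetical sketch of a Camel K integration
// that periodically calls an existing REST endpoint.
import org.apache.camel.builder.RouteBuilder;

public class RestCallRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("timer:poll?period=10000")                     // trigger every 10 seconds
            .setHeader("CamelHttpMethod", constant("GET"))  // call the service with GET
            .to("http://cafe.example.com/api/orders")       // placeholder REST endpoint
            // This is where you would unmarshal the response into the shared
            // data format (the Java objects pulled in through JitPack).
            .log("Received payload: ${body}");              // inspect the raw response
    }
}
```

You would then deploy it with kamel run RestCallRoute.java, and Camel K builds and runs the integration on the cluster.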

Build a data streaming pipeline using Kafka Streams and Quarkus

In typical data warehousing systems, data is first accumulated and then processed. With newer technologies, though, it is now possible to process data as it arrives; this is called real-time data processing. In real-time processing, data streams through pipelines, moving from one system to another. Data is generated by static sources (like databases) or real-time systems (like transactional applications), then filtered, transformed, and finally stored in a database or pushed to other systems for further processing. Those systems can then repeat the cycle: filter, transform, and store or push to still other systems.
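
As a rough illustration of that filter-transform-store cycle, here is a minimal Kafka Streams topology in plain Java. The topic names and the transformation are hypothetical placeholders; the article itself wires an equivalent topology through Quarkus.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class PipelineSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Read raw records from an input topic (name is illustrative).
        KStream<String, String> source = builder.stream("raw-events");

        source
            // Filter: drop records with no payload.
            .filter((key, value) -> value != null && !value.isEmpty())
            // Transform: normalize the payload (stands in for real enrichment).
            .mapValues(value -> value.trim().toUpperCase())
            // Store/push: write the result to an output topic for the next system.
            .to("processed-events", Produced.with(Serdes.String(), Serdes.String()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "pipeline-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

In a Quarkus application, the quarkus-kafka-streams extension would expect a CDI producer method returning the Topology instead of this main method, but the stream-processing logic stays the same.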

Rootless containers with Podman: The basics

As a developer, you have probably heard a lot about containers. A container is a unit of software that packages code together with all of its dependencies, making application builds fast and reliable. An easy way to experiment with containers is the Pod Manager tool (Podman), a daemonless, open source, Linux-native tool that provides a command-line interface (CLI) similar to the Docker container engine.

In this article, I will explain the benefits of using containers and Podman, introduce rootless containers and why they are important, and then show you how to use rootless containers with Podman with an example. Before we dive into the implementation, let’s review the basics.

New C++ features in GCC 10

The GNU Compiler Collection (GCC) 10.1 was released in May 2020. Like every GCC release, this version brought many additions, improvements, bug fixes, and new features. Fedora 32 already ships GCC 10 as the system compiler, but it’s also possible to try GCC 10 on other platforms (see godbolt.org, for example). Red Hat Enterprise Linux (RHEL) users will get GCC 10 in Red Hat Developer Toolset (RHEL 7) or Red Hat GCC Toolset (RHEL 8).
