New and improved Topology view for OpenShift 4.3

The Topology view in the Red Hat OpenShift console’s Developer perspective is a thoughtfully designed interface that provides a visual representation of an application’s structure. It helps developers distinguish one resource type from another and understand the overall communication dynamics within the application. Launched with the 4.2 release of OpenShift, the Topology view has already earned a spotlight in the cloud-native application development arena. Constant feedback cycles and close attention to ongoing trends in the developer community have helped shape a great experience in the upcoming release. This article focuses on a few showstopper features added to the Topology view for OpenShift 4.3.

Continue reading “New and improved Topology view for OpenShift 4.3”

What’s new in the OpenShift 4.3 console developer experience

The developer experience is significantly improved in the Red Hat OpenShift 4.3 web console. If you have used the Developer perspective, which was introduced in the OpenShift 4.2 console, you are probably familiar with our streamlined user flows for deploying applications, the new Topology view, and the enhanced experience around OpenShift Pipelines powered by Tekton and OpenShift Serverless powered by Knative. This release continues to improve upon the features introduced in 4.2 and adds new flows and features for the developer.

Continue reading “What’s new in the OpenShift 4.3 console developer experience”

Installing debugging tools into a Red Hat OpenShift container with oc-inject

A previous article, Debugging applications within Red Hat OpenShift containers, gives an overview of tools for debugging applications within Red Hat OpenShift containers, and existing restrictions on their use. One of the restrictions discussed in that article was an inability to install debugging tool packages into an ordinary, unprivileged container once it was already instantiated. In such a container, debugging tool packages have to be included when the container image is built, because once the container is instantiated, using package installation commands requires elevated privileges that are not available to the ordinary container user.

However, there are important situations where it is desirable to install a debugging tool into an already-instantiated container. In particular, if the resolution of a problem requires access to the temporary state of a long-running containerized application, the usual method of adding debugging tools to the container by rebuilding the container image and restarting the application will destroy that temporary state.

To provide a way to add debugging tools to unprivileged containers, I developed a utility, called oc-inject, that can temporarily copy a debugging tool into a container. Instead of relying on package management or other privileged operations, oc-inject’s implementation is based on the existing and well-supported OpenShift operations oc rsync and oc exec, which do not require any elevated privileges.

This article describes the current capabilities of the oc-inject utility, which is available on GitHub or via a Fedora COPR repository. The oc-inject utility works on any Linux system that includes Python 3, the ldd utility, and the Red Hat OpenShift command-line tool oc.
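
To make the mechanism concrete, here is a minimal Python sketch of the general technique (this is not oc-inject’s actual implementation, and the pod and tool names in the usage comment are hypothetical): locate a tool on the local machine, stage it together with the shared libraries that ldd reports, copy the staging directory into the pod with oc rsync, and run the tool with oc exec.

```python
# A rough sketch of the approach oc-inject automates (not the tool's
# actual source): stage a local executable plus the shared libraries
# reported by ldd, copy the staging directory into the target pod with
# `oc rsync`, and run the tool with `oc exec`.
import shutil
import subprocess
import tempfile
from pathlib import Path

def inject_and_run(pod: str, tool: str, *tool_args: str) -> None:
    local_tool = shutil.which(tool)
    if local_tool is None:
        raise FileNotFoundError(f"{tool} not found on the local system")

    with tempfile.TemporaryDirectory() as staging:
        staging_dir = Path(staging) / "debug-tools"
        staging_dir.mkdir()
        shutil.copy2(local_tool, staging_dir)

        # ldd lists the shared libraries the tool is linked against.
        ldd_output = subprocess.run(["ldd", local_tool], capture_output=True,
                                    text=True, check=True).stdout
        for line in ldd_output.splitlines():
            parts = line.split()
            if "=>" in parts and len(parts) >= 3 and parts[2].startswith("/"):
                shutil.copy2(parts[2], staging_dir)

        # Copy the staged files into the pod; neither command requires
        # elevated privileges.
        subprocess.run(["oc", "exec", pod, "--", "mkdir", "-p",
                        "/tmp/debug-tools"], check=True)
        subprocess.run(["oc", "rsync", f"{staging_dir}/",
                        f"{pod}:/tmp/debug-tools/"], check=True)

    # Run the injected tool, pointing the dynamic loader at the copied
    # libraries.
    subprocess.run(["oc", "exec", pod, "--", "env",
                    "LD_LIBRARY_PATH=/tmp/debug-tools",
                    f"/tmp/debug-tools/{Path(local_tool).name}",
                    *tool_args], check=True)

# Hypothetical usage: run ltrace against PID 1 in pod "my-app-1-abcde".
# inject_and_run("my-app-1-abcde", "ltrace", "-p", "1")
```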

Continue reading “Installing debugging tools into a Red Hat OpenShift container with oc-inject”

How to use the VS Code Tekton Pipelines extension

The Tekton Project, which was announced in March after branching off from the Knative project, is creating excitement as a Kubernetes-native CI/CD pipeline tool.

Tekton offers the flexibility and agnosticism that Kubernetes is celebrated for and is positioned to become the first open standardized engine for executing pipelines. Although the project is still in the early stages of development, we couldn’t wait to start making it easier for developers to jump on the Tekton train. In this article, we’ll take a quick look at the Tekton Pipelines extension and how to use it.

Continue reading “How to use the VS Code Tekton Pipelines extension”

Introducing new Red Hat Enterprise Linux certification for software partner products

We are pleased to announce an improved software certification for Red Hat partner products built for Red Hat Enterprise Linux 8 (RHEL 8). This new RHEL software certification validates the use of common best practices, improves joint supportability, and promotes your product in the new Red Hat Ecosystem Catalog.

What is this certification?

This certification features a partner-executable test suite whose results are then reviewed by Red Hat. Your non-containerized software is certified when the test results show successful interoperability with Red Hat Enterprise Linux 8 in a secure, supportable manner using best practices. Once verified, you can promote your product(s) in the Red Hat Ecosystem Catalog.

In addition, Red Hat will grant partners a complimentary Limited membership to TSANet for collaborative customer case management, helping to improve the ongoing experience for their users.

Continue reading “Introducing new Red Hat Enterprise Linux certification for software partner products”

Architecting messaging solutions with Apache ActiveMQ Artemis

As an architect in the Red Hat Consulting team, I’ve helped countless customers with their integration challenges over the last six years. Recently, I had a few consulting gigs around Red Hat AMQ 7 Broker (the enterprise version of Apache ActiveMQ Artemis), where the requirements and outcomes were similar. That similarity made me think that the whole requirement identification process can be made more structured and repeatable.

This guide shares what I learned from those engagements, in an attempt to make the AMQ Broker architecting process, the resulting deployment topologies, and the expected effort more predictable, at least for the common use cases. As such, what follows will be useful for messaging and integration consultants and architects tasked with creating a messaging architecture for Apache Artemis, and for other messaging solutions in general. This article focuses on Apache Artemis. It doesn’t cover Apache Kafka, Strimzi, Apache Qpid, EnMasse, or the EAP messaging system, which are all components of our Red Hat AMQ 7 product offering.

Continue reading “Architecting messaging solutions with Apache ActiveMQ Artemis”

Debugging applications within Red Hat OpenShift containers

When debugging an application within a Red Hat OpenShift container, it is important to keep in mind that the Linux environment within the container is subject to various constraints. Because of these constraints, the full functionality of debugging tools might not be available:

  • An unprivileged OpenShift container is restricted from accessing kernel interfaces that are required by some low-level debugging tools.

Note: Almost all applications on OpenShift run in unprivileged containers. Unprivileged containers allow the use of standard debugging tools such as gdbserver or strace. Examples of debugging tools that cannot be used in unprivileged containers include perf, which requires access to the kernel’s perf_events interface, and SystemTap, which depends on the kernel’s module-loading functionality.

  • Debug information for system packages within OpenShift containers is not accessible. There is ongoing work (as part of the elfutils project) to develop a file server for debug information (debuginfod), which would make such access possible.
  • The set of packages in an OpenShift container is fixed ahead of time, when the corresponding container image is built. Once a container is running, no additional packages can be installed. A few debugging tools are preinstalled in commonly used container base images, but any other tools must be added when the container image build process is configured.

To successfully debug a containerized application, it is necessary to understand these constraints and how they determine which debugging tools can be used.
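
As a rough illustration of how these constraints play out in practice, the sketch below (assumed pod name, container name, and PID; not taken from the article) attaches strace, one of the tools that does work in an unprivileged container, to a running process via oc exec. The tool must already be present in the container image, since it cannot be installed afterward.

```python
# Minimal sketch: attach strace to a process inside an unprivileged pod
# through `oc exec`. The pod name, container name, and PID are
# placeholders, and strace must already be baked into the container
# image, since packages cannot be installed after instantiation.
import subprocess
from typing import Optional

def strace_in_pod(pod: str, pid: int, container: Optional[str] = None) -> None:
    cmd = ["oc", "exec", pod]
    if container:
        cmd += ["-c", container]  # select a container in a multi-container pod
    cmd += ["--", "strace", "-f", "-p", str(pid)]  # follow forks, attach to PID
    subprocess.run(cmd, check=True)  # streams syscalls until interrupted

# Example: trace the main application process (often PID 1) in a
# hypothetical pod.
# strace_in_pod("my-app-1-abcde", 1)
```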

Continue reading “Debugging applications within Red Hat OpenShift containers”

The new Tekton Pipelines extension for Visual Studio Code

The Tekton Project, which was announced in March after branching off from the Knative project, is creating excitement as a Kubernetes-native CI/CD pipeline tool.

It offers the flexibility and agnosticism that Kubernetes is celebrated for and is positioned to become the first open standardized engine for executing pipelines. Although the project is still in the early stages of development, we couldn’t wait to start making it easier for developers to jump on the Tekton train. Therefore, in this article, we’ll take a quick look at the Tekton Pipelines extension and how to use it.

Continue reading “The new Tekton Pipelines extension for Visual Studio Code”

Red Hat support for Node.js

For the past two years, Red Hat Middleware has provided a supported Node.js runtime on Red Hat OpenShift as part of Red Hat Runtimes. Our goal has been to provide rapid releases of the upstream Node.js core project, example applications to get developers up and running quickly, Node.js container images, integrations with other components of Red Hat’s cloud-native stack, and (of course) world-class service and support for customers. Earlier this year, the team behind Red Hat’s distribution and support of Node.js even received a “Devie” award from DeveloperWeek for this work, further acknowledging Red Hat’s role in supporting the community and ecosystem.

Red Hat Node.js experts at your fingertips

Red Hat collaborates in more ways than one with one of the fastest-growing runtimes used in business-critical applications on the cloud: by contributing to the community, serving on the Technical Steering Committee, and participating in and driving strategic initiatives that shape the future of Node.js. Combining this work with our Red Hat Enterprise Linux (RHEL) and OpenShift expertise, we can help you reach your goals of delivering and supporting business-critical applications on and off the cloud.

Continue reading “Red Hat support for Node.js”
