Leveraging OpenShift or Kubernetes for automated performance tests (part 3)

This is the third in a series of three articles based on a session I held at EMEA Red Hat Tech Exchange. In the first article, I presented the rationale and approach for leveraging Red Hat OpenShift or Kubernetes for automated performance testing, and I gave an overview of the setup. In the second article, we looked at building an observability stack. In this third part, we will see how the execution of the performance tests can be automated and the related metrics gathered.

An example of what is described in this article is available in my GitHub repository.

Continue reading “Leveraging OpenShift or Kubernetes for automated performance tests (part 3)”

Podman: Managing pods and containers in a local container runtime

People associate running pods with Kubernetes, and when they run containers in their local development runtimes, they rarely think about the role pods could play there. Most people coming from the Docker world of running single containers do not envision the concept of running pods at all. Yet there are several good reasons to use pods locally, beyond simply grouping your containers.

For example, suppose you have multiple containers that need to use a MariaDB container, but you would prefer not to expose that database on a routable network, whether on your bridge or beyond. Using a pod, you can bind the database to the pod's localhost address, and all the containers in the pod will be able to connect to it because they share the same network namespace.
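As a minimal sketch of that setup (the image name, password, and port are placeholder assumptions, not details from the article), the Podman CLI flow could look like this:

```bash
# Create a pod; every container added to it shares one network namespace.
podman pod create --name myapp

# Run MariaDB inside the pod, bound to the pod's localhost only,
# so it is not exposed on a routable bridge address.
podman run -d --pod myapp -e MYSQL_ROOT_PASSWORD=secret \
    mariadb --bind-address=127.0.0.1

# Any other container in the same pod reaches the database at 127.0.0.1:3306.
podman run -it --pod myapp mariadb mysql -h 127.0.0.1 -u root -p
```

Note that no ports are published; only containers that join the pod with --pod can reach the database.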

Continue reading “Podman: Managing pods and containers in a local container runtime”

Using Red Hat Application Migration Toolkit to see the impact of migrating to OpenJDK

Migrating from one software solution to another is a reality that all software developers need to plan for, whether the software is developed in-house or acquired from a vendor. Having a migration plan helps keep innovation moving at a continuous pace; never anticipating or planning for migration puts that innovation at risk. In today's ever-changing world of software, anyone who wants to benefit from the cloud has to keep innovating, which means maintaining a stack that evolves along with technological advancements.

It's quite common to be using a proprietary version of the JDK. In this article, we will look at how to use Red Hat Application Migration Toolkit to analyze your codebase and understand the impact of migrating to OpenJDK; the results will help you draw further conclusions and plan the migration.
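As a rough sketch of what such an analysis typically looks like from the command line (the binary name, flags, and target value are assumptions about the RHAMT CLI in general, not details taken from this excerpt):

```bash
# Analyze an application archive and generate an HTML migration report.
# Check your RHAMT version's documentation for the exact target name.
rhamt-cli --input /path/to/my-app.war \
          --output /path/to/report \
          --target openjdk
```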

Continue reading “Using Red Hat Application Migration Toolkit to see the impact of migrating to OpenJDK”

Speeding up Open vSwitch with partial hardware offloading

Open vSwitch (OVS) can use either the kernel datapath or the userspace datapath. There are interesting developments in the kernel datapath around hardware offloading through the TC Flower packet classifier, but this article focuses on the userspace datapath accelerated with the Data Plane Development Kit (DPDK) and its new feature, partial flow hardware offloading, which accelerates the virtual switch even more.

This article explains how the virtual switch worked before and how it works now, and why the new feature can potentially save resources while improving the packet processing rate.
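As an illustration of the kind of knob involved (this is the generic OVS setting for flow hardware offloading and is an assumption here, not a detail taken from the article):

```bash
# Ask OVS to offload (part of) flow processing to capable NICs.
# With the DPDK datapath this enables partial flow offloading on supported
# ports; ovs-vswitchd typically needs to be restarted for it to take effect.
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
```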

Continue reading “Speeding up Open vSwitch with partial hardware offloading”

Using a local NuGet server with Red Hat OpenShift

NuGet is the .NET package manager. By default, the .NET Core SDK will use packages from the nuget.org website.

In this article, you’ll learn how to deploy a NuGet server on Red Hat OpenShift Container Platform (RHOCP). We’ll use it as a caching server and see that it speeds up our builds. Before we get to that, we’ll explore some general NuGet concepts and see why it makes sense to use a local NuGet server.
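As an illustration (the server URL below is a placeholder, not the route used in the article), pointing a build at a local feed usually comes down to adding a package source to NuGet.Config:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- hypothetical route of the NuGet server running on OpenShift -->
    <add key="local-nuget" value="http://nuget.example.com/v3/index.json" />
    <!-- keep nuget.org available as an upstream source -->
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>
```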

Continue reading “Using a local NuGet server with Red Hat OpenShift”

Using the Yeoman Camel-Project generator to jump start a project

The Red Hat Fuse Tooling team recently broadened its focus from a single IDE (Eclipse) to a cross-platform, cross-IDE approach (Eclipse, VS Code, Che), starting several concerted efforts to provide tools that work across platforms and development environments. Supporting VS Code has become a priority, which led us to explore the Yeoman framework for project and file generation as a way to help developers jump start their Fuse/Camel development efforts.

This article describes the Yeoman framework and the new Yeoman-based Camel-Project generator the Fuse Tooling team created, and it shows how to install and run the generator.
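As a quick sketch of the usual Yeoman workflow (the package name generator-camel-project is an assumption here; check npm for the published name):

```bash
# Install the Yeoman CLI and the Camel project generator globally.
npm install -g yo generator-camel-project

# Scaffold a new Camel project; the generator prompts for project details.
yo camel-project
```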

Continue reading “Using the Yeoman Camel-Project generator to jump start a project”

Integration of container platform essentials (Part 5)

In Part 4 of this series, we looked at the details of how integration becomes the key to transforming your omnichannel customer experience.

The series started by laying out how I approached the use case: researching successful customer portfolio solutions as the basis for a generic architectural blueprint. Now it's time to cover more of the blueprint's details.

This article discusses the core elements of the blueprint, the container platform and microservices, which are crucial to the generic architectural overview.

Continue reading “Integration of container platform essentials (Part 5)”

Leveraging OpenShift or Kubernetes for automated performance tests (part 2)

This is the second in a series of three articles based on a session I held at EMEA Red Hat Tech Exchange. In the first article, I presented the rationale and approach for leveraging Red Hat OpenShift or Kubernetes for automated performance testing, and I gave an overview of the setup.

In this article, we will look at building an observability stack that, beyond the support it provides in production, can be leveraged during performance tests. This will provide insight into how the application performs under load.

An example of what is described in this article is available in my GitHub repository.

Continue reading “Leveraging OpenShift or Kubernetes for automated performance tests (part 2)”
