Containers

Leveraging OpenShift or Kubernetes for automated performance tests (part 3)

This is the third in a series of three articles based on a session I held at EMEA Red Hat Tech Exchange. In the first article, I presented the rationale and approach for leveraging Red Hat OpenShift or Kubernetes for automated performance testing, and I gave an overview of the setup. In the second article, we looked at building an observability stack. In this third part, we will see how to automate the execution of the performance tests and gather the related metrics.

An example of what is described in this article is available in my GitHub repository.

Continue reading “Leveraging OpenShift or Kubernetes for automated performance tests (part 3)”

Podman: Managing pods and containers in a local container runtime

People associate running pods with Kubernetes, and when they run containers in their development runtimes, they do not even think about the role pods could play, even in a local runtime. Most people coming from the Docker world of running single containers do not envision the concept of running pods. Beyond using pods to naturally group your containers, there are several good reasons to consider using them locally.

For example, suppose you have multiple containers that need to use a MariaDB container, but you would prefer not to bind that database to a routable network, whether on your bridge network or beyond. Using a pod, you can bind the database to the pod’s localhost address, and all containers in that pod will be able to connect to it because of the shared network namespace.
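
To make this concrete, here is a minimal sketch that drives the podman CLI from a small TypeScript script. The pod name, images, environment variables, and ports are illustrative assumptions, not taken from the article.

```typescript
// Illustrative sketch (not code from the article): create a pod, run MariaDB
// inside it without publishing its port, and let an app container reach it
// over the pod's shared localhost.
import { execSync } from "node:child_process";

const run = (cmd: string) => execSync(cmd, { stdio: "inherit" });

// Only the pod publishes a port for the application; 3306 is never exposed
// outside the pod's network namespace.
run("podman pod create --name app-pod -p 8080:8080");

// MariaDB is reachable at 127.0.0.1:3306 from the other containers in the pod,
// but not from the host's bridge network or beyond.
run(
  "podman run -d --pod app-pod --name db " +
    "-e MARIADB_ROOT_PASSWORD=changeme docker.io/library/mariadb:latest"
);

// The application container shares the pod's network namespace, so it simply
// connects to the database on localhost.
run(
  "podman run -d --pod app-pod --name web " +
    "-e DB_HOST=127.0.0.1 -e DB_PORT=3306 quay.io/example/my-web-app:latest"
);
```

In Kubernetes terms, this mirrors what a pod gives you on a cluster: containers in the same pod talk to each other over localhost, while only the ports published on the pod are reachable from outside.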

Continue reading “Podman: Managing pods and containers in a local container runtime”

Integration of container platform essentials (Part 5)

In Part 4 of this series, we looked into details that determine how your integration becomes the key to transforming your omnichannel customer experience.

The series started by laying out how I’ve approached the use case: researching successful customer portfolio solutions as the basis for a generic architectural blueprint. Now it’s time to cover more blueprint details.

This article discusses the core elements in the blueprint (container platform and microservices) that are crucial to the generic architectural overview.

Continue reading “Integration of container platform essentials (Part 5)”

Leveraging OpenShift or Kubernetes for automated performance tests (part 2)

This is the second in a series of three articles based on a session I held at EMEA Red Hat Tech Exchange. In the first article, I presented the rationale and approach for leveraging Red Hat OpenShift or Kubernetes for automated performance testing, and I gave an overview of the setup.

In this article, we will look at building an observability stack that, beyond the support it provides in production, can be leveraged during performance tests. This will provide insight into how the application performs under load.

An example of what is described in this article is available in my GitHub repository.

Continue reading “Leveraging OpenShift or Kubernetes for automated performance tests (part 2)”

Monitoring Node.js Applications on OpenShift with Prometheus

Observability is Key

One of the great things about Node.js is how well it performs in a container. Its fast startup time and relatively small size make it a favorite for microservice applications on OpenShift. But with this shift to containerized deployments comes some complexity, and as a result, monitoring Node.js applications can be difficult. At times it seems as though the performance and behavior of our applications become opaque to us. So what can we do to find and address issues in our services before they become a problem? We need to enhance observability by monitoring the state of our services.

Instrumentation

Instrumentation of our applications is one way to increase observability. Therefore, in this article, I will demonstrate the instrumentation of a Node.js application using Prometheus.
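
The full article walks through the details; as a rough sketch of what such instrumentation can look like with the prom-client library (the metric name, labels, port, and /metrics endpoint below are illustrative assumptions, not taken from the article):

```typescript
// Illustrative sketch (not code from the article): expose default Node.js
// process metrics plus a custom request counter using prom-client.
import http from "node:http";
import client from "prom-client";

// Default metrics: event loop lag, heap usage, GC timing, and so on.
client.collectDefaultMetrics();

// A custom counter, labeled by request path and response status.
const httpRequests = new client.Counter({
  name: "http_requests_total",
  help: "Total number of HTTP requests handled",
  labelNames: ["path", "status"],
});

const server = http.createServer(async (req, res) => {
  if (req.url === "/metrics") {
    // Prometheus scrapes this endpoint.
    res.setHeader("Content-Type", client.register.contentType);
    res.end(await client.register.metrics());
    return;
  }
  httpRequests.inc({ path: req.url ?? "/", status: 200 });
  res.end("hello\n");
});

server.listen(8080);
```

A Prometheus server, or the OpenShift monitoring stack, can then scrape the /metrics endpoint on a schedule, and the collected time series can be queried and graphed.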

Continue reading “Monitoring Node.js Applications on OpenShift with Prometheus”

Integration of API management details (Part 4)

In Part 3 of this series, we started diving into the details that determine how your integration becomes the key to transforming your customer experience.

The series started by laying out how I’ve approached the use case: researching successful customer portfolio solutions as the basis for a generic architectural blueprint. Now it’s time to cover various blueprint details.

This article takes you deeper into specific elements (API management and reverse proxy) of the generic architectural overview.

Continue reading “Integration of API management details (Part 4)”

Building Java 11 and Gradle containers for OpenShift

How do YOU get your Java apps running in a cloud?

First you grab a cloud from the sky by, for example, (1) getting started with a free account on Red Hat OpenShift Online, (2) running it locally on your laptop using Red Hat Container Development Kit (CDK) or upstream Minishift on Windows, macOS, or Linux, (3) using oc cluster up (Linux only), or (4) obtaining a login from someone running Red Hat OpenShift on a public or on-premises cloud. Then you download the oc CLI client tool (probably for Windows) and put it on your PATH. Then you select Copy Login Command from the menu under your name in the upper-right corner of the OpenShift Console’s UI and use, for example, the oc status command.

Great. Now you just need to containerize your Java app. You could, of course, start to write your own Dockerfile, pick an appropriate container base image (and discuss Red Hat Enterprise Linux versus CentOS versus Fedora versus Ubuntu versus Debian versus Alpine with your co-workers; and, especially if you’re in an enterprise environment, figure out how to have that supported in production), figure out appropriate JVM startup parameters for a container, add monitoring, and so on.

But perhaps what you really wanted to do today is…well, just get your Java app running in a cloud!

Read on to find an easier way.

Continue reading “Building Java 11 and Gradle containers for OpenShift”
