test automation

Automating tests and metrics gathering for Kubernetes and OpenShift (part 3)

This is the third of a series of three articles based on a session I held at Red Hat Tech Exchange EMEA. In the first article, I presented the rationale and approach for leveraging Red Hat OpenShift or Kubernetes for automated performance testing, and I gave an overview of the setup. In the second article, we looked at building an observability stack. In this third part, we will see how the execution of the performance tests can be automated and related metrics gathered.

An example of what is described in this article is available in my GitHub repository.

Continue reading “Automating tests and metrics gathering for Kubernetes and OpenShift (part 3)”

Building an observability stack for automated performance tests on Kubernetes and OpenShift (part 2)

This is the second of a series of three articles based on a session I held at Red Hat Tech Exchange in EMEA. In the first article, I presented the rationale and approach for leveraging Red Hat OpenShift or Kubernetes for automated performance testing, and I gave an overview of the setup.

In this article, we will look at building an observability stack. In production, the observability stack can help verify that the system is working correctly and performing well. It can also be leveraged during performance tests to provide insight into how the application performs under load.
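
To give a feel for the kind of insight the stack can provide under load, here is a minimal sketch that queries Prometheus (one of the components used in this series) for a latency percentile while a load test is running. The Prometheus URL and the metric name `http_request_duration_seconds_bucket` are illustrative assumptions, not part of the article's actual setup; adapt them to whatever your application exposes.

```python
# Minimal sketch: pull a latency percentile from Prometheus during a load test.
# The Prometheus URL and the metric name are assumptions for illustration only.
import requests

PROMETHEUS_URL = "http://prometheus.example.com:9090"  # hypothetical endpoint

# 95th-percentile request latency over the last 5 minutes, computed from a
# Prometheus histogram assumed to be exposed by the application under test.
QUERY = (
    "histogram_quantile(0.95, "
    "sum(rate(http_request_duration_seconds_bucket[5m])) by (le))"
)

def p95_latency_seconds():
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=10
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    # An empty result usually means the metric is not (yet) being scraped.
    return float(result[0]["value"][1]) if result else None

if __name__ == "__main__":
    print(f"p95 latency: {p95_latency_seconds()} s")
```

A query like this can be run periodically during a test to correlate latency with the load being injected.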

An example of what is described in this article is available in my GitHub repository.

Continue reading “Building an observability stack for automated performance tests on Kubernetes and OpenShift (part 2)”

Leveraging Kubernetes and OpenShift for automated performance tests (part 1)

This is the first in a series of three articles based on a session I held at Red Hat Tech Exchange EMEA. In this first article, I present the rationale and approach for leveraging Red Hat OpenShift or Kubernetes for automated performance testing, give an overview of the setup, and discuss points that are worth considering when executing and analyzing performance tests. I will also say a few words about performance tuning.

In the second article, we will look at building an observability stack, which—beyond the support it provides in production—can be leveraged during performance tests. Open source projects such as Prometheus, Jaeger, Elasticsearch, and Grafana will be used for that purpose. The third article will present the details of building an environment for performance testing and automating the execution with JMeter and Jenkins.

Continue reading “Leveraging Kubernetes and OpenShift for automated performance tests (part 1)”

Container Testing in OpenShift with Meta Test Family

We should not ship any container without proper testing: we need to guarantee that the service running in a container works properly. Meta Test Family (MTF) was designed for this very purpose.

Containers can be tested as “standalone” containers and as “orchestrated” containers. This article describes how to test containers in the Red Hat OpenShift environment and what actions are needed to do so.

MTF is a minimalistic library built on the existing Avocado and behave testing frameworks that helps developers enable test automation quickly and meet their testing requirements. MTF adds basic support and abstraction for testing various module artifact types: RPM-based, Docker images, and more. For detailed information about the framework and how to use it, check out the MTF documentation.
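
As a rough idea of what such a test looks like, here is a sketch modeled on the examples in the MTF documentation. The class and method names used below (`module_framework.AvocadoTest`, `start()`, `run()`) should be verified against the current MTF docs before use; they are taken from documented examples, not guaranteed here.

```python
# Sketch of an MTF sanity test, modeled on the MTF documentation examples.
# Verify module_framework.AvocadoTest, start(), and run() against the docs.
from moduleframework import module_framework


class SanityCheck(module_framework.AvocadoTest):
    """
    :avocado: enable
    """

    def test_service_responds(self):
        # Start the module artifact (e.g., a container) under test.
        self.start()
        # Run a command inside the tested artifact; a non-zero exit
        # code fails the test.
        self.run("ls /")
```

A test like this is typically executed with the Avocado test runner, with MTF's configuration selecting which artifact type (RPM, Docker image, and so on) is actually exercised.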

Continue reading “Container Testing in OpenShift with Meta Test Family”
