Red Hat OpenShift Container Platform Load Testing Tips

A large bank in the Association of Southeast Asian Nations (ASEAN) plans to develop a new mobile back-end application using microservices and container technology. They expect the platform to support 10,000,000 customers at 5,000 transactions per second (TPS). They decided to use Red Hat OpenShift Container Platform (OCP) as the runtime platform for this application. To ensure that the platform can meet their throughput requirements and accommodate future growth, they performed internal load testing against their own infrastructure with mock-up services. This article shares the lessons learned from load testing Red Hat OpenShift Container Platform.

Continue reading “Red Hat OpenShift Container Platform Load Testing Tips”


Towards The Ruby 3×3 Performance Goal

This blog post is about my work to improve CRuby performance by introducing new virtual machine instructions and a JIT. It is loosely based on my presentation at RubyKaigi 2017 in Hiroshima, Japan.

As many in the Ruby community know, the author of Ruby, Yukihiro Matsumoto (Matz), set a very ambitious performance goal for CRuby version 3: it should be 3 times faster than version 2.

Koichi Sasada did a great job improving the performance of CRuby version 2 by about 3 times over version 1 by introducing a bytecode virtual machine (VM). So I guess it is symbolic to set the same goal for CRuby version 3.

Continue reading “Towards The Ruby 3×3 Performance Goal”


Open vSwitch-DPDK: How Much Hugepage Memory?

Introduction

To maximize performance, the Open vSwitch DPDK datapath pre-allocates hugepage memory. As a user, you are responsible for telling Open vSwitch how much hugepage memory to pre-allocate. The question of exactly what value to use often arises, and the answer is: it depends.

There is no simple answer, as it depends on things like the MTU size of the ports, the MTU differences between ports, and whether those ports are on the same NUMA node. To complicate things a bit more, there are multiple overheads, and alignment and rounding need to be accounted for at various places in OVS-DPDK. Everything clear? OK, you can stop reading then! However, if not, read on.
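To get a feel for the arithmetic involved, here is a rough, back-of-the-envelope sketch in Python. It is not the exact OVS-DPDK accounting: the per-mbuf overhead, mempool size, and hugepage size below are assumed values, chosen only to illustrate how MTU, rounding, and NUMA placement feed into the total.

```python
# Rough, illustrative estimate of hugepage memory for OVS-DPDK mempools.
# The constants below (per-mbuf overhead, mbufs per mempool, hugepage size)
# are assumptions for the sake of the example, not the real OVS-DPDK values.

HUGEPAGE_SIZE = 1024 * 1024 * 1024   # assuming 1 GB hugepages
MBUF_OVERHEAD = 832                  # assumed per-mbuf metadata + headroom
MBUFS_PER_POOL = 256 * 1024          # assumed mbufs per mempool


def round_up(value, alignment):
    """Round value up to the nearest multiple of alignment."""
    return ((value + alignment - 1) // alignment) * alignment


def hugepage_estimate(mtus_on_numa_node):
    """Estimate hugepage memory (bytes) for the ports on one NUMA node.

    Ports with the same (rounded) MTU on the same NUMA node are assumed to
    share a mempool, so each distinct MTU class is counted only once.
    """
    total = 0
    mtu_classes = {round_up(mtu, 1024) for mtu in mtus_on_numa_node}
    for mtu in mtu_classes:
        mbuf_size = mtu + MBUF_OVERHEAD
        total += MBUFS_PER_POOL * mbuf_size
    # Hugepage memory can only be reserved in whole hugepages.
    return round_up(total, HUGEPAGE_SIZE)


if __name__ == "__main__":
    # Two ports with MTU 1500 and one with MTU 9000 on the same NUMA node.
    estimate = hugepage_estimate([1500, 1500, 9000])
    print(f"~{estimate / (1024 ** 3):.0f} GB of hugepage memory")
```

The exact overheads, rounding rules, and mempool sizing differ between OVS-DPDK versions, which is precisely why the honest answer is "it depends".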

Continue reading “Open vSwitch-DPDK: How Much Hugepage Memory?”


Create a scalable REST API with Falcon and RHSCL

APIs are critical to automation, integration, and developing cloud-native applications, and it’s vital that they can scale to meet the demands of your user base. In this article, we’ll create a database-backed REST API based on the Python Falcon framework using Red Hat Software Collections (RHSCL), test how it performs, and scale it out in response to a growing user base.
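To give a feel for the framework before diving into the full article, here is a minimal Falcon sketch. It is not the article’s database-backed service: the HealthResource class and /health route are hypothetical placeholders, and the example assumes Falcon 1.x/2.x, where the WSGI entry point is falcon.API() (newer releases use falcon.App()).

```python
# Minimal Falcon resource sketch (no database layer).
# Install the framework with: pip install falcon
import json

import falcon


class HealthResource:
    """A trivial endpoint you can hit while load testing the service."""

    def on_get(self, req, resp):
        resp.status = falcon.HTTP_200
        resp.body = json.dumps({"status": "ok"})


# falcon.API() is the WSGI application in Falcon 1.x/2.x;
# newer releases use falcon.App() instead.
application = falcon.API()
application.add_route('/health', HealthResource())

# Run under any WSGI server, for example:
#   gunicorn myapp:application
```

Because the application is plain WSGI, scaling out is a matter of running more worker processes or more replicas behind a load balancer, which is the direction the article takes.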

Continue reading “Create a scalable REST API with Falcon and RHSCL”


What are BPF Maps and how are they used in stapbpf

Compared with SystemTap’s default backend, one of stapbpf’s most distinguishing features is the absence of a kernel module runtime; most of the runtime work is instead handled by the BPF machinery inside the kernel. This means BPF needs to provide a way for state to be maintained across multiple invocations of BPF programs, and for userspace programs to communicate with BPF programs. Both needs are met by BPF maps. In this blog post, I will introduce BPF maps and explain their role in stapbpf’s implementation.

What are BPF maps?

BPF maps are essentially generic data structures made up of key/value pairs. They are created from userspace using the BPF system call, which returns a file descriptor for the map. The key size and value size are specified by the user, allowing key/value pairs of arbitrary types to be stored. Once a map is created, its elements can be read and updated from userspace through the same system call. Maps are automatically deallocated once the user process that created them terminates (although it is possible to make a map persist beyond the life of that process). Stapbpf uses the following function to create new BPF maps.
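As a rough illustration of that userspace interface (not stapbpf’s actual code), the following Python sketch creates a hash map by invoking the bpf(2) system call directly through ctypes. The command and map-type constants mirror <linux/bpf.h>, the syscall number assumes x86_64, and running it requires root (or CAP_BPF on newer kernels).

```python
# Illustrative sketch: create a BPF hash map with the bpf(2) system call.
# Constants come from <linux/bpf.h>; the syscall number is x86_64-specific.
import ctypes
import os

SYS_bpf = 321                 # __NR_bpf on x86_64
BPF_MAP_CREATE = 0            # bpf(2) command number
BPF_MAP_TYPE_HASH = 1         # hash map type


class BpfMapCreateAttr(ctypes.Structure):
    """Leading fields of union bpf_attr used by BPF_MAP_CREATE."""
    _fields_ = [
        ("map_type", ctypes.c_uint32),
        ("key_size", ctypes.c_uint32),
        ("value_size", ctypes.c_uint32),
        ("max_entries", ctypes.c_uint32),
        ("map_flags", ctypes.c_uint32),
    ]


libc = ctypes.CDLL(None, use_errno=True)

# A hash map with 4-byte keys and 8-byte values, e.g. a simple counter table.
attr = BpfMapCreateAttr(map_type=BPF_MAP_TYPE_HASH,
                        key_size=4, value_size=8,
                        max_entries=1024, map_flags=0)

map_fd = libc.syscall(SYS_bpf, BPF_MAP_CREATE,
                      ctypes.byref(attr), ctypes.sizeof(attr))
if map_fd < 0:
    err = ctypes.get_errno()
    raise OSError(err, os.strerror(err))

print("created BPF map, fd =", map_fd)
# The map lives as long as this file descriptor (and the process) does,
# unless it is pinned to the BPF filesystem to outlive its creator.
```

Element access from userspace works the same way, using the BPF_MAP_LOOKUP_ELEM and BPF_MAP_UPDATE_ELEM commands of the same system call, and pinning a map under /sys/fs/bpf is what lets it persist beyond the creating process.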

Continue reading “What are BPF Maps and how are they used in stapbpf”


Introducing stapbpf – SystemTap’s new BPF backend

SystemTap 3.2 includes an early prototype of SystemTap’s new BPF backend (stapbpf). It represents a first step towards leveraging the powerful new tracing and performance-analysis capabilities recently added to the Linux kernel. In this post, I will compare the translation process of stapbpf with that of the default backend (stap) and highlight some differences in functionality between the two backends.

Continue reading “Introducing stapbpf – SystemTap’s new BPF backend”
