Red Hat OpenShift Container Platform Load Testing Tips

A large bank in the Association of Southeast Asian Nations (ASEAN) plans to develop a new mobile back-end application using microservices and container technology. They expect the platform to support 10,000,000 customers at 5,000 transactions per second (TPS), and they chose Red Hat OpenShift Container Platform (OCP) as the runtime platform for the application. To verify that the platform could meet their throughput requirements and future growth, they performed internal load testing on their own infrastructure with mock-up services. This article shares the lessons learned from load testing Red Hat OpenShift Container Platform.

Continue reading “Red Hat OpenShift Container Platform Load Testing Tips”


Create a scalable REST API with Falcon and RHSCL

APIs are critical to automation, integration, and developing cloud-native applications, and it’s vital that they can scale to meet the demands of your user base. In this article, we’ll create a database-backed REST API based on the Python Falcon framework using Red Hat Software Collections (RHSCL), test how it performs, and scale it out in response to a growing user base.
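To give a flavor of what the Falcon framework looks like before diving into the full article, here is a minimal, hedged sketch of a Falcon resource and route; the resource name and route path are illustrative assumptions, not taken from the article:

```python
# Minimal Falcon sketch (illustrative; names and routes are assumptions, not from the article).
# Serve with any WSGI server, e.g.: gunicorn app:api
import falcon


class HealthResource:
    """Responds to GET /health with a simple JSON status payload."""

    def on_get(self, req, resp):
        resp.media = {"status": "ok"}   # Falcon serializes this dict to JSON
        resp.status = falcon.HTTP_200


api = falcon.API()                      # falcon.App() in Falcon 3.x and later
api.add_route("/health", HealthResource())
```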

Continue reading “Create a scalable REST API with Falcon and RHSCL”


What are BPF Maps and how are they used in stapbpf

Compared to SystemTap’s default backend, one of stapbpf’s most distinguishing features is the absence of a kernel-module runtime. Instead, the BPF machinery inside the kernel handles most of its runtime needs. This makes it important for BPF to provide a way for state to be maintained across multiple invocations of BPF programs, and for userspace programs to communicate with BPF programs. This is accomplished by BPF maps. In this blog post, I will introduce BPF maps and explain their role in stapbpf’s implementation.

What are BPF maps?

BPF maps are essentially generic data structures consisting of key/value pairs. They are created from userspace using the bpf() system call, which returns a file descriptor for the map. The key size and value size are specified by the user, allowing key/value pairs of arbitrary types to be stored. Once a map is created, its elements can also be accessed from userspace through the bpf() system call. Maps are automatically deallocated once the user process that created the map terminates (although it is possible to make the map persist beyond the lifetime of that process). Stapbpf uses the following function to create new BPF maps.
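(Stapbpf’s own map-creation function appears in the full post.) As a rough illustration of the underlying interface that any such helper wraps, here is a hedged Python sketch that invokes the bpf(2) system call directly via ctypes to create a hash map; the syscall number and constants are Linux/x86_64 assumptions and this is not stapbpf’s actual code:

```python
# Illustrative sketch only: creates a BPF hash map by calling bpf(2) directly.
# Assumptions: Linux on x86_64 (syscall number 321) and sufficient privileges
# (root or CAP_BPF/CAP_SYS_ADMIN). This is NOT stapbpf's actual implementation.
import ctypes
import ctypes.util
import struct

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

NR_BPF = 321            # __NR_bpf on x86_64; differs on other architectures
BPF_MAP_CREATE = 0      # bpf(2) command number for creating a map
BPF_MAP_TYPE_HASH = 1   # generic hash-map type

# Leading fields of union bpf_attr for BPF_MAP_CREATE:
# map_type, key_size, value_size, max_entries (all __u32)
attr = struct.pack("IIII", BPF_MAP_TYPE_HASH, 4, 8, 1024)
buf = ctypes.create_string_buffer(attr, len(attr))

map_fd = libc.syscall(NR_BPF, BPF_MAP_CREATE, buf, len(attr))
if map_fd < 0:
    raise OSError(ctypes.get_errno(), "bpf(BPF_MAP_CREATE) failed")
print("created BPF map with fd", map_fd)
```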

Continue reading “What are BPF Maps and how are they used in stapbpf”


Configuring mKahaDB persistence storage for ActiveMQ

In this post, I want to show how to configure mKahaDB persistence storage on ActiveMQ for easier management and reduced disk usage.

The default KahaDB persistence adapter works well when all the destinations (queues/topics) managed by the broker have similar performance characteristics. However, that is rarely the case in an enterprise solution where several third parties are involved.

Typically there are multiple queues or topics, with different consumers or listeners attached to them, and some consumers are slower than others. Slow consumers cause the message store’s disk usage to grow rapidly, and because all destinations share a single KahaDB store, they can drag down the performance of every destination in that store.
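To give a sense of what the configuration discussed in the full post looks like, here is a hedged sketch of an mKahaDB persistence adapter in the broker’s XML configuration that splits slow destinations into their own journal; the queue wildcard, directory, and journal size are illustrative assumptions, not the post’s exact settings:

```xml
<!-- Illustrative sketch (not the post's exact configuration): route slow queues
     to a dedicated KahaDB instance so their journal growth does not hold back
     the cleanup of journals used by other destinations. -->
<persistenceAdapter>
  <mKahaDB directory="${activemq.data}/mkahadb">
    <filteredPersistenceAdapters>
      <!-- Destinations matching this wildcard get their own store -->
      <filteredKahaDB queue="SLOW.>">
        <persistenceAdapter>
          <kahaDB journalMaxFileLength="32mb"/>
        </persistenceAdapter>
      </filteredKahaDB>
      <!-- Catch-all store for every other destination -->
      <filteredKahaDB>
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
```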

Continue reading “Configuring mKahaDB persistence storage for ActiveMQ”


Using New Relic in Red Hat Mobile Node.js Applications

Introduction

New Relic is an application-monitoring platform that provides in-depth analytics and analysis for applications, regardless of the environment in which they are deployed, or as New Relic puts it:

“Gain end-to-end visibility across your customer experience, application performance, and dynamic infrastructure with the New Relic Digital Intelligence Platform.” – New Relic

You might ask why there’s a need for New Relic’s monitoring capabilities when Red Hat Mobile Application Platform (RHMAP) and OpenShift Container Platform both offer insights into the CPU, disk, memory, and general resource utilization of your server-side applications. While these generic resource reports are valuable, they might not offer the detail required to debug a specific issue. Since New Relic is built as an analytics platform from the ground up, it can provide unique insights into the specific runtime of your applications. For example, the JavaScript code deployed in Node.js applications runs on the V8 JavaScript engine, whose life-cycle can have a significant impact on the performance of your application depending on how you’ve written it. New Relic’s Node.js module provides a real-time view of V8 engine performance and how it might be affecting your production application. Using this data, you can refine your application code to reduce memory usage, which in turn can free CPU resources thanks to less frequent garbage collections. Neat!

Continue reading “Using New Relic in Red Hat Mobile Node.js Applications”
