Create a scalable REST API with Falcon and RHSCL

APIs are critical to automation, integration, and cloud-native application development, and it’s vital that they can scale to meet the demands of your user base. In this article, we’ll create a database-backed REST API based on the Python Falcon framework using Red Hat Software Collections (RHSCL), test how it performs, and scale it out in response to a growing user base.
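
To give a flavor of what we’ll build, here is a minimal sketch of a Falcon resource. The resource name, route, and hard-coded data are placeholders for illustration, not the article’s actual code, which is backed by a database.

```python
# Minimal Falcon API sketch; resource, route, and data are illustrative only.
import falcon


class UserResource:
    def on_get(self, req, resp):
        # A real deployment would fetch this from the database layer.
        resp.media = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
        resp.status = falcon.HTTP_200


# falcon.App() in Falcon 3.x; older releases expose the same thing as falcon.API().
app = falcon.App()
app.add_route("/users", UserResource())

# Serve with a WSGI server, e.g.: gunicorn myapp:app
```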

Continue reading “Create a scalable REST API with Falcon and RHSCL”

What are BPF Maps and how are they used in stapbpf

Compared to SystemTap’s default backend, one of stapbpf’s most distinguishing features is the absence of a kernel-module runtime. Instead, the BPF machinery inside the kernel handles most of its runtime. This means BPF needs to provide a way for state to be maintained across multiple invocations of BPF programs, and for userspace programs to communicate with BPF programs. BPF maps accomplish this. In this blog post, I will introduce BPF maps and explain their role in stapbpf’s implementation.
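
To make the idea concrete, here is a small sketch of a BPF map in action, written against the BCC Python bindings rather than stapbpf itself; the map name, probe, and timings are illustrative. The in-kernel program updates the map on every invocation, and the userspace side reads the accumulated state from the same map.

```python
# Sketch of a BPF map shared between a kernel-side BPF program and userspace.
# Uses the BCC Python bindings (not stapbpf); names here are illustrative.
import time
from bcc import BPF

prog = r"""
BPF_HASH(counts, u32, u64);              // BPF map: key = PID, value = hit count

int trace_clone(void *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    u64 zero = 0, *val;
    val = counts.lookup_or_try_init(&pid, &zero);
    if (val)
        (*val)++;                          // state persists across invocations
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="trace_clone")

print("Tracing clone() for 5 seconds...")
time.sleep(5)

# Userspace reads the same map the in-kernel program has been updating.
for pid, count in b["counts"].items():
    print(f"pid {pid.value}: {count.value} clone() calls")
```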

Continue reading “What are BPF Maps and how are they used in stapbpf”

Configuring mKahaDB persistence storage for ActiveMQ

In this post, I want to show how to configure mKahaDB persistence storage on ActiveMQ for better management and reduced disk usage.

The default KahaDB persistence adapter works well when all the destinations (queues/topics) managed by the broker have similar performance characteristics. However, that is rarely the case in an enterprise solution where several third parties are involved.

Typically there are multiple queues or topics, with different consumers or listeners attached to them, and some consumers are slower than others. Slow consumers cause the message store’s disk usage to grow rapidly, and because a single KahaDB holds every destination, all of the stored destinations can end up performing slowly. Splitting the store with mKahaDB, as sketched below, isolates the problematic destinations in their own journal.
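
For illustration, a broker configuration along these lines splits the store so that slow destinations get their own journal; the directory, queue filter, and journal size below are placeholders rather than the exact configuration discussed in the post.

```xml
<!-- Sketch of an mKahaDB setup in activemq.xml; paths and filters are illustrative. -->
<persistenceAdapter>
  <mKahaDB directory="${activemq.data}/kahadb">
    <filteredPersistenceAdapters>
      <!-- Queues with slow consumers get their own journal. -->
      <filteredKahaDB queue="SLOW.>">
        <persistenceAdapter>
          <kahaDB journalMaxFileLength="32mb"/>
        </persistenceAdapter>
      </filteredKahaDB>
      <!-- Every other destination shares the default store. -->
      <filteredKahaDB>
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
```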

Continue reading “Configuring mKahaDB persistence storage for ActiveMQ”

Using New Relic in Red Hat Mobile Node.js Applications

Introduction

New Relic is an application-monitoring platform that provides in-depth analytics and analysis for applications, regardless of the environment in which they are deployed, or as New Relic puts it:

“Gain end-to-end visibility across your customer experience, application performance, and dynamic infrastructure with the New Relic Digital Intelligence Platform.” – New Relic

You might ask why there’s a need for New Relic’s monitoring capabilities when Red Hat Mobile Application Platform (RHMAP) and OpenShift Container Platform both offer insights into the CPU, disk, memory, and general resource utilization of your server-side applications. While these generic resource reports are valuable, they might not offer the detail required to debug a specific issue. Since New Relic is built as an analytics platform from the ground up, it can provide unique insights into the specific runtime of your applications. For example, the JavaScript code deployed in Node.js applications runs on the V8 JavaScript engine, whose life-cycle can have a significant impact on the performance of your application depending on how you’ve written it. New Relic’s Node.js module provides a real-time view of V8 engine performance and how it might be affecting your production application. Using this data, you can refine your application code to reduce memory usage, which in turn can free CPU resources thanks to less frequent garbage collection. Neat!

Continue reading “Using New Relic in Red Hat Mobile Node.js Applications”

Profiling NodeJS applications with Linux Performance Tools

Using Linux Perf Tools

The Performance Analysis Tool for Linux (perf) is a powerful tool for profiling applications. It works by using a mix of hardware counters (which are fast) and software counters, all provided by the Linux Performance Counter (LPC) subsystem, which takes charge of the complex task of wrapping the CPU counters for the different types of CPUs. This gives you a very efficient way to get information about running processes, either through its C API or through a convenient command, in this case perf.

This command gives you access to a great variety of system-level and process-level events, but in this entry I will use it to investigate CPU-bound issues.
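
For a sense of the workflow, a typical session might look like the following; the application name and sampling options are examples, not the exact commands used later in the article.

```sh
# Illustrative perf workflow for a Node.js app; names and options are examples.
# --perf-basic-prof makes V8 write /tmp/perf-<pid>.map so perf can resolve
# JIT-compiled JavaScript symbols.
node --perf-basic-prof app.js &

# Sample CPU stacks of that process at 99 Hz for 30 seconds.
perf record -F 99 -g -p $! -- sleep 30

# Inspect where CPU time was spent.
perf report
```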

Continue reading “Profiling NodeJS applications with Linux Performance Tools”
