Performance

Natively compile Java code for better startup time

Microservices and serverless architectures are being implemented, or are part of the roadmap, in most modern solution stacks. Given that Java is still the dominant language for business applications, reducing Java startup time is becoming more important. Serverless architectures are one area that demands faster startup, and applications hosted on container platforms such as Red Hat OpenShift can benefit from both faster Java startup and a smaller Docker image size.

Let’s see how GraalVM can benefit Java-based programs in terms of speed and size. Of course, these gains are not limited to containers or serverless architectures and can be applied to a variety of use cases.

Continue reading “Natively compile Java code for better startup time”

Improving .NET Core Kestrel performance using a Linux-specific transport

ASP.NET Core is the web framework for .NET Core, and performance is one of its key features: the stack is heavily optimized and continuously benchmarked. Kestrel is its HTTP server. In this blog post, we’ll replace Kestrel’s networking layer with a Linux-specific implementation and benchmark it against the out-of-the-box default. The TechEmpower web framework benchmarks are used to compare network-layer performance.

Continue reading “Improving .NET Core Kestrel performance using a Linux-specific transport”

Scaling AMQ 7 Brokers with AMQ Interconnect

Red Hat JBoss AMQ Interconnect provides flexible routing of messages between AMQP-enabled endpoints, including clients, brokers, and standalone services. With a single connection to a network of AMQ Interconnect routers, a client can exchange messages with any other endpoint connected to the network.

AMQ Interconnect can create various topologies to manage a high volume of traffic or define an elastic network in front of AMQ 7 brokers. This article shows a sample AMQ Interconnect topology for scaling AMQ 7 brokers easily.
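
As a minimal sketch of that single-connection model (not code from the article), the snippet below uses the Qpid Proton Python client to send a message through a router; the router host, port, and the address name "examples" are placeholder assumptions.

```python
# Minimal sketch: send one message through an AMQ Interconnect router
# using the Qpid Proton Python client (pip install python-qpid-proton).
# The router host/port and the address "examples" are placeholder assumptions.
from proton import Message
from proton.utils import BlockingConnection

conn = BlockingConnection("router-host:5672")  # one connection to the router network
sender = conn.create_sender("examples")        # an address; the routers pick the path
sender.send(Message(body=u"hello via the router network"))
conn.close()
```

Because the client addresses the network rather than a specific broker, the same code keeps working as routers and brokers are added behind it.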

Continue reading “Scaling AMQ 7 Brokers with AMQ Interconnect”

SystemTap’s BPF Backend Introduces Tracepoint Support

This blog is the third in a series on stapbpf, SystemTap’s BPF (Berkeley Packet Filter) backend. In the first post, Introducing stapbpf – SystemTap’s new BPF backend, I explain what BPF is and what features it brings to SystemTap. In the second post, What are BPF Maps and how are they used in stapbpf, I examine BPF maps, one of BPF’s key components, and their role in stapbpf’s implementation.

In this post, I introduce stapbpf’s recently added support for tracepoint probes. Tracepoints are statically inserted hooks in the Linux kernel onto which user-defined probes can be attached. They can be found in a variety of locations throughout the kernel, including performance-critical subsystems such as the scheduler, so tracepoint probes must terminate quickly in order to avoid significant performance penalties or unusual behavior in these subsystems. Because BPF programs cannot contain loops and are limited to 4K instructions, they are guaranteed to finish quickly, which makes BPF well suited to this task.
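
stapbpf attaches tracepoint probes from SystemTap scripts, but the underlying kernel facility can be illustrated with a different frontend. The sketch below uses bcc’s Python API (a stand-in for illustration, not stapbpf itself) to attach a small BPF program to the scheduler’s sched_switch tracepoint.

```python
# Illustration only: this uses the bcc Python frontend (pip install bcc or a
# distro package), not stapbpf, to attach a BPF program to a kernel tracepoint.
# Requires root and a kernel with BPF tracepoint support.
from bcc import BPF

prog = r"""
TRACEPOINT_PROBE(sched, sched_switch) {
    // 'args' is generated by bcc from the tracepoint's format description
    bpf_trace_printk("%d -> %d\n", args->prev_pid, args->next_pid);
    return 0;
}
"""

b = BPF(text=prog)   # compile, pass the in-kernel verifier, and attach
b.trace_print()      # stream the kernel trace pipe until interrupted
```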

Continue reading “SystemTap’s BPF Backend Introduces Tracepoint Support”

Red Hat OpenShift Container Platform Load Testing Tips

A large bank in the Association of Southeast Asian Nations (ASEAN) plans to develop a new mobile back-end application using microservices and container technology. They expect the platform to support 10,000,000 customers at 5,000 transactions per second (TPS). They decided to use Red Hat OpenShift Container Platform (OCP) as the runtime platform for this application, and to ensure that it can support their throughput requirements and future growth, they performed internal load testing with their own infrastructure and mock-up services. This article shares the lessons learned from load testing Red Hat OpenShift Container Platform.
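
The article’s tips concern the platform itself, but for context, a test like this might be driven by a tool such as Locust. The following is a hypothetical sketch only; the route URL, endpoint, and numbers are placeholders, not the bank’s actual test plan.

```python
# Hypothetical load-generation sketch using Locust (pip install locust);
# the route URL and endpoint are placeholders, not the bank's real services.
from locust import HttpUser, task, between

class MockServiceUser(HttpUser):
    host = "http://mock-service.apps.example.com"  # an OpenShift route (placeholder)
    wait_time = between(0.1, 0.5)                  # simulated think time

    @task
    def call_service(self):
        self.client.get("/api/health")             # placeholder endpoint

# Run headless, e.g.:
#   locust -f loadtest.py --headless -u 500 -r 50 --run-time 5m
```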

Continue reading “Red Hat OpenShift Container Platform Load Testing Tips”

Create a scalable REST API with Falcon and RHSCL

APIs are critical to automation, integration, and developing cloud-native applications, and it’s vital that they can scale to meet the demands of your user base. In this article, we’ll create a database-backed REST API based on the Python Falcon framework using Red Hat Software Collections (RHSCL), test how it performs, and scale it out in response to a growing user base.
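
As a taste of the framework before diving in, a minimal Falcon resource looks roughly like the sketch below; the route and response are placeholders, not the article’s code.

```python
# Minimal Falcon sketch (placeholder code, not the article's API):
# one JSON endpoint served by any WSGI server.
import falcon

class HealthResource:
    def on_get(self, req, resp):
        resp.media = {"status": "ok"}  # Falcon serializes this to JSON

app = falcon.App()  # falcon.API() on older releases such as those in RHSCL
app.add_route("/health", HealthResource())

# Serve with, e.g.:  gunicorn myapp:app
```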

Continue reading “Create a scalable REST API with Falcon and RHSCL”

What are BPF Maps and how are they used in stapbpf

Compared to SystemTap’s default backend, one of stapbpf’s most distinguishing features is the absence of a kernel-module runtime; instead, the BPF machinery inside the kernel handles most of the runtime itself. This makes it important for BPF to provide a way for state to be maintained across multiple invocations of BPF programs, and for userspace programs to communicate with BPF programs. Both are accomplished by BPF maps. In this blog post, I will introduce BPF maps and explain their role in stapbpf’s implementation.

What are BPF maps?

BPF maps are essentially generic data structures consisting of key/value pairs. They are created from userspace using the bpf() system call, which returns a file descriptor for the map. The key size and value size are specified by the user, allowing key/value pairs of arbitrary types to be stored. Once a map is created, its elements can be accessed from userspace through the same system call. Maps are automatically deallocated once the user process that created them terminates (although it is possible to force a map to persist longer than this process). Stapbpf uses the following function to create new BPF maps.
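
The function itself appears in the full post. As a loose stand-in for the concept (a map shared between a BPF program and userspace), here is a sketch using the bcc Python frontend rather than stapbpf’s own runtime; note that lookup_or_try_init requires a reasonably recent bcc.

```python
# Stand-in illustration (not stapbpf's map-creation function): a BPF hash map
# written by a tracepoint program and read from userspace via bcc.
from time import sleep
from bcc import BPF

prog = r"""
BPF_HASH(write_counts, u32, u64);   // the map: pid -> number of write() calls

TRACEPOINT_PROBE(syscalls, sys_enter_write) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    u64 zero = 0, *count = write_counts.lookup_or_try_init(&pid, &zero);
    if (count) { (*count)++; }
    return 0;
}
"""

b = BPF(text=prog)
sleep(2)                                       # let the probe collect data
for pid, count in b["write_counts"].items():   # userspace reads the map
    print(pid.value, count.value)
```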

Continue reading “What are BPF Maps and how are they used in stapbpf”

Configuring mKahaDB persistence storage for ActiveMQ

In this post, I address how to configure mKahaDB persistence storage on ActiveMQ for better management and reduced disk usage.

The default KahaDB persistence adapter works well when all the destinations (queues/topics) managed by the broker have similar performance characteristics. However, this is rarely the case in an enterprise solution where several third parties are involved.

Typically there are multiple queues or topics, with different consumers or listeners attached to them, and some consumers are slower than others. Slow consumers make the message store’s disk usage grow rapidly, and because all destinations share a single KahaDB store, the performance of every destination can suffer.

Continue reading “Configuring mKahaDB persistence storage for ActiveMQ”
