Performance

Ceph storage monitoring with Zabbix

Storage prices are decreasing while business demands are growing, and companies are storing more data than ever before. Following this growth, demand is rising for monitoring and data protection in software-defined storage. Downtime has a high cost that can directly impact business continuity and cause irreversible damage to organizations. Aftereffects include loss of assets and information; interruption of services and operations; violations of laws, regulations, or contracts; and the financial impact of losing customers and damaging a company’s reputation.

Gartner estimates that a minute of downtime costs enterprise organizations $5,600, and an hour costs over $300,000.

In a DevOps context, meanwhile, it’s essential to think about continuous monitoring: a proactive approach to monitoring that covers the full life cycle of an application and its components. This approach helps identify the root cause of potential problems so that performance issues and future outages can be prevented quickly and proactively. In this article, you will learn how to implement Ceph storage monitoring using the enterprise open source tool Zabbix.
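As a taste of what this involves: Ceph ships a zabbix module in its manager daemon that pushes cluster metrics to a Zabbix server through zabbix_sender. A minimal sketch of enabling it, with illustrative host and cluster names (this may or may not be the exact approach the article takes), looks like this:

```bash
# Enable the Zabbix module in the Ceph manager
# (requires zabbix_sender installed on the manager nodes)
ceph mgr module enable zabbix

# Point the module at your Zabbix server and name this cluster
# (zabbix.example.com and ceph-prod are illustrative values)
ceph zabbix config-set zabbix_host zabbix.example.com
ceph zabbix config-set identifier ceph-prod

# Review the configuration and trigger an immediate send
ceph zabbix config-show
ceph zabbix send
```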

Continue reading “Ceph storage monitoring with Zabbix”

Testing memory-based horizontal pod autoscaling on OpenShift

Red Hat OpenShift offers horizontal pod autoscaling (HPA) primarily for CPU, but it can also perform memory-based HPA, which is useful for applications that are more memory-intensive than CPU-intensive. In this article, I demonstrate how to use OpenShift’s memory-based horizontal pod autoscaling feature (tech preview) to autoscale your pods when memory demand increases. The test performed in this article does not necessarily reflect a real application; it only aims to demonstrate memory-based HPA in the simplest way possible.
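For context, a memory-based HPA is declared much like a CPU-based one, just with a memory resource metric. A minimal sketch (the deployment name and threshold are illustrative, and the exact API version depends on your cluster) might look like:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: memory-demo-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: memory-demo          # hypothetical deployment to scale
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80   # scale out above 80% of requested memory
```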

Continue reading “Testing memory-based horizontal pod autoscaling on OpenShift”

Possible issues with debugging and inspecting compiler-optimized binaries

Developers think of their programs as a serial sequence of operations running as written in the original source code. However, program source code is just a specification for computations. The compiler analyzes the source code and determines if changes to the specified operations will yield the same visible results but be more efficient. It will eliminate operations that are ultimately not visible, and rearrange operations to extract more parallelism and hide latency. These differences between the original program’s source code and the optimized binary that actually runs might be visible when inspecting the execution of the optimized binary via tools like GDB and SystemTap.
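A tiny example makes this concrete. In the following (hypothetical) program, a compiler at -O2 is likely to fold the loop into a constant, so stepping through it in GDB can show lines executing “out of order” and variables reported as <optimized out>:

```c
/* sum.c: compile with "gcc -O2 -g sum.c -o sum", then step through in GDB. */
#include <stdio.h>

int main(void)
{
    int sum = 0;

    /* At -O2 the compiler will likely replace this whole loop with the
       constant 4950, so "i" and "sum" may show as <optimized out>. */
    for (int i = 0; i < 100; i++)
        sum += i;

    printf("%d\n", sum);
    return 0;
}
```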

To aid in debugging and instrumenting binaries, the compiler generates debug information to map between the source code and the executable binary. The debug information includes which line of source code each machine instruction is associated with, where the variables are located, and how to unwind the stack to get a backtrace of function calls. However, even with the compiler generating this information, a number of non-intuitive effects might be observed when instrumenting a compiler-optimized binary:

Continue reading “Possible issues with debugging and inspecting compiler-optimized binaries”

Red Hat OpenShift 4.2 IPI on OpenStack 13: All-in-one setup

Months ago, a customer asked me about Red Hat OpenShift on OpenStack, especially regarding the network configuration options available in OpenShift at the node level. To give them an answer, and to increase my own confidence with the topic, I considered how to test this scenario.

At the same time, the Italian solution architect “Top Gun Team” was in charge of preparing talks and demos for the Italian Red Hat Forum (also known as Open Source Day) in Rome and Milan. Brainstorming led me to start testing an OpenShift 4.2 setup on OpenStack 13, both to answer the customer and to leverage the effort for a demo video for Red Hat Forum.

Continue reading “Red Hat OpenShift 4.2 IPI on OpenStack 13: All-in-one setup”

MIR: A lightweight JIT compiler project

For the past three years, I’ve been participating in adding just-in-time compilation (JIT) to CRuby. Now, CRuby has the method-based just-in-time compiler (MJIT), which improves performance for non-input/output-bound programs.

The most popular approach to implementing a JIT is to use the JIT interfaces of LLVM or GCC, such as ORC or libgccjit. GCC and LLVM developers have spent enormous effort making their optimizations reliable, effective, and portable to many targets. By using LLVM or GCC to implement a JIT, we get these optimizations for free. Using the existing compilers was also the only way to get a JIT for CRuby in the short time before the Ruby 3.0 release, which has the goal of improving CRuby performance threefold.
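To illustrate the idea behind reusing an existing compiler as a JIT back end (this is a toy sketch, not CRuby’s actual MJIT code; all file and function names are made up), a JIT can emit C source at run time, shell out to GCC to build a shared object, and then load the result:

```c
/* toy_jit.c: build with "gcc toy_jit.c -o toy_jit -ldl". */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* "Compile" a function by writing C source and invoking GCC,
       which applies its full optimization pipeline for free. */
    FILE *src = fopen("/tmp/jit_fn.c", "w");
    if (!src)
        return 1;
    fprintf(src, "int jit_fn(int x) { return x * 2; }\n");
    fclose(src);

    if (system("gcc -O2 -fPIC -shared /tmp/jit_fn.c -o /tmp/jit_fn.so") != 0)
        return 1;

    /* Load the freshly built code and call it like any native function. */
    void *handle = dlopen("/tmp/jit_fn.so", RTLD_NOW);
    if (!handle)
        return 1;
    int (*jit_fn)(int) = (int (*)(int))dlsym(handle, "jit_fn");
    printf("jit_fn(21) = %d\n", jit_fn(21));  /* prints 42 */
    dlclose(handle);
    return 0;
}
```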

So, CRuby MJIT utilizes GCC or LLVM, but what is unique about this JIT?

Continue reading “MIR: A lightweight JIT compiler project”

Deploying debuginfod servers for your developers

In an earlier article, Aaron Merey introduced debuginfod, the new elfutils debug information server daemon. With this software now integrated and released in elfutils 0.178, and coming to a distro near you, it’s time to consider why and how to set up such a service for yourself and your team.

Recall that debuginfod exists to distribute ELF or DWARF debugging information, plus associated source code, for a collection of binaries. If you need to run a debugger like gdb, a trace or probe tool like perf or systemtap, binary analysis tools like binutils or pahole, or binary rewriting libraries like dyninst, you will eventually need debuginfo that matches your binaries. The debuginfod client support in these tools enables a fast, transparent way of fetching this data on the fly, without ever having to stop, switch to root, run all of the right yum debuginfo-install commands, and try again. Debuginfod lets you debug anywhere, anytime.
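As a preview of the setup, running a server and pointing clients at it can be as simple as the following (the host name and paths are illustrative):

```bash
# On the server: index the ELF/DWARF files under /opt/builds and serve
# them over HTTP (debuginfod listens on port 8002 by default).
debuginfod -F /opt/builds

# On each developer machine: tell the client libraries where to look.
export DEBUGINFOD_URLS="http://debuginfod.example.com:8002"

# Tools with debuginfod support now fetch debuginfo and sources on demand.
gdb /usr/bin/some-binary
```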

We hope this opening addresses the “why.” Now, onto the “how.”

Continue reading “Deploying debuginfod servers for your developers”

An upside-down approach to GCC optimizations

Many traditional optimizations in the compiler work from a top-down approach, which starts at the beginning of the program and works toward the bottom. This allows the optimization to see the definition of something before any uses of it, which simplifies most evaluations. It’s also the natural way we process things. In this article, we’ll look at a different approach and a new project called Ranger, which attempts to turn this problem upside down.
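To see why range information matters regardless of the direction in which it is computed, consider this small illustrative function: once the compiler knows the range of x inside the branch, it can fold the inner test away entirely.

```c
int clamp_check(int x)
{
    if (x >= 0 && x < 10) {
        /* Within this block the compiler knows x is in [0, 9], so the
           test below can be folded away and the branch removed. */
        if (x == 42)
            return -1;   /* provably unreachable */
        return x;
    }
    return 0;
}
```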

Continue reading “An upside-down approach to GCC optimizations”
