Developers think of their programs as a serial sequence of operations that runs as written in the original source code. However, program source code is just a specification for a computation. The compiler analyzes the source code and determines whether the specified operations can be changed to yield the same visible results more efficiently. It eliminates operations whose results are never visible, and it rearranges operations to extract more parallelism and hide latency. These differences between the original source code and the optimized binary that actually runs can become visible when inspecting the execution of the optimized binary with tools like GDB and SystemTap.
To aid in debugging and instrumenting binaries, the compiler generates debug information that maps between the source code and the executable binary. The debug information records which line of source code each machine instruction is associated with, where variables are located, and how to unwind the stack to get a backtrace of function calls. Even with the compiler generating this information, however, a number of non-intuitive effects might be observed when instrumenting a compiler-optimized binary:
Continue reading “Possible issues with debugging and inspecting compiler-optimized binaries”
In a previous article, I showed how to get Red Hat CodeReady Workspaces 2.0 (CRW) up and running with a workspace available for use. This time, we will go through the edit-debug-push (to GitHub) cycle. This walk-through will simulate a real-life development effort.
To start, you’ll need to fork a GitHub repository. The Quote Of The Day repo contains a microservice written in Go that we’ll use for this article. Don’t worry if you’ve never worked with Go. This is a simple program, and we’ll only change one line of code.
After you fork the repo, make note of (or copy) your fork’s URL. We’ll be using that information in a moment.
Continue reading “Editing, debugging, and GitHub in Red Hat CodeReady Workspaces 2”
A previous article, Debugging applications within Red Hat OpenShift containers, gives an overview of tools for debugging applications within Red Hat OpenShift containers, and existing restrictions on their use. One of the restrictions discussed in that article was an inability to install debugging tool packages into an ordinary, unprivileged container once it was already instantiated. In such a container, debugging tool packages have to be included when the container image is built, because once the container is instantiated, using package installation commands requires elevated privileges that are not available to the ordinary container user.
However, there are important situations where it is desirable to install a debugging tool into an already-instantiated container. In particular, if the resolution of a problem requires access to the temporary state of a long-running containerized application, the usual method of adding debugging tools to the container by rebuilding the container image and restarting the application will destroy that temporary state.
To provide a way to add debugging tools to unprivileged containers, I developed a utility called oc-inject that can temporarily copy a debugging tool into a container. Instead of relying on package management or other privileged operations, oc-inject’s implementation is based on the existing and well-supported OpenShift operations oc rsync and oc exec, which do not require any elevated privileges.
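Copying the tool binary alone is not enough: its shared-library dependencies have to come along too. The following is a rough illustration of that discovery step using ldd on the local host — it sketches the idea, not oc-inject’s actual code, and /bin/ls stands in for a real debugging tool:

```shell
# Before a tool can be copied into a container (e.g., with oc rsync),
# the shared libraries it needs must be collected as well. ldd lists
# them; the awk program extracts the library paths, including the
# dynamic loader line that has no "=>" arrow.
TOOL=/bin/ls   # stand-in for a real tool such as strace
ldd "$TOOL" | awk '/=>/ { print $3 } /^\t\// { print $1 }'
```

Each printed path is a file that would need to travel into the container alongside the tool itself.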
This article describes the current capabilities of the oc-inject utility, which is available on GitHub or via a Fedora COPR repository. The oc-inject utility works on any Linux system that includes Python 3, the ldd utility, and the Red Hat OpenShift command-line tool oc.
Continue reading “Installing debugging tools into a Red Hat OpenShift container with oc-inject”
When debugging an application within a Red Hat OpenShift container, it is important to keep in mind that the Linux environment within the container is subject to various constraints. Because of these constraints, the full functionality of debugging tools might not be available:
- An unprivileged OpenShift container is restricted from accessing kernel interfaces that are required by some low-level debugging tools.
Note: Almost all applications on OpenShift run in unprivileged containers. Unprivileged containers allow the use of standard debugging tools such as strace. Examples of debugging tools that cannot be used in unprivileged containers include perf, which requires access to the kernel’s perf_events interface, and SystemTap, which depends on the kernel’s module-loading functionality.
- Debug information for system packages within OpenShift containers is not accessible. There is ongoing work (as part of the elfutils project) to develop a file server for debug information (debuginfod), which would make such access possible.
- The set of packages in an OpenShift container is fixed ahead of time, when the corresponding container image is built. Once a container is running, no additional packages can be installed. A few debugging tools are preinstalled in commonly used container base images, but any other tools must be added when the container image build process is configured.
To successfully debug a containerized application, it is necessary to understand these constraints and how they determine which debugging tools can be used.
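One consequence of the fixed package set is that debugging tools must be declared at image-build time. A minimal, illustrative Containerfile fragment follows; the base image and package names are examples, not a recommendation from the article:

```dockerfile
# Illustrative only: debugging tools must be baked in when the image is
# built, because they cannot be installed after an unprivileged
# container has started.
FROM registry.access.redhat.com/ubi8/ubi
RUN yum -y install strace gdb && yum clean all
```

Rebuilding an image this way restarts the application, which is exactly the limitation that motivates tools like oc-inject for long-running workloads.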
Continue reading “Debugging applications within Red Hat OpenShift containers”
In an earlier article, Aaron Merey introduced the new elfutils debuginfod server daemon. With this software now integrated into the elfutils 0.178 release and coming to a distro near you, it’s time to consider why and how to set up such a service for yourself and your team.
debuginfod exists to distribute ELF or DWARF debugging information, plus associated source code, for a collection of binaries. If you need to run a debugger like gdb, a trace or probe tool like systemtap, binary analysis tools like pahole, or binary rewriting libraries like dyninst, you will eventually need debuginfo that matches your binaries. The debuginfod client support in these tools enables a fast, transparent way of fetching this data on the fly, without ever having to stop, change to root, run all of the right yum debuginfo-install commands, and try again. Debuginfo lets you debug anywhere, anytime.
We hope this opening addresses the “why.” Now, onto the “how.”
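On the client side, debuginfod-aware tools find a server through a single environment variable. A minimal sketch follows; the URL is the elfutils project’s public server, and a team-run server would replace or precede it in the list:

```shell
# Debuginfod-aware tools (gdb, systemtap, elfutils utilities) check this
# variable and fetch matching debuginfo and sources on demand.
export DEBUGINFOD_URLS="https://debuginfod.elfutils.org/"
# gdb ./myprog   # no debuginfo-install step needed beforehand
```

With the variable set, the first debugger session that needs missing debuginfo simply downloads it into a local cache.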
Continue reading “Deploying debuginfod servers for your developers”
Because bugs are inevitable, developers need quick and easy access to the artifacts that debugging tools like SystemTap and GDB depend on, which are typically DWARF (Debugging With Attributed Record Formats) debuginfo or source files. Accessing these resources should not be an issue when debugging your own local build tree, but all too often they are not readily available.
For example, your distro might package debuginfo and source files separately from the executable you’re trying to debug and you may lack the permissions to install these packages. Or, perhaps you’re debugging within a container that was not built with these resources, or maybe you simply don’t want these files taking up space on your machine.
Debuginfo files are notorious for taking up large amounts of space, and it is not unusual for their size to be five to fifteen times that of the corresponding executable.
debuginfod aims to resolve these problems.
Continue reading “Introducing debuginfod, the elfutils debuginfo server”
You’ve probably heard about Quarkus, the Supersonic Subatomic Java framework tailored for Kubernetes and containers. In this article, I will show how easy it is to create and set up a Quarkus project in an Eclipse IDE-based environment.
Continue reading “Create your first Quarkus project with Eclipse IDE (Red Hat CodeReady Studio)”
Find out how to configure the CodeReady workspace for debugging, set up breakpoints, and debug the application using the integrated browser-based IDE in the workspace. The steps explained in this video are also available in the tutorial here.
Continue reading “How to debug code in CodeReady Workspaces”
In the world of distributed computing, containers, and microservices, a lot of the interactions and communication between services is done via RESTful APIs. While developing these APIs and interactions between services, I often have the need to debug the communication between services, especially when things don’t seem to work as expected.
Before the world of containers, I would simply deploy my services on my local machine, start up Wireshark, execute my tests, and analyze the HTTP communication between my services. This for me has always been an easy and effective way to quickly analyze communication problems in my software. However, this method of debugging does not work well in a containerized world.
First of all, the containers most likely run on an internal container platform network that is not directly accessible from your machine. A second problem is that, in compliance with container design best practices, containers contain only the minimal set of applications and libraries needed to execute their task. This means that a tool like tcpdump is usually not available in a container, which makes debugging and analyzing network traffic between containers, and thus debugging inter-microservice communication, a bit harder than in the non-containerized world. This article shows one solution.
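The sidecar approach can be sketched as a pod spec in which a second container, holding the network tools, runs alongside the application. Because all containers in a pod share one network namespace, the sidecar can capture the application’s traffic. The image names and command below are illustrative, not the article’s exact configuration:

```yaml
# Illustrative pod spec: the tcpdump sidecar sees the same network
# interfaces as the application container it sits next to.
apiVersion: v1
kind: Pod
metadata:
  name: myservice-with-debug
spec:
  containers:
  - name: myservice
    image: quay.io/example/myservice:latest   # the application under test
  - name: tcpdump-sidecar
    image: quay.io/example/net-tools:latest   # any image that ships tcpdump
    command: ["tcpdump", "-i", "any", "-w", "/tmp/capture.pcap"]
```

The resulting capture file can then be copied out of the sidecar and opened in Wireshark, restoring the pre-container debugging workflow.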
Continue reading “Using sidecars to analyze and debug network traffic in OpenShift and Kubernetes pods”
Microservices have become mainstream in the enterprise, and this proliferation of microservices applications generates new problems that demand a new approach to troubleshooting. A microservice is a small, independently deployable, and independently scalable software service designed to encapsulate a specific semantic function in the larger application. This article explores several approaches to deploying tools for debugging microservices applications on a Kubernetes platform like Red Hat OpenShift, including OpenTracing, Squash, Telepresence, and creating a Squash Operator in Red Hat Ansible Automation.
Continue reading “Solving the challenges of debugging microservices on a container platform”