C++11

Recommended compiler and linker flags for GCC

Did you know that when you compile your C or C++ programs, GCC does not enable optimization or security hardening by default? Do you know which build flags you need to specify in order to obtain the same level of security hardening that GNU/Linux distributions such as Red Hat Enterprise Linux and Fedora use? This article walks through a list of recommended build flags.

The GNU-based toolchain in Red Hat Enterprise Linux and Fedora (consisting of GCC programs such as gcc and g++, and Binutils programs such as as and ld) is very close to upstream defaults in terms of build flags. For historical reasons, the GCC and Binutils upstream projects do not enable optimization or any security hardening by default. While some aspects of the default settings can be changed when building GCC and Binutils from source, the toolchain we supply in our RPM builds does not do this. We only align the architecture selection to the minimum architecture level required by the distribution.

Consequently, developers need to pay attention to build flags and manage them according to their project’s needs for optimization, warning and error detection, and security hardening.
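As a taste of what the article recommends, here is a hedged example of a hardened compile-and-link invocation; the exact flag set varies by distribution and GCC version, so treat it as representative rather than authoritative:

gcc -O2 -g -Wall -Wextra -Wformat -Wformat-security \
    -D_FORTIFY_SOURCE=2 -fstack-protector-strong \
    -fPIE -pie -Wl,-z,relro -Wl,-z,now \
    -o example example.c

Note that -D_FORTIFY_SOURCE=2 only takes effect when optimization is enabled (-O1 or higher).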

Continue reading “Recommended compiler and linker flags for GCC”

Trip Report: April 2017 WG14 Meeting

Overview

The week of April 3, I attended a meeting of WG14, the C standardization committee, in Markham, ON. Markham is a suburb of Toronto, about a 40-minute drive north. Unlike Toronto itself, it’s not a particularly interesting destination. We had four days of rain followed by snow, freezing temperatures, and wind, which was perfect for spending time indoors and made it easy to resist any temptation to go sightseeing.

Continue reading “Trip Report: April 2017 WG14 Meeting”


Node.js – Harnessing the power of Java (for PDF generation and more)

At Red Hat, we all love playing with new technologies, and sometimes we find gaps that haven’t yet been filled. I want to take a few minutes to share a personal project I’ve been working on in my spare time. It is a native C++11 add-on that allows you to run a JVM in Node.js, giving access to the mature Java ecosystem of libraries and frameworks.

Motivation

I just wanted a mature PDF library in Node.js, something like iText or PDFBox. Those Java libraries have had years of development behind them; it took PDFBox three years to get from version 1.8 to version 2.0, for example. So I realized it would be easier to find a way to use those existing Java libraries than to reproduce the effort in Node.js.

Continue reading “Node.js – Harnessing the power of Java (for PDF generation and more)”

Why <cstdlib> is more complicated than you might think

One of the reasons that C++ has been so successful and become so widely used is that it was (at least initially) compatible with C, which was already very popular. C++ programs were able to make use of lots of pre-existing C code and, in particular, of the C Standard Library. Although the C and C++ languages have diverged, the C++ Standard Library incorporates most of the C99 library by reference.

This article explains some of the difficulties that arise when implementing the parts of the C++ standard library that are shared with C. Some of these difficulties were the cause of longstanding bugs in the GNU C++ standard library, bugs which have only recently been fixed for the forthcoming GCC 6 release. Understanding the issues involves some history of the relevant standards and some gory details of how a C++ standard library interacts with a C standard library. The gory details will be kept to a minimum, though, and referred to only at a high level.
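As a minimal illustration of the kind of C/C++ interaction involved (a sketch, not drawn from the article itself): <cstdlib> must make the C functions visible in namespace std, and C++ additionally adds overloads that plain C does not have.

#include <cstdlib>
#include <iostream>

int main() {
  long x = -42L;
  // C++ requires these names in namespace std; whether they also appear
  // in the global namespace is left unspecified by the standard.
  std::cout << std::abs(x) << '\n';      // uses the long overload that C++ adds
  std::div_t d = std::div(7, 2);         // C's div(), reachable through std::
  std::cout << d.quot << ' ' << d.rem << '\n';
  return EXIT_SUCCESS;                   // macro inherited from the C library
}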

Continue reading “Why <cstdlib> is more complicated than you might think”

Toward a Better Use of C11 Atomics – Part 2

Continued from Part 1.

Static Initialization

The C and C++ standards specify that

…the default (zero) initialization for objects with static or thread-local storage duration is guaranteed to produce a valid state.

This means that, for example, defining an atomic object at file scope without an initializer as shown below is guaranteed to initialize it to a default state and default (zero) value.

atomic_int counter;   // zero-initialized to a valid default state

For initialization to anything other than zero, the standards require that atomic objects with static or thread storage duration be initialized using one of the ATOMIC_VAR_INIT() or atomic_init() macros. This requirement is a vestige of the original proposal, which specified atomic types as structs. The expectation was that emulated implementations storing a mutex in the struct along with the value would define the ATOMIC_VAR_INIT() macro along the following lines:
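A rough sketch of what such an emulation might look like (the struct layout, lock type, and names here are illustrative assumptions, not the actual proposal text):

#include <pthread.h>

/* Hypothetical lock-based emulation: the lock lives inside the atomic object. */
typedef struct {
    pthread_mutex_t lock;   /* serializes all accesses to value */
    int value;
} emulated_atomic_int;

/* The static initializer must set up both the lock and the stored value. */
#define ATOMIC_VAR_INIT(v)  { PTHREAD_MUTEX_INITIALIZER, (v) }

emulated_atomic_int counter = ATOMIC_VAR_INIT(42);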

Continue reading “Toward a Better Use of C11 Atomics – Part 2”

Toward a Better Use of C11 Atomics – Part 1

Introduction

Following the lead of C++, the C11 standard adopted into the language a proposal for a set of atomic types and operations, along with a memory model describing the requirements and semantics of multithreaded programs. This change has made it possible to write portable multithreaded software that efficiently manipulates objects indivisibly and without data races. The atomic types are fully interoperable between the two languages, so programs can be developed that share objects of atomic types across the language boundary. This paper examines some of the trade-offs of the design, points out some of its shortcomings, and outlines solutions that simplify the use of atomic objects in both languages.
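As a hedged, minimal example of what that buys you (assuming a platform that also provides the optional C11 <threads.h>): several threads increment a shared counter indivisibly and without data races.

#include <stdatomic.h>
#include <threads.h>
#include <stdio.h>

static atomic_int counter;                  /* static storage, so zero-initialized */

static int worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; ++i)
        atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
    return 0;
}

int main(void) {
    thrd_t t[4];
    for (int i = 0; i < 4; ++i)
        thrd_create(&t[i], worker, NULL);
    for (int i = 0; i < 4; ++i)
        thrd_join(t[i], NULL);
    printf("%d\n", atomic_load(&counter));  /* always prints 400000 */
    return 0;
}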

History

The need for software to operate on objects of basic types in an atomic way goes back to the first multiprocessor systems. Many solutions were developed over the decades, each with its own unique characteristics that made it difficult to write portable code taking advantage of those features. The C11 and C++11 standards codify an approach that allows software to make use of the hardware support for atomics on the broadest spectrum of processors. The authors of the proposals for C and C++ atomics envisioned efficient, lock-free implementations of these interfaces on architectures that provide robust support for such operations. The authors, however, didn’t want to preclude lock-based emulations on older or less capable hardware. Although emulated implementations were expected to store the lock associated with each atomic object separately from the object itself, this aspect too wasn’t mandated by the proposed specification. The proposers didn’t want to preclude stateful implementations of atomics.

Continue reading “Toward a Better Use of C11 Atomics – Part 1”


Recent improvements to concurrent code in glibc

In this post, I will give examples of recent improvements to concurrent code in glibc, the GNU C library, in the upstream community project. In other words, this is code that can be executed by multiple threads at the same time and has to coordinate accesses to shared data using synchronization. While some of these improvements are user-visible, many of them are not, but they can still serve as examples of how concurrent code in other code bases can be improved.

One of the user-visible improvements is a new implementation of Pthreads semaphores that I contributed. It places fewer requirements on when a semaphore can be destroyed by a program. Previously, programs had to wait for all calls to sem_wait or sem_post to return before they were allowed to call sem_destroy; now, under certain conditions, a thread that returned from sem_wait can call sem_destroy immediately, even though the matching sem_post call has woken this thread but not returned yet. This works if, for example, the semaphore is effectively a reference counter for itself; specifically, the program must still ensure that there are no other concurrent, in-flight sem_wait calls or sem_post calls that are yet to increment the semaphore. The new semaphore implementation is portable code because it is based on C11 atomic operations (see below), and it replaces several architecture-specific implementations.
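A hedged sketch of the pattern this enables (the names and structure below are ours, not from glibc): a completion signal where the waiter destroys the semaphore as soon as sem_wait returns, provided no other waiters or posters remain in flight.

#include <semaphore.h>

struct job {
    sem_t done;        /* initialized elsewhere with sem_init(&j->done, 0, 0) */
    int result;
};

/* Finishing thread: publish the result and signal completion. */
void job_finish(struct job *j, int r) {
    j->result = r;
    sem_post(&j->done);    /* may not have returned yet when the waiter proceeds */
}

/* Waiting thread: once sem_wait returns, destroying immediately is now allowed. */
int job_wait(struct job *j) {
    sem_wait(&j->done);
    int r = j->result;
    sem_destroy(&j->done); /* legal even if the matching sem_post is still in flight */
    return r;
}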

Continue reading “Recent improvements to concurrent code in glibc”


Red Hat at the ISO C++ Standards Meeting (November 2014): Core

The Red Hat toolchain team was well-represented at the Fall 2014 meeting of the standardization committee (JTC1/SC22/WG21) in Urbana-Champaign, IL, USA. In this article, Jason Merrill summarizes the main highlights and developments of interest to Red Hat Enterprise Linux developers. Stay tuned for separate articles summarizing the library and concurrency working group aspects.

The fall meeting of WG21 (the C++ standardization committee) this year was hosted by the CS department at the University of Illinois at Urbana-Champaign. This was the first meeting after ratification of the C++14 standard, and we weren’t changing the working paper while C++14 was out for voting (ISO doesn’t allow changes to the working paper while there’s an open ballot), so there was a lot of leftover business from the last few meetings waiting to be voted on.

As usual, I spent the week in the Core Language Working Group, where we spent the majority of our time reviewing papers for new language features.

Continue reading “Red Hat at the ISO C++ Standards Meeting (November 2014): Core”


C/C++ Programming Abstractions for Parallelism and Concurrency – Part 2

Welcome to Part 2 of this two-part article on C/C++ Programming Abstractions for Parallelism and Concurrency. If you missed Part 1, view it here.

Supporting task-based parallelism

Let us now switch from concurrency to parallelism. I already mentioned that C++11 and C11 provide support for creating threads that execute additional work in parallel or concurrently. However, these facilities are resource abstractions (i.e., for operating system threads) rather than abstractions aimed purely at parallelism. One target for the latter is often task-based parallelism, which allows programmers to split a part of a program into tasks (i.e., units of work). These tasks will run in parallel, but they can also depend on other tasks, in which case a dependent task will not start executing until all its dependencies are fulfilled (e.g., until a prior task has finished generating output that constitutes input for the current task). This essentially creates a directed acyclic graph (DAG) of tasks; tasks that are not ordered with respect to each other in the DAG can execute in parallel.

So, how can programmers express that they want to run a parallel task? When managing threads explicitly using the thread abstractions (explicit threading for short), it may look like this:

try {
  auto task = std::thread(work); // Execute the work() function
  // ... Do something else ...
  task.join();
  // ... Use the task's result ...
}
catch (const std::system_error&) { error_fallback(); }

We explicitly create a new thread and join it (i.e., wait for it to complete its work) wherever a task dependency requires its result. We also need error handling and a fallback in case we cannot create another thread for some reason.
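For contrast, a more task-like way to express the same pattern might look roughly like this (a sketch using std::async from C++11, assuming work() returns a value; it requires <future>, creation failure still surfaces as std::system_error, and get() also rethrows anything thrown by work()):

try {
  auto task = std::async(std::launch::async, work); // run work() as a task
  // ... Do something else ...
  auto result = task.get(); // waits for the task and rethrows its exceptions
  // ... Use result ...
}
catch (const std::system_error&) { error_fallback(); }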

Continue reading “C/C++ Programming Abstractions for Parallelism and Concurrency – Part 2”


C/C++ Programming Abstractions for Parallelism and Concurrency – Part 1

When writing parallel or multi-threaded programs, programmers have to deal with parallelism and concurrency. Both are related concepts but are not the same. In this article, we will review the differences between them and outline a few programming abstractions for both (in particular, atomic data types, Transactional Memory, and task-based parallelism). Red Hat Developer Toolset 1.1 ships with GCC 4.7, which provides (experimental) support for these particular features. Finally, we give a short outlook on future features proposed for inclusion in the C/C++ standards and considered for upstream GCC.

Concurrent execution refers to situations in which several (logical) processes or threads execute at the same time and are not guaranteed to be independent; for example, they could communicate with each other, wait for other threads to make progress, or have to execute their operations mutually exclusively with the operations of other threads. In contrast, parallel execution refers to several processes performing independent operations that, informally, do not have to consider what the other parallel parts are doing.

Nonetheless, parallelism is related to concurrency in that a typical parallel program will also contain concurrent pieces of code (e.g., to merge the results of parallel computations into a single output value). Also, to benefit from parallelism in hardware, concurrent code often tries to execute as much as possible in parallel (see Amdahl’s Law).
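As a small, hedged illustration of that mix (the names below are ours): the per-thread summation is the parallel part, while the final accumulation into a shared atomic is the concurrent part that must be race-free.

#include <atomic>
#include <thread>
#include <vector>
#include <iostream>

int main() {
  std::atomic<long> total{0};                // shared state: needs synchronization
  std::vector<std::thread> workers;
  for (int t = 0; t < 4; ++t)
    workers.emplace_back([&total, t] {
      long local = 0;
      for (int i = t * 1000; i < (t + 1) * 1000; ++i)
        local += i;                          // independent (parallel) work
      total.fetch_add(local);                // concurrent merge step
    });
  for (auto& w : workers) w.join();
  std::cout << total << '\n';                // sum of 0..3999 = 7998000
}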

Continue reading “C/C++ Programming Abstractions for Parallelism and Concurrency – Part 1”
