
Toward a Better Use of C11 Atomics - Part 2

January 19, 2016
Martin Sebor
Related products:
Developer Toolset

    Continued from Part 1.

    Static Initialization

    The C and C++ standards specify that

    ...the default (zero) initialization for objects with static or thread-local storage duration is guaranteed to produce a valid state.

    This means that, for example, defining an atomic object at file scope without an initializer, as shown below, is guaranteed to initialize it to a default state and default (zero) value.

    atomic_int counter;

    Other than zero-initialization, the standards require that atomic objects with static or thread storage duration be initialized using either the ATOMIC_VAR_INIT() or the atomic_init() macro. This requirement is a vestige of the original proposal, which specified atomic types as structs. The expectation was that emulated implementations that stored a mutex in the struct along with the value would define the ATOMIC_VAR_INIT() macro along the following lines:

    #define ATOMIC_VAR_INIT(Value) \
        { .mutex = PTHREAD_MUTEX_INITIALIZER, .value = (Value) }

    But since the atomic types C11 ended up with are not structs but qualified basic types, such an initializer would be invalid. Moreover, since in the same implementation some atomic types may be lock-free and others not, the same macro couldn't be used to initialize objects of both kinds. As a result, the only reasonable way to define the macro is to have it simply expand to its argument, regardless of whether the implementation is lock-free or stateful, and this is how known implementations define it. Consequently, the ATOMIC_VAR_INIT() macro is entirely unnecessary, and atomic objects can simply be initialized using the same syntax as ordinary, non-atomic objects.

    Dynamic Initialization

    Although atomic types are new in C11 and C++11, integers and pointers have been operated on atomically in various ways for many years. Most low-level software, such as operating system kernels, and much higher-level software, including general-purpose libraries such as the C or POSIX library, have made use of hardware atomic instructions to efficiently implement primitives such as counters. The major benefit the C and C++ standards provide is a uniform interface, so that low-level software can be written more portably, without relying on knowledge of the specifics of the hardware on each target system. The goal is not only for new software to adopt the standard interfaces to atomics but also for existing software to transition to them over time, away from legacy APIs that are less portable, less well specified, and less well tested. To that end, it's important that the standard interfaces provide an easy migration path that minimizes the risk of introducing bugs in the process.

    Unfortunately, the C and C++ standards specify that each atomic object must be initialized exactly once, and only using one of the two initialization macros; initializing an object more than once is undefined. This again is a vestige of the original specification, in which atomic types were structs, possibly with additional state holding the object's mutex. Because a mutex must be initialized exactly once, it made sense to require the same of atomics. But since the standard C atomic types are basic types and not structs, this restriction is no longer necessary, either in lock-free implementations or in stateful ones. Let's first see why atomic_init() isn't necessary in lock-free implementations and then explore why it isn't needed in hypothetical stateful ones.

    Known lock-free implementations define atomic_init() as some form of assignment. GCC 5, for example, implements it with sequentially consistent semantics:

    #define atomic_init(PTR, VAL) \
        do {                      \
            *(PTR) = (VAL);       \
        } while (0)

    It turns out that this isn't the most efficient implementation (see GCC bug 68868), but it's a correct one nonetheless. A more efficient definition of the macro for GCC and compatible compilers is simply this:

    #define atomic_init(PTR, VAL) \
        atomic_store_explicit (PTR, VAL, memory_order_relaxed)

    The difference between the two is that the first form will likely result in a fence emitted by the compiler while the second form will not. As compilers get better at optimizing atomic operations, they will likely avoid emitting the fence when it's not necessary. For example, when a compiler can prove that the atomic object isn't shared with other threads (a common situation when initializing an object that has just been dynamically allocated), it can safely treat the atomic variable as an ordinary one. (See GCC bug 68622 and the GCC Atomic Optimizations Wiki page).

    Either way, since the macro expands to either a simple construct in the language (assignment) which is already valid for atomics, or to an invocation of the atomic_store_explicit() API, there is no need for yet another way to do the same thing.

    Let's now see why the macro wouldn't be useful in a stateful implementation, if one were to emerge that stored a mutex alongside the value in atomic objects. Suppose that this implementation defined the atomic_init() macro to first initialize the mutex, as if by calling either the C11 mtx_init() function or some other mutex initialization function such as the POSIX pthread_mutex_init(). For illustration, the macro definition might then look something like this:

    #define atomic_init(PTR, VAL)                \
        do {                                     \
            mtx_init (&(PTR)->mutex, mtx_plain); \
            (PTR)->value = (VAL);                \
        } while (0)

    In reality, since atomic types aren't structs but basic types, the macro would have to be somewhat more involved and likely employ some compiler magic to access the hidden mutex and value members of the struct. But that detail is not important for this discussion.

    Since both mtx_init() and pthread_mutex_init() acquire resources and can fail, how should this hypothetical implementation let programs detect and handle the failure? The answer is that it couldn't, because the atomic_init() macro doesn't return a value. If the atomic object were used after a failed initialization, an attempt to lock its uninitialized mutex would have undefined behavior: it could crash the program or, worse, allow the program to continue to run and access the object without synchronization, causing a data race.

    But let's suppose that this hypothetical implementation guaranteed the initialization of its mutex object to never fail. Could that implementation be then used safely? It turns out that it most likely could not. Here's why. All implementations of mutexes provide a pair of functions: one to initialize it and one to destroy it and release the system resources acquired during its construction. In the C11 threads library these functions are mtx_init() and mtx_destroy(). In the POSIX threads library, they are pthread_mutex_init() and pthread_mutex_destroy(). Calls to these two functions must be paired to avoid resource leaks. The Windows SDK provides the CreateMutex() and CloseHandle() pair of functions with equivalent semantics. But since the C11 atomic API provides only atomic_init() and no equivalent of atomic_destroy(), programs using such an implementation would have no way to release the resources acquired in atomic_init(). As a result, implementations must avoid storing any state that requires non-trivial initialization in atomic objects.

    Consequently, not only is it safe to use the atomic_init() macro to "initialize" the same atomic variable more than once, but since the macro has the same effect as plain assignment or, more efficiently, as the atomic_store_explicit() generic function invoked with the memory_order_relaxed argument, it is also unnecessary. More than that, because the most efficient implementation of atomic_init() provides only weakly consistent semantics, which normally must be requested explicitly by passing memory_order_relaxed as the last argument to atomic_store_explicit(), the macro is arguably less safe than the explicit alternative. This is especially true for legacy software migrating to C11 atomics: during the migration it may not always be clear which assignment is an initialization that doesn't require synchronization, and which is an assignment to a shared variable that must be protected from data races and on whose sequential order the program might depend. In that case, using plain assignment, with its sequentially consistent guarantees, or, equivalently, calling atomic_store(), is the safer alternative. Conversely, when using atomic_init() is known to be safe, using atomic_store_explicit(..., memory_order_relaxed) instead is also safe and guaranteed to be at least as efficient, and could be more so.

    By way of example, suppose we are changing the following legacy code to use C11 atomics:

    typedef struct SomeStruct {
          int         counter;   // treated as atomic
          struct Data data;      // ordinary non-atomic data
    } SomeStruct;
    
    void initSomeStruct (SomeStruct *ps) {
        legacy_atomic_store (&ps->counter, 0);   // possible initialization
        ...
    }

    As the first step, we change the type of counter to atomic_int and replace the call to legacy_atomic_store(), presumably made to initialize it, with an ordinary assignment:

    typedef struct SomeStruct {
          atomic_int  counter;
          struct Data data;
    } SomeStruct;
    
    void initSomeStruct (SomeStruct *ps) {
        ps->counter = 0;   // safe but slow
        // Equivalently:
        //   atomic_store (&ps->counter, 0);
        // or
        //   atomic_store_explicit (&ps->counter, 0, memory_order_seq_cst);
        ...
    }

    After the changed code has been tested and validated, and after it has been confirmed that initSomeStruct() really is called only to initialize the data and not to enforce any particular memory order, it may be possible to optimize it by replacing the sequentially consistent store with a relaxed one:

    void initSomeStruct (SomeStruct *ps) {
        atomic_store_explicit (&ps->counter, 0, memory_order_relaxed);   // fast initialization
        ...
    }

    Conclusion

    C and C++ atomic types and operations are a powerful interface for writing high-performance, data-race-free software on multi-processor and multi-core systems. As one might expect, they provide full interoperability between mixed code written in the two languages. In this article we have shown that while the interfaces aren't perfect, the problems uncovered so far are relatively minor and can easily be corrected to make atomics even easier and safer to use than they are today. In fact, users of existing implementations don't need to wait for the standards to change to take advantage of the corrections: they are already available, and the standards simply need to catch up with existing practice.

    Last updated: February 22, 2024
