Red Hat at the ISO C++ Standards Meeting (July 2017): Parallelism and Concurrency


August 29, 2017
Torvald Riegel
Related topics:
Developer tools
Related products:
Developer Toolset

    Several Red Hat engineers attended the JTC1/SC22/WG21 C++ Standards Committee meetings in July 2017. This post focuses on the sessions of SG1, the study group on parallelism and concurrency. We discussed several synchronization-related proposals, improvements for futures, and, of course, executors. I also proposed a few steps the SG1 community could take to become more efficient in how it conducts its work, all inspired by how successful open source projects operate.

    Most of the proposals we discussed at this meeting were related to concurrency. Latches were moved into the draft for the next version of the standard, and we made progress on semaphores, hazard pointers, and deferred reclamation. We also discussed bugs in the memory model, though these seem to arise only in arguably odd pieces of synchronization code, or are bugs in the specification but not in actual implementations. The trade-off for fixing the former is between (1) decreasing the performance of common synchronization code and (2) changing the behavior the memory model guarantees for those truly odd cases; it seems likely that we will accept the latter, because the former would decrease performance too much for too many users. In my opinion, any change to the memory model should be accompanied by good explanations of the change and updates to the important tools (e.g., cppmem).

    The role of futures, std::future in particular, is increasing in C++ due to recent proposals related to asynchronous execution, and due to more facilities that can spawn work tasks (e.g., executors) and thus need to return a handle to work that may not have completed yet. However, std::future does not seem to be quite the best fit for all these uses, and we discussed several proposals that explain problems in the design or present different interfaces for futures.
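    The dual role of std::future shows up even in basic usage: the same object is both the handle to possibly incomplete work and the point at which the caller blocks. A minimal sketch (sum_async is my name, not a standard facility):

```cpp
#include <future>
#include <numeric>
#include <vector>

// std::async returns a handle to work that may not have completed yet
// (role 1); future::get() then doubles as a blocking synchronization
// point (role 2).
int sum_async(const std::vector<int>& v) {
    std::future<int> f = std::async(std::launch::async, [&v] {
        return std::accumulate(v.begin(), v.end(), 0);
    });
    return f.get();  // blocks the caller until the task finishes
}
```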

    I presented a particular problem with futures when they are combined with threads of execution that run with only weaker forward progress guarantees (i.e., parallel or weakly parallel forward progress, as for example the threads of execution spawned by the C++17 parallel algorithms under certain execution policies). I argued that if a future is used to block on such weak-progress threads of execution, the blocking operation should use blocking with progress guarantee delegation. Informally, this ensures that dependencies on other work drive execution of that other work, which is a very useful property in my opinion -- it is also how the parallel algorithms can safely make use of threads with weak progress guarantees. However, current futures can be used both as (1) a means to express dependencies on other work and (2) a synchronization mechanism, and blocking with progress guarantee delegation isn't meant for the latter. This is yet another indication that a redesign of futures could be helpful, so that these two different use cases can be distinguished.
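    For context, these are the weak-progress threads of execution in question: under std::execution::par, the C++17 parallel algorithms may run element operations on threads that have only parallel forward progress, which is why the library itself must drive their execution rather than relying on user code blocking on them. A minimal sketch (par_sum is my name; building with libstdc++ typically requires linking against TBB):

```cpp
#include <execution>
#include <numeric>
#include <vector>

// std::execution::par permits the implementation to run the reduction on
// threads of execution with only parallel forward progress guarantees.
long par_sum(const std::vector<long>& v) {
    return std::reduce(std::execution::par, v.begin(), v.end(), 0L);
}
```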

    Executors were, of course, also discussed in the meeting, in particular a new proposal for simplifying the previously proposed unified interface. SG1 is making progress on executors and is getting closer to agreeing on a design, but unfortunately I can't report that we have consensus on a design yet.
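    The core idea behind the unified executors work can be illustrated with a toy: an executor is a lightweight, copyable handle whose execute() submits a callable to some execution context. The names below (inline_executor, run_task) are mine for illustration and are not taken from the proposal:

```cpp
#include <utility>

// A trivial executor: execute() runs the callable immediately on the
// caller's thread. Real executors would submit to a thread pool, a GPU
// stream, etc., behind the same interface.
struct inline_executor {
    template <class F>
    void execute(F&& f) const {
        std::forward<F>(f)();
    }
};

// Generic code is written against the executor interface, not against a
// concrete execution context.
template <class Executor>
int run_task(Executor ex) {
    int result = 0;
    ex.execute([&result] { result = 42; });  // inline: completes before we return
    return result;
}
```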

    We ended the week by discussing how we could improve the way SG1 operates and how we collaborate with each other and with other contributors. I have been arguing for doing the things that successful open source projects do: SG1, its contributors, and our users are effectively a community, and even though the C++ standard is not the same as a codebase with an open source license, we want and need to collaborate effectively and efficiently to be successful and serve our users well. Even considering just SG1 and regular contributors, there are several things we can do to improve collaboration, which should eventually make SG1 more productive and thus get users good programming abstractions at a faster pace. Specific actions I suggested were (1) making the reasoning behind the design of the programming abstractions we specify more accessible, in particular for contributors who cannot participate full time, (2) increasing our focus on consensus building, and (3) enabling more work to be done between meetings (i.e., in the spirit of continuous integration). I'm hopeful that we can agree on doing at least some of these things, and I'm looking forward to any improvements.


    Last updated: March 22, 2023
