
Improving GCC’s internals

September 16, 2014
David Malcolm
Related topics:
Developer tools
Related products:
Developer Toolset

    If you've done any C or C++ development on Fedora or Red Hat Enterprise Linux (RHEL), you'll have used GCC, the GNU Compiler Collection.

    Red Hat has long been a leading contributor to GCC, and this continues as we work with others in the "upstream" GCC community on the next major release:  GCC 5.


    In this post I'll talk about some of the deep architectural changes I've been making to GCC. You won't directly see these changes unless you look at GCC's own source code, but they make GCC more robust - you'll be less likely to see an "Internal Compiler Error", and they make GCC development easier.

    Under the hood: GCC's backend

    The code-generation part of the compiler (the "backend") uses an internal representation called Register Transfer Language (RTL). The core data structure is a hierarchy of expressions of type "rtx", organized into trees.

    For example, given the trivial C function:

    double
    product (double x, double y)
    {
      return x * y;
    }
    

    the multiplication might be expressed as this pattern (printed using a Lisp-like syntax):

    (set (reg:DF 63 [ D.1732 ])
         (mult:DF (reg/v:DF 61 [ x ])
                  (reg/v:DF 62 [ y ])))
    

    What does this mean? We have:

    (set
    

    which describes assigning a value to a destination from a source. In this case, the destination is a write to register 63:

         (reg:DF 63 [ D.1732 ])
    

    using the result of multiplying registers 61 and 62 as the source:

         (mult:DF (reg/v:DF 61 [ x ])
                  (reg/v:DF 62 [ y ]))
    

    The "DF" means that we're dealing with double-precision floats.

    You can also see annotation nodes attached to the main tree nodes, recording which higher-level constructs they relate to: in this case the three registers correspond to a temporary value with the internal name "D.1732", and to the function parameters "x" and "y". This gets used when writing out the debuginfo, for use when stepping through the code in gdb.
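
    The nested structure above can be modeled, in a loose, illustrative way, as a small tree of nodes, each carrying a code and a list of numbered operands. (These are hypothetical types for illustration only; GCC's real rtx is a more compact C-style structure.)

    ```cpp
    #include <cassert>
    #include <memory>
    #include <string>
    #include <utility>
    #include <vector>

    // Illustrative model of an RTL expression node: a code plus a list of
    // operands, mirroring the "everything is a list of operands" design.
    // (Hypothetical types -- not GCC's actual data structures.)
    struct Rtx {
        std::string code;                         // e.g. "set", "mult", "reg"
        int regno = -1;                           // used only by "reg" nodes
        std::vector<std::shared_ptr<Rtx>> ops;    // numbered operands
    };

    std::shared_ptr<Rtx> make_reg(int regno) {
        auto r = std::make_shared<Rtx>();
        r->code = "reg";
        r->regno = regno;
        return r;
    }

    std::shared_ptr<Rtx> make_expr(std::string code,
                                   std::vector<std::shared_ptr<Rtx>> ops) {
        auto r = std::make_shared<Rtx>();
        r->code = std::move(code);
        r->ops = std::move(ops);
        return r;
    }

    // Build (set (reg 63) (mult (reg 61) (reg 62))), the pattern shown above.
    std::shared_ptr<Rtx> build_product_pattern() {
        return make_expr("set",
                         {make_reg(63),
                          make_expr("mult", {make_reg(61), make_reg(62)})});
    }
    ```

    The key point the sketch captures is that nothing about the node itself says which kinds of operands are legal where: a "set" whose first operand is another "mult" would build just as happily.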

    Making the backend easier to hack on

    In the above representation, everything is an rtx node, with a list of operands. It's flexible and powerful - for example it's used for writing CPU descriptions, expressing the kinds of instructions that are available on a given CPU, and these descriptions are used in numerous ways by the backend, such as for selecting the most efficient opcode available for a given operation.

    The drawback of the "everything is just an rtx node" approach is that it can be awkward to work with when writing optimization passes. Since nodes in the tree are built and accessed as just a list of numbered operands, there is no type-checking when the nodes are accessed. There are other data structures built using this framework that aren't tree-like: in particular, linked lists of instructions, used throughout the backend. Plenty of routines in the backend expect to receive an rtx node of a particular kind, and if they don't get what they expect, you'd see an "Internal Compiler Error".

    So one of many internal changes we've been working on for GCC 5 is to express the kinds of rtx nodes as types in a C++ inheritance hierarchy, so that type errors of this kind become build-time errors when the compiler itself is built, rather than a run-time failure.
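
    As a rough sketch of the idea (simplified, not GCC's actual class hierarchy, though the names are loosely modeled on it), giving each kind of node its own C++ type lets a routine declare the kind it expects in its signature, so passing the wrong kind fails when the compiler itself is built rather than crashing at run time:

    ```cpp
    #include <cassert>

    // Illustrative sketch only: GCC's real hierarchy is rooted at rtx_def,
    // with subclasses such as rtx_insn. These simplified classes just show
    // how the kind of a node can become part of its static type.
    class rtx_node {
    public:
        virtual ~rtx_node() = default;
    };

    class rtx_insn_node : public rtx_node {
    public:
        explicit rtx_insn_node(int uid) : uid_(uid) {}
        int uid() const { return uid_; }
    private:
        int uid_;
    };

    class rtx_reg_node : public rtx_node {
    public:
        explicit rtx_reg_node(int regno) : regno_(regno) {}
        int regno() const { return regno_; }
    private:
        int regno_;
    };

    // Before: a routine taking a generic node had to trust its caller, and
    // a wrong kind meant a run-time "Internal Compiler Error".
    // After: the expected kind is in the signature, checked at build time,
    // so insn_uid(&some_reg_node) simply does not compile.
    int insn_uid(const rtx_insn_node *insn) {
        return insn->uid();
    }
    ```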

    I've written and committed over 250 patches implementing such cleanups to the latest development branch of GCC. Doing this uncovered two bugs where optimizations were being missed (albeit on CPU architectures not supported by RHEL), which I've now fixed.

    This is my favorite kind of bug-fixing: eliminating an entire category of mistake, so that bugs of that kind can't occur again.

    As well as reducing the likelihood of you seeing "Internal Compiler Error", this approach also leads to more readable code for the compiler's internals. For example, previously this loop from the instruction-scheduling code might be rather mystifying to a newcomer to GCC development:

    for (rtx link = insn_queue[q]; link; link = XEXP (link, 1))
      {
        rtx x = XEXP (link, 0);
        QUEUE_INDEX (x) = QUEUE_NOWHERE;
        INSN_TICK (x) = INVALID_TICK;
      }
    

    You might reasonably wonder what those "XEXP (link, 1)" and "XEXP (link, 0)" mean. Knowing that this means accessing operands 1 and 0 respectively of "link" isn't necessarily very enlightening.

    Using rtx subclasses it can be rewritten as:

    for (rtx_insn_list *link = insn_queue[q]; link; link = link->next ())
      {
        rtx_insn *x = link->insn ();
        QUEUE_INDEX (x) = QUEUE_NOWHERE;
        INSN_TICK (x) = INVALID_TICK;
      }
    

    replacing the "XEXP (link, 1)" with "link->next ()" to make it clear we're simply walking down a linked list, operating on the nodes.
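
    The typed accessors in the rewritten loop can be thought of as thin wrappers over the same two operands that "XEXP (link, 0)" and "XEXP (link, 1)" would fetch, but with the result's type checked at build time. A simplified sketch (again, illustrative stand-ins rather than GCC's actual definitions):

    ```cpp
    #include <cassert>

    // Stand-in for an instruction node; the real rtx_insn carries far more.
    struct rtx_insn {
        int queue_index = 0;
        int tick = 0;
    };

    // Simplified sketch of a typed list node: its accessors document (and
    // enforce) what each operand holds.
    class rtx_insn_list {
    public:
        rtx_insn_list(rtx_insn *insn, rtx_insn_list *next)
            : insn_(insn), next_(next) {}
        rtx_insn *insn() const { return insn_; }       // was XEXP (link, 0)
        rtx_insn_list *next() const { return next_; }  // was XEXP (link, 1)
    private:
        rtx_insn *insn_;
        rtx_insn_list *next_;
    };

    // Walk the list exactly as in the rewritten loop above.
    int clear_queue(rtx_insn_list *head) {
        int cleared = 0;
        for (rtx_insn_list *link = head; link; link = link->next()) {
            rtx_insn *x = link->insn();
            x->queue_index = -1;  // stands in for QUEUE_NOWHERE
            x->tick = -1;         // stands in for INVALID_TICK
            ++cleared;
        }
        return cleared;
    }
    ```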

    What's next

    The changes described above will make GCC's backend more robust, and easier to hack on. Indeed, the simpler implementation code should help us to make GCC generate faster code.  Stay tuned for more posts on the work we're doing in GCC 5 (targeting 2015 upstream), and the kinds of improvements that the above work enables.

    Last updated: February 23, 2024
