Improving GCC’s internals

September 16, 2014
David Malcolm
    If you've done any C or C++ development on Fedora or Red Hat Enterprise Linux (RHEL), you'll have used GCC, the GNU Compiler Collection.

    Red Hat has long been a leading contributor to GCC, and this continues as we work with others in the "upstream" GCC community on the next major release: GCC 5.

    In this post I'll talk about some of the deep architectural changes I've been making to GCC. You won't see these changes directly unless you look at GCC's own source code, but they make GCC more robust (you'll be less likely to see an "Internal Compiler Error") and they make GCC development easier.

    Under the hood: GCC's backend

    The code-generation part of the compiler (the "backend") uses an internal representation called Register Transfer Language (RTL). The core data structure is a hierarchy of expressions of type "rtx", organized into trees.

    For example, given the trivial C function:

    double
    product (double x, double y)
    {
      return x * y;
    }
    

    the multiplication might be expressed as this pattern (printed using a Lisp-like syntax):

    (set (reg:DF 63 [ D.1732 ])
         (mult:DF (reg/v:DF 61 [ x ])
                  (reg/v:DF 62 [ y ])))
    

    What does this mean? We have:

    (set
    

    which describes assigning a value to a destination from a source. In this case, the destination is a write to register 63:

         (reg:DF 63 [ D.1732 ])
    

    using the result of multiplying registers 61 and 62 as the source:

         (mult:DF (reg/v:DF 61 [ x ])
                  (reg/v:DF 62 [ y ]))
    

    The "DF" means that we're dealing with double-precision floats.

    You can also see annotation nodes attached to the main tree nodes, recording which higher-level constructs they relate to: in this case the three registers correspond to a temporary value with the internal name "D.1732", and to the function parameters "x" and "y". This gets used when writing out the debuginfo, for use when stepping through the code in gdb.

    Making the backend easier to hack on

    In the above representation, everything is an rtx node with a list of operands. It's flexible and powerful: for example, it's used for writing CPU descriptions, expressing the kinds of instructions that are available on a given CPU, and the backend uses these descriptions in numerous ways, such as selecting the most efficient opcode available for a given operation.
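
    For a flavour of that, here is a rough, target-neutral sketch of the kind of machine-description pattern that could match our double-precision multiply. The "muldf3" name follows GCC's standard pattern-naming convention, but the "f" constraint letter and the "fmul" assembler template are invented for illustration rather than taken from any real CPU description:

    ;; A rough, target-neutral sketch (not from any real backend): if
    ;; operands 1 and 2 are in floating-point registers, multiply them and
    ;; write the result to operand 0, emitting a hypothetical "fmul" instruction.
    (define_insn "muldf3"
      [(set (match_operand:DF 0 "register_operand" "=f")
            (mult:DF (match_operand:DF 1 "register_operand" "f")
                     (match_operand:DF 2 "register_operand" "f")))]
      ""
      "fmul\t%0, %1, %2")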

    The drawback of the "everything is just an rtx node" approach is that it can be awkward to work with when writing optimization passes. Since nodes in the tree are built and accessed as just a list of numbered operands, there is no type-checking when the nodes are accessed. There are other data structures built using this framework that aren't tree-like: in particular, linked lists of instructions, used throughout the backend. Plenty of routines in the backend expect to receive an rtx node of a particular kind, and if they don't get what they expect, you'd see an "Internal Compiler Error".
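
    To make that concrete, here is a much-simplified model of the untyped scheme. It's a sketch for illustration, not GCC's actual definitions (the real rtx_def, rtunion and XEXP are considerably more involved), but it captures why a mistake only surfaces at run time:

    enum rtx_code { SET, MULT, REG, INSN_LIST /* ...and many more */ };

    union rtunion {
      struct rtx_def *rt_rtx;  /* this operand is another rtx node */
      long rt_int;             /* this operand is an integer, e.g. a register number */
    };

    struct rtx_def {
      enum rtx_code code;      /* which kind of node this is */
      union rtunion fld[4];    /* operands; how many are valid depends on 'code' */
    };
    typedef struct rtx_def *rtx;

    /* Untyped operand access: the caller has to "just know" that operand N
       of this particular node is itself an rtx.  Nothing stops a pass from
       calling XEXP (x, 1) on a node that has no operand 1, or whose operand
       1 is really an integer; the mistake only shows up when the compiler
       runs, typically as an "Internal Compiler Error".  */
    #define XEXP(RTX, N) ((RTX)->fld[N].rt_rtx)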

    So one of many internal changes we've been working on for GCC 5 is to express the kinds of rtx nodes as types in a C++ inheritance hierarchy, so that type errors of this kind become build-time errors when the compiler itself is built, rather than run-time failures.
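
    Here is a minimal sketch of that idea. It's illustrative rather than GCC's actual class layout (the real subclasses still store their operands in the underlying rtx format, and schedule_one_insn below is a made-up routine), but it shows how the kind of node a routine needs becomes part of its signature:

    /* Base class for all RTL objects; the common fields (rtx code, machine
       mode, operand array) would live here.  */
    class rtx_def {};

    /* Nodes that can appear in the instruction stream.  */
    class rtx_insn : public rtx_def {};

    /* One cell in a linked list of instructions, with typed accessors in
       place of the old "XEXP (link, 0)" / "XEXP (link, 1)" idiom.  */
    class rtx_insn_list : public rtx_def
    {
    public:
      rtx_insn_list (rtx_insn *insn, rtx_insn_list *next)
        : m_insn (insn), m_next (next) {}

      rtx_insn *insn () const { return m_insn; }
      rtx_insn_list *next () const { return m_next; }

    private:
      rtx_insn *m_insn;
      rtx_insn_list *m_next;
    };

    /* A made-up scheduling routine, for illustration: previously it would
       have taken a plain "rtx" and trusted its callers; now passing anything
       that isn't an instruction is rejected when GCC itself is compiled.  */
    void schedule_one_insn (rtx_insn *insn);

    With typed accessors like insn () and next (), code that walks a list of instructions reads naturally (as in the instruction-scheduling loop shown below), and handing a routine the wrong kind of node becomes an error when building GCC rather than a crash when running it.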

    I've written and committed over 250 patches implementing such cleanups to the latest development branch of GCC. Doing this uncovered two bugs where optimizations were being missed (albeit on CPU architectures not supported by RHEL), which I've now fixed.

    This is my favorite kind of bug-fixing: eliminating an entire category of mistake, so that bugs of that kind can't occur again.

    As well as reducing the likelihood that you'll see an "Internal Compiler Error", this approach also leads to more readable code in the compiler's internals. For example, previously this loop from the instruction-scheduling code might be rather mystifying to a newcomer to GCC development:

    for (rtx link = insn_queue[q]; link; link = XEXP (link, 1))
      {
        rtx x = XEXP (link, 0);
        QUEUE_INDEX (x) = QUEUE_NOWHERE;
        INSN_TICK (x) = INVALID_TICK;
      }
    

    You might reasonably wonder what those "XEXP (link, 1)" and "XEXP (link, 0)" mean. Knowing that this means accessing operands 1 and 0 respectively of "link" isn't necessarily very enlightening.

    Using rtx subclasses, it can be rewritten as:

    for (rtx_insn_list *link = insn_queue[q]; link; link = link->next ())
      {
        rtx_insn *x = link->insn ();
        QUEUE_INDEX (x) = QUEUE_NOWHERE;
        INSN_TICK (x) = INVALID_TICK;
      }
    

    This replaces "XEXP (link, 1)" with "link->next ()" and "XEXP (link, 0)" with "link->insn ()", making it clear that we're simply walking down a linked list, operating on the nodes.

    What's next

    The changes described above will make GCC's backend more robust and easier to hack on. Indeed, the simpler implementation code should help us make GCC generate faster code. Stay tuned for more posts on the work we're doing for GCC 5 (targeting a 2015 upstream release) and the kinds of improvements that the above work enables.

    Last updated: February 23, 2024
