Anyone who engages in C/C++ development on a modern GNU/Linux system typically ends up using the GCC or LLVM compiler, to which Red Hat actively contributes. As a member of the toolchain engineering team, I mostly work on runtime libraries (glibc), but being acquainted with the internals of the compiler that builds Red Hat Enterprise Linux (RHEL) is useful.
With this goal in mind, and with a bit of advice from experienced engineers on the team, I decided to dip my toes into GCC internals by attempting to implement a small optimization. I was pointed to a list of open "tree optimization" bugs marked as easyhack. These are beginner-friendly optimizations identified by the community.
Finding a suitable GCC bug
I went through a number of reports in the bug tracker. Some were already fixed by a recent commit and just needed closing, and others weren't nearly as simple as I'd like for my first attempt. Eventually, I found what seemed like an easy bug to work on, in the form of bug 96703, reported by Gabriel Ravier. The gist of the bug is that the following expression:
(x > y) && (y == 0)
can be simplified to:
(x > 0) && (y == 0)
The optimization works because you're not interested in the result of x > y unless y is zero: when y is zero, x > y is the same comparison as x > 0, and when y is nonzero, the y == 0 operand makes the whole expression false regardless.
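A quick sanity check in C, my own illustrative snippet rather than anything from the bug report (the helper names original and simplified are made up), exercises both cases:

#include <assert.h>
#include <stdbool.h>

/* Hypothetical helpers, for illustration only. */
static bool original(int x, int y)   { return (x > y) && (y == 0); }
static bool simplified(int x, int y) { return (x > 0) && (y == 0); }

int main(void)
{
    /* If y == 0, then x > y degenerates to x > 0. */
    assert(original(5, 0) == simplified(5, 0));
    assert(original(-5, 0) == simplified(-5, 0));

    /* If y != 0, the `y == 0` operand makes both forms false. */
    assert(original(5, 3) == simplified(5, 3));
    assert(original(2, 7) == simplified(2, 7));
    return 0;
}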
In his initial comment, Gabriel mentioned that although this optimization doesn't appear to speed up the code on its own, it can potentially run faster on pipelined CPUs and make further optimizations easier. It qualifies as an optimization because it removes an "input dependence" between the two comparisons, making the first comparison depend on only one variable instead of two.
Clang/LLVM already implements this optimization, an advantage that happens to be a common thread in many of the bugs reported by Gabriel. It's nice to observe the healthy competition between the two compiler communities that leads to great compilers for everyone.
Using GCC's Match and Simplify DSL
So how would one go about implementing this optimization in GCC, as well? Thankfully, GCC comes with a purpose-built, domain-specific language (DSL) for writing expression simplifications like this one. It's described in the Match and Simplify chapter of the GCC Internals manual.
The language is declarative. A new expression simplification only needs to be declared in gcc/match.pd; the equivalent optimizer code is generated at GCC build time. A simplification takes the form of:
(simplify <expression> <simplified_expression>)
An example from the GCC Internals manual is:
(simplify
(bit_and @0 integer_all_onesp)
@0)
Items in this language are combined through Polish (prefix) notation, with a parenthesis-delimited syntax suspiciously reminiscent of Lisp. The bit_and prefix represents the AND that in C appears between expressions (the bitwise &, which for the boolean results of our two comparisons behaves like &&), but here bit_and appears before its two operands instead of between them.
This domain-specific code declares a simplification where the bitwise AND of any operand (captured as @0) and "all ones" (e.g., 0xFFFFFFFF for a 32-bit integer) is equivalent to the operand @0 itself. integer_all_onesp is a predicate that returns true for any operand that has all of its bits set.
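To make this concrete, here is a small C snippet of my own (not from the manual) that checks the two facts at play: AND with an all-ones value leaves the operand unchanged, and C comparisons evaluate to 0 or 1, so for their results the bitwise & agrees with the logical && from the bug report's expression:

#include <assert.h>

int main(void)
{
    unsigned x = 0xCAFEu;

    /* AND with all-ones leaves the operand unchanged: the identity that
       the integer_all_onesp example above encodes. */
    assert((x & 0xFFFFFFFFu) == x);

    /* Comparisons evaluate to 0 or 1, so for their results the bitwise
       `&` and the logical `&&` coincide. */
    int a = 3, b = 0;
    assert(((a > b) & (b == 0)) == ((a > b) && (b == 0)));

    return 0;
}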
After looking around in match.pd for inspiration, I could easily see that operations can be nested inside other operations, and that GCC will gladly accept, match, and simplify them. For the optimization I wanted to implement, the initial expression would be something like:
(bit_and (gt @0 @1) (eq @1 integer_zerop@2))
integer_zerop is another predicate, matching integers with the value 0. The @2 captures the operand so that it can be reused during the simplification if needed.
The optimized expression should look a bit like:
(bit_and (gt @0 ZERO) (eq @1 @2))
At this point, I didn't know how to emit 0 in the resulting expression. But once again, reading match.pd gives the idea that build_zero_cst (some_type) probably translates to a 0 value of the given type. Also, TREE_TYPE (@0) returns the type of operand @0. Combining these constructs leads to a simplified expression that GCC is likely to grok:
(bit_and (gt @0 { build_zero_cst (TREE_TYPE (@0)); })
(eq @1 @2))
Putting all of this together, the final declaration of the simplification is:
/* (x > y) && (y == 0) -> (x > 0) && (y == 0) */
(simplify
(bit_and (gt @0 @1) (eq @1 integer_zerop@2))
(bit_and (gt @0 { build_zero_cst (TREE_TYPE (@0)); })
(eq @1 @2)))
Adding this declaration to match.pd and rebuilding GCC was all I needed to do to get the change into the generated optimizer code. Now, let's see whether the change was truly an optimization.
Did it work?
Before I added the declaration, the following C code:
#include <stdbool.h>

bool
f(int x, int y)
{
return x > y && y == 0;
}
translated (on x86-64) to:
f:
cmpl %esi, %edi # compare X and Y
setg %al # set AL if X gt Y
testl %esi, %esi # test Y
sete %dl # set DL if Y eq 0
andl %edx, %eax # AL and DL
ret
After adding the declaration, GCC produces:
f:
testl %esi, %esi # test Y
sete %al # set AL if Y equals 0
testl %edi, %edi # test X
setg %dl # set DL if X gt 0
andl %edx, %eax # AL and DL
ret
The change breaks the input dependency, as I hoped, and brings the code in line with what Clang/LLVM generates.
Conclusion
Although my learning curve (from never having looked at match.pd or tree optimizations to having hacked together my first optimization) was a bit steep, I found it relatively easy for a beginner to implement a minor tree optimization using GCC's "Match and Simplify" infrastructure.
I have yet to properly test this new pattern, and there's possibly a fatal flaw in my approach. Also, there are several open questions, such as whether there is a more general pattern that optimizes for constants other than 0 and comparisons other than "greater than." I hope to answer these questions before I try submitting my very first patch. In the meantime, I'm happy to report that beginners can indeed quickly work toward their first contribution to GCC.
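As a first step toward that generalization question, a brute-force check like the hypothetical one below (my own sketch, not a GCC test; the helper names lhs and rhs are made up) at least suggests that the rewrite (x > y) && (y == C) -> (x > C) && (y == C) holds for constants other than 0:

#include <assert.h>
#include <stdbool.h>

/* Hypothetical check of the generalized rewrite
   (x > y) && (y == C)  ->  (x > C) && (y == C)
   over a small range of values and constants. */
static bool lhs(int x, int y, int c) { return (x > y) && (y == c); }
static bool rhs(int x, int y, int c) { return (x > c) && (y == c); }

int main(void)
{
    for (int c = -3; c <= 3; c++)
        for (int x = -8; x <= 8; x++)
            for (int y = -8; y <= 8; y++)
                assert(lhs(x, y, c) == rhs(x, y, c));
    return 0;
}

A similar loop could swap in other comparison operators to probe the rest of the question; whether such a generalized pattern belongs in GCC is exactly what review of a patch would decide.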
Happy hacking!