Shenandoah GC in JDK 13, Part 2: Eliminating the forward pointer word

In this series of articles, I'll be discussing new developments of Shenandoah GC coming up in JDK 13. In part 1, I looked at the switch of Shenandoah's barrier model to load reference barriers and what that means.

The change I want to talk about here addresses another frequent—perhaps the most frequent—concern about Shenandoah GC: the need for an extra word per object. Many believe this is a core requirement for Shenandoah, but it is actually not, as you'll see below.

Let's first look at the usual layout of an object in the Hotspot JVM:

0: [mark-word ]
8: [class-word ]
16: [field 1 ]
24: [field 2 ]
32: [field 3 ]

Each section here marks a heap word. That would be 64 bits on 64-bit architectures and 32 bits on 32-bit architectures.

The first word is the so-called mark word, or header of the object. It is used for a variety of purposes. For example, it can keep the hash-code of an object; it has 3 bits that are used for various locking states; some GCs use it to track object age and marking status; and it can be "overlaid" with a pointer to the "displaced" mark, to an "inflated" lock, or, during GC, the forwarding pointer.
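
Roughly, and only as an illustration, the 64-bit mark word of an unlocked object in HotSpot of that era is divided up like this (exact bit counts vary between JDK versions):

[unused:25 | identity hash:31 | unused:1 | age:4 | biased_lock:1 | lock:2]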

The second word is reserved for the klass pointer. This is simply a pointer to the Hotspot-internal data structure that represents the class of the object.

Arrays would have an additional word next to store the array length. What follows is the actual payload of the object, that is, fields and array elements.
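
For example, a long[3] array would look roughly like this (glossing over details such as compressed class pointers and alignment padding):

0: [mark-word ]
8: [class-word ]
16: [array length]
24: [element 0 ]
32: [element 1 ]
40: [element 2 ]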

When running with Shenandoah enabled, the layout would look like this instead:

-8: [fwd pointer]
0: [mark-word ]
8: [class-word ]
16: [field 1 ]
24: [field 2 ]
32: [field 3 ]

The forwarding pointer is used for Shenandoah's concurrent evacuation protocol (a sketch in code follows this list):

  • Normally, it points to itself -> the object is not evacuated yet.
  • When evacuating (by the GC or via a write-barrier), we first copy the object, then install a pointer to that copy in the forwarding-pointer word using an atomic compare-and-swap. The CAS can fail and instead yield the pointer that a competing thread has already installed; only one copy wins.
  • Now, the canonical copy to read-from or write-to can be found simply by reading this forwarding pointer.
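
Here is a minimal sketch of that copy-then-CAS protocol. The type object_t, its fields, and the helper names are made up for illustration; the real HotSpot code looks different, but the race resolution works the same way:

#include <atomic>
#include <cstring>

// Toy object: the extra word in front is the forwarding pointer, which
// initially points to the object itself, meaning "not evacuated yet".
struct object_t {
    std::atomic<object_t*> fwd;
    char payload[24];
};

// Copy 'from' into a fresh to-space slot and try to publish the copy by
// CAS-ing the forwarding pointer. Only one copy wins the race.
object_t* evacuate(object_t* from, object_t* to) {
    std::memcpy(to->payload, from->payload, sizeof(to->payload));
    to->fwd.store(to);                      // the new copy points to itself

    object_t* expected = from;              // "still points to itself"?
    if (from->fwd.compare_exchange_strong(expected, to)) {
        return to;        // our copy won
    }
    return expected;      // another thread won; 'expected' now holds its copy
}

// The read-barrier: the canonical copy is whatever the forwarding pointer says.
object_t* resolve(object_t* obj) {
    return obj->fwd.load();
}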

The advantage of this protocol is that it's simple and cheap. The cheap aspect is important here, because, remember, Shenandoah needs to resolve the forwardee for every single read or write, even primitive ones. And, using this protocol, the read-barrier for this would be a single instruction:

mov -8(%rax), %rax

That's about as simple as it gets.

The disadvantage is obviously that it requires more memory. In the worst case, for objects without any payload, that's one more word for an otherwise two-word object. That's 50% more. With more realistic object size distributions, you'd still end up with 5%-10% more overhead, YMMV. This also costs performance: allocating the same number of objects hits the heap ceiling sooner, triggering GC cycles more often and thus reducing throughput.

If you've read carefully so far, you will have noticed that the mark word is also used/overlaid by some GCs to carry the forwarding pointer. So, why not do the same in Shenandoah? The answer is (or used to be) that reading the forwarding pointer then requires a little more work: we need to somehow distinguish a true mark word from a forwarding pointer. That is done by setting the lowest two bits in the mark word. Those are usually used as locking bits, but the combination 0b11 is not a legal combination of lock bits. In other words, when those two bits are set, the mark word, with the lowest bits masked to 0, is to be interpreted as the forwarding pointer. This decoding of the mark word is significantly more complex than the simple read of the forwarding pointer shown above. I did in fact build a prototype a while ago, and the additional cost of the read-barriers was prohibitive and did not justify the savings.

All of this changed with the recent arrival of load reference barriers:

  • We no longer require read-barriers, especially not on (very frequent) primitive reads.
  • The load-reference-barriers are conditional: their slow-path (the actual resolution) is only activated when (1) GC is active and (2) the object in question is in the collection set, which is fairly infrequent. Compare that to the previous read-barriers, which were always-on. (A sketch of this follows the list.)
  • We no longer allow any access to from-space copies. The strong invariant guarantees that we only ever read from and write to to-space copies.
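
In rough sketch form, the whole barrier then looks something like this; the helper declarations are placeholders for Shenandoah-internal queries, not the actual HotSpot identifiers, and decode_forwarding is shown further below:

// Illustrative only: these declarations stand in for Shenandoah-internal
// queries; the names are not the real HotSpot ones.
typedef struct oopDesc* oop;

bool gc_is_active();            // is a collection cycle currently running?
bool in_collection_set(oop o);  // is o inside a region being evacuated?
oop  decode_forwarding(oop o);  // shown further below
oop  evacuate(oop o);           // copy o to to-space, CAS the forwarding pointer

oop load_reference_barrier(oop obj) {
    // Fast path: no GC running, or the object is outside the collection
    // set -> the barrier does nothing at all.
    if (!gc_is_active() || !in_collection_set(obj)) {
        return obj;
    }
    // Mid path: the object may already have been evacuated; just follow
    // the forwarding information.
    oop fwd = decode_forwarding(obj);
    if (fwd != obj) {
        return fwd;
    }
    // Slow path: not yet evacuated; do it ourselves and return the new copy.
    return evacuate(obj);
}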

This has two consequences. First, the from-space copy is not actually used for anything anymore, so we can use that space for the forwarding pointer instead of reserving an extra word for it. We can basically nuke the whole contents of the from-space copy and put the forwarding pointer anywhere. We only need to be able to distinguish between "not forwarded" (and we don't care about the other contents) and "forwarded" (the rest is the forwarding pointer).
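
Just for illustration, installing such a mark-word-encoded forwarding pointer during evacuation could look roughly like the following; the function name, the bare atomic word, and the retry loop are mine, not the actual HotSpot code:

#include <atomic>
#include <cstdint>

// Lowest two bits set is never a legal lock state, so it can mean "forwarded":
// the remaining bits of the mark word are the address of the to-space copy.
static const uintptr_t FORWARDED = 0b11;

// Try to mark the from-space copy (via its mark word) as forwarded to 'to'.
// Returns the winning to-space copy: ours on success, the competitor's otherwise.
void* install_forwarding(std::atomic<uintptr_t>* mark_word, void* to) {
    uintptr_t old_mark = mark_word->load();
    while ((old_mark & FORWARDED) != FORWARDED) {
        uintptr_t fwd_mark = reinterpret_cast<uintptr_t>(to) | FORWARDED;
        if (mark_word->compare_exchange_weak(old_mark, fwd_mark)) {
            return to;                       // we won the race
        }
        // CAS failed: old_mark now holds the current value; retry, or fall
        // out of the loop if someone else installed a forwarding pointer.
    }
    return reinterpret_cast<void*>(old_mark & ~FORWARDED);   // other copy won
}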

Second, the actual mid- and slow-paths of the load reference barrier are not all that hot, so we can easily afford to do a little bit of decoding there. It amounts to something like this (in pseudocode):

oop decode_forwarding(oop obj) {
  mark m = obj->load_mark();
  if ((m & 0b11) == 0b11) {
    // Lowest two bits are set: not a legal lock state, so the rest of the
    // mark word is the forwarding pointer to the to-space copy.
    return (oop) (m & ~0b11);
  } else {
    // Ordinary mark word: the object has not been forwarded.
    return obj;
  }
}

While this looks noticeably more complicated than the simple load of the forwarding pointer, it is still basically a free lunch because it's only ever executed in the not-very-hot mid-path of the load reference barrier. With this, the new object layout would be:

0: [mark word (or fwd pointer)]
8: [class word]
16: [field 1]
24: [field 2]
32: [field 3]

This approach has several advantages:

  • Obviously, it reduces Shenandoah's memory footprint by doing away with the extra word.
  • Not quite as obviously, it results in increased throughput: We can now allocate more objects before hitting the GC trigger, resulting in fewer cycles spent in actual GC.
  • Objects are packed more tightly, which reduces pressure on the CPU caches.
  • Again, the required GC interfaces become simpler: where we needed special implementations of the allocation paths (to reserve and initialize the extra word), we can now use the same allocation code as any other GC.

To give you an idea of the throughput improvements: all the GC-sensitive benchmarks that I have tried showed gains between 10% and 15%. Others benefited less or not at all, but that is not surprising for benchmarks that don't do much GC to begin with.

It is, however, important to note that the extra decoding cost does not actually show up in practice; it is basically negligible. It might show up on heavily evacuating workloads, but most applications don't evacuate that much, and most of the work is done by GC threads anyway, which makes the mid-path decoding cheap enough.

The implementation of this has recently been pushed to the Shenandoah/JDK repository. We are currently shaking out one last known bug, and then it will be ready to go upstream into the JDK 13 repository. The plan is to eventually backport it to Shenandoah's JDK 11 and JDK 8 backport repositories, and from there into RPMs. If you don't want to wait, you can already have it: check out the Shenandoah GC Wiki.

Read more

Shenandoah GC in JDK 13, Part 1: Load reference barriers

Shenandoah GC in JDK 13, Part 3: Architectures and operating systems

Last updated: July 2, 2019