Diving deeper into NUMA systems
A common performance issue we see is certain instructions causing bottlenecks in ways that don't make obvious sense, especially in workloads with lots of threads or shared memory on NUMA systems.
For quite a while a number of us have been writing tools that exploit CPU features to give us insight into not only the instruction at the bottleneck but the data address too.
See, the instruction is only half the picture. Having the data address lets you see two distinct functions operating on what looks like distinct data that is in fact intertwined on a single cache line. The two functions end up tugging that memory back and forth between caches, causing huge latency spikes.
Sometimes the answer is to separate the data onto different cache lines; other times (in the case of locks) it may be to change the locking granularity to reduce contention.
Intel CPUs can report the data addresses of loads and stores (along with latency times for loads) through their performance counters. Userspace exploits this feature with the 'perf' tool.
A recent perf can be run with:

# perf record -a -e cpu/mem-loads,ldlat=100/pp

This samples loads system-wide on all CPUs, keeping only loads whose latency is at least 100 cycles (the ldlat=100 threshold). You can display them with
# perf report --mem --stdio
This shows some useful information:
# Overhead  Samples  Local Weight  Memory access  Symbol    Data Symbol
# ........  .......  ............  .............  ........  .......................
    17.84%      904           213  L3 hit         [.] spin  [.] 0x00007f1f84137080
    15.47%     1478           113  L3 hit         [.] spin  [.] 0x00007f1f84137080
    13.29%      780           184  L3 hit         [.] spin  [.] 0x00007f1f84137000
    12.69%      637           215  L3 hit         [.] spin  [.] 0x00007f1f84137080
    10.87%      624           188  L3 hit         [.] spin  [.] 0x00007f1f84137000
     6.45%      330           211  L3 hit         [.] spin  [.] 0x00007f1f84137080
     3.95%      384           111  L3 hit         [.] spin  [.] 0x00007f1f84137080
Now I run the same thing, except looking for stores (writes), using the mem-stores event:

# perf record -a -e cpu/mem-stores/pp
# perf report --mem --stdio

and I see:
# Overhead  Samples  Symbol            Data Symbol
# ........  .......  ................  .......................
    10.64%     2048  [.] spin          [.] 0x00007f1f84137080
    10.21%     1967  [.] spin          [.] 0x00007f1f84137000
     5.22%     1006  [.] proc2         [.] 0x00007fffd8987c68
     5.13%      987  [.] proc1         [.] 0x00007fffd8987c68
     4.90%      943  [.] acquire_lock  [.] 0x00007fffd8987c58
     2.69%      518  [.] release_lock  [.] 0x00007f1f84137080
     2.67%      514  [.] release_lock  [.] 0x00007f1f84137000
The next step is to combine the data symbols from the loads and stores to get a bigger picture of what is going on. If you look at the top two lines of each output, you see they both are accessing the data addresses 0x00007f1f84137080 and 0x00007f1f84137000.
What this tells us is that the load latency is being caused by the constant stores (writes) to those same data addresses. The symbol column reveals which functions are doing the loading and storing.
Now in this simple example, the test is just ping-ponging back and forth on two spin locks. Reducing the time each lock is held, or moving the threads onto the same NUMA node, would reduce the latency and contention.
This example just shows how an expensive cache-contention issue can slow down a NUMA system.
Come see my talk (“NUMA – Verifying it's not hurting your application performance” at Red Hat Developer Exchange) about the types of tools we have been working on to see whether your system has these types of problems and how they can be exposed easily.