
Determining whether an application has poor cache performance

March 10, 2014
William Cohen
Related topics: Developer Tools, Linux
Related products: Developer Tools, Red Hat Enterprise Linux


    Modern computer systems include cache memory to hide the higher latency and lower bandwidth of RAM from the processor. The cache has access latencies ranging from a few processor cycles to ten or twenty cycles, rather than the hundreds of cycles needed to access RAM. If the processor must frequently obtain data from RAM rather than the cache, performance will suffer. With Red Hat Enterprise Linux 6 and newer distributions, the system's use of cache can be measured with the perf utility, available from the perf RPM.

    perf uses the Performance Monitoring Unit (PMU) hardware in modern processors to collect data on hardware events, such as cache accesses and cache misses, without undue overhead on the system. The PMU hardware is specific to the processor implementation, and the underlying events may differ between processors. For example, one processor implementation may measure events for the first-level cache closest to the processor, while another may measure events for a lower-level cache farther from the processor and closer to main memory. The cache configuration may also differ between processor models; one processor in a family may have 2MB of last-level cache while another member of the same family has 8MB. These differences make direct comparison of event counts between processors difficult.
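
    The exact hardware events available on a particular system can be listed with the perf list command; the generic event names used below, such as cache-references and cache-misses, are mapped by the kernel onto the corresponding processor-specific events where the hardware supports them:

    $ perf list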

    NOTE: Depending on the architecture, software versions, and system configuration, use of the PMU hardware by the perf utility may not be supported in KVM guest machines. In Red Hat Enterprise Linux 6, the guest virtual machine does not support the Performance Monitoring Unit hardware.

    To get a general understanding of how well or poorly an application is using the system cache, perf can collect statistics on the application and its children using the following command:

    $ perf stat -e task-clock,cycles,instructions,cache-references,cache-misses  ./stream_c.exe

    When the application completes, perf will generate a summary of the measurements like the following:

     Performance counter stats for './stream_c.exe':
    
            229.872935 task-clock                #    0.996 CPUs utilized
           626,676,991 cycles                    #    2.726 GHz
           525,543,766 instructions              #    0.84  insns per cycle
            18,587,219 cache-references          #   80.859 M/sec
             6,605,955 cache-misses              #   35.540 % of all cache refs
    
           0.230761764 seconds time elapsed

    The output above is for a simple example program that tests memory bandwidth performance. The test is relatively short: only 525,543,766 instructions were executed, requiring 626,676,991 processor cycles. The output also estimates the instructions per processor clock cycle (IPC); the higher the IPC, the more efficiently the processor is executing instructions. In this case it is 0.84 instructions per clock cycle. The superscalar processor this example was run on could potentially execute four instructions per clock cycle, giving an upper bound of 4 IPC. The IPC is reduced by delays due to cache misses.

    The ratio of cache-misses to instructions gives an indication of how well the cache is working; the lower the ratio, the better. In this example the ratio is 1.26% (6,605,955 cache-misses / 525,543,766 instructions). Because of the relatively large difference in cost between RAM and cache accesses (hundreds of cycles versus fewer than 20 cycles), even small improvements in the cache miss rate can significantly improve performance. If the cache miss rate per instruction is over 5%, further investigation is required.
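
    To make the effect of the access pattern concrete, consider a small hypothetical program (it is not part of the stream benchmark; the file name colsum.c and the variable names are invented for illustration) that sums a large matrix one column at a time. With C's row-major array layout, consecutive accesses land on different cache lines and each column pass touches far more data than the caches can hold, so most accesses are likely to miss. Running this sketch under the same perf stat command shown above should report a noticeably higher cache-miss-per-instruction ratio than a sequential walk over the same data would.

    /* colsum.c: hypothetical sketch of a cache-unfriendly access pattern.
     * Compile: gcc -O2 -g colsum.c -o colsum
     * Measure: perf stat -e task-clock,cycles,instructions,cache-references,cache-misses ./colsum
     */
    #include <stdio.h>

    #define ROWS (1 << 20)   /* 1M rows                               */
    #define COLS 16          /* 1M x 16 doubles = 128MB of data total */

    static double a[ROWS][COLS];

    int main(void)
    {
        int i, j;
        double sum = 0.0;

        /* Initialize the matrix in row-major (sequential) order. */
        for (i = 0; i < ROWS; i++)
            for (j = 0; j < COLS; j++)
                a[i][j] = (double)(i + j);

        /* Sum the matrix column by column.  Consecutive accesses are
         * COLS * sizeof(double) = 128 bytes apart, so each access lands on
         * a different cache line, and a single column pass walks through
         * 64MB of cache lines -- far more than the caches hold -- so lines
         * are evicted before the next column pass can reuse them. */
        for (j = 0; j < COLS; j++)
            for (i = 0; i < ROWS; i++)
                sum += a[i][j];

        printf("sum = %f\n", sum);
        return 0;
    }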

    perf can also provide more information about where the cache-misses events occur in the code. Ideally, the application should be compiled with debugging information (the GCC -g option) and the debuginfo RPMs should be installed for the related system-supplied executables and libraries, so that perf can map the samples back to the source code and give a better idea of where the cache misses occur.
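
    For example, assuming the benchmark's source file is named stream.c, it could be rebuilt with debugging information as follows; on Red Hat Enterprise Linux, debuginfo packages for system-supplied libraries can usually be installed with the debuginfo-install utility.

    $ gcc -O2 -g stream.c -o stream_c.exe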

    The perf stat command only provides an overall count of the events. The perf record command performs sampling, recording where in the code those hardware events occur. In this case we would like to know where cache misses occur during the execution of the code, and we can use the following command to collect that information:

    $ perf record -e cache-misses ./stream_c.exe

    When the command completes, output like the following will be printed, indicating the amount of data collected (~2093 samples) and stored in the file perf.data:

    [ perf record: Woken up 1 times to write data ]
    [ perf record: Captured and wrote 0.048 MB perf.data (~2093 samples) ]

    The perf.data file is analyzed with perf report. By default perf report will put up a simple Text User Interface (TUI) to navigate the collected data. The --stdio option provides simple text output. Below is the "perf report" output for the earlier "perf record". The "perf report" output starts with a header listing when and where the data was collected. The header also includes details about the hardware/software environment and the command used to collect the data.

    A sorted list follows the header information and shows the functions in which the instructions associated with those events are located. In this case the simple stream benchmark has over 85% of the samples in main. This is not surprising, given that this is a simple benchmark that stresses memory performance. However, the second function on the list is the kernel function clear_page_c_e; the [k] indicates that this function is in the kernel.

    $ perf report --stdio
    # ========
    # captured on: Wed Oct  2 14:38:59 2013
    # hostname : dhcp129-131.rdu.redhat.com
    # os release : 3.10.0-31.el7.x86_64
    # perf version : 3.10.0-31.el7.x86_64.debug
    # arch : x86_64
    # nrcpus online : 8
    # nrcpus avail : 8
    # cpudesc : Intel(R) Core(TM) i7-3740QM CPU @ 2.70GHz
    # cpuid : GenuineIntel,6,58,9
    # total memory : 7859524 kB
    # cmdline : /usr/bin/perf record -e cache-misses ./stream_c.exe
    # event : name = cache-misses, type = 0, config = 0x3, config1 = 0x0, config2 =
    # HEADER_CPU_TOPOLOGY info available, use -I to display
    # HEADER_NUMA_TOPOLOGY info available, use -I to display
    # pmu mappings: cpu = 4, software = 1, tracepoint = 2, uncore_cbox_0 = 6, uncore
    # ========
    #
    # Samples: 888  of event 'cache-misses'
    # Event count (approx.): 6576993
    #
    # Overhead       Command      Shared Object                        Symbol
    # ........  ............  .................  ............................
    #
        85.19%  stream_c.exe  stream_c.exe       [.] main
        11.20%  stream_c.exe  [kernel.kallsyms]  [k] clear_page_c_e
         2.23%  stream_c.exe  stream_c.exe       [.] checkSTREAMresults
         0.62%  stream_c.exe  [kernel.kallsyms]  [k] _cond_resched
         0.20%  stream_c.exe  [kernel.kallsyms]  [k] trigger_load_balance
         0.13%  stream_c.exe  [kernel.kallsyms]  [k] task_tick_fair
         0.13%  stream_c.exe  [kernel.kallsyms]  [k] free_pages_prepare
         0.11%  stream_c.exe  [kernel.kallsyms]  [k] perf_pmu_disable
         0.09%  stream_c.exe  [kernel.kallsyms]  [k] tick_do_update_jiffies64
         0.09%  stream_c.exe  [kernel.kallsyms]  [k] __acct_update_integrals
         0.00%  stream_c.exe  [kernel.kallsyms]  [k] flush_signal_handlers
         0.00%  stream_c.exe  [kernel.kallsyms]  [k] perf_event_comm_output
    
    #
    # (For a higher level overview, try: perf report --sort comm,dso)
    #

    You may want more details on which lines of code in main those cache misses are associated with. The perf annotate command can provide exactly that information. The left column of the perf annotate output shows the percentage of samples associated with each instruction. The right column shows intermixed source code and assembly language. You can use the cursor keys to scroll through the output and 'H' to look at the hottest instructions, those with the most samples. In this case, the addition of elements from two arrays is the hottest area of the code.

    $ perf annotate
    
    main
           │       movsd  %xmm0,(%rsp)                                             ▒
           │       xchg   %ax,%ax                                                  ▒
           │     #ifdef TUNED                                                      ▒
           │             tuned_STREAM_Add();                                       ▒
           │     #else                                                             ▒
           │     #pragma omp parallel for                                          ▒
           │             for (j=0; j<N; j++)                                       ▒
           │                 c[j] = a[j]+b[j];                                     ▒
      1.96 │2a0:   movsd  0x24868e0(%rax),%xmm0                                    ▒
     11.17 │       add    $0x8,%rax                                                ▒
      2.21 │       addsd  0x15444d8(%rax),%xmm0                                    ▒
     13.74 │       movsd  %xmm0,0x6020d8(%rax)                                     ▒
           │             times[2][k] = mysecond();                                 ◆
           │     #ifdef TUNED                                                      ▒
           │             tuned_STREAM_Add();                                       ▒
           │     #else                                                             ▒
           │     #pragma omp parallel for                                          ▒
           │             for (j=0; j<N; j++)                                       ▒
      3.56 │       cmp    $0xf42400,%rax                                           ▒
           │     ↑ jne    2a0                                                      ▒
           │                 c[j] = a[j]+b[j];                                     ▒
           │     #endif                                                            ▒
    Press 'h' for help on key bindings
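
    As a small illustration of the kind of change such an analysis can suggest, the hypothetical column-by-column summation sketched earlier could simply have its loops interchanged so that the matrix is walked in row-major (sequential) order. Each fetched cache line is then fully used and the hardware prefetchers can stream the data in ahead of the loop, which should lower the cache-miss ratio reported by perf stat considerably.

    /* Loop interchange applied to the hypothetical colsum.c sketch above:
     * walking the matrix row by row makes consecutive accesses adjacent in
     * memory, so all of the doubles in each fetched cache line are used. */
    for (i = 0; i < ROWS; i++)
        for (j = 0; j < COLS; j++)
            sum += a[i][j];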

    Techniques to reduce cache misses depend heavily on the specifics of the cache organization and the operation of the code. For more information, refer to the software optimization manuals from the processor manufacturers:

    AMD:

    • Software Optimization Guide for AMD Family 16h Processors, Publication #52128, Revision 1.1, March 2013. http://developer.amd.com/wordpress/media/2012/10/SOG_16h_52128_PUB_Rev1_1.pdf
    • Software Optimization Guide for AMD Family 15h Processors, Publication #47414, Revision 3.06, January 2012. http://support.amd.com/us/Processor_TechDocs/47414_15h_sw_opt_guide.pdf
    • Software Optimization Guide for AMD Family 10h and 12h Processors, Publication #40546, Revision 3.13, February 2011. http://developer.amd.com/wordpress/media/2009/04/40546.pdf

    Intel:

    • Intel® 64 and IA-32 Architectures Optimization Reference Manual, Order Number 248966-028, July 2013. http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf
    Last updated: October 18, 2018
