
How to count software events using the Linux perf tool

April 23, 2019
William Cohen
Related topics:
Developer tools
Related products:
Red Hat Enterprise Linux

    The Linux perf tool was originally written to allow access to the performance monitoring hardware that counts hardware events, such as instructions executed, processor cycles, and cache misses. However, it can also be used to count software events, which can be useful in gauging how frequently some part of the system software is executed.

    Recently, someone at Red Hat asked whether there was a way to get a count of system calls being executed on the system. The kernel has a predefined software trace point, raw_syscalls:sys_enter, which collects that exact information. It counts each time a system call is made. To use the trace point events, the perf command needs to be run as root.

    The following command gives a system-wide count (-a option) of system calls (-e raw_syscalls:sys_enter) every second (-I 1000):

    # perf stat -a -e raw_syscalls:sys_enter -I 1000
    #           time             counts unit events
         1.000640941              1,250      raw_syscalls:sys_enter                                      
         2.001183785              1,901      raw_syscalls:sys_enter                                      
         3.001601593              1,922      raw_syscalls:sys_enter   
    
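    The interval output lends itself to quick post-processing. As a sketch (note that perf stat writes its interval data to stderr, so you would redirect it with 2>), an awk one-liner can report the peak per-interval syscall count; the example below replays the sample output above from a file rather than running perf:

```shell
# Hypothetical saved output: perf stat -a -e raw_syscalls:sys_enter -I 1000 2> counts.txt
cat > counts.txt <<'EOF'
     1.000640941              1,250      raw_syscalls:sys_enter
     2.001183785              1,901      raw_syscalls:sys_enter
     3.001601593              1,922      raw_syscalls:sys_enter
EOF
# Strip the thousands separators from the counts column and track the largest count seen.
awk '{gsub(",", "", $2); c = $2 + 0; if (c > max) max = c} END {print max}' counts.txt
```

    The same pattern works for any perf stat -I output, since the count is always the second whitespace-separated field of each interval line.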

    The raw_syscalls:sys_enter trace point is just one of the kernel's predefined trace point events. To list the other 1,000+ predefined trace point events, run the following as root:

    # perf list tracepoint
    
    List of pre-defined events (to be used in -e):
    
      block:block_bio_backmerge                          [Tracepoint event]
      block:block_bio_bounce                             [Tracepoint event]
      block:block_bio_complete                           [Tracepoint event]
      block:block_bio_frontmerge                         [Tracepoint event]
      block:block_bio_queue                              [Tracepoint event]
      block:block_bio_remap                              [Tracepoint event]
      block:block_dirty_buffer                           [Tracepoint event]
      block:block_getrq                                  [Tracepoint event]
      block:block_plug                                   [Tracepoint event]
      ...
    

    You may want a counter for some arbitrary kernel function that does not yet have a trace point. No problem: you can define your own probe points and then use them in the perf stat command to monitor functions that implement expensive operations. For example, clearing a 2MB huge page takes approximately 500 times longer than clearing a traditional 4KB page. These latencies can be noticeable, and you might want to know when a significant number of these delays occur.
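    The roughly 500x figure tracks the size ratio: a 2MB huge page spans 512 times as many bytes as a 4KB base page, so zeroing one touches about 500 times as much memory. A quick sanity check with shell arithmetic:

```shell
# Bytes in a 2MB huge page divided by bytes in a 4KB base page.
echo $(( (2 * 1024 * 1024) / (4 * 1024) ))
```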

    The following makes a probe point on the kernel's clear_huge_page function accessible to perf:

    # perf probe --add clear_huge_page
    Added new event:
      probe:clear_huge_page (on clear_huge_page)
    
    You can now use it in all perf tools, such as:
    
    	perf record -e probe:clear_huge_page -aR sleep 1
    

    The following reports the count every 10 seconds (10,000 milliseconds):

    # perf stat -a -e probe:clear_huge_page -I 10000
    #           time             counts unit events
        10.000241215                 73      probe:clear_huge_page                                       
        20.001129381                  4      probe:clear_huge_page                                       
        30.001567364                  3      probe:clear_huge_page                                       
        40.002202895                  2      probe:clear_huge_page                                       
        50.003554968                  1      probe:clear_huge_page                                       
        50.316752807                  0      probe:clear_huge_page
        ...
    
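    Summing the interval counts gives the total number of huge-page clears over the run. As a sketch against the sample output above (again replayed from a file, since perf stat writes interval data to stderr):

```shell
cat > clears.txt <<'EOF'
    10.000241215                 73      probe:clear_huge_page
    20.001129381                  4      probe:clear_huge_page
    30.001567364                  3      probe:clear_huge_page
    40.002202895                  2      probe:clear_huge_page
    50.003554968                  1      probe:clear_huge_page
EOF
# Accumulate the counts column across all intervals.
awk '{gsub(",", "", $2); total += $2} END {print total}' clears.txt
```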

    When you no longer need the probe point on the clear_huge_page function, remove it as shown below.

    # perf probe --del=probe:clear_huge_page
    Removed event: probe:clear_huge_page
    

    Perf probe points can also be placed in user-space executables. You may need to compile the code with debuginfo enabled (GCC's -g option) or install the debuginfo RPMs so that perf can find the locations of functions. To place a probe on the malloc function in the glibc library, specify the executable with the --exec option.

    # perf probe --exec=/lib64/libc-2.17.so --add malloc
    Added new event:
      probe_libc:malloc    (on malloc in /usr/lib64/libc-2.17.so)
    
    You can now use it in all perf tools, such as:
    
    	perf record -e probe_libc:malloc -aR sleep 1
    

    Using probe_libc:malloc, you can count the number of malloc calls occurring every 10 seconds. Below is output from a machine that sits idle for the first 20 seconds; then a parallel kernel build is started, and the number of malloc calls increases dramatically.

    # perf stat -a -e probe_libc:malloc -I 10000
    #           time             counts unit events
        10.000900150                  2      probe_libc:malloc                                           
        20.001803180                  0      probe_libc:malloc                                           
        30.002286255          1,829,385      probe_libc:malloc                                           
        40.002442647         12,553,306      probe_libc:malloc                                           
        50.002578104         15,579,692      probe_libc:malloc
        ...
    
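    With -I 10000, dividing each count by the 10-second interval gives an approximate calls-per-second rate. A sketch against the two busiest samples above:

```shell
cat > mallocs.txt <<'EOF'
    30.002286255          1,829,385      probe_libc:malloc
    40.002442647         12,553,306      probe_libc:malloc
EOF
# Integer calls-per-second for each 10-second interval.
awk '{gsub(",", "", $2); print int($2 / 10), "calls/sec"}' mallocs.txt
```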

    Once you're done with the user-space probe, it can be deleted:

    # perf probe --exec=/lib64/libc-2.17.so --del malloc
    Removed event: probe_libc:malloc
    

    Using perf stat with software probe points can help you answer the question of how frequently some piece of code executes. For more information about setting up software probe points, see the perf-probe man page.

    Last updated: November 5, 2025
