Malloc Internals and You

March 2, 2017
DJ Delorie
Related topics:
Developer tools

    Introduction

    In my last blog, I mentioned I was asked to look at a malloc performance issue, but discussed the methods for measuring performance. In this blog, I'll talk about the malloc issue itself, and some measures I took to address it. I'll also talk a bit about how malloc's internals work, and how that affects your performance.

    First off, a bit of terminology - throughout this blog, these terms are defined as follows:

    memory
    A region of the process's virtual address space, independent of whether it's in RAM or swapped out to disk.
    allocated
    Set aside for the application to use, said of a chunk or region of memory.
    chunk
    A region of contiguous memory that may be returned by malloc() or has been passed to free(), plus overhead.
    heap
    A large contiguous region of memory in which chunks and other overhead (such as the arena structure) exist.
    arena
    A collection of one or more heaps, plus the overhead needed to keep track of lists of chunks in various states, such as the fastbins and the unsorted chunk list.

    In an old-school single-threaded application, when you malloc memory, a chunk is chosen (or created) from the heap (back then, there was only one) and returned to the application. In a modern multi-threaded application[*], sometimes two threads want to malloc memory at the same time. If you're lucky, they don't step on each other's toes, but... you're not that lucky. So malloc uses a lock to make the threads "take turns" accessing its internal structures. Still, taking turns means one thread is doing nothing for a while, which can't be good for performance, and the locks themselves can take a significant amount of time, as they often need to synchronize access across CPU caches and sometimes across physical CPUs.

    [*] Even if your application never starts a second thread, it's handled like a multi-threaded application, just in case.

    In The GNU C Library's (glibc's) malloc, this is partly addressed by allowing applications to have more than one arena from which memory can be allocated. Each arena "owns" one or more heaps, or regions of memory space, from which memory is allocated. If a thread needs to malloc memory and another thread has locked the arena it wants to use, it may switch to a different arena or create a new one. A single-threaded application will only ever need one arena, since there won't be any thread contention. The number of arenas is limited to eight times the number of processor cores on 64-bit systems (two times on 32-bit systems), so a 64-bit quad-core machine can have up to 32 arenas. You can change this limit by setting the MALLOC_ARENA_MAX environment variable to the number of arenas you want.

    For further reading about glibc's malloc internals, visit the glibc Wiki's "Malloc Internals" page at https://sourceware.org/glibc/wiki/MallocInternals

    Tunings

    So what does this mean to your application? If you know how malloc works, you might be able to tweak your application, or tune glibc's implementation, to improve performance. Here are some things to try:

    • If you know how many threads will be making allocations, you can set MALLOC_ARENA_MAX (or call mallopt() with M_ARENA_MAX in your application) to set a number of arenas that makes sense for your threads.
    • If your application mixes very large and small allocations, try setting the MALLOC_MMAP_THRESHOLD_ environment variable (the M_MMAP_THRESHOLD parameter to mallopt()) to just under the "very large" size. Allocations larger than this size go directly to mmap(), which means that when you free() them, they won't end up in the middle of your heap and prevent coalescing of smaller chunks later.
    • Further, if you do lots of small allocations, changes to the MALLOC_TRIM_THRESHOLD_ environment variable might help keep your heap from becoming too fragmented.
    • The more consistent your allocation sizes are, the easier it is for malloc to find chunks to give you. However, malloc internally rounds sizes up to the size of two pointers (i.e. 8 bytes on 32-bit systems, or 16 bytes on 64-bit). If you keep "consistent" allocations in one thread and "varying" ones in another, they're more likely to end up in separate arenas, which might help.
    • Likewise, if your allocations tend to be just a bit bigger than the internal alignment, you're going to waste a lot of address space. That is, don't make a lot of 17-byte requests, because you'll waste 15 bytes per allocation, or nearly half your memory! If you need a lot of these types of blocks, and they need not be as aligned as malloc guarantees, consider layering an app-specific cache of blocks over malloc, so you can pack them tightly and return them quickly.

    For further information about malloc tuning, see the malloc and mallopt man pages ("man malloc" and "man mallopt") and the glibc tunables documentation (a copy is installed with glibc, or see https://www.gnu.org/software/libc/manual/html_node/Tunables.html).

    Thread-Local Cache

    Still, every thread must lock an arena just in case some other thread comes along and also tries to use it. This lock turned out to be quite expensive compared to the cost of doing the allocation itself. Can we allocate memory without locking? There are two solutions: one involves "lockless" programming using atomic operations (which are used for part of the fastbins, an intermediate chunk cache inside malloc), and one involves thread-local storage (TLS), which can be accessed without a lock. I chose TLS to add a per-thread cache (tcache) of memory that can be accessed quickly, as TLS access requires no locks or atomics. Memory returned to the heap via free() might be stored in the tcache, so a later call to malloc() can return it quickly, without the cost of a lock. The cache might also be pre-filled once a lock is taken for other reasons.

    The tcache will be tunable via the following tunables (part of glibc 2.25's new tunables infrastructure, and subject to change):

    glibc.malloc.tcache_max
    the maximum size chunk that may be stored in a tcache (in bytes)
    glibc.malloc.tcache_count
    the maximum number of chunks of each size that may be stored in a tcache. Remember that chunk sizes are rounded up; "each size" refers to the rounded size, not the unrounded value you pass to malloc().
    glibc.malloc.tcache_unsorted_limit
    how many entries in the unsorted list are checked while trying to pre-fill the tcache.

    Thus, the maximum number of chunks that can be stored in a tcache is (for example, on a 64-bit machine) tcache_max * tcache_count / 16 (the 16 accounts for rounding).

    If you have many threads that do lots of mallocs and frees of a small number of block sizes, watch for a future update about my tcache work. It's currently being reviewed for inclusion in upstream glibc. If you're impatient, you can check out the dj/malloc-tcache branch from glibc's upstream git repo and play with it yourself :-)

    Last updated: March 5, 2017
