This article describes application compatibility with cgroups v2 and how to address related concerns. This article is a bit deep, so if you only have a minute, you might want to skip to the FAQ or the Overview sections. This latest version of the post includes community feedback.
Container awareness is one of the topics I most enjoy researching and testing, both professionally and academically. I consider this article the official part 2 of How to use Java container awareness in OpenShift 4, serving as its expansion package.
Version
It goes without saying that using the latest images is always recommended. This holds true for Node.js, OpenJDK, and other middleware products.
What is cgroups?
cgroups (Control Groups) is a core Linux kernel feature designed to allocate and limit resources (like CPU, memory, and disk I/O) for a group of processes. It is the fundamental technology that powers container boundaries.
cgroups detection is about more than memory; it includes both memory and CPU settings. This is critical for heap and thread pool calculations in Java and for heap calculation in Node.js.
What is cgroups v2 (cgv2)?
As noted above, cgroups is the kernel feature that enforces container boundaries. cgroups v2 refines this feature with improvements to memory handling and a unified hierarchical structure.
cgroups v2 uses a more modular approach with controllers such as cpu, cpuset, io, irq, memory, misc, and pids. It is structured like this:
cgroups v2
└── modules
    └── module values

For example, what was a single file named cpu_cpuset_cpus in cgroups v1 is now handled by the cpuset module with its various configuration options.
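You can inspect this modular layout from inside a running container. Here is a minimal sketch on a cgroups v2 host (the controller list and CPU range shown are illustrative and vary by configuration):

$ cat /sys/fs/cgroup/cgroup.controllers
cpuset cpu io memory hugetlb pids misc

$ cat /sys/fs/cgroup/cpuset.cpus.effective
0-3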
From /sys/fs/cgroup/memory/memory.stat (the cgroups v1 path):
- RSS: Memory actively used by your JVM process (Java heap, metaspace, threads, native buffers).
- cache: Page cache (file system cache, JARs, class files, logs). Kernel can reclaim this if needed.
See Table 1 for a comparison of how v1 and v2 report memory.
| cgroups version | Output |
|---|---|
| cgv1 | Reports active memory and uses a file counter. |
| cgv2 | Reports active and cache memory, offering more detail with file_mapped memory. |
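To see this difference in practice, you can compare the raw counters directly. A minimal sketch, assuming the default mount points (the values shown are illustrative):

### cgroups v1: per-controller mount
$ grep -E '^(rss|cache) ' /sys/fs/cgroup/memory/memory.stat
cache 4096000
rss 268435456

### cgroups v2: unified hierarchy with more granular counters
$ grep -E '^(anon|file|file_mapped) ' /sys/fs/cgroup/memory.stat
anon 268435456
file 4096000
file_mapped 1048576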
Which specific versions of Red Hat OpenShift correspond to the cgroups version?
Table 2 details the cgroups compatibility and the default version used in new installations across specific Red Hat OpenShift Container Platform versions.
| OpenShift Container Platform | Compatible cgroups version | Default (new installation) |
|---|---|---|
| OpenShift Container Platform 4.13 | cgroups v1 | cgroups v1 |
| OpenShift Container Platform 4.14 | cgroups v1 and cgroups v2 | cgroups v2 |
| OpenShift Container Platform 4.15 | cgroups v1 and cgroups v2 | cgroups v2 |
| OpenShift Container Platform 4.16 | cgroups v1 and cgroups v2 | cgroups v2 |
| OpenShift Container Platform 4.17 | cgroups v1 and cgroups v2 | cgroups v2 |
| OpenShift Container Platform 4.18 | cgroups v1 and cgroups v2 | cgroups v2 |
| OpenShift Container Platform 4.19 | cgroups v2 | cgroups v2 |
| OpenShift Container Platform 4.20 | cgroups v2 | cgroups v2 |
Table 3 provides a simplified overview.
| Version | cgroups v1 | cgroups v2 |
|---|---|---|
| OpenShift Container Platform 4.13 or earlier | Yes | No |
| OpenShift Container Platform 4.14-4.18 | Yes | Yes |
| OpenShift Container Platform 4.19 and later | No | Yes |
Note
Migrations keep the cgroups version that is already set. New installations default to cgroups v2 as of OpenShift 4.14.
You cannot upgrade directly from OpenShift 4.10 to 4.19. This migration requires several intermediate steps.
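On OpenShift versions that support both modes, the cluster-wide cgroup mode is typically controlled through the nodes.config cluster resource. A minimal sketch, assuming cluster-admin access (verify against your version's documentation before applying):

### Inspect the current cgroup mode (an empty value means the version default)
$ oc get nodes.config cluster -o jsonpath='{.spec.cgroupMode}'
v1

### Switch the cluster to cgroups v2 before upgrading to OpenShift 4.19
$ oc patch nodes.config cluster --type merge --patch '{"spec":{"cgroupMode":"v2"}}'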
Is Node.js cgroups v2 compatible?
Yes. Node.js 20 and later are fully cgroups v2 compatible, and Node.js 22 includes further improvements. For full details, see Node.js 20+ memory management in containers.
Is .NET cgroups v2 compatible?
Yes, .NET 5+ is fully cgroups v2 compatible. Read the full details in the .NET core pull request.
Is Java cgroups v2 compatible?
Yes. The following minimum OpenJDK versions include the required support and detection mechanisms for cgroups v2:
- OpenJDK versions 8u372 and later support cgroups v2 detection.
- OpenJDK versions 11.0.16 and later support cgroups v2 detection.
- OpenJDK 17 or any later release fully supports cgroups v2.
Java cgroups v2 compatibility was implemented through two key JIRA tickets in the upstream OpenJDK codebase, detailed in Table 4.
| JIRA | Purpose | cgroups v1 or cgroups v2 |
|---|---|---|
| JDK-8146115 | Introduces an upstream mechanism for cgroups detection. Before this JIRA, there was no upstream mechanism for cgroups detection. | cgroups v1 |
| JDK-8230305 | This JIRA specifically adds cgroups v2 coverage in the OpenJDK code. | cgroups v1 and v2 |
The latest images rely on OpenJDK's native C++ detection. Previously, Red Hat container images used workaround scripts to detect cgroups and impose Xmx directly, but these scripts are now obsolete.
The MaxRAMPercentage change enabled cgroups v1 and v2 support via the JDK itself. As Table 5 shows, OpenJDK's code now handles 100% of the cgroups v1/v2 detection; no extra script is used in any capacity to determine Xmx. The C++ code reads both the cgroups v1 and cgroups v2 hierarchies in src/os/linux/vm.
| Before changes | After changes |
|---|---|
| OpenJDK C++ code does not detect cgroups | OpenJDK C++ code detects the cgroups v1 and v2 hierarchy |
| Initial script is used to detect cgroups | Initial script is not used to detect cgroups |
Before these changes, run-java.sh used to impose limits by calculating the container size and setting Xmx/Xms. Now, the OpenJDK C++ code itself reads the cgroups limits upon startup. For example:
INFO exec -a "java" java -XX:MaxRAMPercentage=80.0 -XX:+UseParallelGC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:+ExitOnOutOfMemoryError -cp "." -jar /deployments/quarkust-test-1.0.0-SNAPSHOT-runner.jar
The following list details the seven default arguments provided by run-java.sh:
- UseParallelGC: Enables the Parallel Garbage Collector (GC).
- MaxRAMPercentage: Sets the maximum heap size at 80% of the container (previously 50%).
- MinHeapFreeRatio: Default value is 10.
- MaxHeapFreeRatio: Default value is 20.
- GCTimeRatio: Default value is 4.
- AdaptiveSizePolicyWeight: Default value is 90. Controls the weight of previous garbage collections relative to the target time.
- ExitOnOutOfMemoryError: Forces the JVM to exit when a java.lang.OutOfMemoryError exception occurs.
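Because run-java.sh assembles these arguments, you can adjust them through its environment variables instead of replacing the entrypoint. A minimal sketch using the example21 image built in Scenario 1 below (the percentage is illustrative):

$ podman run --memory=1000m --rm -it \
    -e JAVA_OPTS_APPEND="-XX:MaxRAMPercentage=60.0" \
    localhost/example21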
Do I need to upgrade the application or JDK for cgroups v2?
Probably not. Java has been cgroups v2 compatible since OpenJDK 8u372+, OpenJDK 11.0.16+, OpenJDK 17, and OpenJDK 21. Node.js has been compatible since version 20.
You will not need to change the application itself, only update the OpenJDK to a compatible version. The application almost certainly inherits this from the OpenJDK layer.
However, Red Hat middleware applications, despite inheriting compatibility from the OpenJDK layer, might still use specific scripts for cgroups memory calculation and memory settings (Red Hat JBoss Enterprise Application Platform and Red Hat Data Grid, for example). This is due to a legacy approach of using a script to detect and impose the max heap before the OpenJDK C++ code handled container detection. All later versions are cgroups v2 compatible.
Therefore, those applications can be impacted if their scripts are not cgroups v2 compatible.
If I upgrade to OpenShift 4.18 or earlier, is this a concern?
No. If you migrate OpenShift from a version that uses cgroups v1, the cluster stays on cgroups v1; the upgrade does not change it. In other words, when a cluster is upgraded, cgroups is not updated automatically.
If I upgrade to OpenShift 4.19 or later, is this a concern?
It can be, depending on the current cgroups version, because OpenShift 4.19 removes cgroups v1 entirely.
- If you migrate OpenShift from a version using cgroups v2 to OpenShift 4.19, the cgroups version stays the same. Your applications should continue to work as expected.
- If you migrate OpenShift from a version using cgroups v1 to OpenShift 4.19, you must manually change to cgroups v2. Check your applications and platforms for compatibility before you begin. The OpenShift 4.19 migration will halt if hosts still use cgroups v1.
What if my application is not cgroups v2 compatible?
An application that is not cgroups v2 compatible will use the host limits. Its heap is then decoupled from the container boundaries, and the application is no longer container aware.
The direct impact is clear:
- Memory: The cgroups OutOfMemory (OOM) killer on the host eventually stops the process if memory usage balloons.
- CPU: Running without limits means the host is used as the reference for thread calculations (for anything that is CPU bound), which leads to CPU throttling.
Because the host is always larger than the container, the memory will likely balloon, crossing the container limits. The OpenShift node's kernel will then trigger a cgroups OOMKill, causing the application to crash. This may not happen immediately, but it is inevitable because the application is not container aware.
The indirect impact is also important: CPU settings. As explained in the companion article, How to use Java container awareness in OpenShift 4, Java uses CPU limits to calculate thread pools, including GC thread pools and application pools.
Addressing cgroups v2 incompatibility
To fully solve this problem, one step is required: upgrading any scripts that set memory limits or configurations inside the container to ensure they are cgroups v2 aware.
If you cannot upgrade to OpenJDK 8u372 or later, you must mitigate the risk for applications that are not cgroups v2 compatible. To do so, you can simulate the container size by setting the limits directly when the application is deployed:
| Platform | Mitigation |
|---|---|
| Java | Set the full heap (Xmx) to a specific, static value so it is derived from neither the container nor the host. Do not use a percentage, as it would be calculated against the host. Setting ActiveProcessorCount changes the CPU count the JVM assumes in its default thread calculations, resulting in fewer GC threads and fewer threads in pools that autoscale to the CPU count; those threads can still be scheduled on any of the host's CPUs, because the OS schedules the Java application across all of them by default. |
| Node.js | Set the heap directly via the --max-old-space-size flag, ensuring the value is not derived from the host. |
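A minimal sketch of both mitigations with podman (image names and sizes are illustrative; size the values to your container limits):

### Java: pin the heap and CPU count statically
$ podman run --memory=1000m --cpus=2 --rm -it my-java-app \
    java -Xmx768m -XX:ActiveProcessorCount=2 -jar /deployments/app.jar

### Node.js: pin the old-space heap (in MB) so it is not derived from the host
$ podman run --memory=1000m --rm -it my-node-app \
    node --max-old-space-size=768 server.js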
Scenarios with Java
The following examples illustrate how different OpenJDK versions and deployment methods impact cgroups v2 compatibility and container awareness.
Scenario 1: Spring Boot OpenJDK deployment
Let's say the application is a WAR deployed with OpenJDK 17 or OpenJDK 21:
## Example
$ cat Dockerfile-21
#FROM registry.redhat.io/ubi8/openjdk-11:1.3-8
FROM registry.redhat.io/ubi9/openjdk-21-runtime:1.23 <---------------------- OpenJDK 21 runtime
RUN touch readme.md
COPY ./quarkust-test-1.0.0-SNAPSHOT-runner.jar /deployments
ENV JAVA_ARGS -Danexample

Result:
$ podman build -f Dockerfile-21 --tag example21
STEP 1/4: FROM registry.redhat.io/ubi9/openjdk-21-runtime:1.23 <---------------------- OpenJDK 21 runtime
Trying to pull registry.redhat.io/ubi9/openjdk-21-runtime:1.23...
Getting image source signatures
Checking if image destination supports signatures
Copying blob 0c06634bf84c done |
Writing manifest to image destination
Storing signatures
STEP 2/4: RUN touch readme.md
--> e5dde9acf54c
STEP 3/4: COPY ./quarkust-test-1.0.0-SNAPSHOT-runner.jar /deployments
--> 3ea924ff8fc5
STEP 4/4: ENV JAVA_ARGS -Danexample
COMMIT example21
--> 0f5365fe9ba8
Successfully tagged localhost/example21:latest
...
...
...
$ podman run --memory=1000m --rm -it localhost/example21
INFO exec -a "java" java -XX:MaxRAMPercentage=80.0 -XX:+UseParallelGC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:+ExitOnOutOfMemoryError -cp "." -jar /deployments/quarkust-test-1.0.0-SNAPSHOT-runner.jar <------------------------ This shows ParalleGC and MaxRAMPercentage at 80%
INFO running in /deployments
__ ____ __ _____ ___ __ ____ ______
--/ __ \/ / / / _ | / _ \/ //_/ / / / __/
-/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
2025-10-31 19:18:44,373 INFO [io.quarkus] (main) quarkust-test 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.1.1.Final) started in 0.565s. Listening on: http://0.0.0.0:8080
2025-10-31 19:18:44,392 INFO [io.quarkus] (main) Profile prod activated.
2025-10-31 19:18:44,392 INFO [io.quarkus] (main) Installed features: [cdi, kubernetes, resteasy, smallrye-context-propagation]
2025-10-31 19:18:50,125 INFO [io.quarkus] (Shutdown thread) quarkust-test stopped in 0.014s

This application is fully cgroups compliant and, therefore, fully container aware. Its memory and CPU settings will come from the container specifications.
Scenario 2A: Spring Boot OpenJDK 8u362 (or lower) deployment using OpenJDK's run-java.sh
This application will not be cgroups v2 compatible, and it will use the host limits for memory and CPU. Any percentage calculation will be based on the host.
However, since this application uses run-java.sh, you can easily add Xmx as an argument in JAVA_OPTS at runtime without major problems.
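For example, a hypothetical run (the image name and values are illustrative); note that JAVA_OPTS replaces run-java.sh's defaults, while JAVA_OPTS_APPEND adds to them:

$ podman run --memory=1000m --rm -it \
    -e JAVA_OPTS="-Xmx768m -XX:ActiveProcessorCount=2" \
    my-jdk8-app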
Scenario 2B: Spring Boot direct OpenJDK 8u362 (or lower) deployment without OpenJDK's run-java.sh
This application will not be cgroups v2 compatible, and it will use the host limits for memory and CPU. Additionally, because run-java.sh is not used, the user cannot set Xmx through JAVA_OPTS; the DevOps team will need to set it on the deployment itself.
Consequently, these settings need to be added to the container, or the Xmx needs to be set at build time (possibly in the Containerfile directly; see below), because the application cannot use JAVA_OPTS.
An example Containerfile (Dockerfile) that does not use run-java.sh:
## Example
$ cat Dockerfile-21
FROM registry.redhat.io/ubi9/openjdk-21-runtime:1.23 <---------------------- OpenJDK 21 runtime
RUN touch readme.md
COPY ./quarkust-test-1.0.0-SNAPSHOT-runner.jar /deployments
ENTRYPOINT java -jar /deployments/quarkust-test-1.0.0-SNAPSHOT-runner.jar

Scenario 3A: Spring Boot OpenJDK 8u372+ deployment using OpenJDK's run-java.sh
In this scenario, the OpenJDK layer is fully cgroups v2 compatible. The fact that the user is taking advantage of run-java.sh will have the following impacts:
- The JVM achieves container awareness (memory and CPU).
- The heap will be calculated at 50% of the container size (not the host). Later images will default to 80%.
- The user can use JAVA_OPTS and other environment variables from run-java.sh, such as JAVA_OPTS_APPEND and GC_MAX_METASPACE_SIZE.
Scenario 3B: Spring Boot OpenJDK 8u372+ deployment without OpenJDK's run-java.sh
In this scenario, the OpenJDK layer is fully cgroups v2 compatible. The fact that the user is not taking advantage of run-java.sh will have the following impacts:
- The JVM will be container aware: memory and CPU
- The heap will be calculated at 25% of the container size. For details, see Why my Java container does not use 80% and ignores Environment variables in OpenShift 4?
- The user cannot use JAVA_OPTS and the other environment variables that run-java.sh supports.
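Without run-java.sh, one option is the standard JAVA_TOOL_OPTIONS environment variable, which the JVM itself picks up regardless of the entrypoint. A minimal sketch (the image name is illustrative):

$ podman run --memory=1000m --rm -it \
    -e JAVA_TOOL_OPTIONS="-XX:MaxRAMPercentage=80.0" \
    my-app
Picked up JAVA_TOOL_OPTIONS: -XX:MaxRAMPercentage=80.0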
Scenario 4: JBoss EAP 7 and 8
In this scenario, JBoss EAP 8 will fully rely on the OpenJDK layer for cgroups detection; therefore, EAP 8 images will be fully cgroups compatible.
EAP 7 previously used a script to set Xmx directly, and that script was only cgroups v1 compatible, which led to Xmx miscalculations and OOMKills. This issue was fixed years ago, and the latest images are fully cgroups v2 compatible.
Scenario 5: Migrating deployment to a new OpenShift version causes latency
In this scenario, several applications are migrated to a new OpenShift installation that uses cgroups v2. Although the applications do not suffer OOMKills, they start to face latency caused by throttling; the kernel is clearly throttling their threads.
Reason: The applications are not cgroups v2 compliant and use the host as the reference for CPU (thread) calculations, and the host has 200+ CPUs. The thread counts of several applications balloon by two orders of magnitude, competing for resources inside the container; because the container is CPU bounded, the kernel throttles it, granting only the CPU time that corresponds to its quota.
In this scenario, removing the limits inside a container may not solve the problem. The throttling itself is not the problem; it is a kernel feature that allows other threads to run preemptively. The problem is that the applications' threads are too numerous and are competing for CPU resources.
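This kind of throttling is visible in the container's cgroups v2 CPU statistics: rapidly growing nr_throttled and throttled_usec counters are the telltale signs. The values below are illustrative:

sh-4.4$ cat /sys/fs/cgroup/cpu.stat
usage_usec 421337000
user_usec 350000000
system_usec 71337000
nr_periods 125000
nr_throttled 48000 <-------------- periods in which the container hit its quota
throttled_usec 900000000 <-------- total time spent throttled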
Scenario 6: Red Hat Data Grid 8
This scenario is complex, so let's start with the conclusion, which we'll then demonstrate in detail.
| Red Hat Data Grid version | cgroups v2 compatible? | Default heap calculation |
|---|---|---|
| 8.3.6 and earlier (on OpenJDK 11.0.15) | No | 25% of the host limits |
| 8.3.7+ (on OpenJDK 11.0.16) | Yes | 25% of container memory |
| 8.4.6+ (on OpenJDK 17+) | Yes | 50% of container memory (fixed) |
Explanation: Red Hat Data Grid 8 relies on OpenJDK for cgroups detection. This will entirely come from the OpenJDK layer.
The Data Grid image currently calculates the heap via an initialization script and sets Xmx from the cgroups details, which overwrites the default values, even the percentage-based ones, because Xmx takes precedence over a percentage when both are mistakenly used. Without setting Xmx manually via the initialization script, Data Grid delegates the Xmx calculation to OpenJDK's code. The value will, therefore, be 25% of the container size, not the 50% or 80% that comes from Red Hat OpenJDK's run-java.sh script.
Prior to JDG-6489, the initialization script that calculates Xmx was specific to cgroups v1 and did not work for cgroups v2. So the default 25% would be used rather than the intended 50%, but the OpenJDK version ensures the application is cgroups v2 compatible.
I'll provide a full example below, and then I will demonstrate this step by step.
Data Grid 8.3.x does detect cgroups v2, specifically in images deployed by Data Grid CSV 8.3.7 and later.
The test is easy: deploy Data Grid 8.3.x in OpenShift 4.18, which defaults to cgroups v2:
$ oc version
Client Version: 4.17.14
Server Version: 4.18.26 <------------------------------------- that's OCP 4.18, default cgroups is v2
$ oc get csv
NAME DISPLAY VERSION REPLACES PHASE
datagrid-operator.v8.3.9 Data Grid 8.3.9 datagrid-operator.v8.3.6 Succeeded<- DG CSV 8.3.9, which will bring DG 8.3.x
...
...
$ oc get infinispan
NAME AGE
example-infinispan 6s <----------------------------------------------- NOT the default Infinispan, which will come as Cache
$ oc get pod
NAME READY STATUS RESTARTS AGE
example-infinispan-0 1/1 Running 0 80s
example-infinispan-config-listener-678c95bc57-2wbqq 1/1 Running 0 55s <-- 8.3.9 introduced configlistener
infinispan-operator-controller-manager-5b69c57bdb-bzq68 1/1 Running 0 2m16s
$ oc logs example-infinispan-0
...
...
15:08:47,613 INFO (main) [BOOT] JVM OpenJDK 64-Bit Server VM Red Hat, Inc. 11.0.17+8-LTS 15:08:47,621 INFO (main) [BOOT] JVM arguments = [-server, -Dinfinispan.zero-capacity-node=false, -XX:+ExitOnOutOfMemoryError, -XX:MetaspaceSize=32m, -XX:MaxMetaspaceSize=96m, -Djava.net.preferIPv4Stack=true, -Djava.awt.headless=true, -Dvisualvm.display.name=redhat-datagrid-server <----- does not have Xmx
...
...
$ oc rsh example-infinispan-0
sh-4.4$ java -XshowSettings:system -version
Operating System Metrics:
Provider: cgroupv2 <--------------------------------- that's cgroups v2
Effective CPU Count: 4 <---------------------- that's the CPU allocated to that container

The output shows Xmx is not set, but the value is calculated automatically. This demonstrates container awareness: it deduces Xmx from the container size unless you manually overwrite it with Xmx.
Even though it doesn't show as Xmx, OpenJDK 11.0.17 will detect cgroups v2 by default, because it supports cgroups v2.
### DG pod size:
oc get pod example-infinispan-0 -o yaml
apiVersion: v1
kind: Pod
...
containers:
- args:
- -Dinfinispan.zero-capacity-node=false
- -l
- /opt/infinispan/server/conf/operator/log4j.xml
- -c
- operator/infinispan-base.xml
- -c
- operator/infinispan-admin.xml
env:
- name: MANAGED_ENV
value: "TRUE"
- name: JAVA_OPTIONS
...
resources:
limits:
memory: 1Gi <----------------------------------- 1Gi container size
...
...
### :::::::::::::::::::::::: cgroups v2 Verification::::::::::::::::::::::
...
...
sh-4.4$ ./jcmd org.infinispan.server.loader.Loader VM.info <--- inside the container.
126:
#
# JRE version: OpenJDK Runtime Environment (Red_Hat-11.0.17.0.8-2.el8_6) (11.0.17+8) (build 11.0.17+8-LTS)
...
--------------- P R O C E S S ---------------
Heap address: 0x00000000f0000000, size: 256 MB <--------------- 25% of 1Gi
### :::::::::::::::::::::::: cgroups v2 Verification::::::::::::::::::::::
sh-4.4$ stat -fc %T /sys/fs/cgroup/
cgroup2fs

That's because the container is set at 1 Gi in the Infinispan Custom Resource:
### :::::::::::::::::::::::: Infinispan Custom Resource details ::::::::::::::::::::::
$ oc get infinispan -o yaml
- apiVersion: infinispan.org/v1
kind: Infinispan
...
spec:
...
configListener:
enabled: true <--- CL comes by default
container:
    memory: 1Gi <---------------------- 1Gi comes by default

Deploying with 2 Gi has the same output (25%):
### :::::::::::::::::::::::: Infinispan Custom Resource details ::::::::::::::::::::::
$ oc get infinispan -o yaml
- apiVersion: infinispan.org/v1
...
container:
memory: 2Gi <---------------------- 2Gi
...
...
sh-4.4$ ./jcmd 138 VM.info
# Java VM: OpenJDK 64-Bit Server VM (Red_Hat-11.0.17.0.8-2.el8_6)
(11.0.17+8-LTS, mixed mode, sharing, tiered, compressed oops, g1 gc, linux-amd64) <---------- G1GC
...
--------------- S U M M A R Y ------------
Command Line: -Dinfinispan.zero-capacity-node=false -XX:+ExitOnOutOfMemoryError -XX:MetaspaceSize=32m -XX:MaxMetaspaceSize=96m .../opt/infinispan/server/conf/operator/log4j.xml -c operator/infinispan-base.xml
...
...
--------------- P R O C E S S ---------------
Heap address: ... size: 512 MB <----- 25% of the container, which is 2Gi.
With a container size of 5 Gi, the heap will be 1280 MB, which is 25%:
$ oc get infinispan -o yaml
- apiVersion: infinispan.org/v1
...
  container:
    memory: 5Gi <---------------------- 5Gi
...
...
sh-4.4$ ./jcmd org.infinispan.server.loader.Loader VM.info | head -n 20
138:
#
# JRE version: OpenJDK Runtime Environment (Red_Hat-11.0.17.0.8-2.el8_6)
...
--------------- S U M M A R Y ------------
Command Line: -Dinfinispan.zero-capacity-node=false -XX:+ExitOnOutOfMemoryError -XX:MetaspaceSize=32m -XX:MaxMetaspaceSize=96m ...
Time: Thu Oct 23 17:47:44 2025 UTC elapsed time: 2388.511063 seconds (0d 0h 39m 48s)
--------------- P R O C E S S ---------------
Heap address: size: 1280 MB <------------------------------- 25% is the default percentage from upstream
The preceding code demonstrates that Java detects settings from the cgroups v2 container:
- Memory: Detects the size of the container (1Gi, 2Gi, 5Gi) and sets the heap at 25% of that value.
- CPU: Detects 4 CPUs and sets the default garbage collector (GC) to G1GC instead of the 1-CPU default of SerialGC.
There is only one way to disable container awareness in Java: set the flag -XX:-UseContainerSupport when starting Java. The default is -XX:+UseContainerSupport, so Java is container aware by default; if you do not set this flag, OpenJDK remains container aware.
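You can observe the effect by comparing the computed MaxHeapSize with and without the flag; a minimal sketch (the image name is illustrative):

### Default (+UseContainerSupport): heap derived from the 1000m container limit
$ podman run --memory=1000m --rm -it my-app \
    java -XX:+PrintFlagsFinal -version | grep -w MaxHeapSize

### Detection disabled: heap derived from the host memory instead
$ podman run --memory=1000m --rm -it my-app \
    java -XX:-UseContainerSupport -XX:+PrintFlagsFinal -version | grep -w MaxHeapSize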
There are two ways to overwrite the automatically calculated memory percentage in cgroups v2 compatible versions:

- Set Xmx to overwrite the default heap size.
- Set a different MaxRAMPercentage than the upstream OpenJDK default of 25%.
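For example (the values are illustrative):

### Option 1: a static heap; Xmx takes precedence over any percentage
$ java -Xmx512m -jar app.jar

### Option 2: stay proportional to the container, but change the percentage
$ java -XX:MaxRAMPercentage=50.0 -jar app.jar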
Here's an example of when it does not detect cgroups v2:
================ DG Operator 8.3.6--> DG 8.3.1 --> OpenJDK 11.0.15 ===========
## DG Operator 8.3.6:
$ oc get csv
NAME DISPLAY VERSION REPLACES PHASE
datagrid-operator.v8.3.6 Data Grid 8.3.6 datagrid-operator.v8.3.3 Succeeded
## DG 8.3.6:
...
...
$ oc get infinispan
NAME AGE
example-infinispan 3m45s <-------------------------- Infinispan CR
...
...
$ oc logs example-infinispan-0
20:19:55,124 JVM OpenJDK 64-Bit Server VM Red Hat, Inc. 11.0.15+10-LTS <-------------
20:19:55,145 INFO (main) [BOOT] JVM arguments = [-server, -Dinfinispan.zero-capacity-node=false, -XX:+ExitOnOutOfMemoryError, -XX:MetaspaceSize=32m, -XX:MaxMetaspaceSize=96m, -Djava.net.preferIPv4Stack=true, -Djava.awt.headless=true,
...
...
sh-4.4$ java -XshowSettings:system -version
Operating System Metrics:
No metrics available for this platform <-----------
openjdk version "11.0.15" 2022-04-19 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.15+10-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.15+10-LTS, mixed mode, sharing)
...
...

The output shows Java is not detecting cgroups v2, and the memory will be 25% of the host:
sh-4.4$ ./jcmd org.infinispan.server.loader.Loader VM.info | head -n 10
133:
#
# JRE version: OpenJDK Runtime Environment 18.9 (11.0.15+10) (build 11.0.15+10-LTS)
# Java VM: OpenJDK 64-Bit Server VM 18.9 (11.0.15+10-LTS, mixed mode, sharing, tiered, compressed oops, g1 gc, linux-amd64)
--------------- S U M M A R Y ------------
Command Line: -Dinfinispan.zero-capacity-node=false -XX:+ExitOnOutOfMemoryError -XX:MetaspaceSize=32m -XX:MaxMetaspaceSize=96m -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Dvisualvm.display.name=redhat-datagrid-server -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager -Dinfinispan.server.home.path=/opt/infinispan org.infinispan.server.loader.Loader org.infinispan.server.Bootstrap --bind-address=0.0.0.0 -l /opt/infinispan/server/conf/operator/log4j.xml -c operator/infinispan.xml
Host: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz, 4 cores, 15G, Red Hat Enterprise Linux release 8.6 (Ootpa)

sh-4.4$ ./jcmd org.infinispan.server.loader.Loader VM.info | head -n 20
133:
#
# JRE version: OpenJDK Runtime Environment 18.9 (11.0.15+10) (build 11.0.15+10-LTS)
# Java VM: OpenJDK 64-Bit Server VM 18.9
------------
Command Line: -Dinfinispan.zero-capacity-node=false -XX:+ExitOnOutOfMemoryError -XX:MetaspaceSize=32m -XX:MaxMetaspaceSize=96m -Djava.net.preferIPv4Stack=true
Time: Fri Oct 24 15:47:33 2025 UTC elapsed time: 1523.590245 seconds (0d 0h 25m 23s)
--------------- P R O C E S S ---------------
Heap address: size: 3926 MB <----- that's 4Gi
The problem is the pod has only 1 Gi of container limit:
$ oc get pod example-infinispan-0 -o yaml
apiVersion: v1
kind: Pod
spec:
...
- name: JAVA_OPTIONS
...
image: registry.redhat.io/datagrid/datagrid-8-rhel8@sha256:ec0185d369c661f0da8a477e7bd60437f32a2aabba21a7a89f8aeefba19cc895
...
resources:
limits:
        memory: 1Gi <----

This means:
- The JVM has almost 4 Gi of memory limit inside a 1 Gi pod. It is a matter of time before the JVM uses this for the heap, breaches the container limit, and causes an OOMKill.
- Because the JVM is using G1GC, this is even more certain because G1GC is greedy and will balloon.
- A JVM with a 4 Gi heap is not even accounting for off-heap size, which can be 400 MB in Data Grid.
Therefore, the preceding example demonstrates that Data Grid Operator version 8.3.6, which deploys Data Grid 8.3.1 (tag registry.redhat.io/datagrid/datagrid-8-rhel8@sha256:ec018...), is not cgroups v2 compatible. Given the OpenJDK version (before 11.0.16), this image does not detect cgroups v2, so it uses the host limit and will certainly OOMKill because it is not container aware.
Images from 8.3.7 onward will not OOMKill in this condition, even though they use 25% instead of 50%. This is because the JVM is container aware; it uses a 25% heap calculation based on the container size (rather than 50%), which was fixed in Data Grid 8.4.5.
Later versions of Data Grid will introduce MaxRAMPercentage as an argument instead of relying on Xmx for maximum heap settings. We recommend using percentages because they are flexible and less complex.
Troubleshooting: How to verify this
Using -XshowSettings:system for cgroups details or -Xlog:os+container=trace can help tremendously:
#####
##### ::::::::::::: podman shows -Xshow Settings :::::::::::::::::::::::
$ podman run -it registry.access.redhat.com/ubi8/openjdk-8:1.15-1.1682399183 java -XshowSettings:system -version
Operating System Metrics:
Provider: cgroupv2 <---------------------------------
Effective CPU Count: 1
CPU Period: 100000us
CPU Quota: -1
CPU Shares: -1
List of Processors: N/A
List of Effective Processors: N/A
List of Memory Nodes: N/A
List of Available Memory Nodes: N/A
Memory Limit: Unlimited
Memory Soft Limit: 0.00K
Memory & Swap Limit: Unlimited
openjdk version "1.8.0_372"
OpenJDK Runtime Environment (build 1.8.0_372-b07)
OpenJDK 64-Bit Server VM (build 25.372-b07, mixed mode)
#####
##### ::::::::::::: GC logs with -Xlog:os+container=trace :::::::::::::::::::::::
GC(8) Pause Young (Normal) (G1 Evacuation Pause) 32M->17M(41M) 5.005ms
Path to /cpu.max is /sys/fs/cgroup/cpu.max
Raw value for CPU quota is: max
CPU Quota is: -1
Path to /cpu.max is /sys/fs/cgroup/cpu.max
CPU Period is: 100000
OSContainer::active_processor_count: 4
GC(9) Pause Young (Concurrent Start) (Metadata GC Threshold)
GC(9) Using 4 workers of 4 for evacuation
Path to /cpu.max is /sys/fs/cgroup/cpu.max
Raw value for CPU quota is: max
CPU Quota is: -1
CPU Period is: 100000
OSContainer::active_processor_count: 4
CgroupSubsystem::active_processor_count (cached): 4

If cgroups is not detected by Java, the following is returned:
##### ::::::::::::: -Xshow output when cgroups is not detected :::::::::::::::::::::::
$ java -XshowSettings:system -version
Operating System Metrics:
No metrics available for this platform

To verify the cgroups version on the OpenShift host, run the following command:
##### ::::::::::::: Expected outputs :::::::::::::::::::::::
### Output for cgv1
$ stat -fc %T /sys/fs/cgroup/
tmpfs
### Output for cgv2
$ stat -fc %T /sys/fs/cgroup/
cgroup2fs

VM.info is always useful for container investigations, especially when interpreting the file structure in OpenJDK/OracleJDK. Note that the /proc/meminfo section is not from the container:
$ jcmd $PID VM.info
container (cgroup) information:
container_type: cgroupv1 <---------------- cgv1
cpu_cpuset_cpus: 0-3
cpu_memory_nodes: 0
active_processor_count: 3 <---------------- processor count
cpu_quota: 220000 <------------------------ 2.2 cpus
cpu_period: 100000
cpu_shares: no shares
memory_limit_in_bytes: 12582912 k <-------- 12gb limit
memory_and_swap_limit_in_bytes: 12582912 k
memory_soft_limit_in_bytes: unlimited
memory_usage_in_bytes: 758948 k
memory_max_usage_in_bytes: 759060 k
kernel_memory_usage_in_bytes: 10624 k
kernel_memory_max_usage_in_bytes: unlimited
kernel_memory_limit_in_bytes: 10628 k
maximum number of tasks: 98938
current number of tasks: 98

FAQ
OpenShift
Q1. How do I verify the cgroups version on an OpenShift node and ensure application compatibility?
A1. Verify the cgroups version on the node via stat -fc %T /sys/fs/cgroup/:

- The output tmpfs is expected for cgroups v1.
- The output cgroup2fs is expected for cgroups v2.
Q2. Do I need to upgrade the application to move to OpenShift 4.16+ (below OpenShift 4.19)?
A2. Probably not. When upgrading from a cgroups v1 version, the upgrade will not directly force cgroups v2. If it's a brand-new installation, it will use cgroups v2 by default.
Q3. Should I migrate the application first before upgrading to OpenShift 4.19?
A3. Possibly. Refer to the detailed explanation in the previous section.
Q4. Can I change the cgroups to v2, upgrade to OpenShift 4.19 and then change back to cgroups v1?
A4. No. cgroups v1 is removed from Kubernetes. While Red Hat Enterprise Linux 9 supports cgroups v1, version 10 does not. OpenShift does not support cgroups v1 in newer versions.
OpenJDK
Q1. Do I need to upgrade the application's JDK to move to OpenShift 4.19 (or later)?
A1. It depends on whether the application is cgroups v2 compatible; later versions are fully cgroups v2 compatible.
Q2. If my application is not cgroups v2 compatible, what will happen?
A2. The application's memory and CPU settings will be decoupled from the container, and it will use the host's values instead.
Memory sizing issues can lead to OOMKills (cgroups kills). CPU settings issues can lead to more threads than are adequate for the container, potentially resulting in probe failures and misbehavior.
Q3. How and when should I use run-java.sh? If I deploy Java without run-java.sh, is this a problem?
A3. Although run-java.sh is a script that does not detect cgroups limits (currently), it adds default JVM arguments and offers flexibility by allowing the user to set JAVA_OPTS / JAVA_OPTS_APPEND environment variables.
This is the default ENTRYPOINT of the OpenJDK containers. It will be used unless you overwrite the ENTRYPOINT directive. Examples of using and not using the default ENTRYPOINT are shown above.
Q4. If cgroups v2 is now in the OpenJDK layer code, could there be bugs?
A4. As with any code, potentially yes. One example is JDK-8347129, a bug that affects only JDK 17 and JDK 21. You can review the details on the OpenJDK bug tracker. There is also JDK-8347811, which affects only JDK 25 for now.
Q5. If the run-java.sh script starts the Java process, could I modify it to detect cgroups v2?
A5. Potentially, yes; one could detect the cgroups v2 hierarchy via a script. However, it would be much simpler to just upgrade OpenJDK.
Node.js
Q1. Will Node.js be a problem in OpenShift 4.19?
A1. Node.js 20+ is fully cgroups v2 compatible, and Node.js 22 offers additional improvements. Verify the version you are running.
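To confirm both the version and the heap ceiling Node.js derived inside the container, a quick check (the output is illustrative):

$ node --version
v22.14.0

$ node -e "console.log(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024, 'MB')"
536.83 MB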
.NET
Q1. Is .NET fully cgroups v2 compatible?
A1. .NET 5+ has been fully cgroups v2 compatible since 2020. Only .NET 8, 9, and 10 are supported at this time, and all of them are fully cgroups v2 aware.
Overview: Compatibility by platform
Table 7 summarizes the details of the minimum required versions for various programming platforms to achieve cgroups v2 compatibility.
| Platform | cgroups v2 compatible | Introduction |
|---|---|---|
| .NET | Since .NET 5+ | 2020 |
| Node.js | Since Node.js 20 | 2022 |
| Java/OpenJDK* | OpenJDK 8u372+, OpenJDK 11.0.16+, OpenJDK 17/21 | 2022 |
*For the Java platform: See the extra details for applications that rely on initialization scripts for Xmx calculation, which add an extra layer on top of the image. Those scripts can be cgroups v1 specific and could misfire, causing MaxRAMPercentage to fall back to 25% (the upstream default value) rather than 50% or 80%. Although the application would still detect the container memory and CPU limits, this could cause a different memory usage pattern.
Additional resources
Refer to the following solutions related to this topic:
- Cgroups v2 in OpenJDK container in Openshift 4
- Verifying Cgroup v2 Support in OpenJDK Images
- What Red Hat middleware software is cgroups v2 compatible?
- Amedeo Salvati has developed a Python tool that extracts runtime versions by executing the binary inside the container, which facilitates this review process.
To learn more about Java container awareness and how it prevents the application's heap decoupling from the container size, see How to use Java container awareness in OpenShift 4 and, of course, Severin Gehwolf's OpenJDK 8u372 to feature cgroup v2 support, which provides all details and links to related bugs.
Conclusion
This article covered the steps to understand and address cgroups v2 compatibility for .NET, Node.js, and OpenJDK, and described platform compatibility for each cgroups version. As a scope limitation, other applications, such as Nginx, might not be container aware in the same way the runtimes above are, which means testing might be required to make sure application behavior stays the same.
Finally, this article provides specific examples and step-by-step guides to help you address common challenges.
Acknowledgments
Special thanks to Alexander Barbosa and Jordan Bell for the great review of this article, and to Lucas Oakley for hours of helpful discussion on cgroups and kernel topics.
I'd also like to recognize three great Red Hatters:
- Giovanni Astarita and his team, for diligently raising and tracking the answer to this question, which anticipates the needs of our customers during OpenShift migrations.
- Severin Gehwolf, for his continued collaboration on container awareness over the years. Vielen Dank (thank you very much).
- Michael Dawson, for his collaboration on Node.js content such as How to build good containers in Node.js.
For specific inquiries, open a case with Red Hat support. Our global team of experts can help you with this and other matters. As Dennis Reed says: That's how it works.
Last updated: January 30, 2026