Is Java really greedy for memory?

Java is often blamed for being an over-hungry consumer of physical memory. Indeed, until recently our OpenShift team were tempted to draw the same conclusion. OpenShift is Red Hat's open source Platform as a Service (PaaS) product. You can access it via public Cloud infrastructure managed by Red Hat (OpenShift Online) or deploy it to your own data centre/private cloud (OpenShift Enterprise). OpenShift Online provides simple and manageable scalability to anyone developing and deploying web services. One of the OpenShift team's goals is to maximize the number of guest Java EE deployments on any underlying physical host. However, in the first release of OpenShift that goal was undermined by the high physical memory use observed in many of the deployed JVMs. With only so much memory available on any given physical host, the more memory each JVM uses, the lower the density of guest deployments.

This experience did not square with our knowledge of how the Java EE container and the deployed applications consume virtual memory. The latest JBoss application servers, our supported EAP6 product and our WildFly community release, both have extremely fast startup times and an exceptionally low memory footprint. Many of the applications deployed to our OpenShift Online platform are simple example web services which operate on tiny data sets and therefore have only a small number of objects in their working set.

The missing link in this puzzle is the JVM itself, OpenJDK. The free-to-use version of OpenShift Online constrains a JVM to use no more than 256Mb of physical memory for the Java heap space. Measurements showed that OpenJDK was using almost all of that space no matter how small the application's memory needs. It turns out that the problem was the configuration of the OpenJDK garbage collector (GC) -- that's the part of the JVM which manages allocation and freeing of Java data, including space for Java application objects. Well, to be more precise, it was the lack of any configuration that was making the JVM operate greedily.
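
If you want to make this kind of measurement yourself, the JDK's own jstat tool will report heap capacities and usage for a running JVM. Here is a minimal sketch; the process id 12345 is just a placeholder, and this is not necessarily how our own measurements were taken:

    # Report the capacity and usage of each heap space (in Kb) every 5 seconds
    jstat -gc 12345 5s

    # Or report usage as a percentage of each space's capacity,
    # along with GC counts and cumulative GC times
    jstat -gcutil 12345 5s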

You get what you ask for

If OpenJDK is run with the default, parallel scavenge GC and no configuration flags other than the heap maximum, it will try to use all the available heap right up to that maximum. It keeps allocating new data out of the available address space until it runs out. Only then does it collect all the live data and compact it down into the bottom of the heap, before continuing to fill up the free space, and so on. That's true even when the application would run perfectly happily in much less space. The serial GC tries a bit harder to save on memory footprint: it occasionally collects and compacts live data within the currently used space, but a lot of the time it still just grabs unused space.
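
For reference, selecting between these two collectors is just a matter of standard HotSpot command line flags. A minimal sketch, assuming a hypothetical mywebapp.jar:

    # Parallel scavenge collector (the default described above), 256Mb heap cap
    java -XX:+UseParallelGC -Xmx256m -jar mywebapp.jar

    # Serial collector, which is a little less greedy about footprint
    java -XX:+UseSerialGC -Xmx256m -jar mywebapp.jar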

In cases where a single machine is dedicated to the JVM, that's not necessarily a problem. But in Cloud deployments like OpenShift Online many JVMs are deployed as virtualized guests sharing the resources of an underlying host machine. Clearly, when there is competition for memory it is preferable for each JVM to use as small a memory footprint as is compatible with keeping memory management costs down.

A JVM can easily monitor how much live data an application is holding on to. If this is much lower than the configured heap maximum, then garbage collection and compaction can be performed early, before all the heap space is filled. That allows each JVM to unmap the unused address space at the top end of the heap, making more physical memory available for other JVMs. The gain is that you can either run more JVMs on the same box or run the same number of JVMs on a similar box installed with less memory. Both options translate to saved money.

The importance of asking OpenJDK nicely

This is not just an academic problem. In the first release of OpenShift Online, deployments based on OpenJDK were configured with a single GC option on the command line, -Xmx256m, i.e. the only instruction to the GC was "don't use more than 256Mb for the heap region". Nothing about trying to use less. Many deployments turned out to have live data sets of a few tens of Mb, yet many of them were using up to 200Mb of physical memory. That's wasting enough memory to provision another whole JVM, or maybe two, with a similarly small memory footprint.

Luckily, OpenJDK provides a very simple remedy to this problem. OpenJDK's serial and parallel scavenge garbage collectors already implement a footprint management policy. If you set the right configuration flags on the java command line then OpenJDK will keep the mapped heap space fairly close to the application's live data set size. The most important thing is to set a heap minimum (e.g. passing -Xms40M to the java command sets it to 40Mb), since footprint management is ignored unless both a maximum and a minimum are specified. Beyond that, a few other options control how tightly the GC limits the growth of physical memory above the working set size, as in the sketch below.
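
To make that concrete, here is one plausible combination of standard HotSpot flags. Treat the specific values as illustrative only; they are not the tuned settings discussed in Part 2, and sensible choices depend on the collector and JDK version:

    # -Xms/-Xmx              supply both bounds so footprint management kicks in
    # Min/MaxHeapFreeRatio   bound how much committed-but-unused heap the GC keeps
    # GCTimeRatio            allows a little more GC time in return for a smaller heap
    # AdaptiveSizePolicyWeight  biases the adaptive sizing towards recent GC behaviour
    java -Xms40m -Xmx256m \
         -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 \
         -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 \
         -jar mywebapp.jar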

The footprint policy comes with a safety net. If the application suddenly needs to allocate a lot of new objects then OpenJDK will still map in extra heap pages to accommodate them, if necessary right up to the heap maximum. The safety net works in reverse too. If references to all those objects get dropped and the retained live data set shrinks significantly then, at the next GC, those extra pages will be unmapped, releasing them again for other JVMs to use. Obviously, you might have a problem if you increased the total number of JVMs on a machine and they all suddenly wanted to use their full 256Mb of heap at the same time. But experience with a similar overcommit of memory in operating systems tells us that this sort of thing rarely happens in practice. Careful sharing of resources is a very powerful and valuable optimization.

It's also a relatively cheap optimization. Checking the heap size and performing any necessary extra page map/unmap requests is quick and easy, adding very little performance overhead; the GC already has to do most of the needed work anyway. A more significant cost is an increase in the frequency of GCs. Imagine you are shelling peas into a bowl, pods and all. The smaller the bowl, the more often you have to sort out the peas and throw away the pods before you can go back to shelling more peas. Similarly, if an application is run in a smaller heap then the same rate of object allocation will fill that space more quickly and hence require more frequent garbage collections to pick out the live objects and throw away the dead ones to make some free space.
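
To put rough, purely illustrative numbers on the analogy: if an application creates garbage at, say, 20Mb per second, a heap with 200Mb of free space fills up roughly every 10 seconds, whereas one with only 50Mb free fills up about every 2.5 seconds, so collections run around four times as often.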

It doesn't cost much to ask

It turns out that the more frequent GCs needed to maintain a lower footprint do not actually impose a noticeable cost on any normal application. Even in very demanding server deployments running at high throughput, GC costs tend to be well below 1% of execution time. So even if, say, you doubled or quadrupled the frequency of GCs by cutting the free heap space to a half or a quarter of its former size, it would still be unlikely to damage throughput by more than 1%. In addition, developer deployments are usually much less performance-critical than live customer deployments. A 1% decrease in throughput is far less than would arise from switching on logging or execution tracing, or attaching a debugger.

So, why does OpenJDK not just enable strict footprint management by default? Well, there is a reason. Traditionally, Java applications have been run on dedicated servers where every ounce of speed matters and where every resource is expected to be made available to the server in order to achieve peak performance. It's also highly relevant that this concern for speed applies particularly to that one truly special class of applications known as benchmarks. So the out-of-the-box configuration is set to be very memory-greedy in order to gain as much speed as possible, even though the cost of configuring footprint management is really only marginal.

With only a small amount of investigation I have been able to show that it is possible to achieve dramatic reductions in physical memory use without any very noticeable loss of server throughput or responsiveness. As a result of this investigation we have already made an initial change to OpenShift Online Java gears to obtain better memory behaviour from OpenJDK. We will soon be rolling out further improvements to avoid wasting physical memory as part of our continuing updates to OpenShift.

Draw conclusions first, then explain?

In the second part of this post I will explain how I went about testing the effect of the various OpenJDK configuration flags and showing that footprint management is worth using. I will describe the test web service I developed as typical of the sort of deployment we see on OpenShift Java instances, and then explain how I exercised this web app and configured the GC with each of four successively improved GC configurations. I'll also explain how I designed and deployed a Java native agent to collect highly accurate GC performance statistics without slowing down the GC or the JVM. Finally, I'll present my results showing the gain (reduction) in memory footprint versus the correspondingly tiny overhead in GC timings for each configuration.

However, before I do that, for those who want to cut to the chase and see a management summary, I'm going to state my conclusions up front. Yes, if you are a suit and you have got this far (kudos!) you only have to suffer one more section and then you can point your techies at the follow-up post for all the gory details.

Save memory, save money

In essence, the result is simple. A few command line configuration switches cut physical memory use for a typical web server by well over 50% compared to the default parallel GC configuration, and by well over 35% compared to the default serial GC configuration. That's with the web server running at a reasonably high throughput (100s of requests per second) and holding on to a live data set in the range 40Mb to 85Mb with a maximum heap of 256Mb. For a more lightly loaded server (a slower request rate or a smaller live data set) the savings would be expected to be bigger still. What's the downside? Well, it almost doesn't exist, and it's certainly hard to notice: the extra time spent in GC was negligible (0.5% of total execution time).

This outcome was not just limited to the sort of configurations found on our free-to-use OpenShift Online deployments. The tests were also run on larger OpenShift instances, more typical of those used by our OpenShift Online Bronze or Silver customers, with a heap maximum of 1Gb and live data sets up to 230Mb. The same savings were observed as for the original tests, again with no significant increase in GC overheads. The same benefit would accrue to OpenShift Enterprise customers running their own private clouds.

So, what's the bottom line? You cannot expect Java to make good use of physical memory if you don't configure it to do so. However, just a few simple configuration settings can dramatically reduce the amount of physical memory OpenJDK uses to run your app, with little or no noticeable cost in server throughput. When you are paying for every Mb of physical memory that your servers use, that saving translates directly to dollars shaved off your company's bottom line.

Ok, now for the technical details, including pointers to the test and monitoring code.

Stay tuned for Part 2
