Quarkus is Kubernetes native. To accomplish that, we’ve spent a lot of time working across a number of different areas, such as the Java Virtual Machine (JVM) and various framework optimizations, and there’s much more work still to be done. One area that has piqued the interest of the developer community is Quarkus’s comprehensive, seamless approach to generating an operating-system-specific (aka native) executable from your Java code, much as you would with languages like C and C++. We believe native compilation will typically be used at the end of the build-test-deploy cycle.
Although native compilation is important, as we’ll discuss later, Quarkus works really well with vanilla OpenJDK HotSpot, thanks to the significant performance improvements we’ve made to the entire stack. The native executable option Quarkus offers is exactly that: optional. If you don’t want it, or your applications don’t need it, you can ignore it. In fact, even when you are using native images, Quarkus still relies heavily on OpenJDK. The well-received dev mode delivers near-instantaneous change-test cycles thanks to HotSpot’s rich dynamic code execution capabilities, and GraalVM itself uses OpenJDK’s class library and HotSpot to produce a native image.
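To make that choice concrete, here is a sketch of what the two paths typically look like for a Maven-based Quarkus project (the `quarkus:dev` goal and the `native` profile come from the standard Quarkus project scaffold; adjust to your own build setup):

```shell
# Dev mode: runs on OpenJDK HotSpot with live reload,
# giving the near-instantaneous change-test cycle described above.
./mvnw quarkus:dev

# Regular JVM build: a plain jar you run with java -jar.
./mvnw package

# Native build: uses GraalVM's native-image under the hood to produce
# an OS-specific executable, typically at the end of build-test-deploy.
./mvnw package -Pnative
```

The first two invocations are entirely standard OpenJDK workflows; only the last one involves GraalVM, and skipping it costs you nothing.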
Still, there's the question: Why have native compilation at all if the other optimizations are so good? That's the question we'll look at more closely here.
Let’s start with the obvious: JBoss and Red Hat have a long track record of optimizing the JVM, stacks, and frameworks, including:
- The first app server to run in the cloud, on Red Hat OpenShift
- The first app server to run on a plug computer
- The first app server to run on a Raspberry Pi
- Many of our projects running on Android
As this shows, we have been working on running Java in the cloud and on constrained devices (aka the Internet of Things) for many years, always looking at how we could squeeze the next performance or memory optimization out of the JVM. We and others have long looked at the potential for compiled Java, whether through GCJ, Avian, Excelsior JET, or even Dalvik, because we understood the trade-offs (e.g., giving up build-once-run-anywhere in exchange for a lower on-disk footprint or quicker startup time).
Why are these trade-offs important? Because, for some important scenarios, accepting them makes complete sense:
- Consider serverless/event-driven environments where we need to spin up a service in real time (soft or hard) to react to an event. Unlike with a persistent service, cold-start cost lengthens the response time to a request. Today the JVM still takes “a while” to start, and although throwing hardware at the problem can help, in some situations the difference between 1 second and 5 ms could be life or death. Yes, we can play tricks such as hot standby JVMs (we’ve been there with our Knative port of OpenWhisk, for example), but that doesn’t guarantee you’ll have enough JVMs waiting to field requests as they scale up, nor is it a good way to save money when you’d be paying for processes that aren’t being used all the time.
- Then there’s the multi-tenancy aspect we often hear about. Although the JVM has grown to duplicate many operating system capabilities, it still doesn’t have the kind of process isolation we take for granted in operating systems such as Linux, and the failure of a thread can take down the entire JVM. Many people get around this by running only a single user’s applications in each JVM. That’s a legitimate thing to do: the unit of failure is then the JVM, and failures for one user don’t necessarily impact any other users. However, running a JVM per user often presents problems at scale.
- Density also matters to cloud-native applications. Embracing 12-factor apps, microservices, and Kubernetes means many JVMs per application. While you gain elasticity and robustness, the base memory footprint per service starts to add up, even though a portion of that cost isn’t strictly necessary. Statically compiled executables can benefit from closed-world optimizations, such as fine-grained dead-code elimination, where only the portions of the frameworks (including the JDK itself) actually in use by the service are included in the resulting image. By tailoring the application to be native-friendly, Quarkus can densely pack many service instances on a host without compromising security.
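To make the closed-world idea above concrete, here is a minimal, hypothetical Java sketch (the class and method names are ours, not from Quarkus or GraalVM): a reachability analysis starting at `main` never reaches `unusedReportingPath()`, so an ahead-of-time native build can drop that method, and anything only it references, from the executable, whereas a JVM must keep the code available because reflection could still invoke it at runtime.

```java
// Minimal sketch of closed-world dead-code elimination (names are illustrative).
public class Reachability {

    static String greet() {
        return "hello";
    }

    // Never called from main: a closed-world, ahead-of-time analysis can prove
    // this method unreachable and exclude it -- along with any library code
    // only it uses -- from a native executable. HotSpot, by contrast, keeps it
    // loadable, since reflection could still invoke it at runtime.
    static String unusedReportingPath() {
        return new java.util.StringJoiner(", ").add("never").add("runs").toString();
    }

    public static void main(String[] args) {
        System.out.println(greet()); // only greet() is reachable from the entry point
    }
}
```

The same reasoning scales from one method to entire framework subsystems, which is where the per-service memory savings come from.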
For these reasons alone, a native executable option presents a valid solution for us and others in our communities. There’s another, less technical reason that is no less important: over the past few years, many Java developers and companies have abandoned Java in favor of newer languages, often because they believe the JVM, the stacks, and the frameworks are bloated, slow, and so on.
Trying to force-fit an existing hammer to a new nail isn’t always the best approach, and sometimes it’s best to take a step back and consider a new tool. If Quarkus makes people pause and rethink, then it's a good thing for the entire Java ecosystem. Quarkus takes an innovative view of how to deliver more efficient applications, making Java relevant to application architectures previously thought taboo (like serverless). Additionally, through its extension capability, we hope to see a Quarkus Java extension ecosystem that offers a large set of frameworks that can be natively compiled along with your application out of the box.
Last updated: February 11, 2024