Cryostat is a profiling and monitoring tool that leverages the JDK Flight Recorder (JFR) framework already present in your HotSpot JVM applications. Cryostat provides an in-cluster collection hub for easy and secure access to your JDK Flight Recorder data from outside of the cluster.
This article follows our recent announcement of Cryostat 2.0. It is the first of several hands-on guides to using Cryostat 2.0 in your Java applications. In this article, we'll explore how to set up and configure a Quarkus-based Java application to use Cryostat on Red Hat OpenShift.
Read all of the articles in this series:
- Part 1: Get started with Cryostat 2.0
- Part 2: Configuring Java applications to use Cryostat
- Part 3: Java monitoring for custom targets with Cryostat
- Part 4: Automating JDK Flight Recorder in containers
- Part 5: Creating Custom JFR event templates with Cryostat 2.0
Note: The Red Hat build of Cryostat 2.0 is now widely available in technology preview. Cryostat 2.0 introduces many new features and improvements, such as automated rules, a better API response JSON format, custom targets, concurrent target JMX connections, WebSocket push notifications, and more. The Red Hat build includes the Cryostat Operator to simplify and automate Cryostat deployment on OpenShift.
JMX and Cryostat
The main prerequisite for using Cryostat is that your application must have Java Management Extensions (JMX) enabled and exposed. In OpenShift, exposing the JMX port means either using the Cryostat-default port number of 9091 or naming the port jfr-jmx. We will explore the procedure for exposing JMX with a Quarkus-based Java application deployed on OpenShift.
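To make that requirement concrete, here is a minimal sketch of what exposing remote JMX looks like when starting a JVM directly on Cryostat's default port. The jar name is a placeholder, and SSL and authentication are disabled only to keep the example short; we will revisit these options in Step 2:
$ java \
    -Dcom.sun.management.jmxremote.port=9091 \
    -Dcom.sun.management.jmxremote.ssl=false \
    -Dcom.sun.management.jmxremote.authenticate=false \
    -jar my-app.jar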
Step 1: Generate the Quarkus sample application
Let’s get started with the Quarkus sample application. To begin, we will visit code.quarkus.io and generate a new application. For the purpose of this article, we will use the default org.acme group and code-with-quarkus artifactId. Feel free to customize these as you see fit. Once you have set up your generation options, download and extract the .zip to a working directory:
$ mv Downloads/code-with-quarkus.zip workspace
$ cd workspace
$ unzip code-with-quarkus.zip
$ cd code-with-quarkus
Now we have our Quarkus application generated and ready for modification. Let’s do a quick sanity check before continuing:
$ ./mvnw compile quarkus:dev
Once mvnw has downloaded Quarkus' dependencies and built the application, you should see a message like: “Listening on: http://localhost:8080”. Visit this URL to ensure that you see the Quarkus default welcome page.
Step 2: Configure the application for JMX
If everything looks good so far, we can now set up the Quarkus application for JMX. Switch back to the terminal running your Quarkus application and press Ctrl-C to stop the dev server.
Now that the dev server is stopped, we will go ahead and edit the project's Dockerfile.jvm. This file contains directives for Open Container Initiative (OCI) image builders such as Podman, Buildah, and Docker to follow when assembling the Quarkus application into an OCI image (specifically, when the application is built in JVM mode as opposed to native image mode). Once we have our JVM-mode application packaged into an OCI image, we can deploy and run that image as a container in OpenShift or Podman. Let's continue and edit the Dockerfile:
$ $VISUAL src/main/docker/Dockerfile.jvm
There should be a line like this one:
ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
Let’s modify this line and add options to enable JMX:
ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Dcom.sun.management.jmxremote.port=9096 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
Here, we have enabled JMX on port 9096. For the purpose of this article, I have disabled JMX SSL and JMX authentication. In production, both should be enabled (see the Oracle documentation for more about JMX administration).
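For reference, a production-oriented version of the same ENV line might look something like the sketch below, with SSL and password authentication turned on. The keystore path, JMX credential file paths, and keystore password are placeholders that you would supply and mount into the image yourself:
ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Dcom.sun.management.jmxremote.port=9096 -Dcom.sun.management.jmxremote.ssl=true -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.password.file=/path/to/jmxremote.password -Dcom.sun.management.jmxremote.access.file=/path/to/jmxremote.access -Djavax.net.ssl.keyStore=/path/to/keystore.jks -Djavax.net.ssl.keyStorePassword=changeit"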
Now, we need to make one last change to the Dockerfile: The line EXPOSE 8080 should become EXPOSE 8080 9096. This will add metadata to the OCI image hinting that the application within will listen on ports 8080 and 9096, so that when we deploy this on OpenShift later, those two ports will be automatically included in the generated Service for intra-cluster network traffic.
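At this point, the two lines we have edited in src/main/docker/Dockerfile.jvm should look like the following (they are not necessarily adjacent in the file):
ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Dcom.sun.management.jmxremote.port=9096 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
EXPOSE 8080 9096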
Why we're running Quarkus in JVM mode: You may have noticed that we have edited only the Dockerfile.jvm and not the other Dockerfiles, like Dockerfile.native. This is because Quarkus in native-image mode does not currently support JMX. We will need to run Quarkus in JVM mode to have access to JMX and, therefore, to be Cryostat-compatible.
Step 3: Take it for a test drive (optional)
This configuration will set up the sample Quarkus application to use JMX when built and run as an OCI image. If you would like to test drive JMX and JFR with Quarkus before we get to that stage, you can do the following:
$ ./mvnw -Djvm.args="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Dcom.sun.management.jmxremote.port=9096 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false" compile quarkus:dev
This will run the quarkus:dev server with JMX enabled and configured the same way we have specified in the Dockerfile.jvm. Now, open JDK Mission Control, and you should see the Quarkus application in the JVM browser panel.
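If you prefer a command-line sanity check over JDK Mission Control, something like the following should also confirm that JMX and JFR are reachable; the <pid> placeholder is the process ID of the Quarkus dev-mode JVM, which you can read from the plain jcmd listing:
$ jcmd  # list running JVMs and note the PID of the Quarkus dev-mode process
$ jcmd <pid> JFR.check  # confirm Flight Recorder is available and list any active recordings
$ jconsole localhost:9096  # connect over the JMX port we just enabled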
Step 4: Build the application into a container image
Once we are satisfied with our configuration, let’s go ahead and build the application into a container image. Since we want to deploy this application to OpenShift, I will tag it with an example quay.io tag, which you should replace with your own username and application name:
$ ./mvnw package
$ podman build -f src/main/docker/Dockerfile.jvm -t quay.io/<namespace>/code-with-quarkus .
We'll do one more sanity check, and if it looks good, push it to quay.io:
$ podman run -i --rm -p 8080:8080 -p 9096:9096 quay.io/<namespace>/code-with-quarkus
$ # open http://localhost:8080 in your browser once more and ensure you see the Quarkus welcome page
$ podman push quay.io/<namespace>/code-with-quarkus
Once that image is pushed, we need to visit quay.io (for example, https://quay.io/<namespace>/code-with-quarkus?tab=settings) and make the image repository public. After that, we can deploy it to our OpenShift cluster:
$ oc new-app quay.io/<namespace>/code-with-quarkus:latest
$ oc edit svc code-with-quarkus
$ # rename the port "9096-tcp" to "jfr-jmx", but leave the port, targetPort, etc. the same
$ oc expose --port=8080 svc code-with-quarkus
$ oc status # check that the code-with-quarkus app is accessible and still displays the Quarkus welcome page
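As an alternative to editing the Service interactively in the oc edit step above, a JSON patch like the sketch below should achieve the same rename. Note that the /spec/ports/1 index is an assumption about where the 9096 port lands in the generated Service, so verify it first with oc get svc code-with-quarkus -o yaml:
$ oc patch svc code-with-quarkus --type=json -p '[{"op": "replace", "path": "/spec/ports/1/name", "value": "jfr-jmx"}]'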
Step 5: Install Cryostat 2.0 on OpenShift
See the Cryostat 2.0 announcement for how to install Cryostat in your OpenShift cluster using the Cryostat Operator. Once you have Cryostat installed and present in the same OpenShift namespace as your code-with-quarkus example application, you can verify that everything is wired up correctly:
$ oc get flightrecorders
There should be an item like code-with-quarkus-5645dbdd47-z7pl7. This indicates that the Cryostat Operator is running and recognizes our Quarkus application as being Cryostat-compatible.
Check oc status again and visit the Cryostat URL. Enter your OpenShift account token (you can retrieve it from the OpenShift console or with oc whoami -t), then select the code-with-quarkus application and go to the Events view. If you see a list of event templates loaded, then your Cryostat and Quarkus instances are communicating successfully!
Conclusion
This article was the first of several that will provide hands-on introductions to using Cryostat with your Java applications. Look for the next article in this series, which introduces you to defining custom targets for Cryostat.