
How Cryostat 2.2 application management is more flexible

December 12, 2022
Andrew Azores
Related topics:
Containers, Java, Quarkus
Related products:
Red Hat OpenShift

    Many Java programmers use Cryostat to monitor and report statistics on their applications. Since its early days, Cryostat has performed the discovery of target Java Virtual Machine (JVM) applications in various ways. This article demonstrates how you can fully customize your application selection with Cryostat 2.2.

    Previous options for application discovery

    By default, Cryostat detects the platform it is running on at startup. Users can set an environment variable to force a specific platform.

    Once Cryostat starts up and selects a platform, it uses a hardcoded mechanism specific to that platform to discover target JVM applications. In Red Hat OpenShift, for instance, Cryostat queries the OpenShift API server for Endpoints objects within the same namespace as Cryostat, then filters the objects to choose ones where the port either is named jfr-jmx or has the number 9091. Each pair of IP addresses and port numbers is assumed to be a JVM application exposing a JMX port from within a pod, so Cryostat maps these Endpoints objects to its internal representation and exposes them through its API and web user interface.
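The built-in filtering convention described above can be sketched in a few lines. This is an illustrative simplification in Python, not Cryostat's actual (Java) implementation; the data shapes stand in for the real OpenShift Endpoints objects.

```python
# Simplified sketch of Cryostat's hardcoded OpenShift discovery filter:
# keep only ports named "jfr-jmx" or numbered 9091, and map each matching
# (address, port) pair to a JMX service URL.

JMX_PORT_NAME = "jfr-jmx"
JMX_PORT_NUMBER = 9091

def is_jmx_port(port: dict) -> bool:
    """True if an Endpoints port matches the hardcoded naming/number convention."""
    return port.get("name") == JMX_PORT_NAME or port.get("port") == JMX_PORT_NUMBER

def discover_targets(endpoints: list[dict]) -> list[str]:
    """Map each matching (address, port) pair to a JMX service URL."""
    urls = []
    for ep in endpoints:
        for addr in ep["addresses"]:
            for port in ep["ports"]:
                if is_jmx_port(port):
                    urls.append(
                        f"service:jmx:rmi:///jndi/rmi://{addr}:{port['port']}/jmxrmi"
                    )
    return urls
```

Note that the filter is purely convention-based: any port that happens to match is assumed to be a JMX port, which is exactly the inflexibility discussed next.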

    Unfortunately, this way of finding applications is inflexible, and problems are quickly apparent. What if your application deployment already uses port 9091 for something other than JMX, and you want to add Cryostat now? Or what if something else in your system depends on a port naming convention, and you can't use jfr-jmx?

    The past several major versions of Cryostat have also supported custom targets, allowing you to make an API request with a JMX Service URL pointing to a JVM application and a human-friendly alias to identify it. This mechanism can help make discovery accurate by informing Cryostat that you have target applications it isn't discovering on its own.

    But using custom targets does not prevent Cryostat from discovering things that are not JVM applications or aren't ones that you care to interact with through Cryostat. Custom targets also miss out on more recent Cryostat versions' awareness of the deployment scenario: In OpenShift, for instance, Cryostat knows that the endpoint represents a JVM within a pod, which belongs to a deployment, which is within a namespace. In contrast, a custom target definition is just a member of a flat list outside this hierarchy. Cryostat also applies annotations to nodes in this hierarchical tree structure and copies OpenShift annotations and labels into them, which would also be missed when you define a custom target.

    And finally, custom target definitions can't accommodate the disappearance or reappearance of a target. If the target is reachable when the definition is created, Cryostat accepts the definition but never rechecks the target's reachability and never deletes the definition if the target becomes unreachable.

    The discovery plugin customizes application discovery

    What is the solution? It is the new Cryostat 2.2 discovery plugin. The old platform-specific mechanisms still exist. Internally, they are wrapped in and registered as a built-in discovery plugin. The discovery plugin API opens up this system for extension by external clients. By registering a discovery plugin and disabling the built-ins (which you can do by setting the environment variable CRYOSTAT_DISABLE_BUILTIN_DISCOVERY), you can fully customize Cryostat's mechanism for discovering target applications and tailor it to suit your exact deployment needs.

Using the discovery plugin is fairly simple through Cryostat's API, which exposes the following new endpoints to control discovery.

    Register the plugin with Cryostat

    You must register the plugin with Cryostat before publishing information about target applications. Use the following request in your plugin to register it:

    POST /api/v2.2/discovery

    The request must include a POST body in JSON form. Include an item named realm that identifies the kind of plugin (say, myproject or servicemesh-bridge) and an item named callbackUrl that points back to the plugin instance. Cryostat uses the callbackUrl to verify that the plugin is reachable and that communication is open in both directions.

    The plugin must also pass an Authorization header with the request and pass a standard Cryostat platform authz check. You might choose to create a separate OpenShift service account for authorization in production.

    Cryostat's response to this request is a JSON object containing an id and a token. The id is a unique identifier specific to this plugin instance and is required for follow-up API requests. The token is a JWT token with a relatively short expiration date, which the plugin must use for follow-up API requests in place of the Authorization header.
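A minimal registration sketch, using only the Python standard library. The payload fields (realm, callbackUrl) and the endpoint are from the description above; the exact shape of the response envelope and the hostname/credentials here are assumptions, so consult the upstream API documentation before relying on them.

```python
import json
from urllib import request

def build_registration(realm: str, callback_url: str) -> bytes:
    """Build the JSON body for POST /api/v2.2/discovery."""
    return json.dumps({"realm": realm, "callbackUrl": callback_url}).encode()

def register(cryostat_url: str, realm: str, callback_url: str, authz: str):
    """Register this plugin with Cryostat and return the (id, token) pair.

    The response is a JSON object containing an id and a token; the exact
    envelope may differ, so check the upstream API docs.
    """
    req = request.Request(
        f"{cryostat_url}/api/v2.2/discovery",
        data=build_registration(realm, callback_url),
        headers={"Authorization": authz, "Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
        return body["id"], body["token"]
```

In production, the Authorization header would carry credentials that pass Cryostat's platform authz check, such as a token for a dedicated OpenShift service account.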

    Application discovery

    Denote the applications you want to be discovered through a request in the following format:

    POST /api/v2.2/discovery/:<id>?token=:<token>

    The <id> path parameter here is the id that was supplied to the plugin in the previous step. Likewise, the <token> query parameter is the token from the previous step.

    The plugin should send a JSON array as the body of this request. The elements within the array are JSON objects representing either target JVM applications directly or some hierarchical node above the target, such as a pod or deployment. For more specifics about the format of these objects, refer to the upstream project documentation. Cryostat inserts this array into the total discovery scenario tree that Cryostat tracks, with a node representing the plugin and its realm as one of the high-level subtrees and the published array as the children of that realm node.
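A sketch of publishing a discovery subtree. The URL format follows the endpoint above; the node field names (nodeType, name, target.connectUrl, target.alias) are illustrative guesses at the schema, so treat the upstream project documentation as authoritative.

```python
import json

def build_publish_url(base: str, plugin_id: str, token: str) -> str:
    """URL for POST /api/v2.2/discovery/:id?token=:token."""
    return f"{base}/api/v2.2/discovery/{plugin_id}?token={token}"

# Illustrative subtree: a single leaf node representing one JVM target.
# Field names are assumptions; see the upstream docs for the real schema.
sample_subtree = [
    {
        "nodeType": "JVM",
        "name": "my-quarkus-app",
        "target": {
            "connectUrl": "service:jmx:rmi:///jndi/rmi://my-quarkus-app:9091/jmxrmi",
            "alias": "my-quarkus-app",
        },
    }
]
body = json.dumps(sample_subtree).encode()
```

Cryostat grafts this array into its discovery tree under a node representing the plugin's realm, so the published targets appear alongside the built-in discovery results.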

    Removing a plugin

    To stop monitoring the applications requested by the plugin, remove it from Cryostat as follows:

    DELETE /api/v2.2/discovery/:<id>?token=:<token>

    Again, the <id> and <token> are the ones supplied from registration. The plugin uses this request to deregister itself from Cryostat and perform a graceful shutdown.

    If the plugin shuts down without issuing this request, Cryostat eventually notices the missing plugin when trying to ping the plugin's callbackUrl. When that ping fails, Cryostat internally deregisters the plugin.
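The graceful-shutdown path can be sketched the same way. This builds the DELETE request described above; the host name is a placeholder.

```python
from urllib import request

def build_deregister_request(base: str, plugin_id: str, token: str) -> request.Request:
    """Request for DELETE /api/v2.2/discovery/:id?token=:token."""
    return request.Request(
        f"{base}/api/v2.2/discovery/{plugin_id}?token={token}",
        method="DELETE",
    )

def deregister(base: str, plugin_id: str, token: str) -> None:
    """Deregister this plugin from Cryostat during graceful shutdown."""
    request.urlopen(build_deregister_request(base, plugin_id, token))
```

A plugin would typically call deregister from its shutdown hook; if it crashes before doing so, Cryostat's failed callbackUrl ping cleans up the registration as described above.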

    Plugins can also time out. Cryostat uses the callbackUrl to inform the plugin that its token is going to expire soon. The plugin can then re-register itself to obtain a refreshed token. For more information on this process, please refer to the upstream project documentation.

    A sample Quarkus JVM mode application

    I have provided a general description but not much practical implementation or demonstration of this plugin. To partly fill the gap, I have prepared a small sample application built with Quarkus that implements the Cryostat discovery plugin API contract outlined in the previous section. The example is a Quarkus JVM mode application that specifies where it is and how to find it rather than letting Cryostat use the platform (the OpenShift API server) to locate the application. You can find the implementation on my GitHub repository. You might be surprised at how small and simple the implementation is, compared to the specification in the previous section.

    My sample application is not ready for widespread use yet and is not part of the downstream Red Hat build of the Cryostat distribution, but work is in progress upstream on a Cryostat agent. This project is a standard JVM tool interface (JVM TI) agent that will be published as a simple JAR to be attached to workload applications.

    This agent will require a few configuration options to indicate the location of its Cryostat backend instance, but the intent is for the agent to register itself as a discovery plugin and notify Cryostat of its presence.

By using this discovery plugin agent, developers can enjoy the flexibility of the discovery API within their deployment setup without having to write a custom discovery plugin and encode domain-specific knowledge of that deployment setup. They can simply add the Cryostat discovery agent to their container image builds and redeploy their workloads.

The agent works similarly to the Quarkus sample application, except the Cryostat discovery plugin API requests are extracted into a pluggable JVM TI agent that you can bundle with your existing application without writing a single additional line of code. The agent is expected to arrive with the next release, Cryostat 2.3, roughly six months after this publication, so keep an eye out for it.

    Last updated: September 20, 2023
