Install the Cryostat Operator on Kubernetes from OperatorHub.io

January 20, 2022
Hareet Dhillon
Related topics: Containers, Java, Kubernetes
Related products: Red Hat build of Cryostat, Red Hat OpenShift

    Cryostat is a container-native JVM application that provides a secure API for profiling and monitoring containers with JDK Flight Recorder (JFR). Among other features, Cryostat 2.0 introduced the Cryostat Operator, which is now available as part of the OperatorHub.io catalog. Using the Cryostat Operator is an easy way to install Cryostat in your existing Kubernetes environment. This article guides you through the installation procedure.

    Prerequisites

    To get started, you'll need a running Kubernetes cluster with cert-manager installed. The steps outlined in this article assume a local Minikube cluster.
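If cert-manager isn't already present on your cluster, it can be installed from its release manifest. The snippet below is a sketch: the version number is illustrative (check cert-manager's releases page for the current one), and the apply step is left as a comment because it requires a running cluster.

```shell
# Sketch: install cert-manager from its release manifest.
# The version below is illustrative; substitute the current release.
CERT_MANAGER_VERSION="v1.11.0"
MANIFEST_URL="https://github.com/cert-manager/cert-manager/releases/download/${CERT_MANAGER_VERSION}/cert-manager.yaml"
echo "$MANIFEST_URL"
# Apply it once kubectl points at your Minikube cluster:
#   kubectl apply -f "$MANIFEST_URL"
```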

    Step 1. Install the Cryostat Operator

    Start by heading over to the Cryostat Operator page on OperatorHub.io. This page contains useful information about the Cryostat Operator, such as a brief overview, prerequisites, Kubernetes custom resource definitions (CRDs), and links to more information about Cryostat. Clicking Install opens a pop-up window with the steps required to install the operator in your cluster.

    As instructed, begin by installing the Operator Lifecycle Manager (OLM):

    curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.19.1/install.sh | bash -s v0.19.1
    

    Once the command finishes, verify that the install was successful:

    kubectl get pods -n olm
    

    The output should look something like this:

    NAME                               READY  STATUS    RESTARTS  AGE
    catalog-operator-84976fd7df-9w7ds  1/1    Running   0         44s
    olm-operator-844b4b88f8-pvrtn      1/1    Running   0         44s
    operatorhubio-catalog-5p87x        1/1    Running   0         43s
    packageserver-5b7c7b9c65-7nfwc     1/1    Running   0         42s
    packageserver-5b7c7b9c65-stvq8     1/1    Running   0         42s

    Next, install the Cryostat Operator. The following command will deploy the operator in the my-cryostat-operator namespace:

    kubectl create -f https://operatorhub.io/install/cryostat-operator.yaml

    Note: See OperatorHub.io for more information about what this command does.

    It will take 20 to 30 seconds for the Cryostat Operator to deploy. You can watch it happen using the following command:

    kubectl get csv -n my-cryostat-operator -w

    Once the operator phase reads Succeeded, you're almost ready to create a Cryostat deployment. However, you need to tackle some networking setup first.

    Step 2. Configure the Ingress routing

    On Kubernetes, the Cryostat Operator requires Ingress configurations for each of its services to make them available outside of the cluster. For a more detailed explanation, see the Network Options section of the Cryostat Operator's documentation. (The documentation also contains information regarding other Cryostat configuration options to suit a variety of needs.)

    To set up the required Ingress configurations, you need an Ingress controller running on your cluster. There are various Ingress controller options for Kubernetes, but for this demonstration, you'll use the NGINX Ingress Controller.

    The Kubernetes documentation provides helpful information on how to set up Ingress on Minikube using the NGINX Ingress Controller. Begin by enabling the Ingress Controller:

    minikube addons enable ingress

    Next, verify that the NGINX Ingress Controller is running as expected:

    kubectl get pods -n ingress-nginx
    

    When the output looks something like the following, with the controller pod READY, you can move on to the next step.

    NAME                                       READY  STATUS      RESTARTS  AGE
    ingress-nginx-admission-create--1-9x9s4    0/1    Completed   0         37s
    ingress-nginx-admission-patch--1-76m8b     0/1    Completed   1         37s
    ingress-nginx-controller-5f66978484-ntw6f  1/1    Running     0         37s

    Step 3. Create a deployment of Cryostat

    To deploy the Cryostat instance, you need to create a Cryostat custom resource, defined in a YAML file. The Network Options section of the Cryostat Operator's documentation provides an example with Ingress specifications that will work for this demonstration after a couple of small changes.

    First, add the minimal field under spec (it is a required field in the Cryostat CRD) and set it to false. This gives you a full, non-minimal Cryostat deployment.

    Second, under the annotations field for all three Ingress configurations, increase the proxy-read-timeout and proxy-send-timeout values from the default of 60 seconds to 3600. This prevents the proxy from closing the WebSocket connection that Cryostat relies on.

    Once you've modified the example, it should look like this:

    apiVersion: operator.cryostat.io/v1beta1
    kind: Cryostat
    metadata:
     name: cryostat-sample
    spec:
     minimal: false
     networkOptions:
       coreConfig:
         annotations:
            nginx.ingress.kubernetes.io/backend-protocol: HTTPS
           nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
           nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
         ingressSpec:
           tls:
           - {}
           rules:
           - host: testing.cryostat
             http:
               paths:
               - path: /
                 pathType: Prefix
                 backend:
                   service:
                     name: cryostat-sample
                     port:
                       number: 8181
       commandConfig:
         annotations:
            nginx.ingress.kubernetes.io/backend-protocol: HTTPS
           nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
           nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
         ingressSpec:
           tls:
           - {}
           rules:
           - host: testing.cryostat-command
             http:
               paths:
               - path: /
                 pathType: Prefix
                 backend:
                   service:
                     name: cryostat-sample-command
                     port:
                       number: 9090
       grafanaConfig:
         annotations:
            nginx.ingress.kubernetes.io/backend-protocol: HTTPS
           nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
           nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
         ingressSpec:
           tls:
           - {}
           rules:
           - host: testing.cryostat-grafana
             http:
               paths:
               - path: /
                 pathType: Prefix
                 backend:
                   service:
                     name: cryostat-sample-grafana
                     port:
                       number: 3000

    Copy this code into a file named cryostat-sample.yaml. Once the file is ready to go, you can use it to create a deployment of Cryostat in the my-cryostat-operator namespace:

    kubectl apply -f cryostat-sample.yaml -n my-cryostat-operator

    Next, verify the health of the deployment:

    kubectl get pods -n my-cryostat-operator -w
    

    Once all three containers comprising the cryostat-sample pod are READY, you can proceed.

    NAME                                                   READY  STATUS             RESTARTS  AGE
    cryostat-operator-controller-manager-6f7fdb5c68-cwgc5  1/1    Running            0         85s
    cryostat-sample-d57dd74bb-5bqj9                        0/3    Pending            0          0s
    cryostat-sample-d57dd74bb-5bqj9                        0/3    Pending            0          0s
    cryostat-sample-d57dd74bb-5bqj9                        0/3    ContainerCreating  0          0s
    cryostat-sample-d57dd74bb-5bqj9                        2/3    Running            0          2s
    cryostat-sample-d57dd74bb-5bqj9                        2/3    Running            0         11s
    cryostat-sample-d57dd74bb-5bqj9                        3/3    Running            0         11s

    Step 4. Route the Cryostat service URLs to Minikube

    Next, list the Ingress resources created for the Cryostat deployment:

    kubectl get ingress -n my-cryostat-operator

    If your setup so far has been correct, the output from this command should look similar to the following:

    NAME                     CLASS  HOSTS                     ADDRESS    PORTS    AGE
    cryostat-sample          nginx  testing.cryostat          localhost  80, 443  5m4s
    cryostat-sample-command  nginx  testing.cryostat-command  localhost  80, 443  5m4s
    cryostat-sample-grafana  nginx  testing.cryostat-grafana  localhost  80, 443  5m4s

    Since you're running Minikube locally, the address shown in the ADDRESS column is internal to the cluster. To get the external address, through which the services in the cluster will be exposed, run the following command:

    minikube ip

    Note: Throughout the rest of this article, IP_ADDRESS stands in for the address this command returns.

    In the next step, you'll need to edit the /etc/hosts file on your computer. (That file path is valid for both Linux and macOS users; on Windows, the path is C:\Windows\System32\Drivers\etc\hosts.) Please note that editing this file requires administrator privileges:

    SUDO_EDITOR=gedit sudoedit /etc/hosts

    Add the following line to the bottom of the file:

    {IP_ADDRESS} testing.cryostat testing.cryostat-command testing.cryostat-grafana

    Remember: {IP_ADDRESS} is the result you got from running the minikube ip command above.

    This addition ensures that your web browser will route requests for HTTPS URLs containing any of the above three hosts to Minikube. The testing.cryostat-grafana host provides access to the Grafana dashboard that is linked to the current Cryostat instance. The testing.cryostat-command host is a leftover from the deprecated command channel.
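The hosts-file edit can also be scripted. In the sketch below, the address is a placeholder; substitute the actual output of minikube ip on your machine, and note that the append step is left as a comment because it needs administrator privileges.

```shell
# Placeholder address: replace with the output of `minikube ip`.
IP_ADDRESS="192.168.49.2"
HOSTS_LINE="${IP_ADDRESS} testing.cryostat testing.cryostat-command testing.cryostat-grafana"
echo "$HOSTS_LINE"
# Append to the hosts file with administrator privileges, e.g.:
#   echo "$HOSTS_LINE" | sudo tee -a /etc/hosts
```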

    With your hosts file modified, you can now access the Cryostat web client UI at https://testing.cryostat.

    Conclusion

    After you've installed the Cryostat Operator using the steps described in this article, there are several things you could do next. You could configure a Java application to work with Cryostat, so that it can be monitored using the web client. You could also configure custom targets for monitoring Java applications. To explore other useful features new in Cryostat 2.0, see the Cryostat 2.0 announcement blog post. For more about what you can do with Cryostat, check out this introduction to JDK Flight Recorder on Red Hat Developer.
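As a preview of that next step (a sketch only; this article does not cover it), Cryostat connects to target JVMs over remote JMX, so a containerized Java application is typically started with a JMX port exposed. The port and flags below are illustrative, and disabling SSL and authentication is for demo purposes; see "Configuring Java applications to use Cryostat" for the full procedure.

```shell
# Illustrative only: expose remote JMX so Cryostat can connect to the JVM.
JMX_PORT=9091
JAVA_OPTS="-Dcom.sun.management.jmxremote.port=${JMX_PORT} \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false"
echo "$JAVA_OPTS"
```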

    Last updated: November 17, 2023

    Related Posts

    • Announcing Cryostat 2.0: JDK Flight Recorder for containers

    • Configuring Java applications to use Cryostat

    • Java monitoring for custom targets with Cryostat

    • Introduction to Cryostat: JDK Flight Recorder for containers
