Build embedded cache clusters with Quarkus and Red Hat Data Grid

December 17, 2020
Daniel Oh
Related topics: Quarkus, Microservices, Artificial intelligence, Java
Related products:
Red Hat Data Grid


    There are many ways to configure caching in a microservices system. As a rule of thumb, you should cache in only one place; for example, you should not cache in both the HTTP layer and the application layer. Distributed caching both improves cloud-native application performance and minimizes the overhead of creating new microservices.

    Infinispan is an open source, in-memory data grid that can run as a distributed cache or a NoSQL datastore. You could use it as a cache, such as for session clustering, or as a data grid in front of the database. Red Hat Data Grid builds on Infinispan with additional features and support for enterprise production environments.

    Data Grid lets you access a distributed cache via an embedded Java library or a language-independent remote service. The remote service works with protocols such as Hot Rod, REST, and Memcached. In this article, you will learn how to build a distributed cache system with Quarkus and Data Grid. We'll use Quarkus to integrate two clustered, embedded Data Grid caches deployed to Red Hat OpenShift Container Platform (RHOCP). Figure 1 shows the distributed cache architecture for this example.

    Figure 1: Clustered embedded Data Grid caches in an OpenShift Container Platform deployment.

    Build a distributed cache system with Quarkus and Data Grid

    In this example, we’ll create a simple ScoreCard application to store data in two clustered embedded caches. We'll go through the process step-by-step.

    Step 1: Create and configure a Quarkus application

    The first step is to set up the Quarkus application. You can use Quarkus Tools to generate a Quarkus project in your preferred IDE. See the complete source code for the example Quarkus application.

    Step 2: Add the Infinispan-embedded and OpenShift extensions

    After you've configured your Quarkus project with an artifactId (embedded-caches-quarkus), you will need to add the required Quarkus extensions. Add the following dependencies to your Maven pom.xml:

    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-infinispan-embedded</artifactId>
    </dependency>
    <dependency>
      <groupId>org.infinispan.protostream</groupId>
      <artifactId>protostream-processor</artifactId>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-openshift</artifactId>
    </dependency>
    

    As an alternative, run the following Maven plug-in command from your Quarkus project's base directory:

    ./mvnw quarkus:add-extension -Dextensions="infinispan-embedded,openshift"
    

    Step 3: Create a caching service

    Next, we'll create a caching service. Red Hat Data Grid provides the EmbeddedCacheManager interface for creating, modifying, and managing clustered caches. An embedded cache manager runs in the same Java virtual machine (JVM) as the client application. Create a new service named ScoreService.java in your Quarkus project, then add the following code to save, delete, and retrieve score data in the example application:

    // Fields of the ScoreService class: the clustered cache manager and the
    // "scoreboard" cache, both initialized in onStart() (see Step 4)
    EmbeddedCacheManager cacheManager;
    Cache<String, Score> scoreCache;

    public List<Score> getAll() {
      return new ArrayList<>(scoreCache.values());
    }

    public void save(Score entry) {
      // getKey() is a small helper in ScoreService (not shown in this excerpt)
      // that derives the cache key from the entry
      scoreCache.put(getKey(entry), entry);
    }

    public void delete(Score entry) {
      scoreCache.remove(getKey(entry));
    }
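
    The article does not show the REST layer that Step 9 later exercises with curl. As a rough sketch (not taken from the sample project), a JAX-RS resource fronting ScoreService could look like the following, assuming ScoreService is a CDI bean and a RESTEasy JSON extension such as quarkus-resteasy-jackson is on the classpath; the class name, root path, and HTTP verbs are assumptions:

    import java.util.List;
    import javax.inject.Inject;
    import javax.ws.rs.Consumes;
    import javax.ws.rs.GET;
    import javax.ws.rs.PATCH;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    // Hypothetical REST facade for ScoreService (not part of the article's listings)
    @Path("/")
    @Produces(MediaType.APPLICATION_JSON)
    @Consumes(MediaType.APPLICATION_JSON)
    public class ScoreResource {

       @Inject
       ScoreService scoreService;

       // Returns every score currently held in the clustered cache
       @GET
       public List<Score> getAll() {
           return scoreService.getAll();
       }

       // Stores or updates a score entry, matching the PATCH request used in Step 9
       @PATCH
       public Response save(Score score) {
           scoreService.save(score);
           return Response.ok(score).build();
       }
    }

    The sample project's actual resource may expose different paths or verbs; check the linked source code for the exact API.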

    Step 4: Initialize the cache nodes

    Next, you’ll define a GlobalConfigurationBuilder to initialize the cache cluster at startup. Because the cache uses synchronous replication (REPL_SYNC), existing cache data is copied automatically to any new node that joins the cluster. Add the following onStart method to the ScoreService.java class:

    void onStart(@Observes @Priority(value = 1) StartupEvent ev) {

       // Build a clustered (transport-enabled) configuration and name the cluster
       GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
       global.transport().clusterName("ScoreCard");
       cacheManager = new DefaultCacheManager(global.build());

       // Entries expire after five minutes; REPL_SYNC synchronously replicates
       // every entry to all nodes in the cluster
       ConfigurationBuilder config = new ConfigurationBuilder();
       config.expiration().lifespan(5, TimeUnit.MINUTES).clustering().cacheMode(CacheMode.REPL_SYNC);

       // Define and start the "scoreboard" cache, then register the listener from Step 6
       cacheManager.defineConfiguration("scoreboard", config.build());
       scoreCache = cacheManager.getCache("scoreboard");
       scoreCache.addListener(new CacheListener());

       log.info("Cache initialized");
    }
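
    The article does not cover shutdown, but because the embedded cache manager joins a JGroups cluster, it is good practice to stop it cleanly when a pod terminates. Here is a minimal sketch of such a handler, an addition of ours rather than part of the original sample, that could sit next to onStart in ScoreService.java:

    // Assumed addition (not from the article): stop the embedded cache manager on shutdown
    // so the node leaves the JGroups cluster cleanly.
    // Requires: import io.quarkus.runtime.ShutdownEvent;
    void onStop(@Observes ShutdownEvent ev) {
       if (cacheManager != null) {
           cacheManager.stop();
       }
    }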
    

    Step 5: Configure the transport layer

    Red Hat Data Grid uses the JGroups library to form clusters and replicate data over the transport layer. On OpenShift, the DNS_PING protocol discovers other cluster members by querying DNS for a headless service (here named jcache-quarkus-ping). To deploy the Quarkus application with the embedded Infinispan caches, add the following DNS_PING configuration to the resources/default-configs/jgroups-kubernetes.xml file:

    <dns.DNS_PING dns_query="jcache-quarkus-ping" num_discovery_runs="3" />
    

    Step 6: Add a CacheListener

    Red Hat Data Grid provides listener APIs that notify clients about cache events, such as when a new cache entry is created. We can implement a listener by adding the @Listener annotation to a plain old Java object (POJO). Let’s create a new CacheListener.java class and add the following code to it:

    @Listener
    public class CacheListener {
    
       @CacheEntryCreated
       public void entryCreated(CacheEntryCreatedEvent<String, Score> event) {
           System.out.printf("-- Entry for %s created \n", event.getType());
       }
    
       @CacheEntryModified
       public void entryUpdated(CacheEntryModifiedEvent<String, Score> event){
           System.out.printf("-- Entry for %s modified\n", event.getType());
       }
    
    }
    

    Step 7: Build and deploy the Quarkus application to OpenShift

    The OpenShift extension that we added earlier is a wrapper extension that combines the Kubernetes and container-image-s2i extensions with a default configuration. The extension makes it easy to get started with Quarkus on OpenShift.

    Quarkus lets us automatically generate OpenShift resources based on a default or user-supplied configuration. To deploy our Quarkus application to OpenShift, we only need to add the following configuration to the application's application.properties file:

    quarkus.openshift.expose=true
    quarkus.kubernetes-client.trust-certs=true
    quarkus.container-image.build=true
    quarkus.kubernetes.deploy=true
    

    Use the following command to log in to the OpenShift cluster:

    $ oc login --token=<YOUR_USERNAME_TOKEN> --server=<OPENSHIFT API URL>
    

    Then, execute the following Maven command. Because quarkus.kubernetes.deploy=true is set, this builds the application, creates the container image with S2I, and deploys it to the OpenShift cluster:

    ./mvnw clean package
    

    Step 8: Scale up to two pods

    Assuming that we've successfully deployed the Quarkus application, our next step is to scale up to two pods. Run the following command:

    oc scale dc/embedded-caches-quarkus --replicas=2
    

    To verify the pods, go to OpenShift's topology view in your developer console. Check to see whether the two pods are running in the embedded-caches-quarkus project, as shown in Figure 2.

    Figure 2: The two pods running in our Quarkus application.

    Step 9: Test the application

    As a final step, let's test the application. Start by invoking the RESTful API to add a few scores to the clustered embedded caches:

    curl --header "Content-Type: application/json" --request PATCH -d '{"card":[5,4,4,10,3,0,0,0,0,0,0,0,0,0,0,0,0,0],"course":"Bethpage","currentHole":4,"playerId":"4","playerName":"Daniel"}' <APP_ROUTE_URL>
    

    If you are using the sample application, you can instead run its predefined Bash script (sh scripts/load.sh).
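
    For reference, the JSON payload above suggests a Score value type along the following lines. This is an inferred sketch, not the actual class from the sample project, which may also carry ProtoStream marshalling annotations (hinted at by the protostream-processor dependency added in Step 2) so that entries can be serialized between cluster nodes:

    import java.util.List;

    // Hypothetical sketch of the Score value type; fields are inferred from the JSON payload above
    public class Score {

       private String playerId;
       private String playerName;
       private String course;
       private int currentHole;
       private List<Integer> card;

       // Getters, setters, and any marshalling annotations are omitted here;
       // see the sample project's source for the real definition.
    }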

    Return to the topology view and open View logs for each pod in its own browser tab. This lets you monitor cache events from the CacheListener. You should see the same cache entry logs on both Data Grid nodes. Figure 3 shows the output from Node 1.

    Figure 3: Monitoring cache events from the CacheListener (Node 1).

    Figure 4 shows the output from Node 2.

    Figure 4: Monitoring cache events from the CacheListener (Node 2).

    What’s next for Quarkus and Red Hat Data Grid

    Red Hat Data Grid shortens application response times and lets developers dramatically improve performance while providing availability, reliability, and elastic scale. By combining Quarkus serverless functions on the front end with external Red Hat Data Grid servers, you can bring these benefits to a serverless architecture designed for high performance, autoscaling, and fast response times.

    Last updated: May 17, 2021
