
Create and manage Data Grid services

June 8, 2021
Katia Aresti, Don Naro, Ryan Emerson
Related topics: Event-Driven, Kubernetes, Operators, Quarkus
Related products: Red Hat Data Grid, Red Hat OpenShift

    In a recent article, we described how we used Red Hat Data Grid, built from the Infinispan community project, to deliver a global leaderboard that tracked real-time scores for an online game.

    In this article, we’re back to demonstrate how we used the Red Hat Data Grid Operator on Red Hat OpenShift to create and manage services running on Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. The Operator let us build a global Data Grid cluster that spanned all three cloud platforms yet appeared to external consumers as a single Data Grid service.

    Note: The global leaderboard was featured during Burr Sutter's Red Hat Summit keynote in April 2021. Get a program schedule for the second half of this year's virtual summit, coming June 15 to 16, 2021.

    Global leaderboard across the hybrid cloud

    Figure 1 shows the cross-site replication architecture behind the global leaderboard that we introduced in our previous article.

    Figure 1: Cross-site replication using the Data Grid Operator.

    As we explained in the previous article, the online game and all related services were hosted on different cloud providers in three separate geographical regions. To ensure the system performed well and was responsive enough to show real-time results for each match, we used Data Grid to provide low-latency access to in-memory data stored close to the player’s physical location.

    The leaderboard represents a global ranking of players, so the system needed to combine the scores from all three clusters to determine the overall winners. To achieve this, we devised a solution that brought together the following technologies:

    • Indexing and querying caches using Protobuf encoding (see the query sketch after this list).
    • Quarkus extensions for Infinispan, RESTEasy, WebSockets, and Scheduler.
    • Data Grid cross-site replication.
    • Data Grid Operator for OpenShift.
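
    To give a flavor of the first item, once the Protobuf schema is registered, the global ranking can be computed with an Ickle query over the indexed PlayerScore fields. Here is a minimal sketch using the Data Grid CLI, assuming a players-scores cache and the schema shown later in this article:

    cd caches/players-scores
    query "from com.redhat.PlayerScore p where p.human = true order by p.score desc"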

    Data Grid Operator

    To make our lives easier, we created a Data Grid Operator subscription on each OpenShift cluster at each data center. The Operator then automatically configured and managed the Data Grid server nodes and did all the heavy lifting to establish cross-site network connections. As a result, we had far less deployment complexity to deal with and could concentrate on our implementation.
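
    For reference, the Operator can be installed on each cluster with an Operator Lifecycle Manager (OLM) Subscription along the following lines. This is a minimal sketch; the channel and package names are assumptions you should verify against the Operator catalog for your OpenShift version:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: datagrid-operator
      namespace: openshift-operators
    spec:
      channel: 8.2.x                        # assumed channel name; verify in the catalog
      name: datagrid                        # assumed package name for the Data Grid Operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace

    The following Infinispan custom resource contains the Kubernetes API URL for each site and a reference to a secret that contains each cluster’s credentials. We referenced each site name in our cache’s <backups/> configuration.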

    apiVersion: infinispan.org/v1
    kind: Infinispan
    metadata:
      name: datagrid
    spec:
      image: quay.io/infinispan/server:12.1.0.Final
      container:
        cpu: "1000m"
        memory: 4Gi
      expose:
        type: LoadBalancer
      replicas: 2
      security:
        endpointEncryption:
          type: None
        endpointAuthentication: false
      service:
        type: DataGrid
        sites:
          local:
            name: SITE_NAME
            expose:
              type: LoadBalancer
          locations:
            - name: AWS
              url: openshift://api.summit-aws.28ts.p1.openshiftapps.com:6443
              secretName: aws-token
            - name: GCP
              url: openshift://api.summit-gcp.eior.p2.openshiftapps.com:6443
              secretName: gcp-token
            - name: AZURE
              url: openshift://api.g9dkpkud.centralus.aroapp.io:6443
              secretName: azure-token
      logging:
        categories:
          org.jgroups.protocols.TCP: error
          org.jgroups.protocols.relay.RELAY2: error
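
    Each secretName above refers to a secret that lets the Operator reach the remote cluster’s API server. Here is a minimal sketch of one such secret, assuming the Operator expects a service account token under a token key (verify the exact format for your Operator version):

    apiVersion: v1
    kind: Secret
    metadata:
      name: aws-token
    type: Opaque
    stringData:
      token: SERVICE_ACCOUNT_TOKEN   # placeholder for a service account token from the AWS cluster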

    To create and initialize our caches, we used a combination of the Cache and Batch custom resources, which the Data Grid Operator provides. We'll look at those next.

    The Cache custom resource

    The caches that we needed to back up to other locations were game, players-scores, and players-shots. We created these caches with the following Cache custom resource, where the CACHE_NAME and BACKUP_SITE_n placeholders were replaced with the correct values for each site deployment:

    ---
    apiVersion: infinispan.org/v2alpha1
    kind: Cache
    metadata:
      name: CACHE_NAME
    spec:
      clusterName: datagrid
      name: CACHE_NAME
      adminAuth:
        secretName: cache-credentials
      template: |
        <infinispan>
            <cache-container>
                <distributed-cache name="CACHE_NAME" statistics="true">
                    <encoding>
                        <key media-type="text/plain" />
                        <value media-type="text/plain" />
                    </encoding>
                    <backups>
                        <backup site="BACKUP_SITE_1" strategy="ASYNC" enabled="true">
                            <take-offline min-wait="60000" after-failures="3" />
                        </backup>
                        <backup site="BACKUP_SITE_2" strategy="ASYNC" enabled="true">
                            <take-offline min-wait="60000" after-failures="3" />
                        </backup>
                    </backups>
                </distributed-cache>
            </cache-container>
        </infinispan>
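
    For example, on the cluster running in AWS, CACHE_NAME became game (and likewise players-scores and players-shots), and the backup sites resolved to the other two locations declared in the Infinispan resource, so the template’s <backups> element looked like this:

    <backups>
        <backup site="GCP" strategy="ASYNC" enabled="true">
            <take-offline min-wait="60000" after-failures="3" />
        </backup>
        <backup site="AZURE" strategy="ASYNC" enabled="true">
            <take-offline min-wait="60000" after-failures="3" />
        </backup>
    </backups>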

    The Batch custom resource

    To create the other caches we needed for our system, and to upload all our *.proto (Protobuf) schemas to our Data Grid cluster, we used the Batch custom resource. The Batch custom resource executes the batch file in a ConfigMap using the Data Grid command-line interface (CLI). We were able to take advantage of full CLI capabilities for manipulating caches without worrying too much about the authentication and connection details. Here’s the Batch custom resource we used to upload our schema, create the players and match-instances caches at each site, and then put an initial entry into the game cache:

    apiVersion: infinispan.org/v2alpha1
    kind: Batch
    metadata:
      name: datagrid
    spec:
      cluster: datagrid
      configMap: datagrid-batch
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: datagrid-batch
    data:
      LeaderboardServiceContextInitializer.proto: |
        // File name: LeaderboardServiceContextInitializer.proto
        // Generated from : com.redhat.model.LeaderboardServiceContextInitializer
        syntax = "proto2";
        package com.redhat;
        enum GameStatus {
           PLAYING = 1;
           WIN = 2;
           LOSS = 3;
        }
        enum ShipType {
           CARRIER = 1;
           SUBMARINE = 2;
           BATTLESHIP = 3;
           DESTROYER = 4;
        }
        /**
         * @Indexed
         */
        message PlayerScore {
           /**
            * @Field(index=Index.YES, analyze = Analyze.NO, store = Store.YES)
            */
           optional string userId = 1;
           optional string matchId = 2;
           /**
            * @Field(index=Index.YES, analyze = Analyze.NO, store = Store.YES)
            */
           optional string gameId = 3;
           optional string username = 4;
           /**
            * @Field(index=Index.YES, analyze = Analyze.NO, store = Store.YES)
            */
           optional bool human = 5;
           /**
            * @Field(index=Index.YES, analyze = Analyze.NO, store = Store.YES)
            * @SortableField
            */
           optional int32 score = 6;
           /**
            * @Field(index=Index.YES, analyze = Analyze.NO, store = Store.YES)
            * @SortableField
            */
           optional int64 timestamp = 7;
           /**
            * @Field(index=Index.YES, analyze = Analyze.NO, store = Store.YES)
            */
           optional GameStatus gameStatus = 8;
           /**
            * @Field(index=Index.YES, analyze = Analyze.NO, store = Store.YES)
            */
           optional int32 bonus = 9;
        }
        /**
         * @Indexed
         */
        message Shot {
           /**
            * @Field(index=Index.YES, analyze = Analyze.NO, store = Store.YES)
            */
           required string userId = 1;
           required string matchId = 2;
           /**
            * @Field(index=Index.YES, analyze = Analyze.NO, store = Store.YES)
            */
           optional string gameId = 3;
           /**
            * @Field(index=Index.YES, analyze = Analyze.NO, store = Store.YES)
            */
           optional bool human = 4;
           /**
            * @Field(index=Index.YES, analyze = Analyze.NO, store = Store.YES)
            */
           optional int64 timestamp = 5;
           /**
            * @Field(index=Index.YES, analyze = Analyze.NO, store = Store.YES)
            */
           optional ShotType shotType = 6;
           /**
            * @Field(index=Index.YES, analyze = Analyze.NO, store = Store.YES)
            */
           optional ShipType shipType = 7;
        }
        enum ShotType {
           HIT = 1;
           MISS = 2;
           SUNK = 3;
        }
      batch: |
        schema --upload=/etc/batch/LeaderboardServiceContextInitializer.proto LeaderboardServiceContextInitializer.proto
        create cache --file=/etc/batch/match-instances.xml match-instances
        create cache --file=/etc/batch/players.xml players
        cd caches/game
        put --encoding=application/json --file=/etc/batch/game-config.json game
      game-config.json: |
        {
            "id": "uuidv4",
            "state": "lobby"
        }
      game.xml: |
        <infinispan>
            <cache-container>
                <distributed-cache name="game" statistics="true">
                    <encoding>
                        <key media-type="text/plain" />
                        <value media-type="text/plain" />
                    </encoding>
                </distributed-cache>
            </cache-container>
        </infinispan>
      match-instances.xml: |
        <infinispan>
            <cache-container>
                <distributed-cache name="players" statistics="true">
                    <encoding>
                        <key media-type="text/plain" />
                        <value media-type="text/plain" />
                    </encoding>
                </distributed-cache>
            </cache-container>
        </infinispan>
      players.xml: |
        <infinispan>
            <cache-container>
                <distributed-cache name="players" statistics="true">
                    <encoding>
                        <key media-type="text/plain" />
                        <value media-type="text/plain" />
                    </encoding>
                </distributed-cache>
            </cache-container>
        </infinispan>
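
    After applying these resources, the Operator runs the batch once and records the outcome in the Batch resource’s status, which you can inspect with oc get batch datagrid -o yaml. As a rough sketch, a successful run might report something like the following (the exact status fields are an assumption; check them against your Operator version):

    status:
      phase: Succeeded   # hypothetical output; a failed batch surfaces the failure reason in its status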

    Conclusion

    In this article, we’ve shown how we created a system built with Red Hat Data Grid and Quarkus, using the RESTEasy, WebSockets, Scheduler, and infinispan-client extensions along with Protobuf encoding. The resulting system maintains a global ranking of game players across three cloud providers in separate geographic regions. We hope these details inspire you to start using Data Grid, or the Infinispan project, for other hybrid cloud use cases. If you’re interested in finding out more, start with the Data Grid project page or visit our Infinispan community website.

    Last updated: September 19, 2023

    Related Posts

    • Building a real-time leaderboard with Red Hat Data Grid and Quarkus on a hybrid Kubernetes deployment

    • Using Red Hat Data Grid to power a multi-cloud real-time game
