Red Hat Summit 2018 demo

The scavenger hunt game developed for the audience to play during the Red Hat Summit 2018 demo used Red Hat Data Grid as storage for everything except the pictures taken by the participants. Data was stored across three different cloud environments using cross-site replication. In this blog post, we will look at how data was flowing through Data Grid and explain the Data Grid features powering different aspects of the game's functionality.

In its simplest form, Data Grid exposes a key/value map API. This API was used throughout the game for storing and retrieving information such as game tasks, per-site active users, players, picture/task attempts (also called transactions), and scores.
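As a rough illustration of that access pattern, here is a minimal sketch in plain Java, with a `HashMap` standing in for the remote cache; the class, key, and value names are hypothetical, not the demo's actual code:

```java
import java.util.HashMap;
import java.util.Map;

public class ScoreStore {
    // A plain Map stands in for the Data Grid cache here; the real game
    // talked to a remote cache over the wire with the same put/get contract.
    private final Map<String, Integer> scores = new HashMap<>();

    public void recordScore(String playerId, int score) {
        scores.put(playerId, score);
    }

    public Integer lookupScore(String playerId) {
        return scores.get(playerId);
    }

    public static void main(String[] args) {
        ScoreStore store = new ScoreStore();
        store.recordScore("player-42", 180);
        System.out.println(store.lookupScore("player-42")); // prints 180
    }
}
```

The same pattern was repeated for tasks, active users, transactions and scores, each in its own cache.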

Some of the data, such as player information, was indexed for fast retrieval. This made it easy to calculate the leaderboard, that is, the top 10 players with the highest scores. From the player data, we only needed to index the score, which we could do very easily using Infinispan Protostream annotations (full source code can be found here). Example:

@ProtoMessage(name = "Player")
public class Player {

   private int score;

   @ProtoField(number = 10, required = true)
   public int getScore() {
      return score;
   }
}

Calculating the leaderboard was easy. The Data Grid client simply had to build the query, send it to Data Grid, and handle the results. The query looked something like this:

Query query = queryFactory.from(Player.class)
   .orderBy("score", SortOrder.DESC)
   .maxResults(10)
   .build();

It’s worth noting that both the player data and the index itself were replicated across sites. This meant that no matter which cloud you’d hit, you’d get the same, consistent result.

Even though it didn’t make the final cut, we had also planned for users to be able to see their individual position in the leaderboard. Coming up with such a query was a bit trickier, but we could achieve it with something like this:

int playerScore = ...
Query query = qf.from(Player.class)
   .select(Expression.property("score"))
   .having("score").gt(playerScore)
   .groupBy("score")
   .orderBy("score", SortOrder.DESC)
   .build();

// One result per distinct score greater than the player's
List list = query.list();
final long playerRank = list.size() + 1;

The query groups the distinct scores, sorts them in descending order, and counts how many are greater than the given player’s score. That count, plus 1, gives the player’s position. This meant that if several players had the same score, they would all share the same position, which is quite common, e.g. in golf tournament rankings.
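The ranking arithmetic itself can be sketched in plain Java; the list of scores below is made up for illustration:

```java
import java.util.List;

public class Leaderboard {
    // Competition-style ranking: players with equal scores share a position.
    // Mirrors the query above: count distinct scores greater than the player's.
    public static long rankOf(int playerScore, List<Integer> allScores) {
        return allScores.stream()
                .filter(s -> s > playerScore)
                .distinct()
                .count() + 1;
    }

    public static void main(String[] args) {
        List<Integer> scores = List.of(300, 250, 250, 180, 120);
        System.out.println(rankOf(250, scores)); // prints 2 (only 300 outranks it)
        System.out.println(rankOf(120, scores)); // prints 4 (300, 250, 180 outrank it)
    }
}
```

Note how both players with 250 points get position 2, and the next position is 3 rather than 4, since ties count once.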

Keeping indexed data across different clouds was not always easy. One issue we had to deal with was the lack of default cross-site replication for the indexed data schema. To be able to index data, Data Grid must be able to decompose binary data received from the client to discover individual fields, figure out which ones to index, etc. To solve this problem, we created a schema keeper component (code can be found here) which checked if the player’s schema was present in a given cloud and, if it wasn’t, registered it. This component was deployed as a sidecar with each of the Data Grid instances. The Data Grid team is working to avoid the need for such a component in the future.
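A rough sketch of the check the schema keeper performed, with a plain Map standing in for the dedicated cache through which Data Grid exposes registered Protobuf schemas; the method and key names are illustrative, not the actual component's code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SchemaKeeper {
    // Stand-in for the schema metadata cache on one cloud/site;
    // keys are schema file names, values are the .proto source.
    private final Map<String, String> schemaCache = new ConcurrentHashMap<>();

    // Register the schema only if this site does not already have it,
    // mirroring the check the sidecar ran against each cloud.
    public boolean ensureRegistered(String schemaName, String protoSource) {
        return schemaCache.putIfAbsent(schemaName, protoSource) == null;
    }

    public static void main(String[] args) {
        SchemaKeeper keeper = new SchemaKeeper();
        System.out.println(keeper.ensureRegistered("player.proto", "message Player { ... }"));
        // prints true: schema was missing and got registered
        System.out.println(keeper.ensureRegistered("player.proto", "message Player { ... }"));
        // prints false: schema already present, nothing to do
    }
}
```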

Whenever a picture was uploaded and scored, the score would be stored individually inside Data Grid. This would trigger Data Grid to send an event using remote client listeners, which would be picked up by the game and forwarded to the user. Bearing in mind that Data Grid stores data in a key/value pair structure, the score would be stored in the value part. However, by default remote client listener events ship only key (and version) information. To avoid an extra lookup, the remote client listener was configured with a converter factory named key-value-with-previous-converter-factory, which is available out-of-the-box. By doing this, each event was transformed to contain the value part as well as the key. Example:

@ClientListener(converterFactoryName = "key-value-with-previous-converter-factory", ...)
public class RemoteCacheListener {

For each new score, we wanted a single event to be fired. However, due to how remote client listeners work, each score would, by default, fire as many events as there were clouds/sites. To avoid this, we used remote client listener filters to create a server-side deployed filter which would compare the score’s cloud of origin (AWS, Azure or Private) with the cloud where the filter was being executed.

For example, if a score originated in AWS, only the filter running inside the AWS cloud would allow the event to be fired to the client. When this score arrived in Azure or Private clouds, the filter would detect the event originated in a different cloud and would not fire it. The code for the filter can be found here.
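Stripped of the server-side plumbing (the actual filter plugs into Data Grid's event-filter SPI), the core decision can be sketched like this; the site names and method signatures here are assumptions for illustration:

```java
public class SiteFilter {
    private final String localSite; // e.g. "AWS", "Azure" or "Private"

    public SiteFilter(String localSite) {
        this.localSite = localSite;
    }

    // Fire the event only when the score originated in the cloud where
    // this filter instance runs, so each score triggers exactly one event.
    public boolean accept(String originSite) {
        return localSite.equals(originSite);
    }

    public static void main(String[] args) {
        SiteFilter awsFilter = new SiteFilter("AWS");
        System.out.println(awsFilter.accept("AWS"));   // prints true: same site, event fires
        System.out.println(awsFilter.accept("Azure")); // prints false: replicated copy, suppressed
    }
}
```

Each cloud ran its own filter instance configured with that cloud's name, so the same score replicated to the other two sites was silently dropped there.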

To add it to the server, the filter needs to be deployed into it. For the demo, we extended the base Data Grid image with a JAR containing the filter and related files.

Finally, to apply the filter to the listener, we added the name of the filter factory to the filterFactoryName property of the client listener annotation. Example:

@ClientListener(
   converterFactoryName = "key-value-with-previous-converter-factory",
   filterFactoryName = "site-filter-factory")
public class RemoteCacheListener {

This concludes this blog post, where we looked at how the scavenger hunt game’s application layer used Red Hat Data Grid to store and expose metadata information used throughout the game.

The Red Hat Data Grid Team

Last updated: June 25, 2018