Reactica roller coaster image

Before we get into the details of the code, we'll cover three topics here: 

  • The Reactica roller coaster and our assignment
  • Reactive programming
  • Eclipse Vert.x

The Reactica roller coaster

The thrill of riding the Reactica roller coaster can hardly be put into words, so we won’t attempt that here. Besides, the point of this demo is to create a display that shows the wait time for the coaster. We’ll let the ride and its riders speak for themselves. 

The code we’ll cover uses a system of reactive microservices to handle all of the data associated with the coaster and the guests who dare to ride it. Those microservices use asynchronous messages to do their work, creating a system that is resilient, scalable, and reliable. The result is a compelling display that shows Coderland guests how long they’ll have to wait before boarding the ride. 

That’s all we really need to know about the scenario itself, so let’s move on. 

Reactive programming

What exactly is reactive programming? To quote André Staltz's reactive programming tutorial:

Reactive programming is programming with asynchronous data streams. 

To put it another way, we’ll be writing components that create or consume events. (In some cases, a component does both.) In our work here, some events are sent to Red Hat AMQ; others are sent to Red Hat Data Grid. Those two pieces of middleware are how data moves through the system. Events are things like “The roller coaster just shut down,” “Someone just got on the ride,” and “Someone just finished the ride.” We’ll use the data in those events to calculate the wait times for Reactica. 

We use Red Hat Data Grid to track park guests as they get in line for Reactica, as they board the ride, and as they finish the ride. The data grid also gives us a way to handle backpressure: events are stored until a consumer handles them, so even if the consumers can't keep up with events as they occur, every event is eventually consumed.

What we’re going to do here is create a few microservices and have each one create and/or watch asynchronous data streams. The events we’ll be watching are simple and easy to process. And best of all, the code is simple, too. Most of our microservices have fewer than 25 lines of code.
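If "asynchronous data streams" sounds abstract, here is a minimal sketch of the idea in plain Java using the standard java.util.concurrent.Flow API. It is not taken from the Reactica code (which uses Vert.x, AMQ, and Data Grid to move events around); the class name and event strings are made up for illustration. Notice how the subscriber uses request(1) to pull events at its own pace, which is the backpressure idea mentioned above.

import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class StreamSketch {
    public static void main(String[] args) throws InterruptedException {
        // The producer side of an asynchronous data stream.
        SubmissionPublisher<String> rideEvents = new SubmissionPublisher<>();

        // The consumer side: a subscriber that handles each event as it arrives.
        rideEvents.subscribe(new Flow.Subscriber<String>() {
            private Flow.Subscription subscription;

            @Override
            public void onSubscribe(Flow.Subscription subscription) {
                this.subscription = subscription;
                subscription.request(1);   // ask for one event at a time (backpressure)
            }

            @Override
            public void onNext(String event) {
                System.out.println("Handled event: " + event);
                subscription.request(1);   // ready for the next event
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
            }

            @Override
            public void onComplete() {
                System.out.println("Stream closed.");
            }
        });

        // Events flow through the stream asynchronously; the subscriber's
        // handlers run on a different thread than this one.
        rideEvents.submit("A guest just got in line");
        rideEvents.submit("A guest just boarded the ride");
        rideEvents.close();

        Thread.sleep(500);   // give the asynchronous delivery a moment to finish
    }
}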

Characteristics of reactive systems

The Reactive Manifesto defines four characteristics of reactive systems. We’ll discuss them briefly here. As we go through the Reactica code, we’ll see how it embodies these principles. The manifesto says that reactive systems are: 

Responsive

The system responds as quickly as possible. (Duh.) Obviously, every system is designed that way, but the responsiveness of reactive systems includes the notion that failures are found quickly and handled in a graceful way.  

Resilient 

Following up on this notion of responsiveness, reactive systems stay responsive when parts of the system fail. As you’ll see, our system has multiple components, each of which is isolated from the others. That means any failure is confined to one component, not the entire system. When the complete application is deployed, we can start and stop different components without affecting the entire system. 

Elastic

Because the components of a reactive system are creating and consuming events asynchronously, it’s likely that the resources needed by a particular component will change significantly over time. A reactive system scales those resources up and down as necessary. (If you think that sounds very cloud-y and Kubernetes-ish, you’re on the right track.)

Message driven

The final piece of the puzzle is that reactive systems use asynchronous message passing to communicate with each other. Message passing, by its very nature, promotes loose coupling between components, and doing things asynchronously protects one component from a failure in another. If a component is producing messages, it won’t know if a component that consumes those messages fails or degrades in some way. Similarly, a message consumer won’t know if any of the producers fail. 

System architecture

The architecture of our reactive system looks like this: 

Reactica architecture image

In the upper left of the diagram, we have the components that generate User and Ride objects. New Users are sent to Red Hat AMQ. In the lower left, the Event Store component watches AMQ for new Users and passes them on to Red Hat Data Grid. 

In the lower right are the components that generate the data needed by the Billboard component. The Queue Length Estimate component queries the data grid to see how many people are in line for the coaster; that count determines the wait time. The Current Waiting Line component keeps track of the names of the guests (Users) in line and their status: a User can be waiting in line, on the ride, or finished with the ride.
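As a rough illustration of the arithmetic involved, the wait-time estimate boils down to how many ride runs it takes to clear the line. This is only a sketch with made-up names and numbers, not the actual Queue Length Estimate logic:

public class WaitTimeSketch {
    // Hypothetical estimate: how long until the last person in line has ridden?
    static long estimateWaitSeconds(long peopleInLine, int ridersPerRun, long secondsPerRun) {
        long runsNeeded = (peopleInLine + ridersPerRun - 1) / ridersPerRun;   // round up
        return runsNeeded * secondsPerRun;
    }

    public static void main(String[] args) {
        // 37 guests in line, 8 riders per run, 90 seconds per run -> 5 runs -> 450 seconds
        System.out.println("Estimated wait: " + estimateWaitSeconds(37, 8, 90) + " seconds");
    }
}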

The point of this entire exercise is the Billboard UI component in the upper right. It watches for updated data (sent via AMQ) from the Queue Length Estimate and Current Waiting Line components and changes the web UI accordingly. It also has controls for the coaster itself, allowing an administrator to start or stop the coaster, start or stop the generation of new Users, and even dismiss all Users from the coaster if Reactica is going to be down for a while. 

 

A reactive, distributed system

 

The combination of all of these components and the underlying infrastructure isn’t just an example of reactive programming; it’s a complete reactive, distributed system that embodies all of the characteristics we mentioned earlier. 

 

Reactive resources

 

If you’d like to dive deeper into reactive programming, here are some useful resources: 

 

 

Eclipse Vert.x

 

Because almost all of the code is written using Vert.x, it's worth saying a bit about it. As the headline on the Vert.x website (vertx.io) says, "Eclipse Vert.x is a toolkit for building reactive applications on the JVM." It is event-driven and non-blocking, and your code runs on a small number of event-loop threads, which means you can handle many concurrent connections without dedicating a thread to each one. (If you know how the Node.js event loop works, Vert.x will seem familiar.)

 

A Vert.x component is called a verticle. It's a single-threaded, event-driven (and generally non-blocking) component that receives events (HTTP requests, ticks from a timer) and produces responses (an HTTP response, console output). It can do other things as well, such as sending a message to AMQ.

 

Continuing to draw on the overview at the website, Vert.x is: 

 

Lightweight

 

The core Vert.x module is around 650kB. Kilobytes. That’s not even half the capacity of an old floppy disk. (For those of you who don’t remember floppy disks, they were primitive storage devices that looked like a save button.) 

 

Fast

 

You can find numbers on this if you want, but trust us, it's fast. If you're not convinced, vertx.io points to the TechEmpower benchmarks, where Vert.x comes out on top in one of the tests and scores highly in many others.

 

Not an application server 

 

You don’t deploy your Vert.x application into a monolithic server somewhere. You run your code wherever you want. If you’re familiar with the combination of Node.js and Express, you’ll feel right at home here. 

 

Modular

 

Vert.x has lots of components, but we only have to use the ones we need. If we need to access Apache Kafka for our event stream, for example, we add the Vert.x Kafka JAR file to our application’s dependencies. If we don’t need Kafka, it won’t be part of our application.  

 

Simple but not simplistic

 

We'll see this when we look at the Reactica code. Most of the verticles in the application are no more than 25 lines of code.

 

Ideal for microservices

 

Microservices are meant to be lightweight, asynchronous, and event-driven. Vert.x makes building that kind of application easy. 

 

Polyglot

 

Because Vert.x runs on the JVM, it supports a number of JVM languages, including Java (of course), JavaScript, Ruby, Groovy, Ceylon, Scala, and Kotlin.

 

A basic Vert.x application 

 

Here is a simple Vert.x verticle: 

 

package com.redhat.coderland; 

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class SampleVerticle extends AbstractVerticle {
    private long startTime = System.currentTimeMillis();
    private long counter = 1;

    @Override
    public void start() {
        // Print a status message every two seconds (2000 ms). The lambda's
        // parameter is the timer ID; it shadows the counter field, which
        // isn't used inside this handler.
        vertx.setPeriodic(2000, counter -> {
            long runTime = (System.currentTimeMillis() - startTime) / 1000;
            System.out.println("Server run time: " + runTime + " seconds.");
        });

        // Respond to every HTTP request that arrives on port 8080.
        vertx.createHttpServer()
            .requestHandler(req -> {
                System.out.println("Request #" + counter++ +
                                   " from " + req.remoteAddress().host());
                req.response().end("Hello from Coderland!");
            })
            .listen(8080);

        System.out.println("----------------------------------------------");
        System.out.println("---> Coderland now listening on localhost:8080");
        System.out.println("----------------------------------------------");
    }

    @Override
    public void stop() {
       System.out.println("---------------------------------------------");
       System.out.println("---> Coderland signing off! Have a great day.");
       System.out.println("---------------------------------------------");
    }

    public static void main (String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(new SampleVerticle());
    }
}

 

 

 

Once this verticle is up and running, it does two things simultaneously: it uses vertx.setPeriodic() to print a message every two seconds (2000 milliseconds) indicating how long the code has been running, and it uses vertx.createHttpServer() to respond to requests on localhost:8080. The code is available on GitHub; we encourage you to clone or fork the repo and run it yourself. Go to the directory where you cloned the repo and build it with Maven:

 

mvn clean package
java -jar target/vertx-starter-1.0-SNAPSHOT.jar

 

When you run the code, you’ll see something like this: 

 

doug@dtidwell-mac:~/vertx-starter $ java -jar target/vertx-starter-1.0-SNAPSHOT.jar 
----------------------------------------------
---> Coderland now listening on localhost:8080
----------------------------------------------
Jun 26, 2019 8:33:59 AM io.vertx.core.impl.launcher.commands.VertxIsolatedDeployer
INFO: Succeeded in deploying verticle
Server run time: 2 seconds.
Server run time: 4 seconds.
Request #1 from 192.168.1.67
Server run time: 6 seconds.
Request #2 from 192.168.1.76
Server run time: 8 seconds.
Server run time: 10 seconds.
Request #3 from 0:0:0:0:0:0:0:1
Request #4 from 0:0:0:0:0:0:0:1 
Server run time: 12 seconds.
Server run time: 14 seconds.
Server run time: 16 seconds.
Request #5 from 192.168.1.67
Server run time: 18 seconds.
Server run time: 20 seconds.
Request #6 from 192.168.1.76
Server run time: 22 seconds.
Server run time: 24 seconds.
^C---------------------------------------------
---> Coderland signing off! Have a great day.
---------------------------------------------

 

Some notes on the output. First of all, notice that we @Override the start() and stop() methods from AbstractVerticle. Overriding the start() method is typical because you probably want to set up some things when your verticle is loaded. Overriding stop() is less common, but notice that typing Ctrl+C at the command line invoked the stop() method before the system killed the code.  

 

Second, the print statements in the start() method were executed before the verticle was up and running. The Vert.x runtime doesn’t print the “Succeeded in deploying verticle” message until the start() method is finished. The output says the code is listening on port 8080, but that’s not technically true until a fraction of a second later when the verticle is fully loaded. 

 

Finally, the verticle contains two asynchronous processes. One (the periodic timer created with vertx.setPeriodic()) runs every two seconds; the other (the request handler attached to the server created by vertx.createHttpServer()) runs whenever an HTTP request comes in on localhost:8080. As long as the verticle is running, these two processes operate independently of each other.

 

Some of the verticles we'll look at in the Reactica code are much more complicated, as you'd expect, but this is the basic structure. We'll use the start() method to set up whatever infrastructure we need, define methods to handle the events we get from that infrastructure, and then sit back and let the system do its work.

 

One more thing: the event bus

 

Before we go, there's one other concept we should mention: the Vert.x event bus. The event bus allows verticles to send messages to each other across the system. Messages are sent to addresses, and they have a set of headers and a body. The address is just a string, typically a meaningful name, and the format of the data in the headers and body can be anything you like. (Typically the headers are metadata and the body is JSON.) Any verticle can register as a consumer of an address; a message published to that address is passed along to every consumer registered there, while a message sent point-to-point is delivered to just one of them.
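To make that concrete, here is a minimal sketch of a consumer and a publisher sharing an address on the event bus. It is not part of the Reactica code; the address name, JSON fields, and class name are made up for illustration.

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

public class EventBusSketch extends AbstractVerticle {

    @Override
    public void start() {
        // Register a consumer: every message published to "ride.events"
        // is handed to this handler.
        vertx.eventBus().consumer("ride.events", message ->
            System.out.println("Received: " + message.body()));
    }

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(new EventBusSketch(), deployed -> {
            // Once the consumer is registered, publish a JSON event to the
            // same address; the handler above prints it.
            JsonObject event = new JsonObject()
                .put("user", "alice")
                .put("state", "IN_QUEUE");
            vertx.eventBus().publish("ride.events", event);
        });
    }
}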

 

For more details on the event bus, see Clement Escoffier's book Building Reactive Microservices in Java, available for free from the Red Hat Developer Program.

 

What's next? 

 

We'd love to hear your comments and questions on this content. You can reach us at coderland@redhat.com.

 

If you're a member of the Red Hat Developer Program (and why wouldn't you be? It's free.), you can download copies of Red Hat AMQ and Red Hat Data Grid.

 

Beyond that, there are two more articles in this series: 

 

 

These two articles are independent. If you just want to run the code, you can skip to article 3, but we strongly recommend you go through both articles to understand how the code works. Enjoy!

Last updated: May 30, 2023