Summit 2020

Event-driven architecture with Quarkus, Kafka, and OpenShift

Intermediate | Build event-driven applications. Tackle common stumbling blocks.
Jeremy Davis


Event-driven architectures are distributed, asynchronous, and scalable. The rise of real-time decision making, the on-demand economy, the explosion of data, and the adoption of microservices have all driven the adoption of event-driven architectures. Event-driven code is reactive by nature and significantly different from imperative programming.

In this breakout session, we'll examine building an event-driven application and tackle the most common stumbling blocks of EDA, including:

  • How and where to maintain state.
  • How to handle transactions that span multiple services.
  • When to use REST, Kafka, or something else.
  • How to test asynchronous code.


We'll end by quickly (and easily) spinning up an event-driven application using Quarkus and Red Hat AMQ Streams (Apache Kafka).

Talk Text

Jeremy Davis (00:00):

All right, thanks for attending our talk, "Event-Driven Architecture with Quarkus, Kafka, and Kubernetes." So, I'm Jeremy Davis. I'm a chief architect for app dev technologies here at Red Hat.

 

Tosin Akinosho (00:12):

And I'm Tosin Akinosho. I'm an OpenShift solutions architect specialist at Red Hat.

 

Jeremy Davis (00:18):

And as you might've guessed from reading the title of this talk, today we're going to talk about event-driven architecture, Kafka, Quarkus, and Kubernetes, what those things mean to you, and why Quarkus, OpenShift, and Kafka are particularly well-suited to building event-driven architectures.

 

Jeremy Davis (00:38):

So, first things first. Why should you care about event-driven architecture? One of the big pieces of event-driven architecture is that it is scalable. So, it's really easy to add more load to an event-driven system. Things are typically message-based, which means you can scale out and you can add more processing power very easily.

 

Jeremy Davis (00:57):

Another advantage of an event-driven system is that it's replayable. You can record all of your events and recreate state very easily, or much more easily than you can with traditional architectures. It also makes it easy to add new components into the application, especially when you build this out using microservices, which is what we're going to take a look at today.

 

Jeremy Davis (01:19):

There are a couple of foundational building blocks of event-driven architectures. One of them, and the one that I think is probably the most important, is called event storming. This is a fantastic book. The slides to this talk will be available on GitHub, which we have a link to at the end of the presentation.

 

Jeremy Davis (01:34):

I would highly recommend reading this book. It talks about how you get together across an organization to model the events of your system, and it discusses why and how you do that. It might look a little crazy with all these sticky notes, but these notes all have a purpose, and the colors of the notes all have a purpose. So we'll look at one particular workflow that we stormed events for and then built out for this presentation.

 

Jeremy Davis (02:02):

Another foundational piece that we're going to look at is domain-driven design. And you don't have to use domain-driven design with event-driven architectures, but it makes a lot of sense, and it's very common. One of the things this brings to the table is that it allows you to really encapsulate your business logic inside of your domain models, which is where it belongs. And it makes it easier to propagate the events through your system, because you don't want to have to have logic bouncing all over the place, which can be common in a lot of CRUD-based systems.

 

Jeremy Davis (02:31):

These are two good books as well. I don't know if you thought you were, if you knew you were going to get a reading list when you started attending this talk. We've got a little bit of a reading list here. Domain-Driven Design by Eric Evans was the original book on this topic. Then Vaughn Vernon's Implementing Domain-Driven Design is another great book that gives you some real-world code examples and discusses how a team would go about implementing this system. Both of these are great reads. Highly recommend both of those.

 

Jeremy Davis (02:57):

All right, so the events that we're going to look at, the event storm that we created here. We're talking about a cafe. Before we were all on coronavirus lockdown, we used to go into cafes a lot and grab a cup of coffee. You place your order at the counter, and you order some kind of beverage, and then maybe you order something to eat as well. Those orders then get sent off to baristas and into a kitchen.

 

Jeremy Davis (03:21):

If you look at the way we model this, you have a little user guy on the left. The user places an order. That modifies an order aggregate, which is a way of handling your domain model. That's typical in domain-driven design. And that creates an event called "order placed." I mentioned all these colors have a purpose. Events are always in orange, and typically in past tense. So, our order is placed. Then we have a green order. That's the object that we store inside of the database. Well, that order-placed object kicks a couple of things off. One, it notifies the baristas, and it notifies the kitchen that they need to make stuff. It also notifies the user that your order is in process. Then when the barista and the kitchen complete their order, a user gets notified that their beverage or that their kitchen item, their food, is ready.

 

Jeremy Davis (04:14):

We've got this application running. We'll take a look at that in a moment. And then we will jump over to some source code and look at what this means, how you design these applications and why you want to design your applications with Quarkus and Kafka and deploy them on top of Kubernetes.

 

Jeremy Davis (04:32):

One of the first big questions—REST is typical in a microservices architecture, and it's really easy to write RESTful services. Well, we chose to use Kafka in this application for a couple of reasons. One is that if you make a REST call to another microservice, if that service isn't running, your REST call just dies, and you have to handle the error and somehow wait and do some retries. Well, when you put it in a messaging system, especially the way that Kafka is built, you don't have to worry about that. When your barista or when your kitchen, when your other microservice comes back online, it can immediately pick up and begin processing again. I'll turn it over to Tosin to talk a bit about Apache Kafka.

 

Tosin Akinosho (05:10):

Okay. So, Apache Kafka is a distributed streaming platform. We at Red Hat call it AMQ Streams; that's the product name for it at Red Hat. Kafka is built on append-only technology. What I mean by that is, all of the messages come from a producer, and the producer writes those messages to a commit log. Those commit logs live in partitions, and consumers read the messages from the partitions. From there, anybody can request messages.

 

Tosin Akinosho (05:56):

What this gives you is a simple storage abstraction. The partitions make it reliable. It's scalable, because the messages are distributed through different partitions, so we can request information from any of them quickly. It's also durable, since the messages persist on disk. Finally, it's performant.

 

Tosin Akinosho (06:29):

Another benefit of using Kafka is that it's replayable. Since it's writing to commit logs, we can go back or go forward, or even take a snapshot of the state at a point in time.
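To make the append-only, replayable idea concrete, here's a minimal plain-Java sketch of a commit log. It's an illustration of the concept, not the AMQ Streams API; the class and method names are made up.

```java
import java.util.ArrayList;
import java.util.List;

// Toy append-only commit log illustrating why Kafka topics are replayable:
// records are only ever appended, and each consumer just tracks its own offset.
public class CommitLog {
    private final List<String> records = new ArrayList<>();

    // Producers append; nothing is ever updated or deleted in place.
    public long append(String record) {
        records.add(record);
        return records.size() - 1; // the new record's offset
    }

    // A consumer reads from any offset, so it can replay history from the
    // beginning or resume where it left off after being offline.
    public List<String> readFrom(long offset) {
        return new ArrayList<>(records.subList((int) offset, records.size()));
    }

    public static void main(String[] args) {
        CommitLog log = new CommitLog();
        log.append("order-placed:1");
        log.append("order-placed:2");
        // Replaying from offset 0 recreates all state...
        System.out.println(log.readFrom(0)); // [order-placed:1, order-placed:2]
        // ...while a consumer already at offset 1 only sees newer records.
        System.out.println(log.readFrom(1)); // [order-placed:2]
    }
}
```

This is also why a microservice that was offline can "immediately pick up and begin processing again": its unread records are still sitting in the log at its last offset.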

 

Tosin Akinosho (06:42):

This slide is an example of Kafka topics. Say we have Kafka topics A, B, and C, and the producer is writing to these different topics. Then on the other side, we have consumers that consume from those topics after the messages have been written by the producer. Okay, and I'm going to pass this on to Jeremy to talk about Quarkus.

 

Jeremy Davis (07:14):

Yeah. So, in this presentation, we are consuming all these messages and processing them with Quarkus. Quarkus is supersonic, subatomic Java. So, why are we using this? A couple of reasons. One, it's really easy to use. It's really fast, super lightweight. It's fun to code with. Great built-in support for messaging, and it's Kubernetes native. What I mean by Kubernetes native is that we can build out a native image using GraalVM and run in a super tiny memory footprint on top of Kubernetes. It's built into OpenShift really nicely.

 

Jeremy Davis (07:47):

I'm going to jump over here, and we can look at some code here. So, let's share my screen. Got it. All right. Share my screen. All right. Let's go over to another browser here.

 

Jeremy Davis (08:01):

This is the application we built. We talked about the events that we're looking at, and this is the status board for a cafe. If you walk into a cafe, and you make your order—and the way we kick this off is through a REST call. So, we are using some REST technology. Now, this is Postman. If you don't have Postman, grab Postman. This is a great tool. It's totally worth the purchase price. What I'm doing is I'm sending in some JSON and telling our application to make me some stuff. So, I send this JSON in. We get it, it's in queue. Then it's going to become ready after a certain amount of time. I make my order. I can look up, and now I know when my order is ready. So, now everything is ready. This is pretty simple, right? But there's a lot going on in the background behind this. So, let's take a look at what some of this code looks like.

 

Jeremy Davis (08:50):

Now, the first thing that we're doing is a REST call. In Quarkus, this is what a REST call looks like. Pretty simple. If you've ever done Java-based web development, this is going to look very familiar. We're making a post to this endpoint. We're passing in a create order command. This kind of syntax is one of the domain-driven design constructs: we're sending a command to place an order. In fact, we call our order service, and we place the order passing in this command. We're also injecting here; Quarkus handles CDI for us. We're injecting our order service. Then we're also injecting a source URL from our properties. That allows us to write the appropriate URL for attaching to server-sent events on the web page.

 

Jeremy Davis (09:42):

So, this comes in here, goes to our order service, and we call order service, place order. And if we look at what that does—here, we are injecting an emitter. I mentioned that Quarkus makes it really easy to do messaging. It uses MicroProfile Reactive Messaging. What we're doing here is, we're going to send out our order. So, our order comes in, we send this out. We do it asynchronously. When this completes, we either just log a debug message, or—actually, I should make this an error log, shouldn't I? Or we log our error message if an error occurs when we send that message out.
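The send-and-handle-completion pattern described here can be sketched in plain Java, with a CompletableFuture standing in for the completion stage a MicroProfile Reactive Messaging emitter returns. The sendToTopic method and the log strings are hypothetical stand-ins, not the demo's actual code.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

// Plain-JDK sketch of "send asynchronously, then log success or failure".
public class AsyncSend {
    // Stand-in for an emitter/broker client; a real one would complete the
    // future when the broker acknowledges the write.
    static CompletionStage<Void> sendToTopic(String payload) {
        return CompletableFuture.completedFuture(null);
    }

    public static String placeOrder(String orderJson) {
        StringBuilder outcome = new StringBuilder();
        sendToTopic(orderJson).whenComplete((ok, error) -> {
            if (error == null) {
                outcome.append("sent: ").append(orderJson);  // the debug log in the talk
            } else {
                outcome.append("failed: ").append(error);    // the error log in the talk
            }
        });
        return outcome.toString();
    }

    public static void main(String[] args) {
        System.out.println(placeOrder("{\"item\":\"COFFEE_BLACK\"}"));
    }
}
```

The point of the shape is that the caller never blocks on delivery; the success or error branch runs whenever the send actually completes.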

 

Jeremy Davis (10:20):

The next thing that happens is we hit our second microservice. Really I should pop over here to OpenShift and show you what this application looks like. So, this is our microservice, and Tosin's going to demonstrate this a little bit better later on, but really quickly in OpenShift—this is MongoDB where I'm storing these orders. And this is our actual application. So, our first order is our web order. The web then pops a message onto a Kafka topic. Then this core microservice grabs the message off the Kafka topic. You can see we've got one running here. We grabbed the message off the Kafka topic. Then we split that message, and we send it out to a barista for beverage orders, and we send it to the kitchen for food orders.

 

Jeremy Davis (11:17):

So, let's take a look at our core service. This is our core service, and this is how we grab a message off of Kafka. We say incoming, and we're just pulling this off of a message. We log it. Then we create an order-created event.

 

Jeremy Davis (11:33):

Now we start talking about events inside of our system. So, our order itself is our domain object, which in domain-driven design terms is called an aggregate, because an order is going to contain multiple subobjects—in this case, line items. When I send in an order for a coffee and a bagel, then I have a line item for a coffee and a line item for a bagel. It also contains other information, like the person who placed the order. It returns an order-created event, which I'm going to log out. Then I'm going to take that order-created event, and I'm going to apply the events, which means I'm just going to loop through and fire off all of these events, dividing them up between the barista emitter and the kitchen emitter. So, I'm going to send them to two different Kafka topics. Each of our microservices has its own topic.
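As a rough plain-Java sketch of that aggregate-and-apply flow: an order-created event carries line items, and "applying" it routes each item to the barista or kitchen topic. The class, field, and topic names here are illustrative, not the demo's actual source.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the Order aggregate: placing an order yields an order-created
// event whose line items are divided between two Kafka topics.
public class OrderAggregate {
    enum Station { BARISTA, KITCHEN }

    record LineItem(String item, Station station) {}
    record OrderCreatedEvent(String orderId, List<LineItem> items) {}

    // "Applying" the event means looping over its items and routing each one
    // to the topic of the microservice that should handle it.
    static List<String> apply(OrderCreatedEvent event) {
        List<String> routed = new ArrayList<>();
        for (LineItem li : event.items()) {
            String topic = (li.station() == Station.BARISTA) ? "barista-in" : "kitchen-in";
            routed.add(topic + ":" + li.item());
        }
        return routed;
    }

    public static void main(String[] args) {
        OrderCreatedEvent e = new OrderCreatedEvent("1", List.of(
                new LineItem("COFFEE_BLACK", Station.BARISTA),
                new LineItem("BAGEL", Station.KITCHEN)));
        System.out.println(apply(e)); // [barista-in:COFFEE_BLACK, kitchen-in:BAGEL]
    }
}
```

Keeping this routing logic inside the aggregate is the encapsulation point made earlier: the business decision of who makes what lives in the domain model, not scattered across services.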

 

Jeremy Davis (12:30):

Let's take a look at the barista here, because most people are definitely going to order a drink. This is a pretty simple service here. The kitchen and the barista are effectively the same code. So, what this does is, we have an incoming annotation, which means we're going to grab this off the Kafka topic. And we configure this inside of a properties file for my local application. Then when we go on OpenShift, we use an environment variable. I'll show that off in a second here. What we do is, we grab our event, and if it equals order in, we call the order-in method right here. And all we're doing for the sake of this is just delaying for a few seconds. So, a black coffee takes five seconds; a cappuccino takes nine seconds. Then we just default down. So, this just mimics the fact that it's going to take somebody a bit of time to make your drinks.
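The delay logic can be sketched like this. The item names and the five- and nine-second durations follow the talk; the method names and the default are made up for illustration.

```java
// Sketch of the barista's fake prep times: each beverage just sleeps for a
// few seconds to mimic the work of making it.
public class Barista {
    static int prepSeconds(String item) {
        switch (item) {
            case "COFFEE_BLACK": return 5; // a black coffee takes five seconds
            case "CAPPUCCINO":   return 9; // a cappuccino takes nine seconds
            default:             return 3; // everything else defaults down
        }
    }

    static String make(String item) throws InterruptedException {
        Thread.sleep(prepSeconds(item) * 1000L);
        return "order up: " + item; // popped back onto the order-up topic
    }

    public static void main(String[] args) {
        System.out.println(prepSeconds("CAPPUCCINO")); // 9
    }
}
```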

 

Jeremy Davis (13:23):

The kitchen works exactly the same way. An item that needs some cooking takes a little bit of extra time. Once it's done, it pops it back onto the order-up emitter. Now, we're saying "order up," not "order ready." That comes from part of the event-storming discipline, or the domain-driven design discipline, where we agree on a language to use. I spent my time working in restaurants when I was in college. If you spend time in restaurants, people say, "Order up," right? That's the typical syntax you use. It's not important that you use this exact syntax; what matters is that within an organization you agree upon a syntax that makes sense for your organization. So, we use the term "order up" here.

 

Jeremy Davis (14:06):

When the order then gets up, the website listens for that. Back in our website, we listened for that. So, we listen to orders in and orders out, and then we send that out to the dashboard. And we have a publisher, which is another MicroProfile messaging construct. Then that publisher is attached to an endpoint stream. And this is what our web page is listening to. So, when this web page loads up, it opens up a server-sent event connection to listen for server-sent events to update these orders.
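The publisher-attached-to-a-stream shape can be illustrated with the JDK's own Flow API, with a subscriber standing in for the browser's server-sent event connection. This is a conceptual sketch, not the MicroProfile Reactive Messaging code from the demo.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.SubmissionPublisher;

// Events arrive from Kafka, get submitted to a publisher, and every
// subscriber (each open dashboard page) receives them in order.
public class Dashboard {
    public static List<String> run() {
        List<String> received = new CopyOnWriteArrayList<>();
        try (SubmissionPublisher<String> updates = new SubmissionPublisher<>()) {
            // consume() returns a future that completes when the publisher closes.
            var done = updates.consume(received::add);
            updates.submit("order in: COFFEE_BLACK");
            updates.submit("order up: COFFEE_BLACK");
            updates.close();
            done.join(); // wait until the "browser" has seen everything
        }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

The real endpoint does essentially this: each status change read off the Kafka topic is forwarded straight to every connected page.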

 

Jeremy Davis (14:47):

So, that's when we get our dashboard here. That's what's happening when we send these in, right? The first thing, we send these in. We watched that it was in the queue. Now we wait, and our cake pop is ready really quick. Coffee with room, croissants are ready quickly. The cappuccino takes the longest, and now it's back out. So, that event gets propagated straight from Kafka. We're listening right here on a Kafka queue. When this comes in, it just gets forwarded directly to the website. It's really nice functionality and really easy to write.

 

Jeremy Davis (15:20):

Now, one thing that's also really nice about Quarkus is I have all of these microservices running on my laptop. So, I've got one, two, three, four, five, six microservices, all running on my laptop. Quarkus has a super tiny memory footprint. It's really easy to use, and it means that local development is really, really simple. Now, the way we deploy them to OpenShift is also really easy. I mentioned that we call it Kubernetes native. What we mean by Kubernetes native is OpenShift can grab the source code straight out of GitHub and compile it to a native image binary. This uses a really tiny memory footprint, because it's a native Linux executable sitting inside of a container. Quarkus makes it really, really easy for us to do that.

 

Jeremy Davis (16:16):

Let me stop sharing my screen. All right, back to the slides.

 

Jeremy Davis (16:27):

All right. So, we looked at Quarkus in dev mode. We looked at MicroProfile Reactive Messaging, and we did not look at test-driven development. So, you know what, I'll share my screen again, and we'll look at some test-driven development. Now, I have to admit, I feel a little bit guilty here, because in test-driven development, we always start with a test first. I didn't do very well starting with a test first. I'm doing the tests last, but here we go.

 

Jeremy Davis (16:55):

So, in test-driven development, let's look at our barista test right here. So, Quarkus tests make it—oops, let's close all you guys. Inside my test folder, we have two kinds of tests. Using one, we can just instantiate our barista object and do logic tests very simply using JUnit. But we have this annotation here: QuarkusTest. What this does is, it gives us access to Quarkus's entire environment. So, we can inject individual beans inside the tests and exercise each bean individually. You don't have to do a lot of jumping through hoops to start up the entire application. We can just exercise one bean.

 

Jeremy Davis (17:40):

But more interesting is our integration test here, because we're using Kafka, and we want to make sure that the messaging piece works. So, I have my QuarkusTest annotation. I've got a second annotation called QuarkusTestResource, and I've built a class called KafkaTestResource. What this does is, it spins up Kafka containers locally, just for my test. In fact, it's using random ports, so it won't conflict with anything that might be running on my system. Then I create a producer and a consumer, and then I can fire in messages. I just use a vanilla Kafka consumer to read those messages off the queue and verify that I'm getting the records that I expect to get in. Quarkus comes with a full testing suite, which makes it really easy to do this kind of asynchronous testing. It's usually pretty difficult to do.
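The core trick in that kind of asynchronous test—produce, then poll with a timeout rather than sleeping blindly—can be shown self-contained, with a BlockingQueue standing in for the Kafka topic. This is a sketch of the pattern, not the actual KafkaTestResource.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// The hard part of asynchronous testing is that the message arrives
// "eventually". The fix: poll with a timeout and assert on what arrives.
public class AsyncTestSketch {
    public static String producePoll(String message) {
        BlockingQueue<String> topic = new ArrayBlockingQueue<>(16);

        // Producer runs on another thread, like a service under test would.
        Thread producer = new Thread(() -> topic.add(message));
        producer.start();
        try {
            // Consumer side: wait up to 5 seconds, then give up.
            String record = topic.poll(5, TimeUnit.SECONDS);
            producer.join();
            return record;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(producePoll("order-placed"));
    }
}
```

A real Quarkus integration test does the same dance against ephemeral Kafka containers on random ports, with a vanilla Kafka consumer doing the polling.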

 

Jeremy Davis (18:30):

All right. Now I'll actually stop sharing, and we can take a look at how OpenShift and Kubernetes fits into this. And for that, I'll turn that over to you, Tosin.

 

Tosin Akinosho (18:45):

So, OpenShift and Kubernetes. We're all familiar with Kubernetes; we've heard of Kubernetes. Kubernetes and OpenShift go hand in hand. We have Kubernetes, and then OpenShift sits on top of Kubernetes. We add all types of cool features and tools to Kubernetes, and we call it OpenShift. So, what does OpenShift give us, and why would we even use OpenShift? Well, OpenShift first gives us scalability. What we mean by scalability is, when you're deploying an application into the cloud, you can scale up and scale down easily and quickly. And how does this make Kafka easier to use? One way is deployment. For example, there's a technology called Helm that you can use to deploy an application within Kubernetes. But when we deploy an application or AMQ Streams, which is Kafka, on OpenShift, we use a technology called Operators. Operators allow you to use Kubernetes custom resource definitions to do a deployment.
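As an illustration of what those custom resources look like, here is roughly the shape of the Kafka and KafkaTopic resources the AMQ Streams (Strimzi) Operator consumes. The names, replica counts, and listener settings here are examples, not the demo's exact manifests.

```yaml
# The Operator watches for these resources and creates/manages the actual
# cluster and topics. Example values only.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: cafe-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: barista-in
  labels:
    strimzi.io/cluster: cafe-cluster
spec:
  partitions: 3
  replicas: 3
```

Declaring the topic as a resource like this is what makes the "create a new Kafka topic in a couple of clicks" experience shown later possible: the Operator reconciles the declared state for you.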

 

Tosin Akinosho (19:54):

Also, it allows you to manage the life cycle of that deployment as well, which I'll show you in a quick demo. How does OpenShift make microservices easier? Well, in general, building microservices is hard. OpenShift has different technologies, such as deployment controllers, and it has routes, config maps, and other things like that to make microservices easier.

 

Tosin Akinosho (20:22):

This is just a screenshot of AMQ Streams on OpenShift, which I'll delve into now. So, with OpenShift, we installed AMQ Streams. After we installed it, we have this dashboard as an Operator. We click on this, and we see all the provided APIs. The only thing that we had to do was hit a couple buttons in this thing we call OperatorHub to get this installed.

 

Tosin Akinosho (20:56):

Now, the thing that's working in the background here with this cafe demo is Kafka. So, what we did is, we created a Kafka cluster. Then for that cluster, we have different topics: barista in, kitchen in, orders in, web in, and web updates, for example. If I go down to pods, we'll see all the pods. So, these first three pods are for your Kafka cluster, then ZooKeeper, which manages the state. After that we have the application code, which would be the web, the kitchen, the core, and the barista.

 

Tosin Akinosho (21:34):

This is just a nice developer view of the microservices application that we have running on OpenShift.

 

Tosin Akinosho (21:52):

In summary, yeah, I guess I'll pass it to you, Jeremy.

 

Jeremy Davis (21:58):

All right. So, to bring it all together: event-driven architectures are scalable and efficient. OpenShift is scalable and efficient, Kafka is scalable and efficient, and Quarkus is scalable and efficient. In fact, let's jump over, share a screen again, and take a look at some of these pieces.

 

Jeremy Davis (22:18):

In our OpenShift Cockpit, this is our developer view, which we talked about, which makes it easier for us to use microservices. So, we can see everything here. We can start and stop these. We can scale them up. So, let's say if we were getting really hammered with a lot of coffee orders, we can scale this up, and all of a sudden we have two baristas working. So, our throughput is going to go up.

 

Jeremy Davis (22:38):

Also inside of here, the way that we set all of this up is pretty simple as well. So, Tosin mentioned Operators. And so, in Operators, we have what's called OperatorHub, which has lots and lots of different Operators that you can install to get instant functionality. We've installed a few right here, Red Hat AMQ Streams being the most important one. As a developer, we can get our own environments. We have projects—it's a Kubernetes namespace; we use the term project—and we can have multiple different projects. So, I can have dev, test, and integration projects all at the same time. To set these up, I come in really quickly, and I just say, "I'm going to create a new Kafka topic." And I create my new Kafka topic. Just a really quick adjustment here: my new topic.

 

Jeremy Davis (23:29):

If we look in here, we can see what we've already got set up. So, I've already got all these topics built, and we can see what's being run here. So, we had our web orders in, we have our web updates that are being read, and we have all these various topics. We can come in here. We can edit these immediately. So, it makes it really easy for us to come in here and spin up an entire Kafka environment. It literally just takes about five minutes to set up this entire environment, which is really nice.

 

Jeremy Davis (23:56):

Another piece, when we talked about Quarkus being Kubernetes native: if we look at our builds. So, the way that these applications are being built, and these are the applications that we're looking at, right? Let's say our core application. We can look at our build, and we can see how many builds we've done. This application is being built by OpenShift, and you can see some of these builds failed, so something went wrong.

 

Jeremy Davis (24:19):

So, what this does is, it goes into GitHub. So, it goes into my GitHub. We've given it a particular tag, so this isn't running off master. It's running off a particular tag, and it tells us a context, Core. That just means the directory. So, this is where the demo is kept on my GitHub site: Quarkus Cafe Demo. And I've got all these subprojects. So, I just tell OpenShift, "Go to GitHub. Grab this. Pull this out, and build it." And when it builds it, it's building it into a native image. It takes a little while to build the native image. It uses a technology called GraalVM, which not only compiles your entire Java application but also analyzes all the potential paths of execution and strips out anything extra. So, it gives you this super tiny, small, lightweight footprint, whereas traditionally Java was obviously not very small in terms of memory.

 

Jeremy Davis (25:16):

So, anyway, Quarkus is really fast. It's really small. Kafka is super performant, super fast. And OpenShift brings all this together and gives you a really great environment to deploy all of this on.

 

Jeremy Davis (25:30):

All right, so thanks for joining us. And we have—check out our GitHub repository. You can grab these slides. You can grab all the source code to this. And hope you enjoyed this talk. You have our contacts there. Feel free to reach out to either one of us on Twitter if you have any questions or want to dive into any of this material deeper.

Session Speaker

Jeremy Davis

Jeremy Davis

Red Hat

I am a Chief Architect for App Dev Technologies at Red Hat. I help Red Hat's customers design and deliver applications, work with Red Hat engineers to create great products, and occasionally speak at conferences. Before joining Red Hat I wrote a lot of code in C, C#, Groovy, JavaScript, Objective-C, Perl, PHP, Python, Ruby, Visual Basic, and of course Java (mostly Java).