Red Hat Summit signage at Moscone South

Red Hat Senior Architects Marius Bogoevici and Christian Posta recently presented an overview of event-driven architecture, taking the audience from the basics of enterprise integration to microservices and serverless computing. Standing in front of a packed room at Red Hat Summit, their talk addressed four basic points:

  1. Event-driven architectures have been around for a while. What are they, why are they powerful, and why are they back en vogue?
  2. Messaging is often used as a backbone for event-based distributed systems. What options do we have for cloud-native event-driven architectures?
  3. Integration is necessary for any organization. How do streaming, cloud-native architectures, and microservices fit in?
  4. Is Functions-as-a-Service (FaaS) the next utopian architecture? Where do functions fit in a world of microservices?

The entire session was framed around three enterprise concerns. The first is the divide between agile systems and purpose-built systems: a purpose-built system is optimized for a small set of use cases, but it is very difficult to change when new use cases arise or the old ones become irrelevant, and we have to be agile to adapt to a constantly changing environment. The second is resource utilization: we want to eliminate waste and get the most out of our systems and resources, and the cloud in general and containers in particular make more distributed architectures practical. Finally, Christian made the observation that we cannot build complex systems from complex parts; the components we develop must be as simple and understandable as possible.

Marius explained the rise of event-driven architectures by comparing them to the old client-server paradigm. The most important differences are:

  • Client-server interactions are ephemeral and synchronous. Event-driven interactions are persistent (they can be, anyway) and asynchronous.
  • Client-server applications are tightly coupled. Event-driven applications are decoupled.
  • Client-server applications aren't easily composable. Event-driven applications are.¹
  • Client-server architectures have a simplified model (every request has a single response) and are not fault-tolerant. Event-driven architectures have a complicated model (a single event might be handled by several components) and are highly fault-tolerant.

To start with the basics, an event is simply an action or occurrence that happened in the past. It is immutable, it can be persisted, and it can be shared. Types of events include notifications, state transfers (aka commands), event sourcing, and CQRS (Command Query Responsibility Segregation). (See Martin Fowler's excellent article "What do you mean by 'Event-Driven'?" for a great discussion on the subject.)
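
To make that concrete, here is a minimal sketch in Java of an event as an immutable fact about the past. The event name matches the example that follows; the fields are illustrative and weren't part of the talk:

```java
import java.time.Instant;

// A minimal sketch of an event: an immutable record of something that already
// happened. Because its state can't change, it can safely be persisted,
// replayed, and shared among many components.
public record OrderCreated(String orderId,
                           String customerId,
                           long amountCents,
                           Instant occurredAt) {
}
```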

In an event-driven architecture, you treat events as part of your domain model and create decoupled components that either emit or handle those events. This dovetails with the concerns of domain-driven design. Marius used an example of a system with four components: Orders, Billing, Shipment, and Inventory. Those components deal with the events: Order Created, Order Paid, and Order Shipped. In this simple example, the interactions among the components become obvious. When the Orders component generates an Order Created event, the other three components are affected. The order impacts the inventory on hand, the customer must be billed for the order, and the order has to be shipped. By focusing on events, the behavior of the system is easy to understand.
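
As a rough sketch of that decoupling (the handler interface and class names below are hypothetical, and the OrderCreated record is the one sketched above), each component reacts to the same event without the Orders component knowing which listeners exist:

```java
import java.time.Instant;
import java.util.List;

// Hypothetical handler interface: every interested component implements it.
interface OrderCreatedHandler {
    void on(OrderCreated event);
}

class Billing implements OrderCreatedHandler {
    public void on(OrderCreated event) { /* charge the customer */ }
}

class Shipment implements OrderCreatedHandler {
    public void on(OrderCreated event) { /* schedule the delivery */ }
}

class Inventory implements OrderCreatedHandler {
    public void on(OrderCreated event) { /* adjust stock on hand */ }
}

class Orders {
    private final List<OrderCreatedHandler> handlers;

    Orders(List<OrderCreatedHandler> handlers) {
        this.handlers = handlers;
    }

    void placeOrder(String orderId, String customerId, long amountCents) {
        OrderCreated event = new OrderCreated(orderId, customerId, amountCents, Instant.now());
        // Adding a new component later means registering another handler,
        // not changing Billing, Shipment, or Inventory.
        handlers.forEach(handler -> handler.on(event));
    }
}
```

In a real system the in-memory handler list would be replaced by a message broker, which is exactly where the talk went next.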

Event-driven architecture leads to more agile systems. As mentioned earlier, composability makes it straightforward to add more components to the system. In addition, if the system uses persistent events, those events are available for data mining, analytics, and machine learning. All in all, event-driven architectures are more robust and resilient, they are agile, and they make it possible for the organization to align its digital business with what actually happens in the real world.

Of course, if you're going to build an event-driven architecture, you have to have an infrastructure that delivers a stream of events reliably. Middleware has evolved to address this. Traditional message brokers deliver functionality such as publish/subscribe, queueing, and persistence. In that infrastructure, all of the messages flow through the broker, creating a bottleneck. This is good from the perspective of system utilization, but it limits agility. New requirements have created orders of magnitude more events. For example, an application that tracks clicks of the Place Order button on a web page has a certain number of events. Tracking mouse movements on the page could give tremendous insights into user behavior, but it would create many more events. Messaging middleware has evolved to handle greater event volumes.
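
For reference, broker-mediated publish/subscribe looks roughly like this through the standard JMS 2.0 API. The JNDI names and the JSON payload are assumptions; the actual ConnectionFactory comes from whatever broker client library you use:

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class OrderEventPublisher {
    public static void main(String[] args) throws Exception {
        // Look up broker-provided objects from JNDI; these names are
        // placeholders for whatever your broker's client configuration registers.
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory");
        Topic orders = (Topic) jndi.lookup("topic/orders");

        // JMS 2.0 simplified API: the broker delivers a copy of the message to
        // every subscriber of the topic, and can persist it if configured to.
        try (JMSContext context = factory.createContext()) {
            context.createProducer().send(orders, "{\"orderId\":42,\"status\":\"CREATED\"}");
        }
    }
}
```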

Systems like Apache Kafka decentralize the processing of messages to the individual services that are using them. That makes the system horizontally scalable, reduces the amount of coordination between parts of the broker infrastructure, and allows clients to come and go without impacting the broker. This simpler architecture is great for the collection and distribution of huge numbers of events at cloud scale. (To go beyond the basics of Kafka, take a look at Strimzi, a project to bring Kafka into the world of OpenShift and Kubernetes.)
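
As a rough illustration of that model, here is a sketch using the standard Kafka Java client (the kafka-clients library). The topic name, broker address, and consumer group are placeholders:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEvents {
    static final String TOPIC = "orders";            // placeholder topic
    static final String BROKERS = "localhost:9092";  // placeholder broker address

    // The producer appends events to the topic and moves on; it doesn't know
    // how many consumers exist or how far behind any of them are.
    static void publish(String orderId, String payload) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BROKERS);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>(TOPIC, orderId, payload));
        }
    }

    // Each consuming service tracks its own position in the log, so consumers
    // can come and go without the broker coordinating among them.
    static void consume() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BROKERS);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "billing-service");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of(TOPIC));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    System.out.printf("order %s: %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```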

Next, Marius turned to enterprise integration, starting with the characteristics of an Enterprise Service Bus:

  • An ESB handles all the message traffic in the system, optimizing utilization but creating a bottleneck
  • An ESB is centralized and tightly coupled
  • An ESB mixes logic and infrastructure, including things like transformations and mediations with message delivery

In newer messaging frameworks like Apache Camel, the responsibility for things like transformations are placed on the applications or components that handle the messages. This makes it possible to change application logic without reconfiguring a centralized component like an ESB. With the rise of cloud-native applications, the technology has evolved further. Marius used a diagram of a set of containerized Camel applications running in OpenShift, with services such as messaging being provided by the platform. He also pointed to Strimzi (Kafka as a service) and mentioned EnMasse (messaging as a service), both of which run inside OpenShift.
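
A small Camel route gives the flavor of this approach: the transformation lives in the application that owns it rather than in a central bus. The endpoint URIs and the trivial "transformation" are placeholders, and the example assumes the camel-kafka component is on the classpath:

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class OrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Consume order events from a Kafka topic, transform them in this
        // application (not in a centralized ESB), and publish the result.
        from("kafka:orders?brokers=localhost:9092")             // placeholder endpoint
            .log("received order event: ${body}")
            .transform().simple("${body.toUpperCase()}")        // stands in for a real mapping
            .to("kafka:orders-normalized?brokers=localhost:9092");
    }

    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new OrderRoute());
        context.start();
        Thread.sleep(60_000);   // keep the route running briefly for this demo
        context.stop();
    }
}
```

Changing the mapping logic here means redeploying this one small application, not reconfiguring a shared bus.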

Enterprise Integration Patterns were originally designed to build integrated systems out of siloed enterprise systems. The patterns are a good fit for distributed, event-driven systems, typically implemented by event-driven microservices. With today's message volumes, however, streaming becomes a key design point. Event-driven applications need to view streams as continuous, unbounded flows of data (events), handled by small services working together. Data pipelines built from those services using frameworks like Camel or Kafka Streams can solve modern enterprise integration problems. This change in mindset is an adaptation to the agile, decentralized, cloud-native nature of modern event-driven systems.
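
Here's a hedged sketch of that streaming mindset using the Kafka Streams API. The topic names, filter condition, and "transformation" are illustrative only:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class OrderPipeline {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-pipeline");     // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders");

        // Treat the topic as an unbounded stream: filter and reshape each event
        // as it arrives, then hand the result to the next small service via
        // another topic.
        orders.filter((orderId, payload) -> payload.contains("\"status\":\"PAID\""))
              .mapValues(String::toUpperCase)
              .to("paid-orders");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```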

Which brings us to microservices themselves. A well-designed microservice has a specific business function and can be deployed and developed independently from other microservices. This enables agility and allows multiple development teams to work in parallel. Microservices are frequently containerized to increase density and utilization and reduce the overhead of running multiple services. Although the design concepts behind microservices have been around for years, the combination of cloud architectures and containerization has made them the obvious choice for many applications.

But not all. Christian made the important point that while microservices are great at unlocking agility in existing systems, you shouldn't optimize your applications for microservices unless you actually have a problem with your current architecture. Specifically, if agility isn't the problem with your existing system, microservices are not the solution. As an example, he mentioned an HR system used only on the last day of each week. On that one day, utilization and compute requirements are high; the rest of the week they aren't. It doesn't make sense to keep a set of services running constantly to serve traffic you know isn't constant. The main message: understand your use cases.

Continuing the discussion of integration use cases, tasks like webhook callbacks, scheduled tasks, file processing, and reacting to database changes are better suited to the FaaS model. The code in a FaaS system is run by the platform whenever certain events happen. The task of the development team is to write the code that handles each event and the rules that define when the code should be invoked. As a result, the system has high utilization and parallelization, and the resources needed to handle those events are managed by the FaaS provider.
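
As a framework-agnostic sketch (the event shape and handler signature below are hypothetical; a real platform such as AWS Lambda, Apache OpenWhisk, or Knative defines its own), the team writes only the handler and the trigger rule, and the platform runs it on demand:

```java
import java.util.Map;

// Hypothetical file-processing function. The platform, not this code, decides
// when to run it, based on a trigger rule such as "a file was uploaded" or
// "a row changed in the database".
public class ProcessUploadedFile {

    // Invoked once per event; the Map stands in for whatever event type the
    // chosen platform actually delivers to handlers.
    public String handle(Map<String, Object> event) {
        String bucket = (String) event.get("bucket");      // assumed event field
        String fileName = (String) event.get("fileName");  // assumed event field

        // ... do the real work here: validate, transform, store the result ...

        return "processed " + bucket + "/" + fileName;
    }
}
```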

There are four options to consider as you build cloud-native applications:

  • Event-driven microservices
  • Containers
  • FaaS
  • Other serverless components such as databases, message queues, and caches

Marius and Christian both made the point that all of these technologies have their place. Despite the current hype cycle, not everything is a good candidate for FaaS. Again, it comes down to your use case. If the boundaries of your use case are well understood, microservices could be the answer. On the other hand, if you have an exploratory use case and you don't know its traffic patterns and utilization (and might not even know whether the use case provides any business value at all), FaaS could let you experiment without a lot of overhead.

This was a great session with lots of insight from two highly experienced architects. If this post whetted your appetite, we encourage you to watch the video recording of their presentation.

 


¹ If System A and System B are client and server, composing a new application that adds System C is difficult because that almost certainly requires changes to Systems A and B. In the decoupled world of event-driven applications, introducing Systems C, D, and E shouldn't require any changes to Systems A and B. In fact, it's quite likely that A, B, C, D, and E have absolutely no knowledge of each other.
