Building resilient event-driven architectures with Apache Kafka

Even though cloud-native computing has been around for some time (the Cloud Native Computing Foundation was started in 2015, an eon in computer time), not every developer has experienced the, uh, "joy" of dealing with distributed systems. The old patterns of thinking and architecting systems have given way to new ideas and new problems. For example, it's not always possible (or advisable) to connect to a database and run transactions. Databases themselves are giving way to events, Command Query Responsibility Segregation (CQRS), and eventual consistency. Two-phase commits are being replaced with queues and sagas, while monoliths are replaced with microservices, containers, and Kubernetes. "Small and local" thinking rules the day.

Now combine this with the fallacies of distributed computing, and suddenly event-driven architecture becomes very attractive. Thankfully, there are tools to make this possible. Apache Kafka is one of those tools.

Kafka makes event processing possible; Red Hat OpenShift Streams for Apache Kafka makes event processing easy.

Why events?

With the move to cloud-based computing, architects and developers were forced to re-examine how data is processed. Questions about timeliness and data urgency were met with the realization that immediate updates aren't always necessary. The focus also shifted to systems that can sometimes (most times; ideally, all the time) continue to operate even when some parts aren't working. The goal of systems changed from "It must never fail" to "Failure is inevitable, so we need to handle it." This led to the rise of a new way of thinking that includes circuit breakers, multiple databases, events, and more.

The beauty of the event-driven model is that you can fire the event and continue on, leaving the results up to "the system." Your code doesn't sit and wait for four databases to be updated or for an object to propagate around the globe via a CDN. There's a certain freedom that the developer feels when processing events. You're either pushing them out and forgetting about them, or you're simply waiting for an event to arrive and then processing it. In other words, your code is typically a one-way street. Low latency, or at least reduced latency, is almost automatic.
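To make that concrete, here is a minimal sketch of fire-and-forget publishing using Kafka's Java producer client. The topic name, key, payload, and broker address are all placeholders, not anything from a real system:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class OrderEventProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Fire the event and continue on; the callback only logs a failure.
                ProducerRecord<String, String> event =
                        new ProducerRecord<>("orders", "order-1001", "{\"status\":\"created\"}");
                producer.send(event, (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();
                    }
                });
            } // try-with-resources flushes and closes the producer
        }
    }

The send() call returns immediately; nothing in this code waits on whoever eventually consumes the event.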

And loosely coupled services? Events, by nature, force loosely coupled services. The API is the event. My code isn't calling a method in your code; I'm simply supplying a message. What your code does with it is up to you.
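The other side of that one-way street is just as simple: subscribe to a topic, wait for events, process them. Here's the matching consumer sketch, using the same placeholder topic and broker address plus a made-up consumer group ID:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class OrderEventConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("group.id", "order-processors");        // made-up group ID
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("orders"));
                while (true) {
                    // Block briefly, then handle whatever events arrived.
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("key=%s value=%s%n", record.key(), record.value());
                    }
                }
            }
        }
    }

Notice what's missing: the consumer knows nothing about the producer. The message format is the entire contract between them.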

Why should developers care about Kafka?

As a developer, you're going to need to embrace event processing if you want a system that's elastic, resilient, and high-performing. That's simply a fact.

Kafka brings a level of maturity with it, as well as a robust community. Apache Kafka is tried and tested. It's used all over the place and has an entire ecosystem surrounding it. Kafka Connect connectors, for example, let you stream events into and out of Kafka with zero code written. This is important to a developer for a simple reason: If I know I don't need to write certain code, I'm free to concentrate on the code I do need to write.
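For instance, the FileStreamSource connector that ships with Kafka can stream lines from a file into a topic with nothing but a properties file. A minimal sketch (the file and topic names are placeholders, and in recent Kafka releases you may need to add the file connectors to your Connect plugin path):

    # connect-file-source.properties
    name=local-file-source
    connector.class=FileStreamSource
    tasks.max=1
    file=orders.txt
    topic=orders

Hand that to Kafka Connect's standalone runner (bin/connect-standalone.sh) and events flow with zero application code.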

Business aside: it's fun.

Side note: Events pair excellently with Functions-as-a-Service (FaaS).

What next?

Get coding. Write some proof-of-concept (PoC) applications, or use one of our examples, and do what we developers do best: show what software can do.

I suggest watching this short and informative video about Red Hat OpenShift Streams for Apache Kafka, and then continuing your journey into the joy of distributed processing. There are only four steps to awesomeness:

  1. Get your (free) instance of Red Hat OpenShift Streams for Apache Kafka.
  2. Get your (free) instance of Developer Sandbox for Red Hat OpenShift.
  3. Write event-processing code (see the connection sketch after this list).
  4. Profit.
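For step 3, much of the work is configuration. A managed instance like Red Hat OpenShift Streams for Apache Kafka secures its brokers with SASL over TLS, so your client properties will look something like the sketch below. The bootstrap URL and service account credentials are placeholders you'd replace with values from your own instance:

    # client.properties: connecting to a SASL/PLAIN-secured cluster
    bootstrap.servers=<your-bootstrap-server-url>:443
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
      username="<service-account-client-id>" \
      password="<service-account-client-secret>";

Load these into the Properties object of the producer or consumer sketched earlier, and the same code runs against the managed cluster.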