Apache Kafka derives great value not just from its technical features and performance, but from the ecosystem that surrounds it. This article is the first part of a two-part series describing the many ways to run Kafka, and the benefits of each. We'll cover distributions for local development, self-managed Kafka, Kafka as a Service, and "serverless-like" Kafka. The series ends with a summary of when to use each type of distribution.
The Kafka landscape
The number of books, courses, conference talks, Kafka service providers, consultancies, independent professionals, third-party tools, and developer libraries that make up the Kafka landscape is unmatched by competing projects. This support makes Kafka a de facto standard for event streaming and provides assurance that it will be around for a long time to come.
At the same time, Kafka alone is just one piece of the puzzle and does not solve business problems on its own. Many different Kafka distributions exist for different business use cases. This series suggests which type of distribution is best suited to each use case, and which ecosystem enables the highest productivity for different development teams and organizational constraints. Along the way, it navigates the growing ecosystem of Kafka distributions and offers some thoughts on where the industry is heading.
This series focuses only on the distributions of the Kafka broker, not the complete Kafka ecosystem of tools and additional components. There are other monitoring and management tools and services that help developers and operations teams with their daily activities, which we leave for another time. Figure 1 below lists the types of distributions covered in this series and some of their uses.
Kafka for local development
If you are new to Kafka, you might assume that all you need is a Kafka cluster, and you are done. Although that might be true for organizations with a low level of Kafka adoption, where Kafka is a generic messaging infrastructure, the picture is different in organizations with a higher level of event-streaming adoption, where Kafka is used heavily by multiple teams in sophisticated scenarios. The latter group needs developer productivity tools that offer rapid feedback during the development of event-driven applications, high levels of automation, and repeatability in lower environments. Depending on their business priorities, they might use a variety of hybrid deployment mechanisms in production, from edge computing to multiple clouds.
The very first thing that a developer working heavily with stream processing applications wants is the ability to quickly start a short-lived Kafka instance on their laptop. That is true regardless of whether you practice test-driven development and mock all external dependencies, or use rapid prototyping.
A developer usually wants to validate quickly that the application is wiring up and functioning properly with an in-memory database or messaging system. Then the developer wants repeatable integration testing with a real Kafka broker. This rapid feedback cycle enables developers to iterate and deliver features faster and to adapt to changing requirements.
Several projects address this need. The ones that I'm most familiar with are the Quarkus extension for Kafka and EmbeddedKafka from Spring in the Java ecosystem. The easiest way to unit test Kafka applications is with SmallRye Reactive Messaging, which replaces the channel implementation with in-memory connections. This in-memory architecture is not designed specifically for Kafka, but shows how using the right streaming abstraction libraries can help you unit test applications rapidly.
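As a sketch of what that unit-testing style looks like, the snippet below swaps Kafka channels for SmallRye's in-memory connector. The channel names ("orders-in", "orders-out") and the test class are illustrative assumptions, not from the article, and the connector's package location varies across SmallRye versions.

```java
// Sketch of a unit test using SmallRye Reactive Messaging's in-memory
// connector (artifact: smallrye-reactive-messaging-in-memory), so business
// logic can be exercised without a Kafka broker. Channel names are
// illustrative; the package path may differ by SmallRye version.
import java.util.HashMap;
import java.util.Map;

import io.smallrye.reactive.messaging.memory.InMemoryConnector;
import io.smallrye.reactive.messaging.memory.InMemorySink;
import io.smallrye.reactive.messaging.memory.InMemorySource;
import jakarta.inject.Inject;
import org.eclipse.microprofile.reactive.messaging.spi.Connector;
import org.junit.jupiter.api.Test;

public class OrderProcessorTest {

    // Usually done once in a test lifecycle manager: rewrites the channel
    // configuration so these channels use memory instead of Kafka.
    static Map<String, String> inMemoryConfig() {
        Map<String, String> props = new HashMap<>();
        props.putAll(InMemoryConnector.switchIncomingChannelsToInMemory("orders-in"));
        props.putAll(InMemoryConnector.switchOutgoingChannelsToInMemory("orders-out"));
        return props;
    }

    @Inject
    @Connector("smallrye-in-memory")
    InMemoryConnector connector;

    @Test
    void processesAnOrder() {
        InMemorySource<String> source = connector.source("orders-in");
        InMemorySink<String> sink = connector.sink("orders-out");

        source.send("order-1"); // feed a message into the application

        // In a real test you would poll until the processed event appears,
        // for example: assertEquals(1, sink.received().size());
    }
}
```

Because the in-memory connector simply replaces the channel implementation, the application code under test does not change at all between the unit test and the real Kafka deployment.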
Another option is to start an in-memory Kafka cluster in the same process as the test through EmbeddedKafkaCluster for a quick integration test. If you want to start a real Kafka broker as a separate process, managed as a test resource, try the Java framework Quarkus with Dev Services for Kafka. With this mechanism, Quarkus can start a Kafka cluster in less than a second using containers. This mechanism can validate Kafka-specific aspects of your application and ensure that it is working as expected on the local machine. The cool thing about Dev Services is that it can also start a schema registry (such as Apicurio), relational databases, caches, and many other third-party service dependencies.
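Dev Services for Kafka generally needs no configuration at all: when the Kafka extension is present and no broker address is configured, Quarkus starts one automatically in dev and test mode. The fragment below shows a few of the optional knobs; the image name and port values are illustrative choices, not requirements.

```properties
# application.properties — all optional; Dev Services starts a broker
# automatically in dev/test mode when kafka.bootstrap.servers is not set.
quarkus.kafka.devservices.enabled=true
# Pin a specific container image (illustrative value):
quarkus.kafka.devservices.image-name=docker.io/vectorized/redpanda:latest
# Fix the exposed port instead of using a random one:
quarkus.kafka.devservices.port=32779
```

Leaving these properties out entirely is the common case; setting `kafka.bootstrap.servers` explicitly disables Dev Services so the application connects to a real cluster instead.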
Once you are done with "inner-loop" development, you'll want to commit your application to a source control system and run integration tests on the central build system. You can use Testcontainers to start a Kafka broker from a Java DSL (or librdkafka mock for C), which allows you to pick specific Kafka distributions and versions. If your application passes all the gates, it is ready for deployment into a shared environment with other services that need a continuously running Kafka cluster.
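A minimal Testcontainers sketch looks like the following. It assumes Docker is available and the `org.testcontainers:kafka` dependency is on the classpath; the image tag is an illustrative choice, and pinning a different version is exactly the point of this approach.

```java
// Sketch of an integration test spinning up a throwaway Kafka broker with
// Testcontainers (org.testcontainers:kafka). Requires a running Docker
// daemon; the image tag below is an illustrative, pinnable choice.
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

public class KafkaIntegrationTest {
    public static void main(String[] args) {
        try (KafkaContainer kafka =
                 new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.3.3"))) {
            kafka.start();
            // Point the application under test at the throwaway broker:
            String bootstrapServers = kafka.getBootstrapServers();
            System.out.println("Broker running at " + bootstrapServers);
        } // the container is stopped and removed when the try block exits
    }
}
```

Because the broker version is part of the test code, the same build can run its integration suite against several Kafka versions simply by parameterizing the image tag.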
Before your application reaches production, or a performance testing environment with production-like characteristics, all you need is a Kafka installation reliable enough for various teams to integrate against and run tests without much management overhead. Self-managed Kafka clusters are used both for development and, in certain situations, for production. As you get closer to a production environment, the characteristics required of the Kafka deployment change drastically: you want to be able to provision production-like Kafka clusters to test application performance and disaster recovery.
Such environments are also desirable because they are low-cost, avoiding the cost overhead of data replication and deployment to multiple availability zones (AZs). Many organizations have Kubernetes environments where each development team has its own isolated namespace, along with shared namespaces for CI/CD with all the shared dependencies deployed.
The Strimzi project, created by Red Hat, has everything needed to automate and operate a Kafka cluster on Kubernetes for development and production. The advantage of using Strimzi for environments with lower requirements is that the deployment can be managed through declarative Kubernetes resources. Such environments allow developers to use the same Kubernetes infrastructure to quickly and repeatedly create a Kafka cluster through automation pipelines and processes, without depending on other teams for the approval and provisioning of new services. Strimzi is useful for individuals or teams, a temporary project cluster, or a longer-lived shared cluster.
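To make the declarative model concrete, here is a minimal sketch of a Strimzi `Kafka` custom resource for a throwaway development cluster. All names and sizes are illustrative assumptions; a production resource would use persistent storage and more replicas.

```yaml
# Minimal Strimzi Kafka custom resource (illustrative values throughout).
# Applying this manifest lets the Strimzi operator create and manage the
# cluster declaratively.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: dev-cluster
spec:
  kafka:
    replicas: 1            # a single broker is fine for a dev namespace
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral      # no persistence needed for throwaway clusters
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
```

Because the cluster is just another Kubernetes resource, a CI pipeline can `kubectl apply` it on demand and delete it when the test run finishes.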
A single cluster is also not typical for production; there you are likely to run multiple clusters optimized for different purposes. You might want a self-managed Kafka cluster for edge clusters that run offline, for on-premises infrastructure that might require a non-standard topology, or for a public cloud deployment with a fairly standard multi-AZ layout. Finally, there are many self-managed Kafka platforms available, from Red Hat, Confluent, Cloudera, and TIBCO, among others.
The main characteristic of a self-managed cluster is that it places the responsibility for managing and running the Kafka cluster within the organization owning the cluster. That ownership allows you to customize and configure the Kafka cluster to suit your deployment needs. For these and any other unusual use cases that are not possible with the Kafka as a Service model, self-managed Kafka remains a proven path.
A look ahead
The second and final article of this series covers the upper, more autonomous half of the chart in Figure 1. We'll discuss Kafka as a Service and "serverless-like" Kafka. You'll also get a summary of use cases for the various distributions.

Last updated: January 6, 2023