Apache Kafka

Accessing Apache Kafka in Strimzi: Part 5 – Ingress

In the fifth and final part of this series, we will look at exposing Apache Kafka in Strimzi using Kubernetes Ingress. This article will explain how to use Ingress controllers on Kubernetes, how Ingress compares with Red Hat OpenShift routes, and how it can be used with Strimzi and Kafka. Off-cluster access using Kubernetes Ingress is available only from Strimzi 0.12.0. (Links to previous articles in the series can be found at the end.)
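
By way of a preview: once an Ingress-type listener is in place, an external client reaches the Ingress host over TLS on port 443. Below is a minimal Java consumer sketch under that assumption; the bootstrap hostname, truststore path, and password are placeholders, not values from the article.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class IngressConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder Ingress bootstrap host; the Ingress listener carries TLS on port 443.
            props.put("bootstrap.servers", "bootstrap.kafka.example.com:443");
            props.put("security.protocol", "SSL");
            // Truststore containing the cluster CA certificate (placeholder path and password).
            props.put("ssl.truststore.location", "/tmp/truststore.jks");
            props.put("ssl.truststore.password", "changeit");
            props.put("group.id", "ingress-demo");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s: %s%n", record.key(), record.value());
                }
            }
        }
    }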

Continue reading “Accessing Apache Kafka in Strimzi: Part 5 – Ingress”

Accessing Apache Kafka in Strimzi: Part 4 – Load balancers

In this fourth article of our series about accessing Apache Kafka clusters in Strimzi, we will look at exposing Kafka brokers using load balancers. (See links to previous articles at the end.) This article will explain how to use load balancers in public cloud environments and how they can be used with Apache Kafka.
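
As a rough sketch of the client side, the following Java producer connects through a bootstrap load balancer address; the DNS name is a placeholder, 9094 is Strimzi's default external listener port, and TLS is left out for brevity.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class LoadBalancerProducer {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Placeholder DNS name of the bootstrap load balancer; 9094 is Strimzi's default external port.
            props.put("bootstrap.servers", "my-cluster-kafka-external-bootstrap.example.com:9094");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // If the external listener has TLS enabled, security.protocol and truststore settings would go here.

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("my-topic", "key", "hello via load balancer")).get();
            }
        }
    }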

Continue reading “Accessing Apache Kafka in Strimzi: Part 4 – Load balancers”

Accessing Apache Kafka in Strimzi: Part 3 – Red Hat OpenShift routes

In the third part of this article series (see links to previous articles below), we will look at how Strimzi exposes Apache Kafka using Red Hat OpenShift routes. This article will explain how routes work and how they can be used with Apache Kafka. Routes are available only on OpenShift, but if you are a Kubernetes user, don’t be sad; a forthcoming article in this series will discuss using Kubernetes Ingress, which is similar to OpenShift routes.
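
For illustration, here is a minimal Java producer sketch for a route-exposed cluster; routes carry TLS on port 443, and the route hostname, truststore path, and password shown are placeholders, not values from this article.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class RouteProducer {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Placeholder bootstrap route hostname; routes are reached on port 443 and always use TLS.
            props.put("bootstrap.servers", "my-cluster-kafka-bootstrap-myproject.apps.example.com:443");
            props.put("security.protocol", "SSL");
            // Truststore with the cluster CA certificate (placeholder location and password).
            props.put("ssl.truststore.location", "/tmp/truststore.jks");
            props.put("ssl.truststore.password", "changeit");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("my-topic", "hello via an OpenShift route")).get();
            }
        }
    }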

Continue reading “Accessing Apache Kafka in Strimzi: Part 3 – Red Hat OpenShift routes”

Accessing Apache Kafka in Strimzi: Part 2 – Node ports

This article series explains how Apache Kafka and its clients work and how Strimzi makes Kafka accessible to clients running outside of Kubernetes. In the first article, we provided an introduction to the topic; here, we will look at exposing an Apache Kafka cluster managed by Strimzi using node ports.

Specifically, in this article, we'll look at how node ports work and how they can be used with Kafka. We will also cover the different configuration options available to users and the pros and cons of using node ports.
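
As a taste of what's covered, the sketch below shows a Java consumer pointed at a placeholder node address and node port; the real values depend on your cluster and on the port Kubernetes allocates for the bootstrap service.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class NodePortConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder values: the address of any Kubernetes node plus the node port
            // allocated for the bootstrap service (visible with `kubectl get service`).
            props.put("bootstrap.servers", "192.168.99.100:31234");
            props.put("group.id", "nodeport-demo");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }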

Continue reading “Accessing Apache Kafka in Strimzi: Part 2 – Node ports”

Accessing Apache Kafka in Strimzi: Part 1 – Introduction

Strimzi is an open source project that provides container images and operators for running Apache Kafka on Kubernetes and Red Hat OpenShift. Scalability is one of the flagship features of Apache Kafka. It is achieved by partitioning the data and distributing it across multiple brokers. Such data sharding also has a big impact on how Kafka clients connect to the brokers. This is especially visible when Kafka is running within a platform like Kubernetes but is accessed from outside that platform.
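
To make that connection behavior concrete, here is a small Java producer sketch (with a placeholder bootstrap address) annotated with how a client goes from the bootstrap address to direct per-broker connections.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class PartitionAwareProducer {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // The bootstrap address (placeholder) is only used to fetch cluster metadata;
            // the producer then opens direct connections to the broker leading each partition.
            props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                RecordMetadata meta = producer.send(new ProducerRecord<>("my-topic", "user-42", "event")).get();
                // Records with the same key always land on the same partition, and so on the same broker.
                System.out.println("written to partition " + meta.partition());
            }
        }
    }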

This article series will explain how Kafka and its clients work and how Strimzi makes Kafka accessible to clients running outside of Kubernetes.

Continue reading “Accessing Apache Kafka in Strimzi: Part 1 – Introduction”

Guru Night at Red Hat Summit: Hands-on experience with serverless computing

Millions of developers worldwide want to learn more about serverless computing. If you’re one of the lucky thousands attending Red Hat Summit in Boston May 7-9, you can gain hands-on experience with the help of Burr Sutter and the Red Hat Developer team.

Guru Night is a BYOL (bring your own laptop) event taking place Wednesday, May 8, from 5:00 p.m. to 8:00 p.m. at the Boston Convention and Exhibition Center in ML2 East-258AB. (Doubtless there will be a map to show you where, or what, ML2 East is; we have no idea.) Head to the signup page and fill out your details now.

TL;DR: Beer and pizza will be served.

We felt compelled to point that out. But read on.

Continue reading “Guru Night at Red Hat Summit: Hands-on experience with serverless computing”

Intro to Apache Kafka and Kafka Streams for Event-Driven Microservices on DevNation Live

Scalability is often a key issue for growing organizations. That's why many of them use Apache Kafka, a popular messaging and streaming platform. It is horizontally scalable, cloud-native, and versatile. It can serve as a traditional publish-and-subscribe messaging system, as a streaming platform, or as a distributed state store. Companies around the world use Apache Kafka to build real-time streaming applications, streaming data pipelines, and event-driven architectures.
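
As a flavour of the Kafka Streams API, here is a minimal, self-contained Java topology sketch; the topic names and bootstrap address are placeholders.

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class UppercaseStream {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-demo");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            // Read from one topic, transform each value, and write to another topic.
            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> input = builder.stream("input-topic");
            input.mapValues(value -> value.toUpperCase()).to("output-topic");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }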

Continue reading “Intro to Apache Kafka and Kafka Streams for Event-Driven Microservices on DevNation Live”

How to run Kafka on OpenShift, the enterprise Kubernetes, with AMQ Streams

On October 25th, Red Hat announced the general availability of its AMQ Streams Kubernetes Operator for Apache Kafka. Red Hat AMQ Streams focuses on running Apache Kafka on OpenShift, providing a massively scalable, distributed, and high-performance data streaming platform. AMQ Streams, based on the Apache Kafka and Strimzi projects, offers a distributed backbone that allows microservices and other applications to share data with extremely high throughput. This backbone enables:

  • Publish and subscribe: Many-to-many dissemination in a fault-tolerant, durable manner.
  • Replayable events: Serves as a repository from which microservices can rebuild in-memory copies of source data, up to any point in time (see the consumer sketch after this list).
  • Long-term data retention: Efficiently stores data for immediate access, limited only by disk space.
  • Partitioned messages for greater horizontal scalability: Allows messages to be organized for maximum concurrent access.
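
To make the replayable-events point concrete, here is a small Java consumer sketch (topic name and bootstrap address are placeholders) that rewinds to the start of a topic and rebuilds an in-memory copy of the latest value per key.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ReplayingConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9092"); // placeholder
            props.put("group.id", "replay-demo");
            props.put("enable.auto.commit", "false");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            Map<String, String> state = new HashMap<>();
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("customer-updates"));
                consumer.poll(Duration.ofSeconds(1));              // join the group and get assignments
                consumer.seekToBeginning(consumer.assignment());   // replay the topic from the start
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                records.forEach(r -> state.put(r.key(), r.value())); // latest value per key wins
            }
            System.out.println("rebuilt " + state.size() + " entries");
        }
    }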

One of the most frequent requests from developers and architects is for a simple deployment option for testing purposes. In this guide, we will use the Red Hat Container Development Kit, based on minishift, to start an Apache Kafka cluster on Kubernetes.

Continue reading “How to run Kafka on OpenShift, the enterprise Kubernetes, with AMQ Streams”

Welcome Apache Kafka to the Kubernetes Era!

We have pretty exciting news this week, as Red Hat is announcing the general availability of its Apache Kafka Kubernetes operator. Red Hat AMQ Streams delivers the mechanisms for managing Apache Kafka on top of OpenShift, our enterprise distribution of Kubernetes.

Everything started in May 2018, when David Ingham (@dingha) unveiled the Developer Preview as a new addition to the Red Hat AMQ offering. Red Hat AMQ Streams focuses on running Apache Kafka on OpenShift. In the microservices world, where several components need to rely on a high-throughput communication mechanism, Apache Kafka has made a name for itself as a leading real-time, distributed messaging platform for building data pipelines and streaming applications.

Continue reading “Welcome Apache Kafka to the Kubernetes Era!”

EventFlow: Event-driven microservices on OpenShift (Part 1)

This post is the first in a series of three related posts describing EventFlow, a lightweight, cloud-native, distributed microservices framework we have created. EventFlow can be used to develop streaming applications that process CloudEvents, an effort to standardize a data format for exchanging information about events generated by cloud platforms.

The EventFlow platform was created specifically to target Kubernetes and OpenShift, and it models event-processing applications as a connected flow or stream of components. These components can be developed using a simple SDK library, or they can be created as Docker images that are configured through environment variables to attach to Kafka topics and process event data directly.
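
The EventFlow SDK itself is not shown here; purely as an illustration of the environment-variable pattern just described, the following Java sketch reads hypothetical KAFKA_BOOTSTRAP_SERVERS and KAFKA_TOPIC variables and consumes events from the configured topic.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class EnvConfiguredComponent {
        public static void main(String[] args) {
            // Hypothetical variable names; the EventFlow SDK defines its own configuration contract.
            String bootstrap = System.getenv().getOrDefault("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092");
            String topic = System.getenv().getOrDefault("KAFKA_TOPIC", "events");

            Properties props = new Properties();
            props.put("bootstrap.servers", bootstrap);
            props.put("group.id", "eventflow-demo");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList(topic));
                while (true) {
                    consumer.poll(Duration.ofSeconds(1))
                            .forEach(r -> System.out.println("processing event: " + r.value()));
                }
            }
        }
    }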

Continue reading “EventFlow: Event-driven microservices on OpenShift (Part 1)”
