Kubernetes

MySQL for developers in Red Hat OpenShift

As a software developer, it’s often necessary to access a relational database—or any type of database, for that matter. If you’ve ever been stuck waiting for someone in operations to provision a database for you, this article will set you free. I’ll show you how to spin up (and wipe out) a MySQL database in seconds using Red Hat OpenShift.
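
Once the database is running, any application in the same cluster can reach it through its Kubernetes service. As a minimal illustration (not taken from the article), here is a JDBC smoke test in Java. It assumes the template created a service named mysql in a my-project namespace with a database called sampledb, and that MySQL Connector/J is on the classpath; all of those names and the credentials are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MySqlSmokeTest {
    public static void main(String[] args) throws Exception {
        // Cluster-internal service DNS name; "mysql", "my-project", "sampledb",
        // and the credentials below are placeholders, not values from the article.
        String url = "jdbc:mysql://mysql.my-project.svc.cluster.local:3306/sampledb";

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT VERSION()")) {
            while (rs.next()) {
                System.out.println("Connected to MySQL " + rs.getString(1));
            }
        }
    }
}
```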

Continue reading “MySQL for developers in Red Hat OpenShift”

Deploying an internal container registry with Minikube add-ons

Minikube has a feature called add-ons, which add extra components and features to Minikube’s Kubernetes cluster.

The registry add-on deploys an internal registry, which can then be used to push and pull Linux container images. At times, though, we might want to mimic pushing to and pulling from different registries (that is, use aliases for the container registry). In this article, I will walk you through the steps required to do just that.

Continue reading “Deploying an internal container registry with Minikube add-ons”

Building Apache Kafka Streams applications using Red Hat AMQ Streams: Part 2

The Apache Kafka project includes a Streams Domain-Specific Language (DSL) built on top of the lower-level Stream Processor API. This DSL provides developers with simple abstractions for performing data processing operations. However, it is not obvious how to build a stream processing pipeline with Kafka in a containerized environment. This second article in a two-part series uses the basics from the previous article to build an example application using Red Hat AMQ Streams.
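
To give a feel for the DSL the series builds on, here is a minimal word-count style topology in Java. It is a generic sketch rather than the article’s application, and the application ID, bootstrap address, and topic names are placeholders:

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class WordCountSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-sketch");                    // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");  // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Read lines from an input topic, split them into words, count each word,
        // and write the running counts to an output topic.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> lines = builder.stream("input-topic");                          // placeholder topic
        KTable<String, Long> counts = lines
                .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\s+")))
                .groupBy((key, word) -> word)
                .count();
        counts.toStream().to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));    // placeholder topic

        new KafkaStreams(builder.build(), props).start();
    }
}
```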

Continue reading “Building Apache Kafka Streams applications using Red Hat AMQ Streams: Part 2”

Building Apache Kafka Streams applications using Red Hat AMQ Streams: Part 1

The Apache Kafka project includes a Streams Domain-Specific Language (DSL) built on top of the lower-level Stream Processor API. This DSL provides developers with simple abstractions for performing data processing operations. However, it is not obvious how to build a stream processing pipeline with Kafka in a containerized environment. This two-part article series describes the steps required to build your own Apache Kafka Streams application using Red Hat AMQ Streams.

Continue reading “Building Apache Kafka Streams applications using Red Hat AMQ Streams: Part 1”

Accessing Apache Kafka in Strimzi: Part 5 – Ingress

In the fifth and final part of this series, we will look at exposing Apache Kafka in Strimzi using Kubernetes Ingress. This article will explain how to use Ingress controllers on Kubernetes, how Ingress compares with Red Hat OpenShift routes, and how it can be used with Strimzi and Kafka. Off-cluster access using Kubernetes Ingress is available only in Strimzi 0.12.0 and newer. (Links to previous articles in the series can be found at the end.)
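
For a sense of what the client side ends up looking like, here is a minimal Java consumer sketch for a cluster exposed through Ingress. The bootstrap host, truststore path, password, and topic are placeholders; the truststore is assumed to already contain the cluster CA certificate, and TLS on port 443 reflects how Strimzi exposes Kafka through Ingress:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class IngressConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Bootstrap host served by the Ingress controller; host, truststore path,
        // and password are placeholders, not values from the article.
        props.put("bootstrap.servers", "bootstrap.kafka.example.com:443");
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/tmp/truststore.p12");
        props.put("ssl.truststore.password", "changeit");
        props.put("group.id", "ingress-sketch");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));   // placeholder topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}
```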

Continue reading “Accessing Apache Kafka in Strimzi: Part 5 – Ingress”

Accessing Apache Kafka in Strimzi: Part 4 – Load balancers

In this fourth article of our series about accessing Apache Kafka clusters in Strimzi, we will look at exposing Kafka brokers using load balancers. (See links to previous articles at the end.) This article will explain how to use load balancers in public cloud environments and how they can be used with Apache Kafka.

Continue reading “Accessing Apache Kafka in Strimzi: Part 4 – Load balancers”

Accessing Apache Kafka in Strimzi: Part 3 – Red Hat OpenShift routes

In the third part of this article series (see links to previous articles below), we will look at how Strimzi exposes Apache Kafka using Red Hat OpenShift routes. This article will explain how routes work and how they can be used with Apache Kafka. Routes are available only on OpenShift, but if you are a Kubernetes user, don’t be sad; a forthcoming article in this series will discuss using Kubernetes Ingress, which is similar to OpenShift routes.
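
Route-based access in Strimzi is TLS-only, so a client first has to trust the cluster CA. The following Java sketch is an illustration rather than a step from the article: it loads a CA certificate from a local PEM file into a PKCS12 truststore, with the file names and password as placeholders:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class RouteTrustSketch {
    public static void main(String[] args) throws Exception {
        // Read the cluster CA certificate from a local PEM file ("ca.crt" is a
        // placeholder path; the certificate would be exported from the cluster first).
        CertificateFactory factory = CertificateFactory.getInstance("X.509");
        X509Certificate caCert;
        try (FileInputStream in = new FileInputStream("ca.crt")) {
            caCert = (X509Certificate) factory.generateCertificate(in);
        }

        // Put the certificate into a fresh PKCS12 truststore and save it to disk.
        KeyStore trustStore = KeyStore.getInstance("PKCS12");
        trustStore.load(null, null);
        trustStore.setCertificateEntry("cluster-ca", caCert);
        try (FileOutputStream out = new FileOutputStream("truststore.p12")) {
            trustStore.store(out, "changeit".toCharArray());
        }

        System.out.println("Truststore created for: " + caCert.getSubjectX500Principal());
    }
}
```

The resulting file can then be referenced from a Kafka client’s ssl.truststore.location and ssl.truststore.type settings, much like the earlier Ingress sketch.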

Continue reading “Accessing Apache Kafka in Strimzi: Part 3 – Red Hat OpenShift routes”

Accessing Apache Kafka in Strimzi: Part 2 – Node ports

This article series explains how Apache Kafka and its clients work and how Strimzi makes Kafka accessible to clients running outside of Kubernetes. In the first article, we provided an introduction to the topic, and here we will look at exposing an Apache Kafka cluster managed by Strimzi using node ports.

Specifically, in this article, we’ll look at how node ports work and how they can be used with Kafka. We will also cover the different configuration options available to users and the pros and cons of using node ports.
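
To make the end result concrete, a client outside the cluster simply points its bootstrap configuration at any node’s address plus the assigned node port. The Java sketch below is an illustration, not a step from the article; it assumes TLS is disabled on the external listener, and the address, port, and topic are placeholders:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class NodePortProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Any node address works for bootstrapping; Kafka then advertises the
        // per-broker node-port addresses back to the client. All values are placeholders.
        props.put("bootstrap.servers", "192.168.99.100:31234");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "hello from outside the cluster"));
        }
    }
}
```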

Continue reading “Accessing Apache Kafka in Strimzi: Part 2 – Node ports”
