Accessing Apache Kafka in Strimzi: Part 5 – Ingress

In the fifth and final part of this series, we will look at exposing Apache Kafka in Strimzi using Kubernetes Ingress. This article will explain how to use Ingress controllers on Kubernetes, how Ingress compares with Red Hat OpenShift routes, and how it can be used with Strimzi and Kafka. Off-cluster access using Kubernetes Ingress is available only in Strimzi 0.12.0 and newer. (Links to previous articles in the series can be found at the end.)
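To give a flavour of the client side of this approach, here is a minimal, hypothetical sketch using the kafka-python library. Strimzi's Ingress listeners use TLS, so the client connects to the Ingress bootstrap host over SSL on port 443; the hostname, topic name, and CA file below are placeholders rather than values from the article.

```python
# Hypothetical sketch: producing to a Strimzi-managed Kafka cluster exposed
# through Kubernetes Ingress. Ingress access is TLS-only, so the client
# connects over SSL to the Ingress bootstrap host on port 443.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="bootstrap.kafka.example.com:443",  # Ingress bootstrap host (placeholder)
    security_protocol="SSL",
    ssl_cafile="cluster-ca.crt",  # cluster CA certificate exported from the cluster's CA secret
)
producer.send("my-topic", b"hello from outside the cluster")
producer.flush()
```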

Continue reading “Accessing Apache Kafka in Strimzi: Part 5 – Ingress”

Accessing Apache Kafka in Strimzi: Part 4 – Load balancers

In this fourth article of our series about accessing Apache Kafka clusters in Strimzi, we will look at exposing Kafka brokers using load balancers. (See links to previous articles at the end.) This article will explain how to use load balancers in public cloud environments and how they can be used with Apache Kafka.
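As an illustration of what a client configuration might look like with load balancers, the sketch below assumes a Strimzi external listener of type loadbalancer with TLS disabled, where clients reach the bootstrap load balancer on port 9094; the address and topic are placeholders, not values from the article.

```python
# Hypothetical sketch: consuming from a Kafka cluster exposed via a cloud
# load balancer. The bootstrap address is a placeholder for the load balancer
# Strimzi provisions; port 9094 is the usual external listener port, and TLS
# is assumed to be disabled on the listener for brevity.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "my-topic",
    bootstrap_servers="abc123.elb.amazonaws.com:9094",  # bootstrap load balancer address (placeholder)
    auto_offset_reset="earliest",
)
for record in consumer:
    print(record.value.decode())
```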

Continue reading “Accessing Apache Kafka in Strimzi: Part 4 – Load balancers”

Accessing Apache Kafka in Strimzi: Part 3 – Red Hat OpenShift routes

In the third part of this article series (see links to previous articles below), we will look at how Strimzi exposes Apache Kafka using Red Hat OpenShift routes. This article will explain how routes work and how they can be used with Apache Kafka. Routes are available only on OpenShift, but if you are a Kubernetes user, don’t be sad; a forthcoming article in this series will discuss using Kubernetes Ingress, which is similar to OpenShift routes.
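For a taste of how a client application might discover a route-based bootstrap address, here is a hedged sketch using the Kubernetes Python client. The namespace and the my-cluster-kafka-bootstrap and my-cluster-cluster-ca-cert resource names follow common Strimzi naming conventions for a cluster called my-cluster; treat them as assumptions, not values from the article.

```python
# Hypothetical sketch: looking up the bootstrap address and cluster CA of a
# Kafka cluster exposed through an OpenShift route. Resource names are assumed.
import base64
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod

# Routes are an OpenShift custom resource, so use the custom-objects API.
route = client.CustomObjectsApi().get_namespaced_custom_object(
    group="route.openshift.io", version="v1",
    namespace="kafka", plural="routes", name="my-cluster-kafka-bootstrap",
)
bootstrap = route["spec"]["host"] + ":443"  # routes carry TLS traffic on port 443

# The cluster CA certificate is needed to verify the brokers' TLS certificates.
secret = client.CoreV1Api().read_namespaced_secret("my-cluster-cluster-ca-cert", "kafka")
ca_cert = base64.b64decode(secret.data["ca.crt"]).decode()

print("bootstrap.servers =", bootstrap)
```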

Continue reading “Accessing Apache Kafka in Strimzi: Part 3 – Red Hat OpenShift routes”

Accessing Apache Kafka in Strimzi: Part 2 – Node ports

This article series explains how Apache Kafka and its clients work and how Strimzi makes it accessible to clients running outside of Kubernetes. In the first article, we provided an introduction to the topic, and here we will look at exposing an Apache Kafka cluster managed by Strimzi using node ports.

Specifically, in this article, we'll look at how node ports work and how they can be used with Kafka. We will also cover the different configuration options available to users and the pros and cons of using node ports.
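The sketch below illustrates one way to assemble a node-port bootstrap address with the Kubernetes Python client, assuming Strimzi's usual naming for a cluster called my-cluster; the service name and namespace are assumptions for illustration only.

```python
# Hypothetical sketch: building a bootstrap address for a node-port listener.
# Strimzi exposes an external bootstrap service of type NodePort; any node
# address can be used for the initial connection.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Read the node port allocated to the external bootstrap service (name assumed).
svc = v1.read_namespaced_service("my-cluster-kafka-external-bootstrap", "kafka")
node_port = svc.spec.ports[0].node_port

# Pick an address from the first node; external IPs are preferred when present.
node = v1.list_node().items[0]
address = next(a.address for a in node.status.addresses if a.type in ("ExternalIP", "InternalIP"))

print(f"bootstrap.servers = {address}:{node_port}")
```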

Continue reading “Accessing Apache Kafka in Strimzi: Part 2 – Node ports”

Accessing Apache Kafka in Strimzi: Part 1 – Introduction

Strimzi is an open source project that provides container images and operators for running Apache Kafka on Kubernetes and Red Hat OpenShift. Scalability is one of the flagship features of Apache Kafka. It is achieved by partitioning the data and distributing it across multiple brokers. Such data sharding also has a big impact on how Kafka clients connect to the brokers. This is especially visible when Kafka is running within a platform like Kubernetes but is accessed from outside of that platform.
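A small, illustrative example of why this matters: a Kafka client is configured with only a bootstrap address, but after fetching metadata it is expected to connect directly to every broker it learns about. If those advertised broker addresses are internal Kubernetes names, an outside client cannot reach them. The snippet uses the kafka-python library, and the address and topic are placeholders.

```python
# Minimal sketch of the connection flow that makes external access tricky:
# the client contacts the bootstrap address, fetches cluster metadata from it,
# and then opens direct connections to the brokers that lead each partition.
from kafka import KafkaConsumer

consumer = KafkaConsumer(bootstrap_servers="my-cluster-kafka-bootstrap:9092")  # placeholder address
partitions = consumer.partitions_for_topic("my-topic")  # triggers a metadata fetch
print(f"my-topic has {len(partitions or [])} partitions, each led by a broker the client must reach directly")
```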

This article series will explain how Kafka and its clients work and how Strimzi makes it accessible to clients running outside of Kubernetes.

Continue reading “Accessing Apache Kafka in Strimzi: Part 1 – Introduction”

Announcing Red Hat CodeReady Workspaces 1.2

We are pleased to introduce Red Hat CodeReady Workspaces version 1.2, which provides a cloud developer workspace server and browser-based IDE built for teams and organizations. CodeReady Workspaces includes ready-to-use developer stacks for most of the popular programming languages, frameworks, and Red Hat technologies.

Release overview

Red Hat CodeReady Workspaces 1.2 introduces:

Continue reading “Announcing Red Hat CodeReady Workspaces 1.2”

EventFlow: Event-driven microservices on Red Hat OpenShift (Part 2)

In part 1, I introduced the EventFlow platform for developing, deploying, and managing event-driven microservices using Red Hat AMQ Streams. This post will demonstrate how to deploy the EventFlow platform on Red Hat OpenShift, install a set of sample processors, and build a flow.

Continue reading “EventFlow: Event-driven microservices on Red Hat OpenShift (Part 2)”

Self-service messaging with Red Hat AMQ Online and GitOps

This article explores the service model of Red Hat AMQ Online 1.1 and how it maps to a GitOps workflow for different teams in your organization. For more information on new features in AMQ Online 1.1, see the release notes.

AMQ Online is an operator that manages stateful messaging services running on Red Hat OpenShift. AMQ Online is built around the principle that the responsibility of operating the messaging service is separate from the tenants consuming it. The operations team can manage the messaging infrastructure, while the development teams provision messaging in a self-service manner, just as if they were using a public cloud service.
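As a rough illustration of that split, a development team might keep a manifest like the following in Git and have a GitOps pipeline apply it, while the operations team never touches the tenant namespace. The AddressSpace fields, names, plan, and namespace below are assumptions for illustration, not taken from the article.

```python
# Illustrative sketch: a development team provisioning messaging in a
# self-service way by creating an AMQ Online AddressSpace custom resource
# (API group enmasse.io). Names, plan, and namespace are assumptions.
from kubernetes import client, config

config.load_kube_config()

address_space = {
    "apiVersion": "enmasse.io/v1beta1",
    "kind": "AddressSpace",
    "metadata": {"name": "orders"},
    "spec": {"type": "standard", "plan": "standard-small"},
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="enmasse.io", version="v1beta1",
    namespace="team-a", plural="addressspaces", body=address_space,
)
```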

Continue reading “Self-service messaging with Red Hat AMQ Online and GitOps”

Use the Kubernetes Python client from your running Red Hat OpenShift pods

Red Hat OpenShift is part of the Cloud Native Computing Foundation (CNCF) Certified Kubernetes program, ensuring portability and interoperability for your container workloads. This also allows you to use Kubernetes tools, such as kubectl, to interact with an OpenShift cluster, and you can rest assured that all the APIs you know and love are right there at your fingertips.

The Kubernetes Python client is another great tool for interacting with an OpenShift cluster, allowing you to perform actions on Kubernetes resources with Python code. It also has applications within a cluster. We can configure a Python application running on OpenShift to consume the OpenShift API, and list and create resources. We could then create containerized batch jobs from the running application, or a custom service monitor, for example. It sounds a bit like “OpenShift inception,” using the OpenShift API from services created using the OpenShift API.

In this article, we’ll create a Flask application running on OpenShift. This application will use the Kubernetes Python client to interact with the OpenShift API, list other pods in the project, and display them back to the user.
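As a preview of the idea, here is a minimal sketch of such an endpoint, assuming the pod's service account has a role binding that allows it to list pods in its namespace. It is a simplified stand-in for the application described in the article, not the article's actual code.

```python
# Minimal sketch: a Flask endpoint that uses the Kubernetes Python client's
# in-cluster configuration to list the pods in its own project.
from flask import Flask
from kubernetes import client, config

app = Flask(__name__)

@app.route("/pods")
def list_pods():
    # Use the service account token and CA mounted into the pod.
    config.load_incluster_config()
    # The pod's namespace is available from the service account mount.
    with open("/var/run/secrets/kubernetes.io/serviceaccount/namespace") as f:
        namespace = f.read().strip()
    pods = client.CoreV1Api().list_namespaced_pod(namespace)
    return {"pods": [p.metadata.name for p in pods.items]}

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```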

Continue reading “Use the Kubernetes Python client from your running Red Hat OpenShift pods”
