
Using secrets in Kafka Connect configuration

Kafka Connect is an integration framework that is part of the Apache Kafka project. On Kubernetes and Red Hat OpenShift, you can deploy Kafka Connect using the Strimzi and Red Hat AMQ Streams Operators. Kafka Connect runs sink and source connectors: source connectors load data from an external system into Kafka, and sink connectors work in the opposite direction, loading data from Kafka into another external system. In most cases, connectors need to authenticate to these external systems, so you have to provide credentials as part of the connector’s configuration. This article shows how to use Kubernetes secrets to store those credentials and then reference them from the connector’s configuration.
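The excerpt doesn’t include the article’s full walkthrough, but the core pattern looks roughly like the sketch below. It assumes Strimzi’s KafkaConnect and KafkaConnector custom resources and Kafka’s built-in FileConfigProvider; the names my-connect, db-credentials, and the connector class are hypothetical placeholders.

```yaml
# Secret holding the credentials (hypothetical names and values).
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  connector.properties: |
    dbUsername=my-user
    dbPassword=my-password
---
# Kafka Connect cluster that mounts the Secret and enables Kafka's
# FileConfigProvider so connector configs can read values from files.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  config:
    config.providers: file
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
  externalConfiguration:
    volumes:
      # Mounted at /opt/kafka/external-configuration/connector-config
      - name: connector-config
        secret:
          secretName: db-credentials
---
# Connector whose configuration references the mounted credentials
# instead of embedding them in plain text.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect
spec:
  class: io.example.MySourceConnector  # hypothetical connector class
  tasksMax: 1
  config:
    connection.user: "${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbUsername}"
    connection.password: "${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbPassword}"
```

With this setup, the credentials never appear in the KafkaConnector resource itself; only the file reference does.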


Using API keys securely in your OpenShift microservices and applications

In the microservices landscape, the API provides an essential form of communication between components. To allow secure communication between microservice components, as well as with third-party applications, it’s important to be able to consume API keys and other sensitive data in a way that doesn’t place the data at risk. Secret objects are specifically designed to hold sensitive information, and OpenShift makes it easy to expose this information to the applications that need it.
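As a minimal illustration of such a Secret object (the name and key names below are hypothetical, not taken from the article):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: twitter-api-keys   # hypothetical name
type: Opaque
data:
  # Values in the data field must be base64-encoded,
  # e.g. echo -n 'replace-me' | base64
  TWITTER_CONSUMER_KEY: cmVwbGFjZS1tZQ==
  TWITTER_CONSUMER_SECRET: cmVwbGFjZS1tZQ==
```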

In this post, I’ll demonstrate how to consume API keys securely in OpenShift Enterprise 3. We’ll deploy a Sinatra application that uses environment variables to interact with the Twitter API; create a Kubernetes Secret object to store the API keys; expose the secret to the application through environment variables; and then perform a rolling update of the environment variables across our pods.
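The post walks through these steps in detail; as a rough sketch of the exposure step, a DeploymentConfig can map Secret keys into container environment variables like this (the resource is trimmed to the relevant parts, and the application and image names are hypothetical):

```yaml
apiVersion: v1  # DeploymentConfigs use apps.openshift.io/v1 on newer clusters
kind: DeploymentConfig
metadata:
  name: sinatra-twitter
spec:
  replicas: 2
  selector:
    app: sinatra-twitter
  template:
    metadata:
      labels:
        app: sinatra-twitter
    spec:
      containers:
        - name: app
          image: registry.example.com/sinatra-twitter:latest  # hypothetical image
          env:
            # Each variable pulls one key out of the Secret at pod start.
            - name: TWITTER_CONSUMER_KEY
              valueFrom:
                secretKeyRef:
                  name: twitter-api-keys
                  key: TWITTER_CONSUMER_KEY
            - name: TWITTER_CONSUMER_SECRET
              valueFrom:
                secretKeyRef:
                  name: twitter-api-keys
                  key: TWITTER_CONSUMER_SECRET
  triggers:
    # With a config change trigger, editing the env section
    # automatically rolls out a new deployment across the pods.
    - type: ConfigChange
```

Because pods read Secret-backed environment variables only at startup, rotating a key means updating the Secret and rolling out a new deployment so the pods pick up the new values.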

