DevNation Live tech talks are hosted by the Red Hat technologists who create our products. These sessions include real solutions, code, and sample projects to help you get started. In this talk, you’ll learn about Tekton, a Kubernetes-native way of defining and running CI/CD, from Kamesh Sampath, Principal Software Engineer at Red Hat.
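To give a flavor of what "Kubernetes-native CI/CD" means, here is a minimal Tekton Task sketch (the task name, image, and `v1alpha1` API version are assumptions based on the Tekton API of that period; the talk covers the full pipeline model):

```yaml
# A minimal Tekton Task: each step runs as a container inside a pod.
# Hypothetical example; apply with kubectl and execute via a TaskRun.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
    - name: echo
      image: registry.access.redhat.com/ubi8/ubi-minimal  # hypothetical step image
      command: ["echo"]
      args: ["Hello, Tekton!"]
```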
Continue reading “DevNation Live: Plumbing Kubernetes builds | Deploy with Tekton”
“Write once, run everywhere” is a slogan created by Sun Microsystems to illustrate the cross-platform benefits of Java. In the cloud-native world, this slogan is more accurate than ever, with virtualization and containers increasing the distance between code and hardware even further. But what does this shift mean for developers?
Developers need to containerize their applications and also provide a set of manifests for Kubernetes (which has become almost synonymous with the cloud). In this article, we are going to focus on the latter and, more specifically, on how to use Dekorate to create and maintain these manifests with the minimum possible effort.
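For context, this is the sort of boilerplate Dekorate takes off a developer's hands: a standard Kubernetes Deployment manifest like the hedged sketch below (application name and image are hypothetical), which would otherwise have to be written and kept in sync by hand.

```yaml
# A minimal Deployment of the kind a developer would otherwise maintain manually.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                              # hypothetical application name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: quay.io/example/my-app:1.0 # hypothetical image
          ports:
            - containerPort: 8080
```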
Continue reading “How to use Dekorate to create Kubernetes manifests”
One of the cool things about separating the container runtimes into different tools is that you can start to combine them to help secure one another.
Lots of people would like to build OCI/container images within a system like Kubernetes. Imagine you have a CI/CD system that is constantly building container images; a tool like Red Hat OpenShift/Kubernetes would be useful for distributing the load of those builds. Until recently, most people were leaking the Docker socket into the container and then allowing the containers to run docker build. As I pointed out years ago, this is one of the most dangerous things you can do: giving people root access on the system, or sudo without requiring a password, is more secure than allowing access to the Docker socket.
Because of this, many people have been attempting to run Buildah within a container. We have been watching and answering questions on this for a while. We have built an example of what we think is the best way to run Buildah inside of a container and have made these container images public at quay.io/buildah.
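As a hedged sketch of the idea (not the article's exact configuration), a Kubernetes pod could run one of those public Buildah images to perform a build without ever touching the Docker socket; the `privileged` setting shown here is the simplest of several isolation trade-offs the article discusses, and the pod name and build context are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: buildah-build                 # hypothetical pod name
spec:
  restartPolicy: Never
  containers:
    - name: buildah
      image: quay.io/buildah/stable   # one of the public Buildah images
      # Hypothetical build: assumes a Dockerfile exists in /workspace.
      command: ["buildah", "bud", "-t", "example/app", "/workspace"]
      securityContext:
        privileged: true              # simplest mode; less-privileged setups exist
      volumeMounts:
        - name: workspace
          mountPath: /workspace
  volumes:
    - name: workspace
      emptyDir: {}
```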
Continue reading “Best practices for running Buildah in a container”
Red Hat developer Nikhil Thomas recently presented “How to Build Cloud-Native CI/CD Pipelines With Tekton on Kubernetes” at the KubeCon China 2019 co-located Continuous Delivery Summit.
Continue reading “How to build cloud-native CI/CD pipelines with Tekton on Kubernetes”
The Kubernetes API is amazing, and not only are we going to break it down and show you how to wield this mighty weapon, but we will do it while building a video game, live, on stage. As a matter of fact, you get to play along.
Continue reading “Kubernetes: The retro-style, Wild West video game”
Minikube has a feature called add-ons, which helps you add extra components and features to Minikube’s Kubernetes cluster.
The registry add-on deploys an internal registry, which can then be used to push and pull Linux container images. At times, though, we might wish to mimic pushes and pulls to different registries (i.e., use aliases for the container registry). In this article, I will walk you through the steps required to achieve this.
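The starting point can be sketched with standard Minikube commands (the alias hostname below is hypothetical; the article walks through the full alias setup):

```shell
# List the available add-ons, then enable the internal registry.
minikube addons list
minikube addons enable registry

# The registry now runs inside the cluster; images can be pushed and
# pulled through it, optionally via an alias hostname such as
# example.registry.local mapped to the registry's address.
```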
Continue reading “Deploying an internal container registry with Minikube add-ons”
The Apache Kafka project includes a Streams Domain-Specific Language (DSL) built on top of the lower-level Stream Processor API. This DSL provides developers with simple abstractions for performing data processing operations. However, it isn’t always clear how to build a stream processing pipeline with Kafka in a containerized environment. This second article in a two-part series uses the basics from the previous article to build an example application using Red Hat AMQ Streams.
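To make the abstraction concrete, here is a minimal Streams DSL sketch in Java (the application ID, bootstrap address, and topic names are hypothetical; the articles show how to run such an application with Red Hat AMQ Streams):

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-app");                 // hypothetical
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092"); // hypothetical
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("input-topic");  // hypothetical topic
        source.mapValues(v -> v.toUpperCase())                           // a simple DSL operation
              .to("output-topic");                                      // hypothetical topic

        new KafkaStreams(builder.build(), props).start();
    }
}
```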
Continue reading “Building Apache Kafka Streams applications using Red Hat AMQ Streams: Part 2”
The Apache Kafka project includes a Streams Domain-Specific Language (DSL) built on top of the lower-level Stream Processor API. This DSL provides developers with simple abstractions for performing data processing operations. However, it isn’t always clear how to build a stream processing pipeline with Kafka in a containerized environment. This two-part article series describes the steps required to build your own Apache Kafka Streams application using Red Hat AMQ Streams.
Continue reading “Building Apache Kafka Streams applications using Red Hat AMQ Streams: Part 1”
In the fifth and final part of this series, we will look at exposing Apache Kafka in Strimzi using Kubernetes Ingress. This article will explain how to use Ingress controllers on Kubernetes, how Ingress compares with Red Hat OpenShift routes, and how it can be used with Strimzi and Kafka. Off-cluster access using Kubernetes Ingress is available only from Strimzi 0.12.0. (Links to previous articles in the series can be found at the end.)
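A hedged sketch of what that configuration looks like in the Strimzi Kafka custom resource (hostnames are hypothetical; the article covers the details, including TLS requirements):

```yaml
# Excerpt of a Strimzi Kafka resource exposing the cluster via Ingress
# (requires Strimzi 0.12.0 or newer; hostnames are hypothetical).
spec:
  kafka:
    listeners:
      external:
        type: ingress
        configuration:
          bootstrap:
            host: bootstrap.kafka.example.com
          brokers:
            - broker: 0
              host: broker-0.kafka.example.com
            - broker: 1
              host: broker-1.kafka.example.com
```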
Continue reading “Accessing Apache Kafka in Strimzi: Part 5 – Ingress”
In this fourth article of our series about accessing Apache Kafka clusters in Strimzi, we will look at exposing Kafka brokers using load balancers. This article will explain how to use load balancers in public cloud environments and how they can be used with Apache Kafka. (Links to previous articles in the series can be found at the end.)
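For comparison with the other access mechanisms in this series, the load balancer variant is a small change to the same listener block in the Strimzi Kafka custom resource (a hedged excerpt; the article explains the cloud-provider specifics):

```yaml
# Excerpt of a Strimzi Kafka resource exposing brokers via load balancers.
spec:
  kafka:
    listeners:
      external:
        type: loadbalancer
        tls: true   # TLS on the external listener
```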
Continue reading “Accessing Apache Kafka in Strimzi: Part 4 – Load balancers”