Containers

Red Hat Universal Base Images for Docker users

Red Hat Universal Base Images (UBIs) allow developers using Docker on Windows and Mac platforms to tap into the benefits of the large Red Hat ecosystem. This article demonstrates how to use Red Hat Universal Base Images with Docker from a non-Red Hat system, such as a Windows or Mac workstation.
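
To see what that looks like in practice, here is a minimal sketch using the Docker SDK for Python (the plain docker CLI works the same way). The UBI 8 base image name is real and freely pullable; everything else in the snippet is an illustrative assumption, not code from the article.

```python
# Minimal sketch: pull the freely redistributable UBI 8 base image and
# run a one-off command in it with the Docker SDK for Python
# (pip install docker). Works anywhere a Docker daemon is available,
# including Docker Desktop on Windows and Mac.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# UBI images require no Red Hat subscription or registry login.
client.images.pull("registry.access.redhat.com/ubi8/ubi", tag="latest")

# Confirm which OS the image provides; remove the container afterwards.
output = client.containers.run(
    "registry.access.redhat.com/ubi8/ubi:latest",
    "cat /etc/redhat-release",
    remove=True,
)
print(output.decode().strip())
```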

Red Hat Enterprise Linux and Docker

When Red Hat Enterprise Linux (RHEL) 8 was released almost a year ago, it came with many new features related to containers. The biggest ones were the new container tools (Podman, Buildah, and skopeo) and the new Red Hat Universal Base Images. There was also confusion because RHEL 8 dropped support for the Docker toolset. Some developers thought that they could not work with Docker anymore and had to either migrate to a Red Hat-ecosystem Linux system such as CentOS or stay away from Red Hat customers.

Continue reading “Red Hat Universal Base Images for Docker users”

Testing memory-based horizontal pod autoscaling on OpenShift

Red Hat OpenShift offers horizontal pod autoscaling (HPA) primarily for CPUs, but it can also perform memory-based HPA, which is useful for applications that are more memory-intensive than CPU-intensive. In this article, I demonstrate how to use OpenShift’s memory-based horizontal pod autoscaling feature (tech preview) to autoscale your pods as demands on memory increase. The tests performed in this article do not necessarily reflect a real application; they aim only to demonstrate memory-based HPA in the simplest way possible.
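
To make the idea concrete before the full walkthrough, here is a hedged sketch that creates a memory-based HorizontalPodAutoscaler with the Kubernetes Python client. The deployment name, namespace, and thresholds are assumptions for illustration, and the autoscaling/v2beta2 API shown requires a client version that still ships it; the article itself may use YAML and different values.

```python
# Sketch: create a memory-based HPA with the Kubernetes Python client
# (pip install kubernetes). Deployment name, namespace, and the 60%
# utilization threshold are made-up illustration values.
from kubernetes import client, config

config.load_kube_config()  # reuse your oc/kubectl login context

hpa = client.V2beta2HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="example-app-memory-hpa"),
    spec=client.V2beta2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2beta2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="example-app",
        ),
        min_replicas=1,
        max_replicas=5,
        metrics=[
            client.V2beta2MetricSpec(
                type="Resource",
                resource=client.V2beta2ResourceMetricSource(
                    name="memory",
                    # Scale out when average usage passes ~60% of requests.
                    target=client.V2beta2MetricTarget(
                        type="Utilization", average_utilization=60,
                    ),
                ),
            )
        ],
    ),
)

client.AutoscalingV2beta2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="demo", body=hpa,
)
```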

Continue reading “Testing memory-based horizontal pod autoscaling on OpenShift”

How to customize Fedora CoreOS for dedicated workloads with OSTree

In part one of this series, I introduced Fedora CoreOS (and Red Hat CoreOS) and explained why its immutable and atomic nature is important for running containers. I then walked you through getting Fedora CoreOS, creating an Ignition file, booting Fedora CoreOS, logging in, and running a test container. In this article, I will walk you through customizing Fedora CoreOS and making use of its immutable and atomic nature.

Continue reading “How to customize Fedora CoreOS for dedicated workloads with OSTree”

How to run containerized workloads securely and at scale with Fedora CoreOS

The history of container-optimized operating systems is short but filled with a variety of proposals with different degrees of success. Along with CoreOS Container Linux, Red Hat sponsored the Project Atomic community, which is today the umbrella that holds many projects, from Fedora/CentOS/Red Hat Enterprise Linux Atomic Host to container tools (Buildah, skopeo, and others) and Fedora Silverblue, an immutable OS for the desktop (more on the term “immutable” in the next sections).

When Red Hat acquired the San Francisco-based company CoreOS in January 2018, new perspectives opened. Red Hat Enterprise Linux CoreOS (RHCOS) was one of the first products of this acquisition, becoming the base operating system in OpenShift 4. Since Red Hat is focused on open source software, always striving to create and feed upstream communities, the Fedora ecosystem was the natural home for the RHCOS-related upstream, Fedora CoreOS. Fedora CoreOS is based on the best parts of CoreOS Container Linux and Atomic Host, merging features and tools from both.

In this first article, I introduce Fedora CoreOS and explain why it is so important to developers and DevOps professionals. Throughout the rest of this series, I will dive into the details of setting up, using, and managing Fedora CoreOS.

Continue reading “How to run containerized workloads securely and at scale with Fedora CoreOS”

Red Hat simplifies container development and redistribution of Red Hat Enterprise Linux packages

Application developers in the Red Hat Partner Connect program can now build their container apps from the full set of Red Hat Enterprise Linux (RHEL) user space (non-kernel) packages and redistribute them. This nearly triples the number of packages available compared to UBI alone.

When we introduced Red Hat Universal Base Images (UBI) in May 2019, we provided Red Hat partners the ability to freely use and redistribute a substantial number of RHEL packages that can be deployed on both Red Hat and non-Red Hat platforms. This gave developers the ability to build safe, secure, and portable container-based software that could then be deployed anywhere. The feedback has been overwhelmingly positive, and we thank you for it. We also learned that you needed more, so we’re sharing this advance preview with Red Hat Partner Connect members to help you with your planning.

Expanded and exclusive redistribution rights for Red Hat Technology Partners

We are pleased to announce expanded partner terms and conditions that grant Red Hat Technology Partners free use and redistribution rights to all Red Hat Enterprise Linux user space packages when you build upon UBI-based images. With more than triple the number of RHEL packages now available, you can simplify your container and Operator development and freely redistribute your container-based software through both Red Hat and non-Red Hat registries. This benefit is available only to Red Hat partners who participate in and complete Red Hat Container Certification.

Continue reading “Red Hat simplifies container development and redistribution of Red Hat Enterprise Linux packages”

Using secrets in Kafka Connect configuration

Kafka Connect is an integration framework that is part of the Apache Kafka project. On Kubernetes and Red Hat OpenShift, you can deploy Kafka Connect using the Strimzi and Red Hat AMQ Streams Operators. Kafka Connect lets users run sink and source connectors. Source connectors are used to load data from an external system into Kafka. Sink connectors work the other way around and let you load data from Kafka into another external system. In most cases, the connectors need to authenticate when connecting to the other systems, so you will need to provide credentials as part of the connector’s configuration. This article shows you how you can use Kubernetes secrets to store the credentials and then use them in the connector’s configuration.
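
As a hedged sketch of the pattern (not the article’s exact steps), the snippet below creates a Kubernetes Secret holding a credentials file, plus a Strimzi KafkaConnector whose configuration references that file through the FileConfigProvider placeholder syntax. All names are hypothetical, and the Secret must also be mounted into the Connect pods (via the KafkaConnect resource’s externalConfiguration) with the file config provider enabled for the ${file:...} references to resolve.

```python
# Sketch: keep connector credentials in a Secret and reference them from
# a Strimzi KafkaConnector instead of embedding them in plain text.
# Namespace, secret, connector, and cluster names are hypothetical.
from kubernetes import client, config

config.load_kube_config()

# 1) A Secret holding a properties file with the credentials.
client.CoreV1Api().create_namespaced_secret(
    namespace="kafka",
    body=client.V1Secret(
        metadata=client.V1ObjectMeta(name="db-credentials"),
        string_data={
            "connector.properties": "dbUsername=app\ndbPassword=s3cret\n",
        },
    ),
)

# 2) A KafkaConnector whose config points at the mounted file. Strimzi
#    mounts externalConfiguration volumes under
#    /opt/kafka/external-configuration/<volume-name>/.
creds = "/opt/kafka/external-configuration/db-credentials/connector.properties"
connector = {
    "apiVersion": "kafka.strimzi.io/v1alpha1",
    "kind": "KafkaConnector",
    "metadata": {
        "name": "my-source-connector",
        "namespace": "kafka",
        "labels": {"strimzi.io/cluster": "my-connect"},
    },
    "spec": {
        "class": "org.example.MySourceConnector",  # placeholder class
        "tasksMax": 1,
        "config": {
            "connection.user": "${file:" + creds + ":dbUsername}",
            "connection.password": "${file:" + creds + ":dbPassword}",
        },
    },
}
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kafka.strimzi.io", version="v1alpha1", namespace="kafka",
    plural="kafkaconnectors", body=connector,
)
```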

Continue reading “Using secrets in Kafka Connect configuration”

Installing Kubeflow v0.7 on OpenShift 4.2

As part of the Open Data Hub project, we see potential and value in the Kubeflow project, so we dedicated our efforts to enabling Kubeflow on Red Hat OpenShift. We decided to use Kubeflow 0.7, as that was the latest released version at the time this work began. The work included adding new installation scripts that provide all of the necessary changes, such as permissions for service accounts, to run on OpenShift.
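
To picture the kind of change such scripts make, here is an illustrative, hedged sketch that adds a service account to an OpenShift SecurityContextConstraints users list with the Kubernetes Python client. The SCC, namespace, and service account names are assumptions, not the project’s actual scripts; the usual CLI equivalent is oc adm policy add-scc-to-user.

```python
# Sketch: add a service account to an OpenShift SCC's users list, the
# sort of permission tweak components need to run on OpenShift.
# SCC, namespace, and service-account names are illustrative only.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# SCCs are cluster-scoped objects in the security.openshift.io/v1 group.
scc = api.get_cluster_custom_object(
    group="security.openshift.io", version="v1",
    plural="securitycontextconstraints", name="anyuid",
)

user = "system:serviceaccount:kubeflow:example-sa"  # hypothetical SA
users = scc.get("users") or []
if user not in users:
    users.append(user)
    scc["users"] = users
    api.replace_cluster_custom_object(
        group="security.openshift.io", version="v1",
        plural="securitycontextconstraints", name="anyuid", body=scc,
    )
```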

Continue reading “Installing Kubeflow v0.7 on OpenShift 4.2”

Red Hat OpenShift 4.2 IPI on OpenStack 13: All-in-one setup

Months ago, a customer asked me about Red Hat OpenShift on OpenStack, especially regarding the network configuration options available in OpenShift at the node level. To give them an answer and increase my confidence on $topic, I considered how to test this scenario.

At the same time, the Italian solution architect “Top Gun Team” was in charge of preparing speeches and demos for the Italian Red Hat Forum (also known as Open Source Day) in Rome and Milan. Brainstorming led me to start my journey toward testing an OpenShift 4.2 setup on OpenStack 13, both to reply to the customer and to leverage this effort to build a demo video for the Red Hat Forum.

Continue reading “Red Hat OpenShift 4.2 IPI on OpenStack 13: All-in-one setup”

Customizing OpenShift project creation

I recently attended an excellent training run by Red Hat’s Global Partner Enablement Team on advanced Red Hat OpenShift management. One of the most interesting elements of the training was how to customize default project creation. This article explains how to use OpenShift’s projectRequestTemplate to add default controls for the resources that a project is allowed to consume.
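
As a small, hedged illustration of the OpenShift 4 flavor of this mechanism: the cluster-scoped Project configuration can point at a custom Template in the openshift-config namespace. The template name below is an assumption and the Template must already exist; the article may instead cover different steps or an OpenShift 3-style configuration.

```python
# Sketch: point OpenShift 4's cluster Project configuration at a custom
# project request template. Assumes a Template named "project-request"
# already exists in the openshift-config namespace.
from kubernetes import client, config

config.load_kube_config()

client.CustomObjectsApi().patch_cluster_custom_object(
    group="config.openshift.io", version="v1",
    plural="projects", name="cluster",
    body={"spec": {"projectRequestTemplate": {"name": "project-request"}}},
)
```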

Continue reading “Customizing OpenShift project creation”
