DevOps

Ceph storage monitoring with Zabbix

Storage prices are decreasing while business demands grow, and companies are storing more data than ever before. This growth drives demand for monitoring and data protection in software-defined storage. Downtime has a high cost that can directly impact business continuity and cause irreversible damage to an organization: loss of assets and information; interruption of services and operations; violations of laws, regulations, or contracts; and the financial impact of losing customers and damaging the company’s reputation.

Gartner estimates that a minute of downtime costs enterprise organizations $5,600, and an hour costs over $300,000.

Against that backdrop, in a DevOps context it’s essential to think about continuous monitoring: a proactive approach that monitors the entire life cycle of an application and its components. This approach helps identify the root cause of potential problems so that you can prevent performance issues or outages before they happen. In this article, you will learn how to implement Ceph storage monitoring using the enterprise open source tool Zabbix.
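As a preview of the approach, here is a minimal sketch using Ceph’s built-in zabbix manager module (the hostname and identifier below are placeholders; the full article covers the Zabbix side as well):

```bash
# Enable the Zabbix module in the Ceph manager daemon (ceph-mgr)
ceph mgr module enable zabbix

# Tell the module where to send data; these values are placeholders
ceph zabbix config-set zabbix_host zabbix.example.com
ceph zabbix config-set identifier ceph-cluster

# Review the configuration and trigger an immediate send to verify
ceph zabbix config-show
ceph zabbix send
```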

Continue reading “Ceph storage monitoring with Zabbix”

How to customize Fedora CoreOS for dedicated workloads with OSTree

In part one of this series, I introduced Fedora CoreOS (and Red Hat CoreOS) and explained why its immutable and atomic nature is important for running containers. I then walked you through getting Fedora CoreOS, creating an Ignition file, booting Fedora CoreOS, logging in, and running a test container. In this article, I will walk you through customizing Fedora CoreOS and making use of its immutable and atomic nature.
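To give a flavor of that customization, here is a minimal sketch of OSTree-style package layering with rpm-ostree (htop is just an example package; the article covers the details and caveats):

```bash
# Inspect the current (immutable) deployment and any layered packages
rpm-ostree status

# Layer an additional package on top of the base image
sudo rpm-ostree install htop

# The change is staged atomically and takes effect on the next boot
sudo systemctl reboot

# If anything goes wrong, the previous deployment is one command away:
# sudo rpm-ostree rollback
```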

Continue reading “How to customize Fedora CoreOS for dedicated workloads with OSTree”

How to run containerized workloads securely and at scale with Fedora CoreOS

The history of container-optimized operating systems is short but filled with a variety of proposals that met different degrees of success. Along with CoreOS Container Linux, Red Hat sponsored the Project Atomic community, which today is the umbrella holding many projects, from Fedora/CentOS/Red Hat Enterprise Linux Atomic Host to container tools (Buildah, skopeo, and others) and Fedora Silverblue, an immutable OS for the desktop (more on the term “immutable” in the next sections).

When Red Hat acquired the San Francisco-based company CoreOS in January 2018, new perspectives opened. Red Hat Enterprise Linux CoreOS (RHCOS) was one of the first products of this merger, becoming the base operating system of OpenShift 4. Since Red Hat is focused on open source software, always striving to create and feed upstream communities, the Fedora ecosystem was the natural home for the RHCOS-related upstream: Fedora CoreOS. Fedora CoreOS is based on the best parts of CoreOS Container Linux and Atomic Host, merging features and tools from both.

In this first article, I introduce Fedora CoreOS and explain why it is so important to developers and DevOps professionals. Throughout the rest of this series, I will dive into the details of setting up, using, and managing Fedora CoreOS.
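As a hint of what’s ahead, provisioning starts from a declarative config that is transpiled into an Ignition file. A minimal sketch (the user and key are placeholders, and the exact transpiler and spec version may differ from what the article uses):

```yaml
# example.bu — transpile with: butane example.bu -o example.ign
# (butane was formerly known as fcct, the Fedora CoreOS Config Transpiler)
variant: fcos
version: 1.4.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAAC3Nza... # your public key here
```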

Continue reading “How to run containerized workloads securely and at scale with Fedora CoreOS”

Configure and run a QEMU-based VM outside of libvirt with virt-manager

I recently needed to run a virtual machine (VM) created using virt-manager outside of libvirt. I was investigating an issue that required running QEMU with the machine option dump-guest-core=on. By default, libvirt runs with that option off, so I decided to set up a standalone QEMU environment. I found the process of configuring the test VM and writing the boot script more involved than expected, so I decided to document the steps I took.

I hope this article makes it easier for you to configure and run your own QEMU-based VM for similar investigations. Note that I do not recommend the approach described here for a VM running in production (at least, not without backup).
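For orientation, the boot script for such a standalone VM might look roughly like this (a sketch only; the disk path, memory, and CPU count are placeholders, and the article covers deriving the real options from your libvirt domain):

```bash
#!/bin/sh
# Boot a standalone QEMU VM with guest core dumps enabled
qemu-system-x86_64 \
    -machine q35,accel=kvm,dump-guest-core=on \
    -cpu host -smp 2 -m 4096 \
    -drive file=/var/lib/images/testvm.qcow2,if=virtio,format=qcow2 \
    -nic user,model=virtio-net-pci \
    -display none -serial mon:stdio
```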

Continue reading “Configure and run a QEMU-based VM outside of libvirt with virt-manager”

OpenShift Actions: Deploy to Red Hat OpenShift directly from your GitHub repository

Here is a common situation: You write your code, everything is on GitHub, and you are ready to publish it. But you know that your job is not finished yet: you still need to deploy your code, and that process can be a nightmare at times.

Ideally, you should be able to do this whole process all in one place, but until now, you always had to set up external services and integrate them with GitHub (or add post-commit hooks). What if, instead, you could replace all of these extras and run everything directly from your GitHub repository with just a few YAML lines? Well, this is exactly what GitHub Actions are for.
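As a rough sketch of the idea (the secret names, manifest path, and client download step are assumptions for illustration, not the article’s exact workflow):

```yaml
# .github/workflows/deploy.yml
name: Deploy to OpenShift
on:
  push:
    branches: [ master ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install the oc client
        run: |
          curl -sL https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz \
            | sudo tar xz -C /usr/local/bin oc
      - name: Log in and deploy
        run: |
          oc login "${{ secrets.OPENSHIFT_SERVER_URL }}" --token="${{ secrets.OPENSHIFT_TOKEN }}"
          oc apply -f k8s/
```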

Continue reading “OpenShift Actions: Deploy to Red Hat OpenShift directly from your GitHub repository”

Red Hat OpenShift 4.2 IPI on OpenStack 13: All-in-one setup

Months ago, a customer asked me about Red Hat OpenShift on OpenStack, especially regarding the network configuration options available in OpenShift at the node level. To give them an answer and increase my own confidence in the topic, I considered how to test this scenario.

At the same time, the Italian solution architect “Top Gun Team” was in charge of preparing talks and demos for the Italian Red Hat Forum (also known as Open Source Day) in Rome and Milan. Brainstorming led me to test an OpenShift 4.2 setup on OpenStack 13, both to answer the customer and to turn that effort into a demo video for the forum.
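At its core, the IPI (installer-provisioned infrastructure) flow boils down to two commands (a sketch; the directory name is arbitrary, and the article covers the OpenStack-specific answers and the all-in-one tuning):

```bash
# Answer the interactive prompts, choosing the openstack platform
openshift-install create install-config --dir=ocp-aio

# Let the installer provision and configure the whole cluster
openshift-install create cluster --dir=ocp-aio --log-level=info
```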

Continue reading “Red Hat OpenShift 4.2 IPI on OpenStack 13: All-in-one setup”

Customizing OpenShift project creation

I recently attended an excellent training run by Red Hat’s Global Partner Enablement Team on advanced Red Hat OpenShift management. One of the most interesting elements of the training was how to customize default project creation. This article explains how to use OpenShift’s projectRequestTemplate to add default controls for the resources that a project is allowed to consume.
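In outline, the OpenShift 4 flow looks like this (a sketch assuming cluster-admin access; the quota and limit objects you add are your own):

```bash
# Emit the default bootstrap project template for editing
oc adm create-bootstrap-project-template -o yaml > template.yaml

# Edit template.yaml to append ResourceQuota/LimitRange objects,
# then register it in the openshift-config namespace
oc create -f template.yaml -n openshift-config

# Finally, point the cluster's project configuration at it:
#   oc edit projects.config.openshift.io cluster
#   ...and set spec.projectRequestTemplate.name: project-request
```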

Continue reading “Customizing OpenShift project creation”

How to use third-party APIs in Operator SDK projects

The Operator Framework is an open source toolkit for managing Kubernetes-native applications. The framework and its features simplify complex tasks such as installing, configuring, managing, and packaging applications on Kubernetes and Red Hat OpenShift. In this article, we show how to use third-party APIs in Operator SDK projects.

In projects built with the Operator SDK, only the Kubernetes API schemas are added by default. However, you might need to create, read, update, or delete a resource that belongs to another API, even one that you created yourself in another Operator project.

Let’s check out an example scenario: creating a Route resource from the OpenShift API in an Operator SDK project.
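The core of the technique is registering the third-party types with the manager’s scheme. A minimal sketch in Go, assuming the github.com/openshift/api module (the article walks through the full wiring):

```go
package main

import (
	routev1 "github.com/openshift/api/route/v1"
	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)

var scheme = runtime.NewScheme()

func init() {
	// Register the standard Kubernetes types first...
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	// ...then add the OpenShift Route types so the controller-runtime
	// client can create, read, update, and delete Route objects.
	utilruntime.Must(routev1.AddToScheme(scheme))
}

func main() {
	// Pass the scheme to the controller-runtime manager as usual,
	// e.g. ctrl.NewManager(cfg, ctrl.Options{Scheme: scheme}).
}
```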

Continue reading “How to use third-party APIs in Operator SDK projects”

Vault IDs in Red Hat Ansible and Red Hat Ansible Tower

This article demonstrates the use of multiple vault passwords through vault IDs. You will learn how to use vault IDs to encrypt a file and a string. Once they’re encrypted, the vault ID can be referenced inside a playbook and used within Red Hat Ansible and Red Hat Ansible Tower.

Vault IDs are supported in Ansible 2.4 and later

Vault IDs let you encrypt different files with different passwords and reference them inside a playbook. Before Ansible 2.4, only one vault password could be used per Ansible playbook; in effect, every file needed to be encrypted with the same vault password.

To begin with, vault IDs need to be created and referenced inside your ansible.cfg file, through the DEFAULT_VAULT_IDENTITY_LIST configuration (you can inspect it with ansible-config list).
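A minimal sketch of what such an entry might look like (the label names and password-file paths are placeholders):

```ini
# ansible.cfg
[defaults]
vault_identity_list = dev@~/.vault_pass_dev.txt, prod@~/.vault_pass_prod.txt
```

Each label can then be used on the command line, for example: ansible-vault encrypt --vault-id dev@~/.vault_pass_dev.txt group_vars/dev.yml.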

Continue reading “Vault IDs in Red Hat Ansible and Red Hat Ansible Tower”

Using Kubernetes ConfigMaps to define your Quarkus application’s properties

So, you wrote your Quarkus application, and now you want to deploy it to a Kubernetes cluster. Good news: Deploying a Quarkus application to a Kubernetes cluster is easy. Before you do this, though, you need to straighten out your application’s properties. After all, your app probably has to connect with a database, call other services, and so on. These settings are already defined in your application.properties file, but the values match the ones for your local environment and won’t work once deployed onto your cluster.

So, how do you easily solve this problem? Let’s walk through an example.
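To preview the idea, here is a sketch assuming the quarkus-kubernetes-config extension (the ConfigMap name is a placeholder):

```properties
# application.properties: read runtime config from a ConfigMap in the cluster
quarkus.kubernetes-config.enabled=true
quarkus.kubernetes-config.config-maps=my-app-config
```

The ConfigMap itself can be created from a properties file, for example: kubectl create configmap my-app-config --from-file=application.properties.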

Continue reading “Using Kubernetes ConfigMaps to define your Quarkus application’s properties”
