The history of container-optimized operating systems is short but filled with a variety of proposals with varying degrees of success. Along with CoreOS Container Linux, Red Hat sponsored the Project Atomic community, which today is the umbrella for many projects, from Fedora/CentOS/Red Hat Enterprise Linux Atomic Host to container tools (Buildah, skopeo, and others) and Fedora Silverblue, an immutable OS for the desktop (more on the term "immutable" in the next sections).
When Red Hat acquired the San Francisco-based company CoreOS in January 2018, new perspectives opened. Red Hat Enterprise Linux CoreOS (RHCOS) was one of the first products of this merger, becoming the base operating system in OpenShift 4. Since Red Hat is focused on open source software and always strives to create and feed upstream communities, the Fedora ecosystem was the natural home for the RHCOS-related upstream, Fedora CoreOS. Fedora CoreOS is based on the best parts of CoreOS Container Linux and Atomic Host, merging features and tools from both.
In this first article, I introduce Fedora CoreOS and explain why it is so important to developers and DevOps professionals. Throughout the rest of this series, I will dive into the details of setting up, using, and managing Fedora CoreOS.
Fedora CoreOS
Fedora CoreOS is a minimal operating system designed for running containerized workloads securely and at scale (and so is Red Hat CoreOS). For this reason, the operating system layer is kept as small as possible, and the file system is managed atomically as an immutable image. These features provide a reliable foundation for running containers.
In Fedora CoreOS, we can run our applications as containers and we can also (optionally) install extra packages with the rpm-ostree tool, which layers changes on top of the base image atomically, similar to how we use a Git commit to finalize the code we wrote or updated. As with Git, this behavior helps us track the changes to the file system.
Note: The rpm-ostree tool is based on the libostree and libdnf libraries. It combines the best features of the image-based and package-based approaches to managing and upgrading a machine.
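To make the layering workflow concrete, here is a minimal sketch of how package layering looks on a Fedora CoreOS host (htop is just an example package; any RPM available in the configured repositories works the same way):

# Inspect the booted deployment and any staged ones
$ rpm-ostree status

# Layer an extra package on top of the base image
$ sudo rpm-ostree install htop

# The change is staged as a new deployment; reboot to switch to it atomically
$ sudo systemctl reboot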
What is "immutable" and why is it important?
In everyday work, we usually run containers on top of standard Linux systems, so what is the advantage of using an immutable system for our containerized applications? I strongly believe that the future of systems management (not only in cloud environments) points toward an immutable infrastructure managed with a declarative approach. But wait, what exactly is an immutable infrastructure?
An immutable infrastructure relies on its components not being changed after creation. If a change must be applied (like an upgrade), the whole component is replaced with its new, modified version. Consider an instance running a web server. If we need to change its configuration or add/upgrade/remove modules and are following the immutable approach, we don’t modify the running instance. Instead, we deploy a new instance with the desired changes and destroy the old version.
Managing systems manually (or with poorly written automation) leads to the risk of configuration drift. To avoid it, we need systems where changes are managed atomically. In computer science, an atomic commit is an operation that applies a set of distinct changes as a single operation. Immutable systems are the natural extension of this idea, giving us an atomically managed system that applies all changes (upgrades, new packages, and so on) in a single atomic operation layered on top of the base file system. This practice produces systems that are more predictable and reliable.
Some find the term "immutable" strange, fearing that it could weaken control and ownership of the system. The immutability here concerns the way machine configurations are applied, and the atomic approach defines how file system changes are managed, favoring a Git-like, layered model. An example implementation of this atomic behavior is the libostree library, which is the foundation of the systems we will describe.
Libostree is a library and a set of tools that together provide a Git-like model for committing and downloading bootable file system trees. Libostree (or simply ostree) creates layers for managing /etc, user files, and boot loader configurations on top of the immutable system, which is atomically managed as a whole tree. So, we can run custom workloads on top of these minimal base images by layering our customizations on top of them.
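To see this Git-like model in action, here are a few ostree commands you might run on a Fedora CoreOS host to inspect the deployed trees; the ref name in the last command is only an example, so use one of the refs printed on your own machine:

# List the deployments (bootable file system trees) present on this host
$ sudo ostree admin status

# List the refs known to the local OSTree repository
$ sudo ostree refs

# Show the commit history of a ref (example ref; pick one from the previous output)
$ sudo ostree log fedora:fedora/x86_64/coreos/stable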
Doesn't this process sound similar to containers?
Besides the similarities between container images and system images, the main advantage of running containers on top of an immutable system is having a more stable, standardized, and drift-free system. This provides predictable behavior when running workloads on top of dozens or hundreds of nodes in an orchestrated environment, and it reduces the need for manual intervention along with the mistakes that can come with it.
Fedora CoreOS, Red Hat CoreOS, and OpenShift 4
The predictable and reliable nature of atomically-managed systems is a perfect scenario for systems automation that follows an immutable approach. Infrastructure-as-Code projects like Terraform take advantage of this management workflow. We can also use Red Hat Ansible to build and deploy immutable systems starting from a minimal base image.
OpenShift 4 brings a new level of intelligent automation to the process with OpenShift Operators. Operators take on the burden of managing, upgrading, and configuring systems following a NoOps approach, letting DevOps professionals focus on application delivery. In OpenShift 4, the Machine Config Operator (MCO) has a fundamental role: it manages machine configurations and updates within the cluster. The MCO starts the Machine Config Daemon (MCD) on every RHCOS node as a DaemonSet. The MCD retrieves updated configurations (MachineConfig resources) and acts to align each machine's current state with the desired state.
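To make this more concrete, here is a hypothetical MachineConfig resource of the kind the MCO renders and the MCD applies on each node. The name, target file, and contents are invented for this sketch, and the Ignition spec version inside spec.config must match what your OpenShift release expects:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example-file
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
        - path: /etc/example-motd
          mode: 0644
          contents:
            # base64-encoded "Hello from MCO" plus a newline
            source: data:text/plain;charset=utf-8;base64,SGVsbG8gZnJvbSBNQ08K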
Since Fedora CoreOS is the default operating system for OKD 4, OpenShift 4's community upstream, learning how Fedora CoreOS works is a great help for understanding how nodes are managed inside of an OpenShift cluster.
Getting started with Fedora CoreOS
The rest of the series will use the latest Fedora CoreOS QEMU image as an example for installation, configuration, and management. You can download this image here, from the Bare Metal & Virtualized tab. These are basic images that can be configured at first boot using Ignition configs, a boot-time configuration format inherited from CoreOS Container Linux.
As you can see, QEMU is not your only option. You can find other images on the Download Fedora CoreOS page, in the following categories:
- Bare metal: ISO, PXE kernel and initramfs, and raw
- Cloud-launchable: AMIs for AWS for different regions worldwide
- For cloud operators: Alibaba Cloud and OpenStack qcow2, AWS vmdk, Azure vhd, and GCP
- Virtualized: OpenStack and QEMU qcow2 and VMware ova
Note: If you are not familiar with QEMU, check out Configure and run a QEMU-based VM outside of libvirt with virt-manager.
Now, let us walk through the process of installing and initially configuring Fedora CoreOS, running a test container, updating its configuration, and testing the new instance.
Creating an Ignition config with fcct
The underlying technology for Ignition configs is the Ignition project, a low-level system configuration utility that is executed during boot in the machine's initramfs. In this early boot stage, Ignition applies all of the configurations defined in the Ignition config file before the system pivots to the persistent root file system. In this article, we walk through preparing a basic Ignition config file and then booting an FCOS instance on a Linux box using libvirt and QEMU/KVM. With slight adaptations, these examples also apply to cloud instances.
Ignition configs are standard JSON files that are not pretty-printed, so they can be long and hard to read or modify. FCOS offers a friendlier format called Fedora CoreOS Configuration (FCC), a YAML-formatted config file that is easier to read and write. To generate Ignition files from an FCC, we use the Fedora CoreOS Configuration Transpiler (FCCT) tool, fcct.
This tool is easy to use, but we first need to create an FCC file. For the sake of this article, here is a simple example (example-fcc.yaml) that sets a public SSH key for the user core, the default cloud user in FCOS:
variant: fcos
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-rsa AAAAB3NzaC1yc...
In this example, the SSH public key is intentionally truncated. After writing the FCC file, we need to translate it into an Ignition file. Download the latest release of fcct and install it locally (/usr/local/bin is a good choice for compiled or user-provided binaries).
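For reference, the installation can look like the following sketch; the version number and asset name are examples only, so check the fcct releases page on GitHub for the current ones:

# Download a fcct release binary (version and file name shown here are examples)
$ curl -LO https://github.com/coreos/fcct/releases/download/v0.4.0/fcct-x86_64-unknown-linux-gnu

# Make it executable and move it into /usr/local/bin as fcct
$ chmod +x fcct-x86_64-unknown-linux-gnu
$ sudo mv fcct-x86_64-unknown-linux-gnu /usr/local/bin/fcct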
Now, run the following command to transform the FCC file into an Ignition config file:
$ fcct -input example-fcc.yaml -output example-ignition.ign
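As a quick sanity check (assuming the jq utility is installed on your workstation), you can pretty-print the generated JSON and confirm that the SSH key ended up under the core user:

# Pretty-print the generated Ignition config
$ jq . example-ignition.ign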
Booting Fedora CoreOS
If you have not downloaded the QEMU image yet, do so now (see the instructions earlier in this article). Once you have the image, start it with the virt-install command:
$ sudo virt-install --connect qemu:///system \
    -n fcos -r 2048 --os-variant=fedora31 --import \
    --graphics=none \
    --disk size=10,backing_store=/path/to/fedora-coreos-31.20200118.3.0-qemu.x86_64.qcow2 \
    --qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=/path/to/example-ignition.ign"
Be sure to replace the placeholder paths with your own file locations.
In the above command, I used the --qemu-commandline option to pass a command-line argument to QEMU that defines the Ignition file to use at boot.
Note: The virt-install command is a great tool for spinning up local virtual machines from the command line. If we need a graphical alternative to monitor and manage virtual machines, we can use the virt-manager tool in GNOME and configure the VM manually.
If we are running a Fedora/CentOS/Red Hat Enterprise Linux system on our laptop and SELinux is enabled (as it should be), SELinux will block the creation of the instance because the qemu-kvm process tries to access files in a directory without the virt_image_t context. To solve this issue, we have two options: put SELinux in permissive mode or relabel the directory containing the Ignition files.
To enable permissive mode:
$ sudo setenforce 0
$ sudo sed -i '/^SELINUX=/ s/enforcing/permissive/g' /etc/selinux/config
Alternatively, to change the file context:
$ sudo semanage fcontext -a -t virt_image_t '/path/to/ignitions(/.*)?'
$ sudo restorecon -Rvv /path/to/ignitions
Both options are interchangeable for the sake of our lab. Now, let's boot the instance. At the end of the fast boot process, we should see output like this:
Fedora CoreOS 31.20200118.3.0
Kernel 5.4.10-200.fc31.x86_64 on an x86_64 (ttyS0)

SSH host key: SHA256:0VrCMwoOmSiU9UNBT/HFzJAPRJFcaR9WE/wpCd3lt2I (ECDSA)
SSH host key: SHA256:YAvgZLN6Wiuo+upzRmcDQ2gIOrJHVSHbiITWhrTRhZo (ED25519)
SSH host key: SHA256:oxT9DOFu+QuOE4jyIJecTdElBvqREllfnCGFYNpIzu4 (RSA)
eth0: 192.168.122.209 fe80::1300:f07a:26f4:2fb2

localhost login:
Logging in for the first time
Along with the kernel version, the OS version, and the SSH host keys, we can see the IPv4 address assigned to the Ethernet interface and a link-local IPv6 address. Now, let's SSH into the instance using the IPv4 address:
$ ssh -i /path/to/private_key core@192.168.122.209
Fedora CoreOS 31.20200118.3.0
Tracker: https://github.com/coreos/fedora-coreos-tracker
Last login: Thu Feb 6 21:50:26 2020 from 192.168.122.1
[core@localhost ~]$
Success! We have logged into our Fedora CoreOS machine. The login with SSH keys succeeded because the Ignition file passed at boot was correctly applied to the core user's SSH authorized_keys file. Inspect the modified file to see the injected public key:
[core@localhost ~]$ cat /home/core/.ssh/authorized_keys.d/ignition
Running a test container
Fedora CoreOS comes with the most commonly used container management tools already installed. Podman is the default container engine. Along with Podman, Skopeo and Docker are also installed, with the Docker daemon disabled by default. I personally prefer Podman because of its daemonless nature, relegating Docker to those scenarios where communication with its Unix socket is mandatory.
Let’s run a simple container with Podman:
[core@localhost ~]$ podman run -d -p 8080:80 docker.io/library/nginx
In this example, the container port 80/tcp is mapped to the host port 8080/tcp so that NGINX can serve requests from outside the host.
We can check the status with the podman ps command:

[core@localhost ~]$ podman ps
CONTAINER ID  IMAGE                           COMMAND               CREATED         STATUS             PORTS                 NAMES
0abc1a48f176  docker.io/library/nginx:latest  nginx -g daemon o...  11 seconds ago  Up 11 seconds ago  0.0.0.0:8080->80/tcp  dazzling_jackson
The NGINX server is now up and running and, most importantly, it is running as a rootless container in Fedora CoreOS. This result has a great impact from a security perspective because it means that the container uses the UID and GID mappings provided by Linux user namespaces.
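To confirm that the server answers on the mapped port, we can send a request from inside the VM; curl ships in the Fedora CoreOS base image, and a 200 OK response header confirms that the container is serving traffic:

# Request the NGINX welcome page headers through the mapped host port
[core@localhost ~]$ curl -I http://localhost:8080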
Rootless containers are a very important feature that Podman implemented in the early stages of the project. For an in-depth analysis, start with the rootless containers manifesto, and be sure to check out this article written by Dan Walsh on Opensource.com.
Conclusion
You now have a test container running in Fedora CoreOS. In the next article in this series, we will dig past the installation and setup and focus on customization and management. Let us know if you were inspired to experiment with Fedora CoreOS or Red Hat CoreOS, and how it went!