Imagine taking any virtual machine (VM) or cloud VM and running a single command to replace the operating system (OS) with an OCI-containerized image after a simple reboot. Then imagine managing OS updates in the same manner as you do for all of your containerized applications.
Steps to WOW
- Launch a VM or cloud image—Amazon, Microsoft, Google, it does not matter.
Make sure you have Podman installed:
$ sudo dnf -y install podman
or
$ sudo apt-get -y install podman
or
$ sudo yum -y install podman
Run the centos-bootc container with the following command:
$ sudo podman run -ti --rm --privileged \
    -v /:/target --pid=host \
    -v /var/lib/containers:/var/lib/containers \
    -v /dev:/dev \
    --security-opt label=type:unconfined_t \
    quay.io/centos-bootc/centos-bootc-cloud:stream9 \
    bootc install to-existing-root /target
$ sudo systemctl reboot
When the system reboots, ssh into it; you'll see the original OS has been replaced with the OS defined in the centos-bootc-cloud image. The rest of this article will explain how to build a containerized bootable OS to run AI models of your own.
Image mode for Red Hat Enterprise Linux
Image mode for Red Hat Enterprise Linux (RHEL) uses the same tools, skills, and patterns as containerized applications to deliver an operating system that is easy to build, ship, and run. This article will cover the concepts behind image mode and help introduce foundational concepts required to package operating systems in Open Container Initiative (OCI) container images.
I will also introduce artificial intelligence (AI) concepts, models, and application recipes to allow users to explore how to package AI tools as containers and prepare them for installation as a bootable container image.
This article is appropriate for developers, system administrators, and data scientists interested in building and packaging AI tools. You'll learn the concepts behind RHEL image mode, and we'll get hands-on to build and deploy a custom image.
Requirements
- Run all commands on a subscribed RHEL 9.x system (a laptop, VM, etc. will work) with a minimum of 40 GB of available disk space. (AI uses a lot of disk space.) Keep in mind that more disk space might be required, depending on the size and quantity of images being created. When using a VM, you might need to set up copy/paste between environments.
- A Red Hat account with either production or developer subscriptions. No-cost developer subscriptions are available via the Red Hat Developer program.
- A container registry. This tutorial uses quay.io as the registry that content is published to. However, you can use another hosted registry service or run a registry locally. If you do not have a quay.io account, you can create one quickly and easily at quay.io.
Set up the container repository
You will need to have an account at a container image registry like quay.io or docker.io. Once you have this, it is recommended that you set up the environment variable CONTAINER_REPOSITORY to point at your repository. For example, my repository is quay.io/rhatdan, so I would set export CONTAINER_REPOSITORY=quay.io/rhatdan. Substitute YOUR_REPOSITORY in the following snippet with your actual repository. We will refer to ${CONTAINER_REPOSITORY} throughout the rest of the document.
$ export CONTAINER_REPOSITORY=YOUR_REPOSITORY
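The variable is set only for the current shell. If you open new terminals later in this tutorial, either re-export it or persist it, as in this quick sketch (assuming a bash login shell; substitute your own repository):
$ echo "export CONTAINER_REPOSITORY=quay.io/rhatdan" >> ~/.bashrc
$ source ~/.bashrc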
Install packages
Start by installing Podman and Git. You will use Git to check out the AI Lab Recipes repo. Use the latest Podman version available, but anything v4.* or newer will work. Note that other container tools, such as Docker or container pipeline tooling, can work in a production environment. This article will explain the concepts using Podman, but just remember other tools might be more relevant to your environment.
Discussions, issues and pull requests are welcome in the upstream https://github.com/containers/ai-lab-recipes repository. This project is in tech preview for RHEL.
To install packages on a Linux box:
$ sudo dnf -y install podman git subscription-manager make
To install packages on Mac or Windows platforms:
On Mac or Windows, you need to configure Podman Desktop and a Podman machine. Start the installation process from https://podman-desktop.io/.
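If you prefer the command line, you can create the Podman machine directly. A minimal sketch (Podman Desktop performs the equivalent steps from its UI; the machine should be rootful for the bootc builds later in this article):
$ podman machine init --rootful
$ podman machine start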
Set up a subscription for RHEL
Next, we'll need to authenticate to registry.redhat.io. If you do not have a Red Hat account, visit https://access.redhat.com/terms-based-registry and click New service account. From the New Service Account page, click the name of the new entry and copy/paste the “docker login” instructions into the terminal, replacing the docker command with podman. Full instructions are available in the Red Hat documentation if more information is needed.
Now that you have an account, ensure that you are subscribed to get RHEL content. You must run this command as root, even if you are using a Podman machine.
On Linux:
$ sudo subscription-manager register
Using Podman Desktop:
On a Mac or Windows, we recommend using the Podman Desktop Red Hat Account extension, which will register the Podman machine. To install the extension, go to the Extensions menu and select the Red Hat Account extension for installation. Once installed, you can sign in to your Red Hat account via the Authentication menu.
Finally, to access the rhel-bootc image, you need to log in to the registry.
Via command line:
$ podman login registry.redhat.io
Username: <USERNAME>
Password: <PASSWORD>
Using Podman Desktop:
When using the Podman Desktop Red Hat Account Extension, there's no need to log into this registry; it's done automatically when signing into the Red Hat account.
Using bootc container images
bootc container images differ technically from application containers in two important ways:
- bootc images use OSTree inside the container.
- A bootc image has a kernel, systemd, and enough other packages to boot a physical or virtual machine—application container base images typically contain a minimal set of packages that are unrelated to hardware management.
And, unlike the Red Hat Universal Base Image (UBI), RHEL bootc images are distributed under the same licensing terms as RHEL.
Let’s pull the rhel-bootc base image.
Via command line:
$ podman pull registry.redhat.io/rhel9/rhel-bootc:9.4
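To see the second difference for yourself, you can list the kernel the bootc image ships—an application base image such as ubi9 has nothing under /usr/lib/modules. A quick sketch (the kernel version directory you see will vary):
$ podman run --rm registry.redhat.io/rhel9/rhel-bootc:9.4 ls /usr/lib/modules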
Clone AI Lab recipes
As a next step, git clone the github.com/containers/ai-lab-recipes GitHub repo to grab the example AI applications. This repo is the same one used in the Podman Desktop AI Lab.
At the command line:
$ git clone https://github.com/containers/ai-lab-recipes 2>/dev/null || (cd ai-lab-recipes; git pull origin main)
Inside Podman Desktop, this happens automatically when you start using the AI plug-in.
Building containerized applications
After git cloning the repository, you can start building the containerized applications used in the example. The first containerized application is the AI model you plan on using. Suggested models are described in ai-lab-recipes/models/Containerfile; for example, the mistral-7b-instruct-v0.1.Q4_K_M.gguf model.
The ai-lab-recipes repo uses Containerfiles (Dockerfiles) to describe applications. In the next example, you display the ai-lab-recipes/models/Containerfile. Notice that its primary purpose is to pull down the selected AI model from Hugging Face and install it as /model/model.file.
$ more ai-lab-recipes/models/Containerfile
# Suggested alternative open AI Models
# https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/main/llama-2-7b-chat.Q5_K_S.gguf
# https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf (Default)
# https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/resolve/main/codellama-7b-instruct.Q4_K_M.gguf
# https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-small.bin
#
# podman build --build-arg MODEL_URL=https://... -t quay.io/yourimage .
#
FROM registry.access.redhat.com/ubi9/ubi-micro:9.3-13

# Can be substituted using the --build-arg defined above
ARG MODEL=https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf

# By default the Model Server container image uses the AI Model stored in the model/model.file file.
WORKDIR /model
ADD $MODEL /model/model.file
Build the container image and push it to a registry
While you could use Podman directly to build the container models, ai-lab-recipes provides Makefiles to make the process easier. By default, images are created with the quay.io/ai-lab/IMAGE:1.0 name, which you will need to replace with a pointer to your CONTAINER_REPOSITORY.
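For reference, the direct Podman equivalent of the make target below is a single build command. A sketch (the build argument must match the ARG name in the Containerfile shown earlier, and the paths assume you are in the directory containing the cloned repo):
$ podman build \
    --build-arg MODEL=https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf \
    -f ai-lab-recipes/models/Containerfile \
    -t ${CONTAINER_REPOSITORY}/mymodel:1.0 \
    ai-lab-recipes/models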
Use the provided Makefile to build a container image with the Mistral-7B-Instruct-v0.1 model. You can optionally add the MODEL argument to the make build command to specify another model.
$ cd ai-lab-recipes/models
$ make IMAGE=${CONTAINER_REPOSITORY}/mymodel:1.0 build
podman build --build-arg MODEL_URL=https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf -f Containerfile -t ${CONTAINER_REPOSITORY}/mymodel:1.0 .
STEP 1/4: FROM registry.access.redhat.com/ubi9/ubi-micro:9.3-15
Trying to pull registry.access.redhat.com/ubi9/ubi-micro:9.3-15...
Getting image source signatures
Checking if image destination supports signatures
Copying blob 82e56b4fb992 done   6.9MiB / 6.9MiB (skipped: 0.0b = 0.00%)
Copying config 0a76a9bf80 done   |
Writing manifest to image destination
Storing signatures
STEP 2/4: ARG MODEL_URL=https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf
--> 822e115f7722
STEP 3/4: WORKDIR /model
--> 7f2319860698
STEP 4/4: ADD $MODEL_URL /model/model.file
COMMIT quay.io/ai-lab/mistral-7b-instruct-v0.1.q4_k_m.gguf:latest
--> ca57255117ab
Successfully tagged ${CONTAINER_REPOSITORY}/mymodel:1.0
ca57255117ab0600ec8902dd24b59657fe5f5b7f22e9b8aec01580b956aa7105
You can also override the default model by specifying it on the command line:
$ make \
    MODEL=https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/main/llama-2-7b-chat.Q5_K_S.gguf \
    IMAGE=${CONTAINER_REPOSITORY}/mymodel:1.0 build
podman build ${MODEL_URL:+--build-arg MODEL=${MODEL_URL}} -f Containerfile -t ${CONTAINER_REPOSITORY}/mymodel:1.0 .
STEP 1/4: FROM registry.access.redhat.com/ubi9/ubi-micro:9.3-13
Trying to pull registry.access.redhat.com/ubi9/ubi-micro:9.3-13...
Getting image source signatures
Checking if image destination supports signatures
Copying blob ea29d36b883e done   |
Copying config 5aaaf0e6d3 done   |
Writing manifest to image destination
Storing signatures
STEP 2/4: ARG MODEL=https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf
--> eba8cb4131df
STEP 3/4: WORKDIR /model
--> 003f40565ec0
STEP 4/4: ADD $MODEL /model/model.file
COMMIT ${CONTAINER_REPOSITORY}/mymodel:1.0
--> 2b0b988fed55
Successfully tagged ${CONTAINER_REPOSITORY}/mymodel:1.0
2b0b988fed55eecc01953ab9e74a67993c94e513a7109afe8ee1e4140b9b5a30
Now that you built your containerized model, you can log in and push it to your registry:
$ podman login ${CONTAINER_REPOSITORY}
Username: <USERNAME>
Password: <PASSWORD>
$ podman push ${CONTAINER_REPOSITORY}/mymodel:1.0
Getting image source signatures
Copying blob f0552ee69c7b [==============>----------------------] 1.7GiB / 4.1GiB | 314.5 MiB/s
Copying blob 5026add57f39 skipped: already exists
Containerize the model server and push it to the registry
Model servers
AI Lab Recipes currently provides three model servers:
- llamacpp_python: The llamacpp_python model server images are based on the llama-cpp-python project, which provides Python bindings for llama.cpp. This provides a Python-based, OpenAI API-compatible model server that can run LLMs of various sizes locally across Linux, Windows, or Mac. You can read more about it in the ai-lab-recipes README.md file.
- Ollama: We can also use the official Ollama image, ollama/ollama:latest, for our model service.
- whispercpp: Whisper models are useful for converting audio files to text. The sample application audio-to-text describes how to run an inference application.
For this article, you will use the llamacpp_python model server.
$ cd ../model_servers
$ ls
llamacpp_python ollama whispercpp
$ cd llamacpp_python
$ make IMAGE=${CONTAINER_REPOSITORY}/model_server:1.0 build
podman build --squash-all --build-arg 8001 -t ${CONTAINER_REPOSITORY}/model_server:1.0 . -f base/Containerfile
STEP 1/6: FROM registry.access.redhat.com/ubi9/python-311:1-52.1712567218
STEP 2/6: WORKDIR /locallm
STEP 3/6: COPY src .
STEP 4/6: RUN pip install --no-cache-dir --verbose -r ./requirements.txt
...
--> e22353c2417c
Successfully tagged ${CONTAINER_REPOSITORY}/model_server:1.0
e22353c2417c36ce66e8e828f1f2ed9a83ab4ab4dfbe948383a05fbf302e0274
$ podman push ${CONTAINER_REPOSITORY}/model_server:1.0
Getting image source signatures
Copying blob 031702316d20 done   |
Copying config e22353c241 done   |
Writing manifest to image destination
Containerization of your application
AI Lab Recipes
The ai-lab-recipes folder contains many recipes for building different types of AI applications. For this example, you are going to use the chatbot recipe:
$ cd ../../recipes
$ ls
audio common computer_vision multimodal natural_language_processing
$ cd natural_language_processing/chatbot
Each recipe has a README file associated with it to describe the application and its use cases. For example, the chatbot README.md looks something like:
$ more README.md
# Chat Application

This demo provides a simple recipe to help developers start building out their own custom LLM enabled chat applications. It consists of two main components: the Model Service and the AI Application.
You can package up the example application as-is, or begin working with it and convert it into your own application. Then you can package it up and push it to a registry.
Finally, you will build a Containerfile for your application. In this example, you can just use the example application:
$ make build APP_IMAGE=${CONTAINER_REPOSITORY}/chatbot:1.0
$ podman push ${CONTAINER_REPOSITORY}/chatbot:1.0
Build a bootable container for your application
As a next step, let’s look at an example Containerfile. (You might know these as Dockerfiles.) We are going to start simple and install a chatbot inference stack. You can use this Containerfile directly or make modifications specifically for your app:
$ cat ai-lab-recipes/recipes/natural_language_processing/chatbot/bootc/Containerfile
# Example: an AI powered sample application is embedded as a systemd service
# via Podman quadlet files in /usr/share/containers/systemd
#
# from recipes/natural_language_processing/chatbot, run
# 'make bootc'
FROM quay.io/centos-bootc/centos-bootc:stream9
ARG SSHPUBKEY
# The --build-arg "SSHPUBKEY=$(cat ~/.ssh/id_rsa.pub)" option inserts your
# public key into the image, allowing root access via ssh.
RUN set -eu; mkdir -p /usr/ssh && \
echo 'AuthorizedKeysFile /usr/ssh/%u.keys .ssh/authorized_keys .ssh/authorized_keys2' >> /etc/ssh/sshd_config.d/30-auth-system.conf && \
echo ${SSHPUBKEY} > /usr/ssh/root.keys && chmod 0600 /usr/ssh/root.keys
ARG RECIPE=chatbot
ARG MODEL_IMAGE=quay.io/ai-lab/mistral-7b-instruct:latest
ARG APP_IMAGE=quay.io/ai-lab/${RECIPE}:latest
ARG SERVER_IMAGE=quay.io/ai-lab/llamacpp-python:latest
ARG TARGETARCH
# Add quadlet files to setup system to automatically run AI application on boot
COPY build/${RECIPE}.kube build/${RECIPE}.yaml /usr/share/containers/systemd
# Because images are prepulled, no need for .image quadlet
# If commenting out the pulls below, uncomment this to track the images
# so the systemd service will wait for the images with the service startup
# COPY build/${RECIPE}.image /usr/share/containers/systemd
# Setup /usr/lib/containers/storage as an additional store for images.
# Remove once the base images have this set by default.
RUN sed -i -e '/additionalimage.*/a "/usr/lib/containers/storage",' \
/etc/containers/storage.conf
# Added for running as an OCI Container to prevent Overlay on Overlay issues.
VOLUME /var/lib/containers
# Prepull the model, model_server & application images to populate the system.
# Comment the pull commands to keep bootc image smaller.
# The quadlet .image file added above pulls following images with service startup
RUN podman pull --arch=${TARGETARCH} --root /usr/lib/containers/storage ${SERVER_IMAGE}
RUN podman pull --arch=${TARGETARCH} --root /usr/lib/containers/storage ${APP_IMAGE}
RUN podman pull --arch=${TARGETARCH} --root /usr/lib/containers/storage ${MODEL_IMAGE}
RUN podman system reset --force 2>/dev/null
You can see how this Containerfile assembles the components from our previous steps and adds them to a bootable container image. Let’s build the container image:
$ make bootc \
AUTH_JSON=$XDG_RUNTIME_DIR/containers/auth.json \
SERVER_IMAGE=${CONTAINER_REPOSITORY}/model_server:1.0 \
MODEL_IMAGE=${CONTAINER_REPOSITORY}/mymodel:1.0 \
APP_IMAGE=${CONTAINER_REPOSITORY}/chatbot:1.0 \
BOOTC_IMAGE=${CONTAINER_REPOSITORY}/chatbot-bootc:1.0
make bootc AUTH_JSON=$XDG_RUNTIME_DIR/containers/auth.json SERVER_IMAGE=${CONTAINER_REPOSITORY}/model_server:1.0 APP_IMAGE=${CONTAINER_REPOSITORY}/chatbot:1.0 MODEL_IMAGE=${CONTAINER_REPOSITORY}/mymodel:1.0
# Modify quadlet files to match the server, model and app image
rm -rf build; mkdir -p bootc/build; ln -sf bootc/build .
sed -e "s|SERVER_IMAGE|${CONTAINER_REPOSITORY}/model_server:1.0|" \
    -e "s|APP_IMAGE|${CONTAINER_REPOSITORY}/chatbot:1.0|g" \
    -e "s|MODEL_IMAGE|${CONTAINER_REPOSITORY}/mymodel:1.0|g" \
    -e "s|APP|chatbot|g" \
    quadlet/chatbot.image \
    > build/chatbot.image
sed -e "s|SERVER_IMAGE|${CONTAINER_REPOSITORY}/model_server:1.0|" \
    -e "s|APP_IMAGE|${CONTAINER_REPOSITORY}/chatbot:1.0|g" \
    -e "s|MODEL_IMAGE|${CONTAINER_REPOSITORY}/mymodel:1.0|g" \
    quadlet/chatbot.yaml \
    > build/chatbot.yaml
cp quadlet/chatbot.kube build/chatbot.kube
podman build \
    ${ARCH:+--arch ${ARCH}} \
    ${FROM:+--from ${FROM}} \
    ${AUTH_JSON:+-v ${AUTH_JSON}:/run/containers/0/auth.json} \
    --security-opt label=disable \
    --cap-add SYS_ADMIN \
    --build-arg MODEL_IMAGE=${CONTAINER_REPOSITORY}/mymodel:1.0 \
    --build-arg APP_IMAGE=${CONTAINER_REPOSITORY}/chatbot:1.0 \
    --build-arg SERVER_IMAGE=${CONTAINER_REPOSITORY}/model_server:1.0 \
    --build-arg "SSHPUBKEY=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA2vfgIjZRMlDQ0ck5ub8S2sfgG1HTML3Y8lsA3yOz/UsVJY/vxnDVrLLZPNk3hLSAVP+W9+j+pF1XsnjQ6aVbrsSxuGkFQvJoZeK3EjZ0A55lqEvJPG9+IUi7rqHBh5yU/ZX9fQ6KkvmB2ECGR1CqOR1uednPJJ/7fHeTElycgOGlYT6hafuo5RV6O3GITA4VAKZdIE+0+N37p0Yej+vIVYG4iRZ7Jgk+ZnZlwyi8IG06CJfuGwpAw76c/rvI6GYYueFwfGjUxo474ReNn53GyBGnM9NtoBdWGFSiPeET7056kfV4TSNDMB24js8lrho3TqE3wl+szkBeKU3oAZ9okQ== dwalsh@localhost.localdomain" \
    -v /etc/containers/policy.json:/etc/containers/policy.json \
    -f bootc/Containerfile \
    -t quay.io/ai-lab/chatbot-bootc:latest bootc
STEP 1/15: FROM quay.io/centos-bootc/centos-bootc:stream9
STEP 2/15: ARG SSHPUBKEY
--> Using cache 75dabde0716d82a1e71a1e6bc8d268943308f1d8a84ee4d25f2ef4c48a3d6b9d
--> 75dabde0716d
STEP 3/15: RUN set -eu; mkdir -p /usr/ssh && echo 'AuthorizedKeysFile /usr/ssh/%u.keys .ssh/authorized_keys .ssh/authorized_keys2' >> /etc/ssh/sshd_config.d/30-auth-system.conf && echo ${SSHPUBKEY} > /usr/ssh/root.keys && chmod 0600 /usr/ssh/root.keys
--> Using cache d1326c5617ea1182e7c3c7f673ddce473b20e6ac3ce6f00346a5b91fd97cf5df
--> d1326c5617ea
STEP 4/15: ARG RECIPE=chatbot
--> Using cache 3a7fc9326d7f4799ded6fe4c5be125288a3b722bd2bbb1491b9a3ed379a4216c
--> 3a7fc9326d7f
STEP 5/15: ARG MODEL_IMAGE=quay.io/ai-lab/mistral-7b-instruct:latest
--> Using cache de60d77de196f18bcd6e6c7e7ff33b82d3c02e8ce400bbe16f961da66f321f47
--> de60d77de196
STEP 6/15: ARG APP_IMAGE=quay.io/ai-lab/${RECIPE}:latest
--> Using cache 0b2599be3759dcad65a8a7b26e33dba455b1c56565542f7018853d95204308fc
--> 0b2599be3759
STEP 7/15: ARG SERVER_IMAGE=quay.io/ai-lab/llamacpp-python:latest
--> Using cache 52e82ebd5e489b63cfc512900898802e23fbb88c40ada8cbfd4680423ab229cd
--> 52e82ebd5e48
STEP 8/15: ARG TARGETARCH
--> Using cache f96c6e45e2f64013236356fe7bb2ef7ba60de99dbb5f0759d35183b8f0a9394f
--> f96c6e45e2f6
STEP 9/15: COPY build/${RECIPE}.kube build/${RECIPE}.yaml /usr/share/containers/systemd
--> Using cache 1c752f348934804e80f10068276e6ecbefe78e935bc0b8bb6b30321528898803
--> 1c752f348934
STEP 10/15: RUN sed -i -e '/additionalimage.*/a "/usr/lib/containers/storage",' /etc/containers/storage.conf
--> Using cache e63b7b141eff0fb163a2614c9d802a9b2b4a96c7fb38e7f65128000989f16b4d
--> e63b7b141eff
STEP 11/15: VOLUME /var/lib/containers
--> Using cache 84a15209a9be9a46b28764103212374f4b752312fc3be293da91aedf5e2a19b6
--> 84a15209a9be
STEP 12/15: RUN podman pull --arch=${TARGETARCH} --root /usr/lib/containers/storage ${SERVER_IMAGE}
--> Using cache 4844f4bafc876dda49fdb93ec4a7d6f928a1b9e024cf5f4cb1851a55abd13e46
--> 4844f4bafc87
STEP 13/15: RUN podman pull --arch=${TARGETARCH} --root /usr/lib/containers/storage ${APP_IMAGE}
Trying to pull ${CONTAINER_REPOSITORY}/chatbot:1.0...
Getting image source signatures
Copying blob sha256:37e951d6febc638ac3d1151d8e45797c50a7b333376ea2b563ff9ef5fa62ed82
Copying blob sha256:903f91b73b4b7d5eddf0246e44433dafa93917afba2777f8d8d63bdee0075345
Copying blob sha256:f04d3d2fb7c335df858f9642e6666f2e16bef07c115ee333b77667ae92edf549
Copying blob sha256:701c7a00f55c6e3880b839b433ad1d4fa4bd17493da03b1605e9d24af976ff98
Copying blob sha256:f305848567145c22432fe89972d5374aa2bdfb2fb0178e0d8ee48ab2500d88af
Copying blob sha256:711d9c9ed8c629ce167ef63a5afeb874c157e4fef032fe07d9900820ac059398
Copying blob sha256:6f9d0afd3c88713acae6266a865b731df110224a1abed5a827f181f779695622
Copying blob sha256:58683069f54861ea05b283037f7058557c4ae620c54666020b7a345cb6c3481a
Copying config sha256:69dcf05402a69592652d984de17268c0e91456a365ef9db169b97f8463342441
Writing manifest to image destination
69dcf05402a69592652d984de17268c0e91456a365ef9db169b97f8463342441
--> 015f7d02291d
STEP 14/15: RUN podman pull --arch=${TARGETARCH} --root /usr/lib/containers/storage ${MODEL_IMAGE}
Trying to pull ${CONTAINER_REPOSITORY}/mymodel:1.0...
Getting image source signatures
Copying blob sha256:575264bb79f9364c4dc2a44ebc4f6a78f6204cefad344c87116f0ce0a3d2520a
Copying blob sha256:5026add57f3913302eacee826b09c6771b06c2ce9629a2aae79a73e9fddb5d2d
Copying config sha256:7a488b157d85b5691802133f96c9988639e48530e4ee3d9d7a9f519477845402
Writing manifest to image destination
7a488b157d85b5691802133f96c9988639e48530e4ee3d9d7a9f519477845402
--> da1e209cbbe0
STEP 15/15: RUN podman system reset --force 2>/dev/null
A "/etc/containers/storage.conf" config file exists.
Remove this file if you did not modify the configuration.
COMMIT ${CONTAINER_REPOSITORY}/chatbot-bootc:latest
--> 428b12e8c090
Successfully tagged ${CONTAINER_REPOSITORY}/chatbot-bootc:latest
428b12e8c0907faa89a09478a356fefe97dbd06fc0b8b8e6f6ea915fb003c65d
Successfully built bootc image 'quay.io/ai-lab/chatbot-bootc:latest'.
You may now convert the image into a disk image via bootc-image-builder
or the Podman Desktop Bootc Extension. For more information, please refer to
* https://github.com/osbuild/bootc-image-builder
* https://github.com/containers/podman-desktop-extension-bootc
Run chatbot-bootc as a container
Now that we have our image, let’s quickly test it. Because our image is a container, it is fast to run and verify: if we have any typos, an error will be emitted. We’ll give the container the name chatbot-bootc for simplicity:
$ make bootc-run BOOTC_IMAGE=${CONTAINER_REPOSITORY}/chatbot-bootc:1.0
podman run -d --rm --pull=never --name chatbot-bootc -p 8080:8501 --privileged \
${AUTH_JSON:+-v ${AUTH_JSON}:/run/containers/0/auth.json} \
${CONTAINER_REPOSITORY}/chatbot-bootc:1.0 /sbin/init
2b575fb87ecb6415186cb3d470bb3664c110112303a0005cba480e73b07cf12d
The container will start, and there is no need to worry about logging in right now. Open a browser and verify that you can view the webpage being served at http://[your_ip_address]:8080. If the page doesn’t load, double-check your firewall rules. If you’re using a local system, the loopback address should work fine.
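You can also check from the terminal on the host running the container. A quick sketch using curl:
$ curl -sSf http://localhost:8080 > /dev/null && echo "chatbot is up"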
In this example, we’re starting systemd. However, for many testing scenarios, it is more efficient to start just the application. Fast turnaround on testing and validation is one of the most profound benefits of using containers to define operating system images.
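For instance, you can run a one-off command against the image without booting systemd at all (a sketch; any command that verifies your content works here):
$ podman run --rm ${CONTAINER_REPOSITORY}/chatbot-bootc:1.0 cat /etc/os-release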
You can run a shell into the running container instance with podman exec, using the name we set above. You can execute a few commands to examine the internals of the container, perhaps checking systemctl status and other configuration.
$ podman exec -it chatbot-bootc /bin/sh
[root@container_id /]# echo "Hello from container"
Hello from container
[root@container_id /]# exit
When you exit out of the exec session, you can stop the container using the same name:
$ podman stop chatbot-bootc
2b575fb87ecb6415186cb3d470bb3664c110112303a0005cba480e73b07cf12d
Push chatbot-bootc to a container registry
Next, push the image to the registry and configure the repository to be publicly accessible. (See this example for modifying the container builds to inject a pull secret.)
$ podman push ${CONTAINER_REPOSITORY}/chatbot-bootc:1.0
At this point, we have created a layered image that we can deploy, and there are several ways it can be installed to a host: we can use RHEL’s installer and Kickstart on a bare metal system (deploying via USB, PXE, etc.), or we can use image builder to convert the container image to a bootable disk image. Note that once this container is “installed,” future updates will apply directly from the container registry as they are published. So, the installation process only happens once.
Deploy to AWS with an AMI disk image
For this example, we’ll need to ensure cloud-init is available in the chatbot inference Containerfile we created previously. This is where the container workflow helps us: we can easily create a layered image for our use case. We’ll demonstrate a layered build, but feel free to edit the original Containerfile to include cloud-init if that’s easier.
$ tee containerfile << EOF
FROM ${CONTAINER_REPOSITORY}/chatbot-bootc:1.0
# Install cloud-init for AWS
RUN dnf install -y cloud-init && dnf clean all
EOF
Build and push the image:
$ podman build -f containerfile -t ${CONTAINER_REPOSITORY}/chatbot-bootc-aws:1.0
$ podman push ${CONTAINER_REPOSITORY}/chatbot-bootc-aws:1.0
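You can quickly confirm that cloud-init landed in the layered image before converting it (a sketch; the exact version output will vary):
$ podman run --rm ${CONTAINER_REPOSITORY}/chatbot-bootc-aws:1.0 cloud-init --version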
Convert to run in AWS
We are going to rely on cloud-init to inject users and ssh keys, which allows us to skip the config.json step from the KVM example. (Creating a cloud-init config is outside the scope of this article.) By using cloud-init, we improve our security posture by avoiding hard-coded credentials in the image.
Next, run image builder to create our AMI.
On a Linux machine, you need to run the podman command as root via the sudo command; if you are working with a Podman machine, the following command requires a rootful Podman socket:
$ podman run \
--rm \
-it \
--privileged \
--pull=newer \
--security-opt label=type:unconfined_t \
-v $XDG_RUNTIME_DIR/containers/auth.json:/run/containers/0/auth.json \
-v $HOME/.aws:/root/.aws:ro \
--env AWS_PROFILE=default \
registry.redhat.io/rhel9/bootc-image-builder:9.4 \
--type ami \
--aws-ami-name chatbot-bootc-aws \
--aws-bucket bootc-bucket \
--aws-region us-east-1 \
${CONTAINER_REPOSITORY}/chatbot-bootc-aws:1.0
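If your Podman machine was initialized rootless, you can switch it to rootful before running the command above (a sketch; the machine must be stopped first):
$ podman machine stop
$ podman machine set --rootful
$ podman machine start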
Additional options are available to configure the properties for AWS. See bootc-image-builder - Amazon Machine Images (AMIs).
After the publishing process completes successfully, start your image and prepare to be amazed by viewing http://[your_instance_ip_address] in a browser.
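If you work from the AWS CLI, locating the AMI and launching an instance looks roughly like the following (a sketch; the instance type, key name, and security group are placeholders you must supply, and the security group must allow inbound HTTP):
$ aws ec2 describe-images --owners self \
    --filters "Name=name,Values=chatbot-bootc-aws" \
    --query 'Images[0].ImageId' --output text
$ aws ec2 run-instances --image-id <AMI_ID_FROM_ABOVE> \
    --instance-type m5.xlarge --key-name <YOUR_KEY> \
    --security-group-ids <YOUR_SECURITY_GROUP>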
Pushing an update
A key aspect of this story is that the “install” is a one-time task; Day 2 changes can be done by pushing to the registry. Automatic updates are on by default; see https://containers.github.io/bootc/upgrades.html#the-bootc-upgrade-verb for details.
Make any change to your configuration you want (change a config file or systemd unit, add or remove a package, etc.). In the future, as Red Hat produces new base images in the final RHEL 9.4 location, you can pull new base images to apply base OS updates.
Observe systemctl status bootc-fetch-apply-updates.timer and wait for its execution—or just run bootc upgrade to apply the update eagerly.
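On the running host, the whole update flow boils down to two commands (bootc status also reports any staged image waiting for the next reboot):
$ sudo bootc status
$ sudo bootc upgrade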
Additional deployment targets
Run bootc OS locally as a VM
Deploy bootable container images with Linux QEMU (KVM) via a Qcow2 disk image
This example leverages image builder to convert the container image into a qcow2-formatted disk and assumes the image is in a publicly accessible repository. Refer to the image builder documentation for how to use an image from a secure repository. Other image formats, aside from qcow2, are also available.
Next, pass the chatbot inference container to image builder. (Note that the pushed image must currently be available without any registry pull credentials; while this did not make the beta, using local containers for authenticated pulls should appear in the final 9.4.) On a Linux system, you need to run the following command with sudo. On macOS or Windows, it needs to be run against the rootful Podman socket.
$ sudo make bootc-image-builder \
AUTH_JSON=${XDG_RUNTIME_DIR}/containers/auth.json \
FROM=registry.redhat.io/rhel9/rhel-bootc:9.4 \
MODEL_IMAGE=${CONTAINER_REPOSITORY}/mymodel:1.0 \
SERVER_IMAGE=${CONTAINER_REPOSITORY}/model_server:1.0 \
APP_IMAGE=${CONTAINER_REPOSITORY}/chatbot:1.0 \
BOOTC_IMAGE=${CONTAINER_REPOSITORY}/chatbot-bootc:1.0 \
DISK_TYPE=qcow2
The generated image is owned by the root user, so use chown to change ownership to the current user. Once the image is ready, you can run it using libvirt (or QEMU directly). Note: If you are doing this within a VM, you will need nested virtualization.
The generated qcow2 image is saved under ai-lab-recipes/recipes/natural_language_processing/chatbot/build/qcow2/.
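For example, to take ownership of the disk image (assuming the default output file name disk.qcow2 used in the virt-install command below):
$ sudo chown $USER: ai-lab-recipes/recipes/natural_language_processing/chatbot/build/qcow2/disk.qcow2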
$ virt-install \
--name chatbot-bootc \
--memory 8192 \
--vcpus 4 \
--network bridge:virbr0,mac=52:54:00:1a:2b:3c \
--disk natural_language_processing/chatbot/build/qcow2/disk.qcow2 \
--import \
--os-variant rhel-unknown
With the VM running, you should be able to verify that the site is running by viewing http://[your_instance_ip_address]:8501 in a browser. You can find the VM's IP address by looking up the MAC address assigned in virt-install:
$ arp -n | grep "52:54:00:1a:2b:3c"
192.168.122.252 ether 52:54:00:1a:2b:3c C virbr0
Install using Kickstart
As you’ve seen, there are several ways to install our container. This section covers the use of Kickstart, which is very popular for bare metal deployments using either ISO, PXE, or USB drives. Some familiarity with Kickstart concepts is assumed as this guide does not go into detail. Insert details related to users, passwords, and ssh keys in the following example. Adding additional options is supported, but be aware that the %packages section is not viable using this workflow as we’re replacing the instance with the container image. Download the 9.4 Boot ISO for your architecture from Red Hat Developer.
Note: There is no method of logging in to quay.io inside the Kickstart file, so make sure ${CONTAINER_REPOSITORY}/chatbot-bootc:1.0 is a public repository before installing the VM.
text
network --bootproto=dhcp --device=link --activate
# Basic partitioning
clearpart --all --initlabel --disklabel=gpt
reqpart --add-boot
part / --grow --fstype xfs
# Here's where we reference the container image to install - notice the kickstart
# has no `%packages` section! What's being installed here is a container image.
ostreecontainer --url ${CONTAINER_REPOSITORY}/chatbot-bootc:1.0
firewall --disabled
services --enabled=sshd
# optionally add a user
user --name=cloud-user --groups=wheel --plaintext --password=changemme
sshkey --username cloud-user "ssh-ed25519 AAAAC3Nza....."
# if desired, inject a SSH key for root
rootpw --iscrypted locked
sshkey --username root "ssh-ed25519 AAAAC3Nza....." #paste your ssh key here
reboot
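Before deploying, you can optionally syntax-check the file with ksvalidator from the pykickstart package (the file name chatbot.ks is a placeholder for wherever you saved the config above):
$ ksvalidator chatbot.ks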
Copy this config file to a web server, boot any physical or virtual system using the installation media and append the following to the kernel arguments:
inst.ks=http://path_to_my_kickstart
Press Ctrl-X to boot using this option.
If an HTTP server is not readily available, you can use the http.server module included with most Python installations. In the directory holding the Kickstart file, run:
$ python3 -m http.server
Another approach, which does not require an HTTP server to host the Kickstart file, is to inject the Kickstart into the installer ISO. The lorax package includes a utility called mkksiso, which can embed this file in an ISO. This is useful for booting directly from a thumb drive and avoids editing the boot menu. Run:
$ mkksiso --ks /PATH/TO/KICKSTART /PATH/TO/ISO /PATH/TO/NEW-ISO