This guide is intended for developers and system administrators working with IBM Power Virtual Server environments who want to build reproducible, container-native OS images using bootc.
Building and maintaining custom operating system images on IBM Power Systems has traditionally been a manual and time-consuming process. Every update, rebuild, or configuration drift introduces risk and inconsistency. Bootc simplifies this by letting you build bootable OS images directly from container images, bringing the same speed, version control, and reproducibility you already use for container applications.
What is bootc?
Bootc is a tool that converts OCI container images into bootable, transactionally updatable operating systems. Instead of building OS images using traditional tools or ISO builders, bootc lets you define and manage your operating system the same way you manage container images.
Bootc bridges the gap between application containers and base OS images. It allows for atomic updates, rollback safety, and consistent builds across architectures, including IBM Power (ppc64le).
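To make the update model concrete, here is a hedged sketch of the day-2 lifecycle on a host that is already running a bootc-built OS. The subcommands shown (`status`, `upgrade`, `rollback`) are standard bootc commands; the wrapper function is purely illustrative and not part of bootc itself.

```shell
# Illustrative only: the wrapper function is hypothetical, but the
# bootc subcommands inside it are real. Run them on a deployed host.
bootc_day2_demo() {
  bootc status     # report the booted image and any staged update
  bootc upgrade    # fetch and stage the newest image from the registry
  # ...reboot to activate the staged image; if it misbehaves:
  bootc rollback   # queue the previous deployment for the next boot
}
```

Because updates are staged as complete images rather than applied package-by-package, a failed update never leaves the system half-upgraded; `bootc rollback` simply points the bootloader back at the previous image.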
Prerequisites
Before you begin, ensure that you have the following set up.
On your ppc64le Builder Machine:
- A Fedora or RHEL ppc64le builder machine: You must have access to an IBM Power Virtual Server (Power VS) instance on IBM Cloud.
- Additional storage: Attach an additional disk of at least 100 GB to your builder machine as the installation target for the new OS image.
- Internet access: The builder machine must have internet connectivity to download necessary packages and container images.
- Podman and bootc: As you follow along with this article, you will install Podman and bootc on the builder machine.
On your local workstation:
- IBM Cloud CLI: As you follow along with this article, you will install the IBM Cloud CLI on your local workstation, along with these required plug-ins: `cloud-object-storage`, `power-iaas`, and `vpc-infrastructure`.
On your IBM Cloud account:
- IBM Cloud Object Storage (COS): You need a COS instance with a bucket created to store the final image. Ensure that you have the HMAC credentials (`access_key` and `secret_key`) for this instance.
1. Set up the builder environment on the ppc64le server
With these prerequisites in place, you can configure your ppc64le virtual server. These commands must be run on the builder machine.
First, install podman, bootc, and jq using the dnf command:
```
sudo dnf install podman bootc jq
```

2. Set up your local workstation
Next, use the following command to download and run the installer script for the IBM Cloud CLI:
```
curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
ibmcloud --version
```

Once the CLI is installed, add the necessary plug-ins for managing Power VS and Cloud Object Storage (`power-iaas` manages Power VS resources).
```
ibmcloud plugin install cloud-object-storage
ibmcloud plugin install power-iaas
ibmcloud plugin install vpc-infrastructure
```

3. Define and build the image on the ppc64le server
Now return to your ppc64le builder machine to create the image. A custom OS image is defined in a Containerfile. First, create a dedicated directory for your project. Create an output subdirectory for bootc to use later when creating the disk image.
```
mkdir -p golden-images/output
cd golden-images
```

Inside the golden-images directory, create a file named `Containerfile` and add the following content:
```
# golden-images/Containerfile
FROM quay.io/fedora/fedora-bootc:<tag>

RUN dnf install -y cloud-init && \
    dnf clean all && \
    systemctl enable cloud-init.service && \
    systemctl enable cloud-config.service && \
    systemctl enable cloud-final.service && \
    systemctl enable cloud-init-local.service
```

This file defines a custom image by starting with a base Fedora bootc image, and then installing cloud-init to handle initial server configuration in the cloud.
Build the container image
You can now build your bootable container image using the podman command. This command, which must be run from within the golden-images directory, executes the steps defined in the Containerfile to create a local container image:
```
podman build --tag quay.io/<namespace>/<image_name>:<tag> .
```

- Replace `<namespace>` with your own container registry namespace (for example, your username on Quay.io or another registry).
- Replace `<image_name>` with a custom name for the image.
- Replace `<tag>` with a meaningful tag (for example, `v1.0`).
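As an optional sanity check, you can confirm that the image exists locally and still carries the bootc metadata labels. The helper below is a hypothetical convenience, not part of the official workflow; the exact label names come from your base image, so inspect the output rather than relying on a specific key.

```shell
# Hypothetical helper: confirm a freshly built bootc image is present
# and show its labels. Pass the full reference used in "podman build".
check_image() {
  local image=$1                                    # e.g. quay.io/me/my-bootc:v1.0
  podman images --filter "reference=${image%:*}"    # list matching local images
  podman inspect "$image" --format '{{ .Labels }}'  # labels inherited from the base
}
```

Usage: `check_image quay.io/<namespace>/<image_name>:<tag>`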
4. Install the image to disk on the ppc64le server
Now use the bootc command to install the container image you've just built directly onto the additional disk attached to your builder machine.
Get the UUID
First, you must identify the multipath UUID of your additional 100 GB disk. You can find the UUID with the lsblk -f and multipath -ll commands.
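For example, a small helper like the following (illustrative, not part of the official workflow) surfaces the relevant information: a disk with an empty FSTYPE column has no filesystem yet, and `multipath -ll` shows which `/dev/mapper/<mpath_uuid>` device it maps to. Note that `multipath -ll` typically requires root.

```shell
# Illustrative helper: list block devices and the multipath mapping so
# you can pick out the empty 100 GB target disk.
list_disks() {
  lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT   # empty FSTYPE => no filesystem yet
  multipath -ll                          # /dev/mapper/<mpath_uuid> entries
}
```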
The target disk must be empty, with no partitions. If it has existing partitions, then you must remove them. Use the kpartx command to remove partitions, replacing the example UUID with your disk's actual UUID:
```
sudo kpartx -d /dev/mapper/<additional_disk_mpath_uuid>
```

Run the installation command
Now you can use the podman command to start the installation. This command runs the bootc install process from within your custom container, giving it privileged access to write to the block devices in /dev:
```
podman run --pull=newer --rm \
    --privileged --pid=host --network=host \
    --security-opt label=type:unconfined_t \
    -v $(pwd)/output:/output -v /dev/:/dev \
    -v /var/lib/containers/storage:/var/lib/containers/storage \
    quay.io/<namespace>/<image_name>:<tag> \
    bootc install to-disk \
    --wipe /dev/mapper/<additional_disk_mpath_uuid> \
    --filesystem xfs \
    --block-setup direct \
    --skip-fetch-check
```

Replace `<namespace>`, `<image_name>`, and `<tag>` with the values you used when building the image, and `<additional_disk_mpath_uuid>` with the UUID you identified in the previous step. The installation process takes a few minutes to complete.
The bootc install command supports many other options for customizing an installation, including different filesystems, block setups, and more. For a full list of capabilities, refer to the official bootc install documentation.
Shut down the builder machine
After the bootc install command successfully completes, you must shut down the builder machine to ensure that it is in a consistent state and can be safely detached.
```
sudo systemctl poweroff
```

5. Capture the image on your local workstation
All remaining steps are performed on your local workstation using the IBM Cloud CLI.
The builder machine is now shut down. The next critical step is to tell IBM Cloud to treat your newly created disk as the boot disk. First, gather the required IDs and authentication token.
First, set your region:
```
export IBM_REGION="eu-de"
```

Next, get your authentication token:
```
ibmcloud api https://cloud.ibm.com
ibmcloud login
```

Fetch the token, and remove `Bearer ` from the beginning of the returned token value:
```
export TOKEN=$(ibmcloud iam oauth-tokens \
    --output JSON | jq -r .iam_token | sed 's/^Bearer //g')
```

Get your Workspace ID and CRN:
```
ibmcloud pi workspace list
export POWER_WORKSPACE_ID="<your_workspace_id>"
export WORKSPACE_CRN="<your_workspace_crn>"
```

Get your Power Instance ID:
```
ibmcloud pi instance list
export POWER_INSTANCE_ID="<your_power_instance_id>"
```

Get your new volume ID:
```
ibmcloud pi volume list
export VOLUME_ID="<your_additional_volume_id>"
```

Next, run the following curl command to set the new boot volume using the API:
```
curl -X PUT -s -w "\nHTTP Status: %{http_code}\n" \
    https://$IBM_REGION.power-iaas.cloud.ibm.com/pcloud/v1/cloud-instances/$POWER_WORKSPACE_ID/pvm-instances/$POWER_INSTANCE_ID/volumes/$VOLUME_ID/setboot \
    -H "Authorization: Bearer $TOKEN" \
    -H "CRN: $WORKSPACE_CRN" \
    -H "Content-Type: application/json"
```

A successful call returns status code 200, indicating that the volume is now the boot volume. It may take a few moments for this change to be fully registered.
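If you script this step, it helps to fail loudly on anything other than a 200 response. The wrapper below is a hypothetical convenience (not an IBM-provided tool); it assumes the `TOKEN`, `WORKSPACE_CRN`, `IBM_REGION`, `POWER_WORKSPACE_ID`, `POWER_INSTANCE_ID`, and `VOLUME_ID` variables from above are exported.

```shell
# Hypothetical wrapper around the setboot call: capture only the HTTP
# status code and turn any non-200 response into a nonzero exit.
set_boot_volume() {
  local status
  status=$(curl -s -o /dev/null -w "%{http_code}" -X PUT \
    "https://$IBM_REGION.power-iaas.cloud.ibm.com/pcloud/v1/cloud-instances/$POWER_WORKSPACE_ID/pvm-instances/$POWER_INSTANCE_ID/volumes/$VOLUME_ID/setboot" \
    -H "Authorization: Bearer $TOKEN" \
    -H "CRN: $WORKSPACE_CRN" \
    -H "Content-Type: application/json")
  if [ "$status" = "200" ]; then
    echo "Boot volume updated."
  else
    echo "setboot failed with HTTP status $status" >&2
    return 1
  fi
}
```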
You can optionally verify the change with this command, looking for "bootVolume": true in the JSON output:
```
curl -X GET https://$IBM_REGION.power-iaas.cloud.ibm.com/pcloud/v1/cloud-instances/$POWER_WORKSPACE_ID/pvm-instances/$POWER_INSTANCE_ID/volumes/$VOLUME_ID \
    -H "Authorization: Bearer $TOKEN" \
    -H "CRN: $WORKSPACE_CRN" \
    -H "Content-Type: application/json" | jq -r .
```

Example output:
```
{
    "auxiliary": false,
    "bootVolume": true,
    "bootable": true,
    "creationDate": "2025-10-18T05:36:32.000Z",
    "crn": "crn:v1:bluemix:public:power-iaas:eu-de-1:a/934e118c399b4a28a70afdf2210d708f:42fff481-7cf1-4fe0-87fc-063a617945a0:volume:145e0015-e798-476e-ba7b-eef521c65abd",
    "diskType": "tier3",
    "ioThrottleRate": "300 iops",
    "lastUpdateDate": "2025-10-18T05:39:50.000Z",
    "name": "secondary-volume",
    "pvmInstanceIDs": [
        "797c00f8-35a7-4635-98bb-56204a86901f"
    ],
    "replicationEnabled": false,
    "replicationStatus": "not-capable",
    "shareable": false,
    "size": 100,
    "state": "in-use",
    "volumeID": "145e0015-e798-476e-ba7b-eef521c65abd",
    "volumePool": "General-Flash-87",
    "volumeType": "Tier3-General-Flash-87",
    "wwn": "6005076813810264E80000000000924C"
}
```

Capture and export the bootable image
With the correct volume now set as the boot device, the final step is to capture it as a reusable image and export it. You have two main destinations for the exported image: Directly to a bucket in IBM Cloud Object Storage, or to your local Power VS workspace's image catalog.
As long as you're using the same terminal session, the $POWER_INSTANCE_ID and $VOLUME_ID environment variables are already set from the previous section.
Exporting to Cloud Object Storage (COS)
This is the recommended approach for sharing the image across different workspaces or using it in automation pipelines.
Use the ibmcloud pi instance-capture-create command to start the capture. This command creates a new image from the specified volume, and uploads it to your COS bucket.
```
ibmcloud pi instance-capture-create $POWER_INSTANCE_ID \
    --destination cloud-storage \
    --name bootc-custom-fedora-image \
    --image-path <your-bucket-name>/<optional-path> \
    --access-key <cos_hmac_access_key> \
    --secret-key <cos_hmac_secret_key> \
    --volumes $VOLUME_ID \
    --region $IBM_REGION
```

Exporting a large image to Cloud Object Storage can take a significant amount of time, so be patient!
Exporting to the Workspace Image Catalog
This method is simpler if you only need to use the image within the same Power VS workspace. The image becomes available in the boot image catalog for that workspace.
```
ibmcloud pi instance-capture-create $POWER_INSTANCE_ID \
    --destination image-catalog \
    --name <image_name> \
    --volumes $VOLUME_ID \
    --region $IBM_REGION
```

6. Use your custom image
Once you've captured and exported your image, it's ready to be used. Your next step depends on how you exported the image.
Workspace image catalog
If you used a workspace image catalog, then your image is already available in the workspace's boot image list. You can use it to provision new Power VS instances immediately.
Importing an image from Cloud Object Storage
If you exported to Cloud Object Storage, then you must first import your image from your COS bucket into a Power VS workspace image catalog:
```
ibmcloud pi image import <image_name> \
    --bucket <cos_bucket_name> \
    --image-file-name <export_image_name.ova.gz> \
    --access-key=<access_key> \
    --secret-key=<secret_key> \
    --region <ibm_region> \
    --os-type rhel
```

- Replace the placeholder values (`<image_name>`, `<cos_bucket_name>`, and so on) with your specific details.
- Images exported to a COS bucket are always in the `.ova.gz` format. Make sure your `--image-file-name` reflects this.
The import command creates a background job. You can monitor its progress by getting the job ID and checking its status:
```
JOB_ID=$(ibmcloud pi job list --json \
    | jq -r '.jobs[] | select(.operation.id == "<image_name>") | .id')
ibmcloud pi job get $JOB_ID
```

The value used to filter the job list, `<image_name>`, is the exact image name argument supplied to the `ibmcloud pi image import <image_name>` command, as this value is recorded in the job's internal `operation.id` field.
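Rather than re-running `ibmcloud pi job get` by hand, you can poll until the job leaves the running state. The loop below is a sketch; the `.status.state` JSON path is an assumption inferred from the CLI's table output (State/Progress/Message), so check the `--json` output once to confirm the field name before relying on it.

```shell
# Hypothetical polling helper; assumes JOB_ID is set and that the job
# JSON exposes the state at .status.state (verify against your output).
wait_for_job() {
  local job_id=$1 state
  while true; do
    state=$(ibmcloud pi job get "$job_id" --json | jq -r '.status.state')
    echo "Job $job_id state: $state"
    case "$state" in
      completed) return 0 ;;
      failed)    return 1 ;;
    esac
    sleep 30
  done
}
```

Usage: `wait_for_job $JOB_ID`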
Example output:
```
Job ID                 37d1fef0-f007-441e-8e07-7d34b4a0e256
Creation Timestamp     2025-11-22T02:09:49.000Z
Operation ID           bootc-custom-fedora-image
Operation Target       image
Operation Action       epaImageImport
State                  running
Progress               imageDownload
Message                image download is in progress
```

Once this job completes, the bootable image is available in your Power VS workspace's image catalog.
7. Test the custom image
Now that your custom image is available in the image catalog, the final step is to test it by provisioning a new Power VS instance.
Before you can create an instance, you need the ID of your new image and the ID of the subnet you want to connect the instance to. You also need the name of an SSH key that you have already added to your Power VS workspace.
Get the list of available images to find your custom image ID:
```
ibmcloud pi image list
```

Get the list of available subnets to find the subnet ID:
```
ibmcloud pi subnet list
```

Use this command, replacing the placeholder values with the information you gathered, to create a new instance from your custom image:
```
ibmcloud pi instance create <instance_name> \
    --sys-type e1080 \
    --image <image_id> \
    --subnets <subnet_id> \
    --key-name <ssh_key_name> \
    --processors 0.5 \
    --memory 8.0
```

Once the instance is created and running, you can log in using SSH and verify that your customizations and cloud-init are working as expected.
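Once you are logged in, a couple of quick checks confirm the deployment. The helper below is illustrative and guarded so it degrades gracefully where the tools are missing; on the new instance, `bootc status` should report your custom image, and `cloud-init status` should eventually report done.

```shell
# Illustrative post-boot checks: run this on the new instance, not on
# your workstation. Guards keep it harmless where tools are absent.
verify_instance() {
  if command -v bootc >/dev/null 2>&1; then
    bootc status                 # booted image reference and any staged update
  else
    echo "bootc not found on this host"
  fi
  if command -v cloud-init >/dev/null 2>&1; then
    cloud-init status            # "status: done" after first-boot config
  else
    echo "cloud-init not found on this host"
  fi
}
```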
Conclusion
Congratulations! You have successfully built a custom, container-native OS image for IBM Power (ppc64le) using bootc. You've installed it to a disk, captured it, and deployed a new Power VS instance from it. This workflow allows you to create repeatable, immutable infrastructure, simplifying operating system management and aligning your Power systems with modern cloud-native practices.