
Splitting OpenShift machine config pool without node reboots

October 10, 2025
Rob Fisher
Related topics: Containers, Kubernetes
Related products: Red Hat OpenShift, Red Hat OpenShift Container Platform


    This article demonstrates how to split an existing Red Hat OpenShift machine config pool (MCP) into two separate MCPs without requiring a reboot of any nodes. The goal is for the new MCP to have the exact same machine configuration as the original MCP. 

    Initially, many OpenShift clusters were built with one or two MCPs for the whole cluster. That works well with only 10 or 20 worker nodes, but with 100 or more worker nodes in a single MCP, upgrades become difficult to manage. As discussed in the Red Hat OpenShift Container Platform upgrade documentation, MCPs make it easier to control which nodes reboot, and how many at a time, because you can pause and unpause individual pools during the upgrade.
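
    For reference, pausing and unpausing a pool is a single patch of the MCP's spec.paused field; a minimal sketch, using the mcp-1 pool that appears later in this article:

    # Pause the pool so its nodes do not reboot during the upgrade
    oc patch mcp/mcp-1 --type merge --patch '{"spec":{"paused":true}}'

    # Unpause when you are ready for this pool's nodes to update and reboot
    oc patch mcp/mcp-1 --type merge --patch '{"spec":{"paused":false}}'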

    The splitting procedure

    This section outlines the prerequisites for the splitting procedure, then walks you through the steps for splitting an OpenShift machine config pool without rebooting any nodes.

    Prerequisites:

    • Access to an OpenShift cluster with cluster-admin privileges.
    • The oc CLI tool, configured and connected to the OpenShift cluster (a quick verification is sketched below).
    • An understanding of machine config pools and machine configs.
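
    As a quick sanity check of the first two prerequisites, confirm your identity and cluster-admin access before starting (sketched here with standard oc commands):

    oc whoami
    oc auth can-i '*' '*' --all-namespaces   # prints "yes" for cluster-admin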

    Step 1: Identify the current machine config pool

    Use the following command to identify the current MCP that needs to be split:

    oc get mcp

    Example output:

    NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    master   rendered-master-a5f35b903c8ad7c2c0175023f9909b05   True      False      False      3              3                   3                     0                      13d
    mcp-1    rendered-mcp-1-f19092f2278430cc4d2cff6d9400f990    True      False      False      5              5                   5                     0                      13d
    worker   rendered-worker-f19092f2278430cc4d2f9909b05a5f3    True      False      False      0              0                   0                     0                      13d

    Take note of two things in the output. First, mcp-1 is the only MCP that is not part of a standard OpenShift Container Platform deployment; master and worker are created automatically. Second, the CONFIG column shows a specific hash for the rendered configuration of each MCP.
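
    If you want just the pool-to-rendered-config mapping without the status columns, a custom-columns query trims the output (an optional convenience):

    oc get mcp -o custom-columns=NAME:.metadata.name,CONFIG:.spec.configuration.name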

    Step 2: Identify the machine configs

    Retrieve the machine configs associated with the current MCP using the following command:

    oc get mcp mcp-1 -o json | jq .spec.configuration

    Example output:

    {
      "name": "rendered-mcp-1-f19092f2278430cc4d2cff6d9400f990",
      "source": [
        {
          "apiVersion": "machineconfiguration.openshift.io/v1",
          "kind": "MachineConfig",
          "name": "00-worker"
        },
        {
          "apiVersion": "machineconfiguration.openshift.io/v1",
          "kind": "MachineConfig",
          "name": "01-worker-container-runtime"
        },
        {
          "apiVersion": "machineconfiguration.openshift.io/v1",
          "kind": "MachineConfig",
          "name": "01-worker-kubelet"
        },
        {
          "apiVersion": "machineconfiguration.openshift.io/v1",
          "kind": "MachineConfig",
          "name": "50-workers-chrony-configuration"
        },
        {
          "apiVersion": "machineconfiguration.openshift.io/v1",
          "kind": "MachineConfig",
          "name": "97-worker-generated-kubelet"
        },
        {
          "apiVersion": "machineconfiguration.openshift.io/v1",
          "kind": "MachineConfig",
          "name": "98-worker-generated-kubelet"
        },
        {
          "apiVersion": "machineconfiguration.openshift.io/v1",
          "kind": "MachineConfig",
          "name": "99-worker-generated-registries"
        },
        {
          "apiVersion": "machineconfiguration.openshift.io/v1",
          "kind": "MachineConfig",
          "name": "99-worker-ssh"
        }
      ]
    }

    This will display each MachineConfig applied to this specific MCP.
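
    If you only need the names, a slightly different jq filter prints them as a plain list (an optional shortcut):

    oc get mcp mcp-1 -o json | jq -r '.spec.configuration.source[].name'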

    If you are not sure of the specifics regarding each MachineConfig, you can use the following command:

    oc get mc

    Example output:

    NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
    00-master                                          64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    00-worker                                          64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    01-master-container-runtime                        64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    01-master-kubelet                                  64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    01-worker-container-runtime                        64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    01-worker-kubelet                                  64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    50-masters-chrony-configuration                                                               3.1.0             13d
    50-workers-chrony-configuration                                                               3.1.0             13d
    97-master-generated-kubelet                        64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    97-mcp-1-generated-kubelet                         64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    97-worker-generated-kubelet                        64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    98-master-generated-kubelet                        64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    98-mcp-1-generated-kubelet                         64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    98-worker-generated-kubelet                        64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    99-assisted-installer-master-ssh                                                              3.1.0             13d
    99-master-generated-registries                     64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    99-master-ssh                                                                                 3.2.0             13d
    99-mcp-1-generated-kubelet                         64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             4d5h
    99-worker-generated-registries                     64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    99-worker-ssh                                                                                 3.2.0             13d
    rendered-master-a5f35b903c8ad7c2c0175023f9909b05   64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    rendered-mcp-1-8f6a90f60f8d9db7d8c0e243c9bf4963    64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    rendered-mcp-1-b9d21061d076437680290f5da831ada0    64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             24h
    rendered-mcp-1-f19092f2278430cc4d2cff6d9400f990    64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    rendered-mcp-1-f90b4a73043e3cec10a72075c4d9d9fb    64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             4d5h
    rendered-worker-dcb907f73ccf964e6db1cf7db6cd45ab   64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    rendered-worker-f19092f2278430cc4d2cff6d9400f990   64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    

    If you need any further assistance regarding MachineConfig, please refer to the OpenShift Container Platform documentation.
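
    To see what an individual MachineConfig actually applies, you can dump it as YAML; for example, using one of the names from the listing above:

    oc get mc 99-worker-ssh -o yaml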

    Step 3: Create a new machine config pool

    Create a new MCP definition YAML file, mcp-new.yaml, with the desired name for the new pool (e.g., mcp-new). The machineConfigSelector for this new MCP should be exactly the same as the original MCP, as in this example:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfigPool
    metadata:
      name: mcp-new
    spec:
      machineConfigSelector:
        matchExpressions:
          - {
             key: machineconfiguration.openshift.io/role,
             operator: In,
             values: [worker,mcp-1]
            }
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/mcp-new: ""  # Label for nodes to move to this pool (nodes to be updated later)
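
    Before applying, it is worth confirming that the selector above really matches the original pool's selector; a quick jsonpath check (optional):

    oc get mcp mcp-1 -o jsonpath='{.spec.machineConfigSelector}'; echo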

    Apply the new MCP as follows:

    oc apply -f mcp-new.yaml

    Step 4: Verify both MCPs have the same rendered hash

    To make sure that nodes don’t reboot when they move from one MCP to the other, the rendered configuration hash on both MCPs must be the same. This means the MachineConfigs in both MCPs are exactly the same, so the Red Hat Enterprise Linux CoreOS configuration won’t require any change to the OS or to files on the host when a node moves.

    To verify, run oc get mcp again:

    oc get mcp

    Example output:

    NAME      CONFIG                                               UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    master    rendered-master-a5f35b903c8ad7c2c0175023f9909b05     True      False      False      3              3                   3                     0                      14d
    mcp-1     rendered-mcp-1-f19092f2278430cc4d2cff6d9400f990      True      False      False      5              5                   5                     0                      14d
    mcp-new   rendered-mcp-new-f19092f2278430cc4d2cff6d9400f990    True      False      False      0              0                   0                     0                      42h
    worker    rendered-worker-f19092f2278430cc4d2cff6d9400f990     True      False      False      0              0                   0                     0                      14d

    Note that mcp-1, mcp-new, and worker all end in the same hash (f19092f2278430cc4d2cff6d9400f990), confirming identical rendered configurations.
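
    To compare the two pools programmatically rather than by eye, a minimal shell sketch (pool names mcp-1 and mcp-new are from this example; adjust to yours):

    # Extract each pool's rendered config name and compare the trailing hash
    old=$(oc get mcp mcp-1 -o jsonpath='{.spec.configuration.name}')
    new=$(oc get mcp mcp-new -o jsonpath='{.spec.configuration.name}')
    [ "${old##*-}" = "${new##*-}" ] && echo "Hashes match; safe to move nodes." \
                                    || echo "Hashes differ; do NOT move nodes yet."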

    Step 5: Label the nodes

    In this step, we will label the nodes for the new machine config pool. Identify the nodes moving to the new MCP and label them with the node selector label defined in the new MCP’s nodeSelector.

    # oc get node
    NAME                                       STATUS   ROLES                  AGE   VERSION
    ctrl-plane-0.nantahala.daytwops.bos2.lab   Ready    control-plane,master   14d   v1.29.10+67d3387
    ctrl-plane-1.nantahala.daytwops.bos2.lab   Ready    control-plane,master   14d   v1.29.10+67d3387
    ctrl-plane-2.nantahala.daytwops.bos2.lab   Ready    control-plane,master   14d   v1.29.10+67d3387
    worker-0.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387
    worker-1.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387
    worker-2.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387
    worker-3.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387
    worker-5.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387

    To move nodes from mcp-1 to mcp-new, patch the desired node(s) with the following command:

    # oc patch node worker-5.nantahala.daytwops.bos2.lab --type=json -p '[{"op":"move","from": "/metadata/labels/node-role.kubernetes.io~1mcp-1", "path": "/metadata/labels/node-role.kubernetes.io~1mcp-new"}]'

    Note: In the patch command, ~1 is how the “/” character is escaped in a JSON Pointer (RFC 6901). In this example, it lets us reference a label key that contains a “/” character.
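
    To move several nodes, the same patch can be wrapped in a loop; a minimal sketch that patches nodes in sequence (node names are from this example, and in practice you may prefer to confirm each move before starting the next):

    for node in worker-3.nantahala.daytwops.bos2.lab worker-5.nantahala.daytwops.bos2.lab; do
      oc patch node "$node" --type=json \
        -p '[{"op":"move","from":"/metadata/labels/node-role.kubernetes.io~1mcp-1","path":"/metadata/labels/node-role.kubernetes.io~1mcp-new"}]'
    done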

    Run the following command to watch the change:

    # watch "oc get no; echo; oc get mcp; echo; oc get mc| grep -vE 'worker|master'"

    Example output:

    NAME                                       STATUS   ROLES                  AGE   VERSION
    ctrl-plane-0.nantahala.daytwops.bos2.lab   Ready    control-plane,master   14d   v1.29.10+67d3387
    ctrl-plane-1.nantahala.daytwops.bos2.lab   Ready    control-plane,master   14d   v1.29.10+67d3387
    ctrl-plane-2.nantahala.daytwops.bos2.lab   Ready    control-plane,master   14d   v1.29.10+67d3387
    worker-0.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387
    worker-1.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387
    worker-2.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387
    worker-3.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387
    worker-5.nantahala.daytwops.bos2.lab       Ready    mcp-new,worker         14d   v1.29.10+67d3387
    
    NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    master   rendered-master-a5f35b903c8ad7c2c0175023f9909b05   True      False      False      3              3                   3                     0                      14d
    mcp-1    rendered-mcp-1-f19092f2278430cc4d2cff6d9400f990    True      False      False      4              4                   4                     0                      14d
    mcp-new  rendered-mcp-new-f19092f2278430cc4d2cff6d9400f990  True      False      False      1              1                   1                     0                      42h
    worker   rendered-worker-f19092f2278430cc4d2cff6d9400f990   True      False      False      0              0                   0                     0                      14d
    
    NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
    97-mcp-1-generated-kubelet                         64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             14d
    97-mcp-new-generated-kubelet                       64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             14d
    98-mcp-1-generated-kubelet                         64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             14d
    98-mcp-new-generated-kubelet                       64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             14d
    99-mcp-1-generated-kubelet                         64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             5d
    rendered-mcp-1-8f6a90f60f8d9db7d8c0e243c9bf4963    64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             14d
    rendered-mcp-1-b9d21061d076437680290f5da831ada0    64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             43h
    rendered-mcp-1-f19092f2278430cc4d2cff6d9400f990    64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             14d
    rendered-mcp-1-f90b4a73043e3cec10a72075c4d9d9fb    64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             5d
    rendered-mcp-new-8f6a90f60f8d9db7d8c0e243c9bf4963  64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             14d
    rendered-mcp-new-f19092f2278430cc4d2cff6d9400f990  64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             14d
    ...

    You will see mcp-new briefly report UPDATED as False and UPDATING as True, then switch back to UPDATED True and UPDATING False as the node moves into mcp-new. You will also see the MACHINECOUNT columns change for both pools.

    You should not see the node go into SchedulingDisabled. If you do, the configurations of the two MCPs do not match, and that node will reboot.
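
    To double-check that a moved node is fully in sync, you can compare the currentConfig and desiredConfig annotations that the machine-config daemon maintains on each node; both should name the same rendered config (node name from this example):

    oc get node worker-5.nantahala.daytwops.bos2.lab -o \
      jsonpath='current: {.metadata.annotations.machineconfiguration\.openshift\.io/currentConfig}{"\n"}desired: {.metadata.annotations.machineconfiguration\.openshift\.io/desiredConfig}{"\n"}'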

    Important to note

    Ensure the machineConfigSelector for the new MCP exactly matches the original to avoid unintended configuration changes and potential disruptions. Patching node labels is the action that triggers the move to a new MCP.

    Monitor node status and MCP status closely during and after the process. Plan and coordinate this operation during a maintenance window, even if you don’t expect a reboot.
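
    One lightweight way to monitor during the move is to tail recent events in the Machine Config Operator's namespace (a suggestion beyond the watch command above):

    oc get events -n openshift-machine-config-operator --sort-by=.lastTimestamp | tail -n 20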

    Summary

    This procedure offers a significant advantage by allowing the splitting of a machine config pool without triggering node reboots, provided the machine configs remain identical between the old and new pools. This ensures a seamless transition and avoids service disruption, making upgrades and cluster management more efficient.
