
Splitting OpenShift machine config pool without node reboots

October 10, 2025
Rob Fisher
Related topics:
Containers, Kubernetes
Related products:
Red Hat OpenShift, Red Hat OpenShift Container Platform

    This article demonstrates how to split an existing Red Hat OpenShift machine config pool (MCP) into two separate MCPs without requiring a reboot of any nodes. The goal is for the new MCP to have the exact same machine configuration as the original MCP. 

    Initially, many OpenShift clusters were built with one or two MCPs for the whole cluster. This works well when there are only 10 or 20 worker nodes, but with 100+ worker nodes in a single MCP, upgrades become difficult to manage. As discussed in the Red Hat OpenShift Container Platform upgrade documentation, it is easier to control which nodes reboot, and how many reboot at a time, if you use multiple MCPs, pausing them during the upgrade and unpausing them at the end.
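    As a sketch of that pause/unpause pattern: an MCP is paused by setting its spec.paused field. The pool name below is an example, and the commands are only printed rather than executed here, since running them requires a live cluster.

```shell
# Sketch of the pause/unpause pattern from the upgrade docs. The pool
# name is an example; we print the commands instead of running them,
# since they need a live cluster. MCPs are paused via spec.paused.
pool="mcp-1"
echo "oc patch mcp ${pool} --type merge -p '{\"spec\":{\"paused\":true}}'"
echo "# ...run the cluster upgrade; nodes in ${pool} will not reboot..."
echo "oc patch mcp ${pool} --type merge -p '{\"spec\":{\"paused\":false}}'"
```

    With several pools, you would repeat the unpause step per pool, rebooting one batch of nodes at a time.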

    The splitting procedure

    This section outlines the prerequisites for the splitting procedure, then walks you through splitting an OpenShift machine config pool step by step, without rebooting any nodes.

    Prerequisites:

    • Access to OpenShift cluster with cluster-admin privileges.
    • The oc CLI tool, configured and connected to the OpenShift cluster.
    • An understanding of machine config pools and machine configs.

    Step 1: Identify the current machine config pool

    Use the following command to identify the current MCP that needs to be split:

    oc get mcp

    Example output:

    NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    master   rendered-master-a5f35b903c8ad7c2c0175023f9909b05   True      False      False      3              3                   3                     0                      13d
    mcp-1    rendered-mcp-1-f19092f2278430cc4d2cff6d9400f990    True      False      False      5              5                   5                     0                      13d
    worker   rendered-worker-f19092f2278430cc4d2cff6d9400f990   True      False      False      0              0                   0                     0                      13d
    

    Take note of two things in this output. First, only one MCP in this deployment is not standard for an OpenShift Container Platform deployment: mcp-1. Second, the CONFIG column shows each MCP's currently rendered configuration, whose name ends in a hash of its content.

    Step 2: Identify the machine configs

    Identify the machine configs by retrieving the machine configs associated with the current MCP using the following:

    oc get mcp mcp-1 -o json | jq .spec.configuration

    Example output:

    {
      "name": "rendered-mcp-1-f19092f2278430cc4d2cff6d9400f990",
      "source": [
        {
          "apiVersion": "machineconfiguration.openshift.io/v1",
          "kind": "MachineConfig",
          "name": "00-worker"
        },
        {
          "apiVersion": "machineconfiguration.openshift.io/v1",
          "kind": "MachineConfig",
          "name": "01-worker-container-runtime"
        },
        {
          "apiVersion": "machineconfiguration.openshift.io/v1",
          "kind": "MachineConfig",
          "name": "01-worker-kubelet"
        },
        {
          "apiVersion": "machineconfiguration.openshift.io/v1",
          "kind": "MachineConfig",
          "name": "50-workers-chrony-configuration"
        },
        {
          "apiVersion": "machineconfiguration.openshift.io/v1",
          "kind": "MachineConfig",
          "name": "97-worker-generated-kubelet"
        },
        {
          "apiVersion": "machineconfiguration.openshift.io/v1",
          "kind": "MachineConfig",
          "name": "98-worker-generated-kubelet"
        },
        {
          "apiVersion": "machineconfiguration.openshift.io/v1",
          "kind": "MachineConfig",
          "name": "99-worker-generated-registries"
        },
        {
          "apiVersion": "machineconfiguration.openshift.io/v1",
          "kind": "MachineConfig",
          "name": "99-worker-ssh"
        }
      ]
    }

    This will display each MachineConfig applied to this specific MCP.

    If you are not sure of the specifics regarding each MachineConfig, you can use the following command:

    oc get mc

    Example output:

    NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
    00-master                                          64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    00-worker                                          64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    01-master-container-runtime                        64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    01-master-kubelet                                  64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    01-worker-container-runtime                        64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    01-worker-kubelet                                  64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    50-masters-chrony-configuration                                                               3.1.0             13d
    50-workers-chrony-configuration                                                               3.1.0             13d
    97-master-generated-kubelet                        64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    97-mcp-1-generated-kubelet                         64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    97-worker-generated-kubelet                        64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    98-master-generated-kubelet                        64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    98-mcp-1-generated-kubelet                         64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    98-worker-generated-kubelet                        64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    99-assisted-installer-master-ssh                                                              3.1.0             13d
    99-master-generated-registries                     64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    99-master-ssh                                                                                 3.2.0             13d
    99-mcp-1-generated-kubelet                         64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             4d5h
    99-worker-generated-registries                     64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    99-worker-ssh                                                                                 3.2.0             13d
    rendered-master-a5f35b903c8ad7c2c0175023f9909b05   64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    rendered-mcp-1-8f6a90f60f8d9db7d8c0e243c9bf4963    64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    rendered-mcp-1-b9d21061d076437680290f5da831ada0    64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             24h
    rendered-mcp-1-f19092f2278430cc4d2cff6d9400f990    64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    rendered-mcp-1-f90b4a73043e3cec10a72075c4d9d9fb    64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             4d5h
    rendered-worker-dcb907f73ccf964e6db1cf7db6cd45ab   64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    rendered-worker-f19092f2278430cc4d2cff6d9400f990   64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             13d
    

    If you need any further assistance regarding MachineConfig, please refer to the OpenShift Container Platform documentation.

    Step 3: Create a new machine config pool

    Create a new MCP definition YAML file, mcp-new.yaml, with the desired name for the new pool (e.g., mcp-new). The machineConfigSelector for this new MCP should be exactly the same as the original MCP, as in this example:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfigPool
    metadata:
      name: mcp-new
    spec:
      machineConfigSelector:
        matchExpressions:
          - {
             key: machineconfiguration.openshift.io/role,
             operator: In,
             values: [worker,mcp-1]
            }
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/mcp-new: ""  # Label for nodes to move to this pool (nodes to be updated later)

    Apply the new MCP as follows:

    oc apply -f mcp-new.yaml

    Step 4: Verify both MCPs have the same rendered hash

    To make sure that nodes don’t reboot when they move from one MCP to another, the rendered configuration hash on both MCPs has to be the same. That means the MachineConfigs applied by both MCPs are exactly the same, so the Red Hat Enterprise Linux CoreOS configuration won’t require any change to the OS or to files on the host when a node moves.

    To verify, run the following command:

    oc get mcp

    Example output:

    NAME      CONFIG                                              UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    master    rendered-master-a5f35b903c8ad7c2c0175023f9909b05    True      False      False      3              3                   3                     0                      14d
    mcp-1     rendered-mcp-1-f19092f2278430cc4d2cff6d9400f990     True      False      False      5              5                   5                     0                      14d
    mcp-new   rendered-mcp-new-f19092f2278430cc4d2cff6d9400f990   True      False      False      0              0                   0                     0                      42h
    worker    rendered-worker-f19092f2278430cc4d2cff6d9400f990    True      False      False      0              0                   0                     0                      14d

    Note that mcp-new carries the same hash suffix (f19092f2278430cc4d2cff6d9400f990) as mcp-1.
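    The part that matters for a reboot-free move is the hash suffix of each pool's rendered config name. A minimal sketch of comparing them, using the sample names from the output above; on a live cluster you would obtain each name with oc get mcp <pool> -o jsonpath='{.spec.configuration.name}':

```shell
# Compare the hash suffixes of two rendered MachineConfig names.
# The names below are the sample values from the article's output; on a
# cluster you would fetch them with:
#   oc get mcp <pool> -o jsonpath='{.spec.configuration.name}'
hash_of() { printf '%s\n' "$1" | awk -F- '{print $NF}'; }

a=$(hash_of "rendered-mcp-1-f19092f2278430cc4d2cff6d9400f990")
b=$(hash_of "rendered-mcp-new-f19092f2278430cc4d2cff6d9400f990")
[ "$a" = "$b" ] && echo "rendered configs match"
```

    If the suffixes differ, do not move any nodes yet; reconcile the MachineConfigs first.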

    Step 5: Label the nodes

    In this step, we will label the nodes for the new machine config pool. Identify the nodes moving to the new MCP and label them with the node selector label defined in the new MCP’s nodeSelector.

    # oc get node
    NAME                                       STATUS   ROLES                  AGE   VERSION
    ctrl-plane-0.nantahala.daytwops.bos2.lab   Ready    control-plane,master   14d   v1.29.10+67d3387
    ctrl-plane-1.nantahala.daytwops.bos2.lab   Ready    control-plane,master   14d   v1.29.10+67d3387
    ctrl-plane-2.nantahala.daytwops.bos2.lab   Ready    control-plane,master   14d   v1.29.10+67d3387
    worker-0.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387
    worker-1.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387
    worker-2.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387
    worker-3.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387
    worker-5.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387

    To move nodes from mcp-1 to mcp-new, patch the desired node(s) with the following command:

    # oc patch node worker-5.nantahala.daytwops.bos2.lab --type=json -p '[{"op":"move","from": "/metadata/labels/node-role.kubernetes.io~1mcp-1", "path": "/metadata/labels/node-role.kubernetes.io~1mcp-new"}]'

    Note: In the patch command, ~1 is how the “/” character is escaped in a JSON Pointer. In this example, it lets us reference a label whose key contains a “/” character.
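    That escaping comes from RFC 6901 (JSON Pointer): “~” becomes “~0” and “/” becomes “~1”, applied in that order. A small sketch of the transformation for the label key used above:

```shell
# RFC 6901 JSON Pointer escaping: replace "~" with "~0" first,
# then "/" with "~1" (order matters, or "~1" would be double-escaped).
label='node-role.kubernetes.io/mcp-new'
escaped=$(printf '%s' "$label" | sed -e 's/~/~0/g' -e 's,/,~1,g')
echo "$escaped"   # node-role.kubernetes.io~1mcp-new
```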

    Run the following command to watch the change:

    # watch "oc get no; echo; oc get mcp; echo; oc get mc| grep -vE 'worker|master'"

    Example output:

    NAME                                       STATUS   ROLES                  AGE   VERSION
    ctrl-plane-0.nantahala.daytwops.bos2.lab   Ready    control-plane,master   14d   v1.29.10+67d3387
    ctrl-plane-1.nantahala.daytwops.bos2.lab   Ready    control-plane,master   14d   v1.29.10+67d3387
    ctrl-plane-2.nantahala.daytwops.bos2.lab   Ready    control-plane,master   14d   v1.29.10+67d3387
    worker-0.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387
    worker-1.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387
    worker-2.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387
    worker-3.nantahala.daytwops.bos2.lab       Ready    mcp-1,worker           14d   v1.29.10+67d3387
    worker-5.nantahala.daytwops.bos2.lab       Ready    mcp-new,worker         14d   v1.29.10+67d3387
    
    NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    master   rendered-master-a5f35b903c8ad7c2c0175023f9909b05   True      False      False      3              3                   3                     0                      14d
    mcp-1    rendered-mcp-1-f19092f2278430cc4d2cff6d9400f990    True      False      False      4              4                   4                     0                      14d
    mcp-new  rendered-mcp-new-f19092f2278430cc4d2cff6d9400f990  True      False      False      1              1                   1                     0                      42h
    worker   rendered-worker-f19092f2278430cc4d2cff6d9400f990   True      False      False      0              0                   0                     0                      14d
    
    NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
    97-mcp-1-generated-kubelet                         64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             14d
    97-mcp-new-generated-kubelet                       64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             14d
    98-mcp-1-generated-kubelet                         64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             14d
    98-mcp-new-generated-kubelet                       64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             14d
    99-mcp-1-generated-kubelet                         64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             5d
    rendered-mcp-1-8f6a90f60f8d9db7d8c0e243c9bf4963    64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             14d
    rendered-mcp-1-b9d21061d076437680290f5da831ada0    64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             43h
    rendered-mcp-1-f19092f2278430cc4d2cff6d9400f990    64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             14d
    rendered-mcp-1-f90b4a73043e3cec10a72075c4d9d9fb    64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             5d
    rendered-mcp-new-8f6a90f60f8d9db7d8c0e243c9bf4963  64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             14d
    rendered-mcp-new-f19092f2278430cc4d2cff6d9400f990  64460169e27091d8f8373b0952604ba2700d6d67   3.4.0             14d
    ...

    You will see mcp-new briefly report UPDATED as False and UPDATING as True, then switch back to UPDATED True and UPDATING False once the node has moved into mcp-new. You will also see the MACHINECOUNT columns change.

    You should not see the node go into SchedulingDisabled. If it does, the configurations of the two MCPs do not match, and that node will reboot.
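    One way to confirm is to scan the node listing for SchedulingDisabled. A minimal sketch against a captured sample of the output above; on a live cluster you would replace the here-doc with oc get nodes --no-headers:

```shell
# Count nodes showing SchedulingDisabled; 0 means nothing was cordoned
# by the move. The here-doc is a captured sample from the article; on a
# cluster, pipe in: oc get nodes --no-headers
cordoned=$(grep -c 'SchedulingDisabled' <<'EOF'
worker-0.nantahala.daytwops.bos2.lab   Ready   mcp-1,worker    14d   v1.29.10+67d3387
worker-5.nantahala.daytwops.bos2.lab   Ready   mcp-new,worker  14d   v1.29.10+67d3387
EOF
)
[ "$cordoned" -eq 0 ] && echo "no nodes cordoned"
```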

    Important to note

    Ensure the machineConfigSelector for the new MCP exactly matches the original to avoid unintended configuration changes and potential disruptions. Patching node labels is the action that triggers the move to a new MCP.

    Monitor node status and MCP status closely during and after the process. Plan and coordinate this operation during a maintenance window, even if you don’t expect a reboot.

    Summary

    This procedure offers a significant advantage by allowing the splitting of a machine config pool without triggering node reboots, provided the machine configs remain identical between the old and new pools. This ensures a seamless transition and avoids service disruption, making upgrades and cluster management more efficient.
