
How in-place pod resizing boosts efficiency in OpenShift

December 23, 2025
Subin Modeel
Related topics:
Developer productivity, Kubernetes
Related products:
Red Hat OpenShift

    Red Hat OpenShift, built on Kubernetes, the most popular container orchestration platform, has always provided flexibility, scalability, and resilience. As workloads evolve, so do their requirements for resources such as CPU and memory. Traditionally, adjusting these resources for a running pod meant recreating it. With in-place resource resizing, that is changing. Let's dive into what in-place resource resizing is and why it's a game changer for OpenShift users.

    This feature was alpha in Kubernetes 1.27 and behind a feature gate in OpenShift through version 4.19. It graduated to beta in Kubernetes 1.33 and is enabled by default in OpenShift 4.20.

    What is in-place resource resize?

    In-place resource resize refers to the ability to adjust the CPU and memory requests and limits of a running Pod without the need to recreate it. This feature allows for more dynamic resource management, ensuring that applications can be allocated more or fewer resources based on their current needs without causing disruptions.

    Why is it important?

    Recreating a Pod to adjust its resources can lead to downtime, especially if the pod is part of a StatefulSet or if it's handling critical tasks. In-place resizing reduces this downtime, ensuring smoother operations.

    Over-provisioning resources can lead to wastage, while under-provisioning can cause performance issues. Dynamic resizing ensures that resources are used efficiently, based on real-time needs.

    Efficient resource utilization can lead to cost savings, especially in cloud environments where you pay for the resources you use.

    There is no need to manually intervene and recreate pods or adjust deployment configurations. This simplifies the operational overhead.

    How does it work?

    For this walkthrough, create a pod whose container requests and limits differ, so that it is not assigned the Guaranteed QoS class. A resize is not allowed if it would violate other pod mutability constraints, and the pod's QoS class remains immutable.

    apiVersion: v1
    kind: Pod
    metadata:
      name: resizeme
    spec:
      containers:
      - name: resizeme
        image: ubi9/ubi
        command: ["tail", "-f", "/dev/null"]
        resources:
          requests:
            cpu: 1
            memory: "512Mi"
          limits:
            cpu: 2
            memory: "1Gi"

     Save the manifest (for example, as resizeme.yaml), create the pod with oc apply -f resizeme.yaml, and observe the allocatedResources fields in the containerStatuses:

    $ oc get pod resizeme -o yaml
    ...
      containerStatuses:
      - allocatedResources:
          cpu: "1"
          memory: 512Mi

     Their presence indicates the availability of in-place resize.

    Note: In alpha, the ResizePolicy fields were always populated by default, but this is no longer the case. The fields are still available, but implicit defaults are now assumed.
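As a sketch of what those implicit defaults look like when spelled out (field names per the upstream Kubernetes Pod API; the container name and image are carried over from the example above, and the memory policy here is illustrative, not a recommendation):

```yaml
# Illustrative resizePolicy entries. NotRequired is the implicit default;
# RestartContainer restarts the container whenever that resource is resized,
# which is sometimes chosen for memory since not all workloads react well
# to having their memory limits changed live.
spec:
  containers:
  - name: resizeme
    image: ubi9/ubi
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
    - resourceName: memory
      restartPolicy: RestartContainer
```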

    Resize the container's resources: change the pod's CPU request from 1 to 2. You can also use oc edit to make the change, but be sure to include the --subresource=resize argument.

    $ oc patch pod resizeme --subresource=resize -p '{"spec": {"containers": [{"name": "resizeme", "resources": {"requests": {"cpu": 2, "memory": "512Mi"}, "limits": {"cpu": 2, "memory": "1Gi"}}}]}}'

    Note: Previous iterations of this feature allowed you to edit the pod spec without specifying the resize subresource, but the behavior has changed, and you now have to specify it. Additionally, the resize subresource is only available in kubectl 1.33 / oc 4.20 or later, so make sure your client is new enough. If your client is too old, or you omit the resize subresource, you will receive the following error:

    The Pod "resizeme" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`,`spec.initContainers[*].image`,`spec.activeDeadlineSeconds`,`spec.tolerations` (only additions to existing tolerations),`spec.terminationGracePeriodSeconds` (allow it to be set to 1 if it was previously negative)

    Watch the pod react. The resize usually completes quickly when it is feasible, but you may be able to catch it in progress by looking at the pod conditions.

    $ oc get pod resizeme -o json | jq '.status.conditions[] | select(.type | test("PodResizePending|PodResizeInProgress|PodResizeInfeasible|PodResizeDeferred|Resizing"))'
    {
      "lastProbeTime": "2025-09-20T01:01:18Z",
      "lastTransitionTime": "2025-09-20T01:01:18Z",
      "status": "True",
      "type": "PodResizeInProgress"
    }
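If you want to experiment with that jq filter without a cluster handy, it can be exercised against a saved status; a minimal sketch, where the sample JSON below is fabricated for illustration:

```shell
# Fabricated pod status so the jq select() filter can be tried offline.
cat <<'EOF' > /tmp/resizeme-status.json
{"status": {"conditions": [
  {"type": "Ready", "status": "True"},
  {"type": "PodResizeInProgress", "status": "True"}
]}}
EOF

# Same select()/test() expression as used against the live pod above,
# printing only the matching condition types.
jq -r '.status.conditions[] | select(.type | test("PodResizePending|PodResizeInProgress")) | .type' /tmp/resizeme-status.json
```

Against a live cluster, you would pipe `oc get pod resizeme -o json` into the same filter instead of reading a file.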

    Observe the successful resize. Once the resize is complete, your resource changes will be reflected in the container status:

    $ oc get pod resizeme -o yaml
    ...
      containerStatuses:
      - allocatedResources:
          cpu: "2"
          memory: 512Mi
        containerID: cri-o://5076fa4d8cddea4d2d219b51f6ce31b510c59b1abbe300e7e6ce69ca70848f69
        image: registry.access.redhat.com/ubi9/ubi:latest
        imageID: registry.access.redhat.com/ubi9/ubi@sha256:03215fe3630a1b49a00e1b1918d063fe82b7197d342b5c253fe2255fc8027ea3
        lastState: {}
        name: resizeme
        ready: true
        resources:
          limits:
            cpu: "2"
            memory: 1Gi
          requests:
            cpu: "2"
            memory: 512Mi
        restartCount: 0

     You can find more details on configuration options and constraints in the upstream Kubernetes documentation.

    Limitations and considerations

    While in-place resource resizing offers numerous benefits, there are some considerations. Not all resources can be adjusted: CPU and memory can be resized, but other resources, such as storage, are not currently supported for in-place resizing.

    There is also potential for resource contention: if a pod's resources are reduced too aggressively, contention among pods and performance degradation can result.

    Finally, regarding compatibility, ensure that your container runtime supports dynamic resource adjustments.

    Summary

    In-place resizing of pod resources in OpenShift is a step toward more dynamic and efficient resource management. As OpenShift continues to evolve, features like this highlight its adaptability and responsiveness to the needs of modern applications and infrastructures. As always, while leveraging such features, it's essential to monitor and manage resources wisely to ensure optimal performance and cost-efficiency.
