
How Red Hat has redefined continuous performance testing

October 15, 2025
Joe Talerico
Related topics: Automation and management, CI/CD
Related products: Red Hat OpenShift

    Continuous performance testing (CPT) is a critical aspect of modern software development, especially considering the mission-critical applications and diverse infrastructure on which Red Hat OpenShift can run. In this article, we will discuss the importance of continuous performance testing, the challenges the OpenShift Performance and Scale Team discovered, and how shifting left has increased our team velocity.

    Why should your organization introduce CPT into its workflow? In a rapidly changing industry, evolving software and specialized hardware increase the likelihood of performance bottlenecks. Without continuous performance testing, engineers face complex, error-prone manual testing, risking undetected bottlenecks, degraded user experience, and higher operational costs. 

    This begins our article series discussing the importance of integrating performance testing into your continuous integration/continuous delivery (CI/CD) pipelines and offering insights and best practices for ensuring optimal performance of your application or platform. Be sure to follow our CPT articles exploring the tooling, performance regression analysis, and recent case studies.  

    Challenges of performance testing OpenShift

    While OpenShift provides a robust and scalable platform, its complexity introduces significant performance testing hurdles. The platform's reliance on various open source components (e.g., etcd for its database, Open vSwitch for networking, and Prometheus for monitoring) means each element brings unique performance considerations. 

    Successfully testing OpenShift at scale requires correctly sizing cluster instances to avoid artificial bottlenecks, especially when measuring performance under heavy loads in large environments of 250+ nodes. Furthermore, the dynamic nature of containerized workloads demands a flexible, automated, and cloud-native testing solution. To meet these demands and stay ahead of industry trends like the massive shift toward virtualization, we developed kube-burner, a CNCF sandboxed project that provides the modern tooling necessary to rigorously test OpenShift's performance and scalability. In this series, you will learn more about kube-burner and how we are using it within our continuous performance testing pipelines. 
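    A kube-burner run is driven by a declarative YAML configuration. The job below is an illustrative sketch rather than the team's actual configuration: the field names follow kube-burner's config format, but the job name, iteration count, and template path are placeholders. It creates many namespaces, each holding a deployment, to put load on the control plane.

```yaml
# Illustrative kube-burner job definition -- field names follow kube-burner's
# config format, but the values and template path are placeholders.
jobs:
  - name: cluster-density-example
    jobIterations: 100          # create 100 namespaces
    namespacedIterations: true  # each iteration gets its own namespace
    namespace: density-test
    waitWhenFinished: true      # block until all created pods are Ready
    objects:
      - objectTemplate: templates/deployment.yml
        replicas: 1             # one deployment per namespace
```

    A run such as `kube-burner init -c config.yml` would then create the objects, wait for readiness, and collect latency metrics such as PodReadyLatency.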

    In recent years, the OpenShift ecosystem has grown far beyond a simple container orchestrator. This evolution presented a significant quality engineering challenge from a performance and scale perspective. Our testing framework must now validate a complex matrix that includes installations on bare metal, public clouds (i.e., AWS, Azure, GCP, and IBM Cloud), and managed offerings (i.e., ROSA and ARO). 

    The software stack ships around 30 built-in operators, over 20 certified operators, and nearly 20 layered products like ACM, ACS, and OpenShift GitOps. Adding to all of this is the extremely fast pace of development, as the openshift/release repository alone averages 300 pull requests each week. All of these different combinations of software allow OpenShift to support many use cases. However, each one of these combinations has different performance characteristics and considerations we need to identify.

    From ad-hoc to shift-left

    The Red Hat OpenShift Performance and Scale Team has undergone significant transitions in our continuous performance testing journey over the past 10+ years. 

    Initially, the OpenShift Performance and Scale Team ran performance and scale tests during big feature releases, architectural changes, and at the end of each OpenShift release. The team automated many of the tests, but we ran our CI/CD system in a bespoke way, separate from the engineering team's infrastructure. That meant we carried the burden of maintaining our own CI/CD system (Jenkins and Airflow), which was complicated and required tribal knowledge to maintain. 

    During this time we also couldn't run our tests before a nightly build, or what we call a payload. This meant a nightly build could contain hundreds of changes, any of which might contribute to a performance regression. Our team needed to shift left: enable our performance and scale tests to run earlier in the development cycle, at the payload level or even before changes merge into our downstream product. 

    The team achieved a major breakthrough by shifting left and integrating our performance testing suite directly into the OpenShift CI/CD Prow pipeline. Adopting the Prow pipeline was a strategic decision that enabled our workloads for the entire OpenShift community, as it aligned our process with the primary CI system our OpenShift developers were already using. 

    The migration to Prow was quite simple since we had already automated our performance pipelines. To pivot to Prow, we simply had to reuse our benchmark scripting and create new OpenShift Prow steps in the OpenShift CI-Operator Step Registry. Once we introduced the steps to Prow, we began working with the OpenShift Technical Release Team to have our workloads run as an informing job for the payload. Being part of the TRT payload testing gives us the visibility to determine which handful of changes introduced into the downstream payload could impact OpenShift as a product.
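    A step in the CI-Operator Step Registry is defined as a small YAML `ref` that points at a script and a container image. The snippet below is a hypothetical example of what such a performance step could look like; the step name, image, and script are placeholders, but the `ref` structure matches the registry's format.

```yaml
# Hypothetical CI-Operator Step Registry entry -- the step name, image, and
# script are placeholders, but the ref structure matches the registry format.
ref:
  as: perfscale-cluster-density   # step name, referenced from test workflows
  from: perfscale-runner          # image that carries the benchmark tooling
  commands: cluster-density.sh    # script executed inside the step's pod
  resources:
    requests:
      cpu: 100m
      memory: 200Mi
  documentation: |-
    Runs a cluster-density benchmark against the ephemeral test cluster
    and publishes the resulting metrics.
```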

    This transition has been pivotal in several ways. First, it dramatically increased development velocity by automatically running tests on every OpenShift payload, allowing us to catch performance regressions early in the development cycle when they are faster and cheaper to fix. Second, it has led to seamless integration with OpenShift engineering. The Performance and Scale Team is now a core part of the release process, which has enhanced collaboration across the organization. This allows engineers to independently run performance tests on their pull requests, gain immediate feedback on their changes, and proactively work with other teams to optimize the platform.

    Best practices for implementing CPT

    To effectively implement continuous performance testing, consider the following best practices:

    • Use the same CI system as your development team:
      • Using the same CI/CD pipeline as your development team ensures they too can take advantage of your performance and scale tests.
      • Reduces the need to maintain your own bespoke CI/CD system.
      • Provides faster feedback to developers, since your tests can run alongside the tests that already run against their PRs.
    • Automate everything:
      • Integrate performance tests into the same CI/CD pipeline your developers use, so performance tests run automatically with every payload or even better before every merge into the codebase.
      • Keep your automation agnostic of any one CI/CD framework, allowing you to easily lift and shift.
    • Define data and control plane performance baselines:
      • Establish acceptable performance control-plane metrics (e.g., P99 PodReadyLatency, etcd CPU utilization, etc.).
      • Having a clear set of baselines allows teams to understand the expected performance of your product.
      • Establish acceptable performance for the data-path of your platform (e.g., network performance).
    • Simulate realistic workloads:
      • Use tools like kube-burner, k8s-netperf, or ingress-perf that simulate real-world user behavior and varying load conditions without being too domain-specific.
    • Monitor key metrics:
      • Collect and analyze performance data, including CPU usage, memory consumption, network I/O, and application-specific metrics.
    • Early regression detection:
      • Use tools like Orion to detect performance regressions early in the development cycle; their regression analysis gives developers a clear signal about which metric increased or decreased.
    • Isolate performance tests:
      • Run performance tests in isolated environments that mimic production as closely as possible to avoid interference from other processes.
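    As a minimal sketch of the baseline idea above (with illustrative numbers, not Red Hat's actual thresholds), establishing a P99 PodReadyLatency baseline and flagging regressions against it might look like:

```python
# Hypothetical sketch: compute a P99 PodReadyLatency figure from collected
# samples and compare it to an established baseline. The sample values,
# baseline, and tolerance are illustrative, not real product thresholds.
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (seconds)."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

def check_regression(current, baseline, tolerance=0.10):
    """Flag a regression if current exceeds the baseline by more than tolerance."""
    return current > baseline * (1 + tolerance)

pod_ready_latencies = [1.2, 1.4, 1.1, 1.3, 5.0, 1.2, 1.5, 1.3, 1.4, 1.2]
p99 = percentile(pod_ready_latencies, 99)
print(f"P99 PodReadyLatency: {p99:.1f}s")
print("regression" if check_regression(p99, baseline=2.0) else "within baseline")
```

    In practice these metrics come from cluster monitoring rather than hard-coded lists, and a tool like Orion applies more robust statistics, but the shape of the check is the same: measure, compare to a baseline, and signal the developer.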

    What’s next

    Continuous performance testing is not just a nice-to-have; it's a necessity for ensuring the success of the Red Hat OpenShift ecosystem. By integrating performance testing early and continuously into our development lifecycle, we can deliver a high-performing, scalable, and reliable platform that meets the demands of our diverse customers. The Red Hat Performance and Scale Team's journey demonstrates the significant benefits of shifting performance testing left, leading to increased velocity and deeper integration into the product release cycle. 

    Consider how your organization can implement continuous performance testing to give your development teams an early signal of the impact their changes have on the performance of your product. Stay tuned for more articles in this series, diving into the tooling, regression analysis, and case studies from our continuous performance testing framework.
