
How StatefulSet deployments tripled OpenShift Pipelines throughput

April 30, 2026
Siddardh R A
Related topics: Kubernetes, Virtualization
Related products: Red Hat OpenShift

    Running Red Hat OpenShift Pipelines at scale usually means watching execution times slowly degrade as concurrency increases. Most teams hit a wall. Execution times balloon from seconds to minutes, and adding more controller replicas barely helps. But OpenShift Pipelines 1.20 changes this with StatefulSet-based deployments, and the results are dramatic.

    The real difference: Leader election vs. sharding

    High availability (HA) mode provides fault tolerance with multiple controller replicas. OpenShift Pipelines supports two implementation approaches, each optimized for different operational priorities.

    Leader election is the standard Kubernetes HA pattern used across the ecosystem. One controller holds the lease and processes all reconciliation work, while others stand by ready to take over if the leader fails. This approach provides automatic failover and simpler operational characteristics, making it well-suited for environments where resilience is the primary concern.

    StatefulSet-based sharding distributes work across all replicas using hash-based assignment. Each pod receives a stable identity (controller-0, controller-1, etc.) and processes a deterministic subset of work. The controller uses hash(key) % N to assign each PipelineRun to a specific replica, where N is the total number of pods. This mapping remains consistent across pod restarts, ensuring predictable work distribution. All replicas actively participate in processing, enabling higher parallelism.
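The assignment scheme can be sketched in a few lines. This is an illustration of the idea only, not Tekton's actual implementation; the key format and hash choice here are assumptions:

```python
# Minimal sketch of hash-based sharding: each PipelineRun key maps
# deterministically to one of N controller replicas (hash(key) % N).
# Illustrative only; Tekton's controllers use their own internal bucketing.
import hashlib

def assign_replica(pipelinerun_key: str, num_replicas: int) -> int:
    """Map a PipelineRun key (e.g. namespace/name) to a stable replica ordinal."""
    # Use a content-stable hash so the mapping survives pod restarts;
    # Python's built-in hash() is salted per-process and would not be stable.
    digest = hashlib.sha256(pipelinerun_key.encode()).hexdigest()
    return int(digest, 16) % num_replicas

# The same key always lands on the same replica for a fixed N.
assert assign_replica("ci/build-42", 10) == assign_replica("ci/build-42", 10)

# With many runs, work spreads across all replicas rather than one leader.
counts = [0] * 10
for i in range(1000):
    counts[assign_replica(f"ci/run-{i}", 10)] += 1
print(counts)
```

Note that the mapping is stable only for a fixed N: changing the replica count reshuffles which replica owns which key, which is why the replica count is part of the configuration rather than autoscaled.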

    Both approaches have valid use cases. Leader election excels when operational simplicity and automatic failover are critical. StatefulSet-based sharding is optimized for high-concurrency workloads where maximizing throughput and resource utilization across replicas becomes important. For teams running pipelines at scale with high concurrency, sharding can provide substantial performance improvements.

    Performance characteristics

    OpenShift Pipelines 1.20 introduced StatefulSet-based deployments as an alternative to leader election. With stable pod identities and deterministic work assignment, the sharding approach achieves consistent ownership of reconciliation work and predictable distribution across replicas. This enables more effective utilization across replicas and higher throughput under concurrent load.

    Test setup

    We ran 1,000 PipelineRuns (4,000 TaskRuns total) on Red Hat OpenShift 4.x using the math benchmark scenario from our performance test suite. The math scenario executes a basic pipeline with 4 simple tasks that pass parameters and results between them, designed to stress the controller and scheduler without external dependencies.

    Cluster configuration:

    • Control plane: 3× m6a.2xlarge (8 vCPUs, 32 GB memory each)
    • Compute plane: 5× m6a.2xlarge (8 vCPUs, 32 GB memory each)
    • Controller pods: 10 replicas, each allocated 1 CPU core and 2 GiB memory

    We measured execution time, scheduling delays, and controller resource utilization using Prometheus metrics collected every 30 seconds. Full test configuration and scripts are available in our performance repository.
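As an illustration of how such signals can be queried, the PromQL below shows the general shape. The metric and label names are assumptions and vary by Pipelines/Tekton version; check your controller's metrics endpoint for the exact names:

```
# Average PipelineRun duration (histogram sum / count; name may vary by version)
rate(tekton_pipelines_controller_pipelinerun_duration_seconds_sum[5m])
  / rate(tekton_pipelines_controller_pipelinerun_duration_seconds_count[5m])

# Work distribution per controller replica (standard controller workqueue metric)
sum by (pod) (workqueue_depth{namespace="openshift-pipelines"})
```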

    We stress-tested both HA approaches across concurrency levels ranging from 50 to 200 pipelines. Results are based on the test setup, using OpenShift Pipelines 1.20 with both deployment and StatefulSet configurations (10 replicas each). Performance may vary based on cluster size and workload characteristics.

    The following table shows the average time taken to complete a single PipelineRun at different concurrency levels. The gap widens as concurrency increases, demonstrating the performance impact of the different HA approaches under load.

    Concurrent Pipelines | Deployment (Leader Election) | StatefulSet (Sharding) | Improvement
    --- | --- | --- | ---
    50 | 30.4 s | 8.8 s | 3.5× faster
    100 | 79.3 s | 14.2 s | 5.6× faster
    150 | 127.7 s | 41.7 s | 3× faster
    200 | 176.1 s | 57.3 s | 3× faster
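The improvement factors follow directly from the measured averages (the table rounds them to one significant figure at the higher concurrency levels):

```python
# Derive the improvement factors from the measured average
# PipelineRun durations (seconds) at each concurrency level.
durations = {
    50: (30.4, 8.8),
    100: (79.3, 14.2),
    150: (127.7, 41.7),
    200: (176.1, 57.3),
}
for concurrency, (deployment, statefulset) in durations.items():
    speedup = deployment / statefulset
    print(f"{concurrency} concurrent: {speedup:.1f}x faster")
```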

    This behavior is further illustrated by examining workload distribution across controller replicas.

    Workload distribution across controller replicas

    The heatmap in Figure 1 shows the TaskRun distribution across the pipeline controller pods. Higher color density indicates that a specific controller pod handled more TaskRuns.

    Figure 1: Workload distribution across the deployment-based and StatefulSet-based implementations.

    In the deployment-based controller, the distribution is uneven. A subset of controller pods handles a disproportionate share of TaskRuns, while others remain underutilized, especially at higher concurrency levels. This leads to localized load concentration, where a subset of controllers becomes the bottleneck, limiting overall throughput.

    In contrast, the StatefulSet-based controller shows a much more uniform distribution. TaskRuns are spread consistently across replicas, so every controller replica contributes a balanced share of the work.

    The impact is twofold:

    • Better parallelism: Work is processed across multiple controllers instead of being concentrated on a few.
    • Higher utilization: Fewer idle replicas mean more efficient use of the resources already allocated.

    These improvements are not driven solely by faster execution, but also by reduced queuing, improved work distribution, and more effective utilization of controller resources under concurrency.

    Why this matters

    The improvements are not limited to overall pipeline duration. They are visible across multiple system-level metrics at higher concurrency.

    This table shows key improvements at 200 concurrent pipelines:

    Metric | Deployment (Leader Election) | StatefulSet (Sharding) | Improvement
    --- | --- | --- | ---
    PipelineRun duration (avg) | 176.1 s | 57.3 s | ~3× faster
    TaskRun duration (avg) | 87.9 s | 44.7 s | ~2× faster
    TaskRun scheduling delay (avg) | 31.7 s | 14.5 s | ~2.2× lower
    Controller CPU usage (avg) | ~0.42 cores | ~0.70 cores | Higher utilization
    Controller memory usage (avg) | ~4.5 GB | ~5.0 GB | +12% (slight increase)
    Workqueue depth (avg) | ~658 | ~2649 | ~4× higher concurrency

    Observations:

    • Faster feedback loops: Pipeline execution time drops significantly, reducing developer wait time from minutes to under a minute at scale.
    • Reduced queuing delays: Lower TaskRun scheduling delay indicates the system spends less time waiting and more time executing.
    • Better resource utilization: Higher CPU usage reflects effective parallel processing rather than idle replicas.
    • Higher concurrent processing capacity: StatefulSet maintains 4× more items in active processing, indicating better parallelization across controller replicas rather than sequential bottlenecks.
    • Predictable performance: More consistent execution enables reliable capacity planning.  

    This is not a marginal improvement. It significantly impacts overall delivery time at scale.

    Making the switch

    OpenShift Pipelines 1.20 and later support StatefulSet-based controller deployments (based on Tekton v0.56.0+).

    To enable StatefulSet mode, patch the TektonConfig resource:

    kubectl patch TektonConfig/config --type merge --patch \
      '{"spec":{"pipeline":{"performance":{"statefulset-ordinals":true,"buckets":1,"replicas":1}}}}'

    This enables StatefulSet mode with a single replica.

    Note: For StatefulSets, the buckets and replicas values must match to ensure even work distribution. To achieve better performance, you can configure higher values based on your workload concurrency.

    Higher replica counts improve parallelism but also increase resource usage. Monitor CPU and memory utilization to determine the optimal configuration for your workload.
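For example, a configuration mirroring the 10-replica test setup above keeps buckets equal to replicas. The values here are illustrative rather than a recommendation, and note that Tekton has historically capped buckets at 10:

```
kubectl patch TektonConfig/config --type merge --patch \
  '{"spec":{"pipeline":{"performance":{"statefulset-ordinals":true,"buckets":10,"replicas":10}}}}'
```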

    Recommended approach:

    1. Test in staging with your actual workloads.
    2. Measure the improvements against your baseline metrics.
    3. Tune buckets and replicas based on observed performance.
    4. Roll out to production during a maintenance window.

    No major architectural changes are required. The controller behavior remains compatible with existing pipelines. We repeated the same experiments on OpenShift Pipelines 1.21 and observed comparable performance improvements across concurrency levels.

    Scaling pipelines: Recommendations for production

    For most production workloads at scale, StatefulSet is the recommended choice. StatefulSet-based deployments provide significantly better performance, more predictable behavior, and improved resource utilization.

    OpenShift Pipelines 1.20+ introduces a simple change with measurable impact. The trade-off is minimal, and the gains are substantial, making it a practical optimization for teams looking to improve pipeline efficiency at scale.
