Scaling OpenShift Network Policies: Results and Takeaways

August 11, 2025
Venkata Anil Kommaddi
Related topics:
Kubernetes
Related products:
Red Hat OpenShift, Red Hat OpenShift Container Platform, Red Hat OpenShift Local, Red Hat OpenShift Service on AWS


    In a previous blog post, we discussed the design of a new workload created specifically for network policy scale testing. This follow-up post delves into the results of those tests, evaluating the scalability of network policies and how scaling affects Open vSwitch (OVS) flow programming latency, system resources, and overall performance.


    Test Objectives

    Our primary objectives were to:

    • Evaluate the scalability of OpenShift Network Policies.
    • Measure OpenShift Network Policy readiness latency through connection testing.
    • Measure CPU and memory utilization during testing.


    Testing Environment

    Testing was conducted on a Red Hat OpenShift Service on AWS (ROSA) cluster running OpenShift Container Platform 4.16.18 with 24 worker nodes.


    Test Methodology

    Our kube-burner network policy workload uses two jobs. Both jobs ran the same number of iterations and operated on the same namespaces. In our 24-worker-node environment:

    Job 1:

    • Ran for 240 iterations.
    • Each iteration created one namespace.
    • Each created namespace contained 10 pods.

    Job 2:

    • Ran for the same number of iterations (240).
    • Each iteration targeted one of the namespaces created by Job 1.
    • Within each targeted namespace, 20 network policies were created.
    • Example: Job 2's first iteration created 20 network policies in namespace1 (which was created during Job 1's first iteration).
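
    The two-job setup above can be sketched as a kube-burner configuration along these lines (template file names, job names, and the namespace prefix are illustrative, not the actual workload definition from the previous post):

```yaml
jobs:
  - name: network-policy-perf-pods        # Job 1: namespaces and pods
    jobIterations: 240                    # one namespace per iteration
    namespacedIterations: true
    namespace: network-policy-perf
    objects:
      - objectTemplate: pod.yml           # illustrative template name
        replicas: 10                      # 10 pods per namespace
  - name: network-policy-perf-policies    # Job 2: policies in the same namespaces
    jobIterations: 240
    namespacedIterations: true
    namespace: network-policy-perf        # reuses Job 1's namespaces
    objects:
      - objectTemplate: network-policy.yml
        replicas: 20                      # 20 network policies per namespace
```

    Because both jobs iterate over the same namespace prefix, iteration N of Job 2 lands its 20 policies in the namespace created by iteration N of Job 1.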


    Network Policy Configuration

    For our testing, each network policy had the following configurations:

    | Configuration item | Value | Description |
    | --- | --- | --- |
    | `single_ports` | 5 | Number of single ports in `ingress.from.ports` or `egress.to.ports`. |
    | `port_ranges` | 5 | Number of port ranges in `ingress.from.ports` or `egress.to.ports`. |
    | `remote_namespaces` | 5 | Number of namespace labels in `ingress.from.namespaceSelector.matchExpressions`. |
    | `remote_pods` | 5 | Number of pod labels in `ingress.from.podSelector.matchExpressions`. |
    | `cidr_rules` | 5 | Number of `from.ipBlock.cidr` or `to.ipBlock.cidr` entries. |
    | `local_pods` | 10 | Number of local pods selected using `spec.podSelector.matchExpressions`. |
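
    Taken together, these knobs produce policies shaped roughly like the following (abbreviated to one entry per rule type, where the real policies carry five of each; the label keys, values, and CIDR shown are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy
spec:
  podSelector:                    # local_pods: selects the local pods
    matchExpressions:
      - {key: num, operator: In, values: ["1"]}
  policyTypes: ["Ingress", "Egress"]
  ingress:
    - from:
        - namespaceSelector:      # remote_namespaces: namespace labels
            matchExpressions:
              - {key: kubernetes.io/metadata.name, operator: In, values: ["namespace2"]}
          podSelector:            # remote_pods: pod labels
            matchExpressions:
              - {key: num, operator: In, values: ["2"]}
        - ipBlock:                # cidr_rules: CIDR entries
            cidr: 10.0.0.0/24
      ports:
        - {protocol: TCP, port: 8080}                 # single_ports
        - {protocol: TCP, port: 9000, endPort: 9005}  # port_ranges
```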

    For a detailed explanation of how the workload configuration options translate into network policy configurations, please refer to my previous blog post.


    Scenario 1: System Metrics Testing

    This scenario measured system metrics by creating network policies with ingress and egress rules, focusing on resource usage rather than network policy latency.

    • All tests used 240 namespaces, each with 10 pods.

    OVN resource counts grew as we scaled network policies. For example, about 403K OVS flows per node were created when each namespace had 20 network policies, rising to about 4,381K per node at 200 network policies per namespace.

    [Chart: OVN resources per node during the test]
    • Average ovs-vswitchd CPU usage stayed around 5% across all tests.

    [Chart: Average CPU usage when scaling network policies per namespace]
    [Chart: Memory usage (GiB) when scaling network policies per namespace]
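
    Using the per-node flow counts reported above (403K flows at 20 policies per namespace, 4,381K at 200), a quick back-of-the-envelope check shows the flow count scales roughly linearly with the total policy count, at about 84-91 OVS flows per policy:

```python
# Rough linearity check on the reported per-node OVS flow counts.
# (policies per namespace, OVS flows per node) taken from the charts above.
samples = [(20, 403_000), (200, 4_381_000)]

namespaces = 240  # every test created 240 namespaces
for policies_per_ns, flows in samples:
    total_policies = policies_per_ns * namespaces
    print(f"{policies_per_ns:>3} policies/ns -> "
          f"{flows / total_policies:.0f} flows per policy")
```

    The near-constant flows-per-policy ratio matches the "proportional increase" observation reported below.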


    Scenario 2: Network Policy Readiness Latency Testing

    This scenario tested the time taken for programming OVS flows by measuring connection latency between client and server pods when a network policy is applied.

    • Each network policy defined connections between 10 local pods and 25 remote pods. We tested all 250 connections for each network policy, and the maximum latency among them is reported as the network policy readiness latency.
    • All tests used 240 namespaces, each with 10 pods.
    [Chart: Network policy latency (P99) in milliseconds]
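
    The readiness-latency measurement can be approximated by polling each connection until it first succeeds and reporting the worst case. The following is a simplified sketch, not the actual kube-burner implementation; here `probe` stands in for a TCP connection attempt between a client pod and a server pod:

```python
import time

def readiness_latency(probes, timeout=60.0, interval=0.05):
    """Poll each probe until it first succeeds; return the max elapsed time.

    probes: callables that return True once the network policy's OVS
    flows are programmed and the connection is allowed through.
    """
    start = time.monotonic()
    latencies = []
    for probe in probes:
        while not probe():
            if time.monotonic() - start > timeout:
                raise TimeoutError("connection never became ready")
            time.sleep(interval)
        latencies.append(time.monotonic() - start)
    # The slowest connection defines the policy's readiness latency.
    return max(latencies)
```

    For each policy, the 250 probes (10 local pods x 25 remote pods) would be run this way, and the maximum is the number plotted in the chart above.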


    Observations

    • Observed a proportional increase in OVS flows, logical flows, and ACLs as the number of network policies grew.
    • Successfully scaled to 4,381K OVS flows per worker node.
    • Average ovs-vswitchd CPU usage was around 5% across all tests.
    • Observed OVN components not releasing memory after resource cleanup (reported bug OCPBUGS-44430).
    • Network policy readiness latency testing succeeded even at a maximum of 1,016K OVS flows per worker node. Readiness latency was 5.5 seconds with 412K OVS flows per worker node.
    • Worker node CPU usage was between 100% and 150% (100% = 1 core) during testing.
    • Worker node memory usage increased as OVS flows increased.
    • Ovnkube-node pod and worker node CPU and memory usage increased with the number of network policies.

    This scale testing provides valuable insight into the performance and resource utilization of OpenShift network policies at scale. These results help us understand the limitations and potential bottlenecks when deploying large numbers of network policies.

    Disclaimer: Please note the content in this blog post has not been thoroughly reviewed by the Red Hat Developer editorial team. Any opinions expressed in this post are the author's own and do not necessarily reflect the policies or positions of Red Hat.

    Related Posts

  • Scaling OpenShift Network Policies: Our Journey in Developing a Robust Workload Testing Tool
