
Beyond guesswork: Generating accurate ingress firewall rules with oc commatrix

Avoid firewall misconfiguration in OpenShift with oc commatrix

April 2, 2026
Amal Abu Gosh, Shir Moran
Related topics:
Automation and management, Security
Related products:
Red Hat OpenShift

    Every Red Hat OpenShift admin has been there. You open a spreadsheet someone last touched six months ago, squint at a row that says "port 8080 openshift-network-operator" and wonder: Is this still the full picture? Spoiler alert — it almost never is.

    Firewall misconfiguration remains one of the top causes of post-installation failures in OpenShift. And the root cause isn't carelessness. It's that clusters are living systems. Operators get installed, services spin up NodePorts, pods claim host ports, custom MachineConfigPools appear. The static documentation you wrote on day one starts drifting on day two.

    That's the problem OpenShift Commatrix CLI was built to solve.

    How firewall rules were managed before

    A typical workflow for firewall rules used to look something like this:

    1. Start with the docs. Look up the required ports for the OpenShift version, including the API server, etcd, kubelet, OVN, and so on.
    2. Account for operators manually. If you installed something like MetalLB or the SR-IOV network operator, you'd refer to each operator's documentation for additional ports.
    3. Write the firewall rules by hand. Transcribe ports into nftables, grouped by node role.
    4. Repeat for each cluster. Different platforms, topologies, or installed operators meant different rules.
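    Step 3 above is where drift usually begins. As a rough illustration (the port list and file name here are hypothetical, transcribed by hand the way the manual workflow requires), a control-plane ruleset might look like this:

```shell
# Hypothetical hand-transcribed nftables ruleset for control-plane nodes.
# Ports copied manually from the OpenShift docs; file name is illustrative.
cat > master-rules.nft <<'EOF'
table inet openshift_firewall {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    tcp dport { 6443, 22623 } accept   # API server, machine config server
    tcp dport { 2379, 2380 } accept    # etcd
    tcp dport 10250 accept             # kubelet
  }
}
EOF
grep -c 'dport' master-rules.nft
# prints 3
```

    Every operator installed later means another manual edit to a file like this, on every cluster, for every pool.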

    This works, but it has some well-known limitations:

    • The firewall configuration goes stale: An operator installed after the initial setup can open new ports that nobody updates in the firewall config.
    • It misses things: Host-networked pods, hostPort containers, and custom MachineConfigPools don't appear in generic documentation.
    • It's hard to verify: There's no straightforward way to check whether the rules you wrote actually match what the cluster needs.

    What oc commatrix does differently

    Instead of relying on external documentation, oc commatrix inspects the live cluster directly, generates an ingress communication matrix, and can format the result as nftables rules that are ready to apply:

    $ oc commatrix generate --format nft

    The ingress communication matrix is generated declaratively by leveraging the Kubernetes EndpointSlice API. This approach enables automated discovery of exposed ports across the cluster, including:

    • LoadBalancer services
    • NodePort services
    • Host-networked pods (hostNetwork: true)
    • HostPort containers (containers with explicit hostPort mappings)
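    To see the kind of data this discovery is built on, here is a rough sketch of pulling declared ports out of an EndpointSlice. The JSON fragment is a hypothetical sample; on a real cluster the data would come from `oc get endpointslices -A -o json`:

```shell
# Hypothetical EndpointSlice fragment (abbreviated); real data comes from
# `oc get endpointslices -A -o json`.
cat > slice.json <<'EOF'
{"kind":"EndpointSlice","metadata":{"namespace":"openshift-ingress"},
 "ports":[{"name":"http","protocol":"TCP","port":80},
          {"name":"https","protocol":"TCP","port":443}]}
EOF
# Extract the declared ports -- roughly what commatrix automates cluster-wide.
grep -o '"port":[0-9]*' slice.json | cut -d: -f2
# prints:
# 80
# 443
```

    oc commatrix does this across every namespace and service type, so nothing exposed through the API is missed.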

    On top of that, it adds static entries for well-known system services (kubelet, OVN, sshd) and adjusts them based on the cluster's environment. Several characteristics are detected automatically:

    • Platform: AWS, bare metal, or platform-none
    • Topology: Highly Available, Single Node (SNO), Multi Node (MNO), or HyperShift
    • Network stack: IPv4, IPv6, or dual-stack
    • Node groups: MachineConfigPools, HyperShift NodePools, or role-based fallback

    The output is an ingress communication matrix that reflects your specific cluster.

    Note: The communication matrix relies on EndpointSlices to discover exposed ports. EndpointSlices are automatically created by Kubernetes for each Service. All core OpenShift ports are fully covered by the matrix as of the OpenShift 4.21 release. However, non-core operators that do not expose a Service do not appear in the matrix. For a full picture of the cluster's listening ports (as seen with ss), use the --host-open-ports option to capture the ports actually open on the nodes. You can also compare this output with the declared ports using the generated diff file, which shows the differences between the intended and actual state.

    Key improvements over the manual approach

    Automation is easy to justify on convenience grounds alone, but oc commatrix delivers more than convenience. There are at least five concrete improvements over the manual process:

    1. Automatic operator port discovery

    When an operator creates a NodePort or LoadBalancer service, it shows up in the cluster's EndpointSlice data. Because oc commatrix reads that data directly, all operator-exposed ports are included without any manual lookup.

    2. Support for custom MachineConfigPools

    The manual approach typically assumes two node groups: master and worker. If you're running additional pools such as infra or gpu-workers, oc commatrix resolves the actual pool name from the machineconfiguration.openshift.io/currentConfig annotation on each node. With the nftables output format, it generates a separate rule file per pool:

    communication-matrix-master.nft
    communication-matrix-worker.nft
    communication-matrix-gpu-workers.nft
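    Because the pool name is encoded in each file name, downstream automation can route rules to the right nodes. The sketch below only demonstrates parsing the communication-matrix-&lt;pool&gt;.nft naming pattern (the empty files stand in for generated output); how you deliver each file to its nodes, for example via a MachineConfig, is cluster-specific and not shown:

```shell
# Stand-ins for generated per-pool rule files (empty for illustration).
touch communication-matrix-master.nft communication-matrix-gpu-workers.nft

# Derive the pool name from each file name and decide where it belongs.
for f in communication-matrix-*.nft; do
  pool=${f#communication-matrix-}   # strip the fixed prefix
  pool=${pool%.nft}                 # strip the extension
  echo "apply $f to nodes in pool: $pool"
done
```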

    3. Validation against actual listening ports

    The --host-open-ports flag adds a verification step. It deploys debug pods on each node, runs ss to find ports that are actually listening, and then diffs the results against the EndpointSlice-derived matrix:

     Ingress,TCP,22,Host system service,sshd,,,master,true
     Ingress,TCP,80,openshift-ingress,router-internal-default,...,master,false
    - Ingress,TCP,111,Host system service,rpcbind,,,master,true
    + Ingress,UDP,59975,,rpc.statd,,,master,false

    Lines with a minus sign (-) are in the matrix but not actually listening. Lines with a plus sign (+) are listening but not in the matrix. This makes it straightforward to spot gaps between your firewall rules and what the cluster is actually doing.
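    The diff is plain text, so it is easy to post-process. As a sketch, here is one way to pull out the ports that are listening but missing from the matrix, using a hypothetical saved copy of the diff shown above:

```shell
# Hypothetical saved copy of the diff output shown above.
cat > matrix.diff <<'EOF'
 Ingress,TCP,22,Host system service,sshd,,,master,true
 Ingress,TCP,80,openshift-ingress,router-internal-default,...,master,false
- Ingress,TCP,111,Host system service,rpcbind,,,master,true
+ Ingress,UDP,59975,,rpc.statd,,,master,false
EOF
# Ports listening on the node but absent from the matrix (protocol,port):
grep '^+' matrix.diff | cut -d, -f2,3
# prints: UDP,59975
```

    Feeding such output into a review step makes the "intended versus actual" check repeatable rather than a one-off manual audit.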

    4. Direct firewall rule generation

    Rather than generating a CSV and translating it into firewall rules manually, you can output nftables rules directly:

    $ oc commatrix generate --format nft

    The generated ruleset includes loopback allowances, connection tracking for established sessions, ICMP handling, and a default-drop policy with logging.

    5. The benefits of automation

    Finally, there are all the usual benefits of using automation over error-prone manual processes. Here's a comparison of how oc commatrix handles common tasks:

    Aspect                Manual approach                   oc commatrix
    Source of truth       Static docs or spreadsheets       Live cluster state
    Operator ports        Manual audit for each operator    Auto-discovered via EndpointSlice
    Node groups           master / worker only              Any MCP, NodePool, or role
    Platform awareness    Separate docs for each platform   Adapts automatically
    Topology support      Typically HA only                 HA, SNO, MNO, HyperShift
    IPv6                  Often missed                      Auto-detected
    Validation            None                              Diffs against live ports from ss
    Firewall output       Manual transcription              Native nftables generation

    Getting started

    The core idea behind oc commatrix is simple: instead of maintaining a separate record of what ports your cluster needs, derive it from the cluster itself. This keeps the output accurate as the cluster evolves, handles platform and topology differences automatically, and reduces the manual work involved in configuring and verifying firewall rules.

    $ oc commatrix generate

    To set up oc commatrix on your OpenShift cluster, see the official guide.

