So you need more than port 80: Exposing custom ports in Kubernetes

January 28, 2026
Simon Delord
Related topics: Hybrid cloud, Kubernetes, Virtualization
Related products: Microsoft SQL Server on Red Hat Enterprise Linux, Red Hat OpenShift, Red Hat OpenShift Virtualization

    In Kubernetes, many developers are familiar with using Ingresses and Services to expose web-based containerized applications over HTTP and HTTPS.

    However, as part of my job, I am often asked about "uncommon" ports (UDP ports, or other TCP ports and combinations of the two) and how to run applications on them and expose them outside the cluster.

    Why would you want to expose or run these "nonstandard" ports? Here are a few common examples:

    • TCP and UDP ports for database and database clustering connections (see Microsoft SQL Server and PostgreSQL).
    • As more virtual machines (VMs) are deployed in Kubernetes using the KubeVirt upstream project, you must offer SSH (TCP port 22) and RDP (TCP 3389 and UDP 3389) connections to those VMs.
    • UDP ports often support applications like real-time gaming, video conferencing, DNS name resolution, and some IoT functions.

    In this post, I explain how to create Kubernetes services that support ports other than TCP/80 and TCP/443. I also show how to expose those services outside the cluster for cloud and datacenter deployments.

    What are Kubernetes Services and Ingresses?

    In Kubernetes, a Service is a way to expose an application running as one or more Pods in your cluster. The Service is an abstraction to help expose groups of Pods over a network. A selector usually determines which Pods a Service targets.

    You can think of a Service as a "Kubernetes-native reverse proxy." A reverse proxy takes inbound traffic, looks at some metadata (hostname, path, headers, TCP port), and forwards it to the correct backend instance. Along the way it might load balance, retry, hide the backend's identity, or keep connections warm. It sits out front saying, "Don't worry about finding the servers; talk to me."

    A Kubernetes Service behaves the same way, only simplified and embedded into the cluster fabric. A Service gives you a single virtual IP address. In Red Hat OpenShift, it provides a DNS service name. Any traffic that hits that IP is quietly funneled to one of the Pods behind it. The caller doesn't know or care about Pod IP churn because the Service keeps that turbulence out of sight. This is the core of the reverse proxy ethos: a stable façade over mutable internals.
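
    For example, assuming a Service named my-service in a namespace called udp-app (a hypothetical namespace; we reuse it in a later example), any Pod in the cluster can reach the Service through the standard Kubernetes DNS naming convention, without ever knowing a Pod IP:

    # Resolves to the Service's stable virtual IP from inside the cluster
    nslookup my-service.udp-app.svc.cluster.local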

    There are several types of Services. The most common is the ClusterIP type, which makes the Service reachable only from within the cluster. Figure 1 shows this process.

    A Test Pod in one namespace connects to a ClusterIP service in a second namespace, which then routes traffic to a target Pod.
    Figure 1: Communication between Pods through an internal service.

    This diagram shows a Pod (on the right-hand side) served by an internal ClusterIP Service in green, linked by a selector. Communication from another Pod (called Pod Test) will route via the internal Service to reach the target Pod.

    To make the ClusterIP service accessible from outside the cluster, the standard option is to expose it using an Ingress, as shown in Figure 2. (OpenShift extends this concept with Routes, which offer additional enterprise-grade features.)

    An external client connects to an Ingress via a cloud or on-premise load balancer. The Ingress uses a routing rule to send traffic to an internal service, which then reaches the target Pod.
    Figure 2: Accessing a Pod through an Ingress and load balancer.

    With an Ingress, you create a load balancer that allows external traffic (represented by the computer) to reach the Pod through the Service.

    The drawback with an Ingress is that it does not expose arbitrary ports or protocols. Exposing services other than HTTP (TCP:80) and HTTPS (TCP:443) to the internet typically requires a Service of type NodePort or LoadBalancer.
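
    For instance, a NodePort Service is the simplest way to expose a nonstandard port: Kubernetes opens the same port on every node's IP. Here is a minimal sketch with hypothetical names (the nodePort value must fall within the cluster's node port range, 30000-32767 by default):

    kind: Service
    apiVersion: v1
    metadata:
      name: nodeport-udp
    spec:
      type: NodePort
      ports:
        - name: simple-udp
          protocol: UDP
          port: 5005
          targetPort: 5005
          nodePort: 30005
      selector:
        app: my-udp-app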

    So for nonstandard traffic, you create a Service of type LoadBalancer to allow traffic from outside the cluster to reach the Pod(s). This is shown in Figure 3.

    An external client connects via a cloud or on-premise load balancer to an external LoadBalancer service, which then routes traffic to a target Pod.
    Figure 3: Exposing a Pod using an external load balancer.

    In this diagram, a Pod (on the right-hand side) is served both by an internal Service (of type ClusterIP, in green) and by an external Service (of type LoadBalancer, in red), each matching the Pod through a selector. The external Service is also coupled to a load balancer that can sit outside the cluster (typically with cloud providers like AWS, Google, or Azure) or inside the cluster (for datacenter deployments).

    In the rest of this blog, I demonstrate how both the ClusterIP and LoadBalancer Service types can be used for traffic that does not use TCP:80 or TCP:443, i.e., any other UDP or TCP port, or any combination thereof.

    A little more about MetalLB

    MetalLB appears in the Figure 3 diagram, and it deserves a few more words.

    A Kubernetes LoadBalancer Service is the cluster's way of saying, "Give me a stable, externally reachable IP and handle all the messy routing for me." Inside the cluster it behaves like any other Service: a single virtual address that forwards traffic to the matching Pods.

    The special bit is what happens outside the cluster. When you create a Service of type LoadBalancer, Kubernetes tells the underlying infrastructure to provision a real, outward-facing load balancer. In cloud environments, this might be an AWS Elastic Load Balancer (ELB), Azure Load Balancer, Google Cloud Platform forwarding rule, or another service.

    In cloud platforms, load balancers are conjured by APIs, but in the datacenter, there is no such magic. This is where MetalLB helps; it essentially teaches your cluster to speak the language of real routers, giving LoadBalancer Services first-class treatment without any cloud provider behind them.

    The overall setup

    In this post, I demonstrate a Pod listening on UDP:5005. The setup for a nonstandard TCP port (that is, other than 80 or 443) follows the same process. See this GitHub repository for more details and examples if you are interested.

    Figure 4 illustrates the architecture for this setup, showing how both the internal and external services connect to the target Pod.

    External clients route traffic through cloud or on-premise load balancers and an external service to a target Pod, which also receives cluster-local traffic via an internal service.
    Figure 4: Comprehensive overview of the internal and external service architecture.

    After deploying the udp-pod Pod (see the GitHub repository for the build and deployment), I created two Services:

    • my-service, an internal Kubernetes Service of type ClusterIP:

      kind: Service
      apiVersion: v1
      metadata:
        name: my-service
      spec:
        ports:
          - name: simple-udp
            protocol: UDP
            port: 5005
            targetPort: 5005
        type: ClusterIP
        selector:
          app.kubernetes.io/name: kubernetes-services-git
    • external-lb-udp, an external Kubernetes Service of type LoadBalancer:

      kind: Service
      apiVersion: v1
      metadata:
        name: external-lb-udp
        namespace: metallb-system
        annotations:
          ### specific to the load balancer used
      spec:
        ports:
          - protocol: UDP
            port: 5005
            targetPort: 5005
        type: LoadBalancer
        selector:
          app.kubernetes.io/name: kubernetes-services-git

    On-premise, colocation, and datacenter deployments

    For these deployments, you can use MetalLB to expose the Service, as shown in the external-lb-udp Service annotations.

      annotations:
        metallb.universe.tf/address-pool: ip-addresspool-beehive
        metallb.io/ip-allocated-from-pool: ip-addresspool-beehive

    You can find more information on the MetalLB setup in the GitHub repository.
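
    For reference, the address pool itself is defined through MetalLB custom resources. Here is a minimal sketch of what the ip-addresspool-beehive pool and its Layer 2 advertisement could look like (the address range is an assumption; use addresses routable on your external network):

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: ip-addresspool-beehive
      namespace: metallb-system
    spec:
      addresses:
        - 192.168.0.200-192.168.0.210
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: l2-advertisement-beehive
      namespace: metallb-system
    spec:
      ipAddressPools:
        - ip-addresspool-beehive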

    Looking at the my-service and external-lb-udp Services, we can see that both are pointing to the udp-pod Pod.

    First, let's look at my-service (Figure 5).

    The my-service details page shows a Cluster IP address of 172.30.221.139 and a service port mapping for simple-udp on port 5005 using the UDP protocol.
    Figure 5: Configuration details for the internal ClusterIP service.

    We can see that my-service has an IP address of 172.30.221.139 and maps port 5005/UDP to port 5005/UDP on the udp-pod.

    And then let's look at external-lb-udp (Figure 6).

    The external-lb-udp details page shows an external load balancer address of 192.168.0.201 and a UDP port mapping for port 5005.
    Figure 6: Configuration details for the external LoadBalancer service.

    Similarly, we can see that external-lb-udp has an IP address of 192.168.0.201 (192.168.0.0/24 is the external network our test system runs on) and maps port 5005/UDP to port 5005/UDP on the udp-pod.
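
    You can confirm the same details from the command line. Here is a sketch of the expected output (placeholders mark values that vary per cluster):

    $ oc get svc my-service
    NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
    my-service   ClusterIP   172.30.221.139   <none>        5005/UDP

    $ oc get svc external-lb-udp -n metallb-system
    NAME              TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)
    external-lb-udp   LoadBalancer   <cluster-ip>   192.168.0.201   5005:<node-port>/UDP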

    Now, we can test access to this udp-pod using the following command:

    echo "Message" | nc -u <target_ip> <target_port>

    The -u flag for Netcat specifies UDP.

    You can run this command either from another Pod, where the traffic will route via the my-service Service, or from your laptop, where the traffic will route via the external-lb-udp Service.
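
    For example (assuming my-service lives in the hypothetical udp-app namespace; adjust to your own):

    # From another Pod in the cluster, via the internal Service's DNS name
    echo "traffic sent from within the cluster" | nc -u my-service.udp-app.svc.cluster.local 5005

    # From a laptop on the external network, via the MetalLB-assigned address
    echo "traffic sent from outside the cluster" | nc -u 192.168.0.201 5005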

    If I send messages from another Pod within my cluster, the udp-pod output shows the following:

    Listening on UDP port 5005
    b'Hello UDP\n'
    b'Hello from outside\n'
    b'Hello from rainy Melbourne\n'
    b'traffic sent from within the cluster\n'

    If I run the command from an external computer, I get the following:

    Listening on UDP port 5005
    b'Hello UDP\n'
    b'Hello from outside\n'
    b'Hello from rainy Melbourne\n'
    b'traffic sent from within the cluster\n'
    b'traffic sent from outside the cluster\n'

    We've managed to expose access to the port on the Pod both internally (via a ClusterIP Service type) and externally (via a LoadBalancer Service type).

    Cloud deployments

    For a cloud-based deployment, the only configuration difference concerns the load balancer, which sits outside of the cluster (as opposed to using MetalLB, where the load balancer sits within the cluster).

    Here is the external Service YAML definition for a cloud load balancer:

    kind: Service
    apiVersion: v1
    metadata:
      name: external-udp-service
      namespace: simon-udp-app
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: nlb
    spec:
      ports:
        - name: simple-udp
          protocol: UDP
          port: 5005
          targetPort: 5005
      type: LoadBalancer
      selector:
        app.kubernetes.io/name: kubernetes-services-git
    

    The only difference compared to the Service deployed for the datacenter use case is the annotations component.

    Figure 7 shows the Service after deployment.

    The Services list shows the external-udp-service with a location value corresponding to an Amazon Web Services (AWS) network load balancer URL.
    Figure 7: Service location details showing an AWS network load balancer.

    The Location column points to an AWS load balancer.
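
    If you prefer the CLI, you can read the provisioned load balancer's address straight from the Service status. This jsonpath query prints the NLB's DNS name:

    $ oc get svc external-udp-service -n simon-udp-app \
        -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'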

    I can now perform the same test on the external side. Here is the output:

    Listening on UDP port 5005
    b'external udp on aws\n'

    Access to the port on the Pod is now exposed both internally using a ClusterIP Service and externally using a LoadBalancer Service. This requires only minimal configuration changes, such as adding annotations for the external Service.

    How about Ingress for on-premise, colocation, and datacenter deployments?

    In previous blog posts, I discussed Ingress sharding and creating multiple Ingress controllers to expose various applications.

    To demonstrate on-premise support, I added the following IngressController configuration to use MetalLB. This defines the loadBalancer value under endpointPublishingStrategy and includes a routeSelector.

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: ingress-metal-lb
      namespace: openshift-ingress-operator
    spec:
      nodePlacement:
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/worker: ''
      domain: apps.metal-lb.melbourneopenshift.com
      routeSelector:
        matchLabels:
          type: externalapplications
      endpointPublishingStrategy:
        loadBalancer:
          scope: External
        type: LoadBalancerService
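
    Once the IngressController reconciles, the Ingress Operator creates the corresponding router Pod and Services in the openshift-ingress namespace. Here is a sketch of checking them from the CLI (the external IP matches Figure 9 below; hash suffixes and placeholder values will differ on your cluster):

    $ oc get svc -n openshift-ingress
    NAME                               TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)
    router-ingress-metal-lb            LoadBalancer   <cluster-ip>   192.168.0.202   80:<node-port>/TCP,443:<node-port>/TCP
    router-internal-ingress-metal-lb   ClusterIP      <cluster-ip>   <none>          80/TCP,443/TCP,1936/TCP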

    Figure 8 provides a simplified view of the cluster architecture.

    External client traffic routes through a MetalLB on-premise setup to an external LoadBalancer service, then passes through a Router Pod and Route to an internal service and target NGINX Pod.
    Figure 8: Architecture for on-premise ingress using MetalLB.

    The ingress-metal-lb Ingress controller runs in addition to the default controller. It uses a specific Pod called router-ingress-metal-lb-c97d8589f-h2cbv.

    The router-ingress-metal-lb-c97d8589f-h2cbv Pod is targeted by two services: an internal router-internal-ingress-metal-lb and an external router-ingress-metal-lb. Note that the external service uses the IP address 192.168.0.202 from the MetalLB address pool (Figure 9).

    The Services list in the OpenShift console displays the internal and external router-ingress services within the openshift-ingress project.
    Figure 9: Service list showing the router targeted by internal and external services.

    Now if we deploy a simple NGINX Pod with a Service on port 80 and expose it via a route that is mapped to the ingress-metal-lb controller, we have the following:

    • A ClusterIP Service called nginx-service with an IP address of 172.30.33.141 pointing to a Pod called nginx on TCP port 80 (http). (A sketch of this Service appears after the route definition below.)
    • A route with the type: externalapplications label to make sure it goes through the ingress-metal-lb controller we created earlier.
    kind: Route
    apiVersion: route.openshift.io/v1
    metadata:
      name: external-app
      namespace: metallb-system
      labels:
        type: externalapplications
    spec:
      host: external-app.apps.metal-lb.melbourneopenshift.com
      to:
        kind: Service
        name: nginx-service
        weight: 100
      port:
        targetPort: 80
      wildcardPolicy: None
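
    For completeness, here is a minimal sketch of the nginx-service the route points to (the selector label is an assumption and must match how your NGINX Pod is labeled):

    kind: Service
    apiVersion: v1
    metadata:
      name: nginx-service
      namespace: metallb-system
    spec:
      type: ClusterIP
      ports:
        - name: http
          protocol: TCP
          port: 80
          targetPort: 80
      selector:
        app: nginx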

    You can then access the newly created external route using a web browser or a curl call to http://external-app.apps.metal-lb.melbourneopenshift.com, as shown in Figure 10.

    The NGINX welcome page is displayed in a web browser, while a terminal window shows the matching HTML source code from a curl command.
    Figure 10: Successful connection to the NGINX application via the custom route.

    Conclusion

    In this blog, I explained how to configure and expose Kubernetes Services using ports other than the standard TCP/80 (HTTP) and TCP/443 (HTTPS) ports.

    The process for setting up and exposing those Services requires minimal configuration changes (typically only modifying the annotations for the external Service), regardless of whether the Kubernetes environment is in a datacenter or in the cloud.

    Thanks to Shane Boulden, Derek Waters, and Kiran Patel for their thorough review and feedback.
