
How HAProxy router settings affect middleware applications

July 2, 2025
Francisco De Melo Junior
Related topics:
Containers, Hybrid Cloud, Integration, Kubernetes, Microservices, Operators, Serverless
Related products:
Red Hat OpenShift, Red Hat OpenShift Container Platform, Red Hat OpenShift Serverless

    Red Hat OpenShift is the leading hybrid cloud application platform, built on top of Kubernetes and bringing together a comprehensive set of tools and services that streamline the entire application lifecycle.

    Among the embedded features it provides is the Ingress route, which is used to expose applications for external access (e.g., Data Grid access via a Route, or JBoss EAP via HTTP). The Ingress route provides HAProxy load-balancing features for distributing the load. It is installed by default, and no extra setting is required by the user.

    This article briefly explains how to use these features and how HAProxy route settings affect middleware applications such as:

    • Red Hat JBoss Enterprise Application Platform 8 (JBoss EAP), which is a server for middleware applications that delivers enterprise-grade security, performance, and scalability in any environment.
    • Red Hat OpenShift Serverless, which simplifies the development of hybrid cloud applications by eliminating the complexities associated with Kubernetes and the infrastructure that applications are developed and deployed on.
    • Red Hat Data Grid 8, which is a high-performance distributed caching solution that boosts application performance, provides greater deployment flexibility, and minimizes the overhead of standing up new applications.

    OpenShift Container Platform Ingress route and HAProxy

    OpenShift 4 provides the Route as the main option for external access to an application, alongside alternatives such as the NodePort and LoadBalancer service types.

    As a very brief summary, this means that for routing requests, OpenShift relies on HAProxy to handle internal route distribution to all backends on the cluster. Default ingress handling routes incoming requests for *.apps.cluster.domain on ports 443 and 80 to the infra nodes, and on to the router pod scheduled on that host node, which translates the request to the exposed route/service and the linked backend pods, as defined in a constantly updated config file: haproxy.config.
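
    To see which router pods handle this traffic and where they are scheduled, you can list the pods in the openshift-ingress namespace. This is a minimal check, assuming the default Ingress Controller (whose pods are named router-default-*):

    # List the router pods and the nodes they are scheduled on
    $ oc -n openshift-ingress get pods -o wide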

    Load balancing

    The router is a Red Hat OpenShift Container Platform 4 feature analogous to the Ingress object in Kubernetes. It is backed by the OpenShift Container Platform 4 HAProxy load balancer and ships with some default configurations, such as the default balance annotation:

    haproxy.router.openshift.io/balance:

     Sets the load-balancing algorithm. Available options are random, source, roundrobin, and leastconn.

    The default value is source for TLS passthrough routes. All other routes default to random; the most "fair" distribution would be roundrobin, although random can be adequate in terms of distribution as well.

    See the HAProxy documentation on balance for more on the specifics of these algorithm types.
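
    If a different algorithm is needed for a specific route, the balance annotation can be set directly on that route. A minimal sketch, reusing the mta route from the troubleshooting example later in this article:

    # Switch one route to round-robin distribution (annotation value: roundrobin)
    $ oc -n openshift-mta annotate route mta \
        haproxy.router.openshift.io/balance=roundrobin --overwrite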

    Although not the main point of this discussion, all OCP routes that go through the Ingress Controller have a default wildcard fully qualified domain name (FQDN): *.apps. This can be changed on the default Ingress Controller, or via an additional Ingress Controller plus a custom certificate.
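
    As a rough sketch (the name, domain, and certificate secret below are placeholders), an additional Ingress Controller with its own wildcard domain and custom certificate could look like this:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: custom                          # hypothetical controller name
      namespace: openshift-ingress-operator
    spec:
      domain: apps2.example.com             # placeholder domain, serves *.apps2.example.com
      replicas: 2
      defaultCertificate:
        name: custom-wildcard-cert          # hypothetical TLS secret in openshift-ingress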

    Configuring TLS security profiles

    A quick note about TLS security for the Ingress Controller: it is not required to set the Ingress Controller to the unmanaged state in order to set a different TLS security profile. Quite the opposite: on the Ingress Controller, one can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are accepted. See more details here. This can be relevant, for example, for applications using a re-encrypt route, where the router pod sits in the middle of the communication and its ciphers therefore matter.
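
    As an illustration (a minimal sketch, assuming the predefined Intermediate profile is the one you want), the profile can be set on the default Ingress Controller with a merge patch:

    $ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge \
        -p '{"spec":{"tlsSecurityProfile":{"type":"Intermediate","intermediate":{}}}}'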

    Session stickiness and cookies

    The default options in the HAProxy configuration set the following value:

        cookie ID insert indirect nocache httponly secure attr SameSite=None

    This means the preserve option is not used, so the behavior is, as explained in the HAProxy Configuration Manual: "If the server sets such a cookie itself, it will be removed, unless the 'preserve' option is also set."

    In terms of controlling stickiness, HAProxy uses cookies for session stickiness. In other words, HAProxy's cookie determines which endpoint the traffic will be sent to. If those cookies are disabled, the router simply uses the configured load-balancing algorithm to decide where traffic goes, which is random by default for edge and re-encrypt routes.
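
    Both behaviors can be adjusted per route through annotations. A minimal sketch (the route name is illustrative; the annotation names are the route-specific annotations documented for the OpenShift router):

    # Turn off the HAProxy stickiness cookie for a single route:
    $ oc annotate route <route-name> haproxy.router.openshift.io/disable_cookies=true

    # Or keep stickiness but choose the name of the cookie HAProxy inserts:
    $ oc annotate route <route-name> router.openshift.io/cookie_name=mycookie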

    Troubleshooting

    For a deep-dive investigation, it is useful to gather the HAProxy router statistics so you can see the backends for each pod and confirm the balance. Refer to this solution: How to review HAProxy router pod statistics on OpenShift 4.x.

    Secondly, there is the haproxy.config configuration itself. You can access it as follows:

    $ oc cp <router-default-podname>:haproxy.config -n openshift-ingress ./haproxy.config
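
    Once the file is local, the backend generated for a given route can be inspected to confirm the balance algorithm and the endpoints behind it. A minimal sketch (for edge routes the backend name commonly follows the be_edge_http:<namespace>:<route> pattern; adjust to your route):

    # Show the backend section for the mta route, including the balance line
    # and the server/endpoint entries:
    $ grep -A 20 'backend be_edge_http:openshift-mta:mta' ./haproxy.config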

    Finally, some items can be quickly verified, for instance the annotations in the Route YAML. In the example below, MTA creates a route (see the owner reference to the Tackle CR):

    $ oc get route
    NAME   HOST/PORT                                                               PATH   SERVICES   PORT    TERMINATION     WILDCARD
    mta    mta-openshift-mta.apps.example.com          mta-ui     <all>   edge/Redirect   None
    ...
    $ oc get route mta -o yaml
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      annotations:
        haproxy.router.openshift.io/timeout: 300s <---
        openshift.io/host.generated: "true"       <---
      creationTimestamp: "2024-07-26T18:12:17Z"
      labels:
        app: mta
        app.kubernetes.io/component: route
        app.kubernetes.io/name: mta-ui
        app.kubernetes.io/part-of: mta
      name: mta <----------------------- name
      namespace: openshift-mta <-------- namespace
      ownerReferences:
      - apiVersion: tackle.konveyor.io/v1alpha1 <---
        kind: Tackle <--------
        name: tackle <--------
    spec:
      host: mta-openshift-mta.apps.example.com
      tls:
        insecureEdgeTerminationPolicy: Redirect
        termination: edge
      to:
        kind: Service
        name: mta-ui
        weight: 100
      wildcardPolicy: None

    To verify annotations such as the balance, get the route, export it to YAML, and then check the spec, labels, and annotations. In the example above, this was done with oc get route/<name> -o yaml. Let's understand its output:

    • The route is created by MTA's Tackle, given ownerReferences.
    • The route binds the pods in the mta-ui Service.
    • It has edge termination, not passthrough or re-encrypt.
    • insecureEdgeTerminationPolicy means that insecure traffic is redirected instead of disabled (None) or allowed (Allow).

    Regarding annotations:

    • openshift.io/host.generated: "true" indicates a generated route: the route did not specify route.spec.host at creation, so OpenShift generated the hostname and added this annotation.
    • haproxy.router.openshift.io/timeout: 300s sets a server-side timeout for the route (in TimeUnits). In this example, HAProxy waits up to 300 seconds for responses from the MTA application; if a response takes longer than 300 seconds, it closes the connection. Such annotations can also be added or changed on an existing route, as shown below.
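
    A minimal sketch of adjusting that annotation in place (the value is illustrative):

    # Raise the server-side timeout for the mta route to 600 seconds:
    $ oc -n openshift-mta annotate route mta \
        haproxy.router.openshift.io/timeout=600s --overwrite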

    Implications and scenarios

    There are a few implications of HAProxy's configuration as currently set: the default configuration cannot be modified, HAProxy will ignore the application's cookies, and HAProxy will overwrite cookies with the same name.

    Let's examine the scenarios below.

    Scenario 1: JBoss EAP 7 externalization to Data Grid 8

    JBoss EAP 8 (and JBoss EAP 7) provides an externalization feature that allows sessions to be stored in Data Grid 8, combining JBoss EAP's request handling with Data Grid's performance in handling cache entries.

    In technical terms, externalization is the act of sending JBoss EAP sessions (which are tracked via cookies) to Data Grid, for reasons such as memory footprint and high availability. There are two main approaches to doing that:

    • Option 1: Infinispan Session Manager (ISM)

      Infinispan Session Manager uses an embedded Infinispan cache rather than a remote cache, so the characteristics depend on how the embedded cache is configured (e.g., replication, distribution, invalidation plus a shared cache store, or local).
      One of the embedded cache modes can be the invalidation cache, where there are no client events and no near cache, and EAP is assumed to be in a cluster. ISM is the embedded cache version, not a remote cache. Generally, you configure it with a shared persistent cache store, and the invalidation cache acts as an L1, so it already serves this purpose with no need for near caching.

    • Option 2: Hotrod Session Manager (HSM)

      Hot Rod Session Manager assumes the EAP pods are not in a cluster, and it has a near cache. As the name suggests, HSM is the Hot Rod session manager, i.e., remote caching using the Hot Rod protocol.

    What matters for this article is that JBoss EAP 7/JBoss EAP 8 sends cookies for session control, and HAProxy will ignore the cookie provided by JBoss EAP when externalizing from EAP to Data Grid; therefore, the optimization that the cookie provides is lost.

    In this matter, it is not possible to set the HAProxy template to cookie sessionID indirect preserve in order to preserve the original JBoss EAP cookie. One option, although not supported, would be to override the default template with a custom router deployment using a modified haproxy.config, as sketched below.
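
    For illustration only (this is an unsupported sketch of how the cookie directive in haproxy.config would have to change, reusing the default options shown earlier; JSESSIONID is just an example cookie name):

        # Current default template: router-owned cookie, server-set cookies are removed
        cookie ID insert indirect nocache httponly secure attr SameSite=None
        # Hypothetical modified template: keep an application-provided cookie
        cookie JSESSIONID insert indirect preserve nocache httponly secure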

    Scenario 2: Application sending cookies 

    Similar to the previous scenario, for any application sending its own cookie (e.g., an OpenShift Serverless application sending a cookie to Data Grid), the following applies:

    • If the application sends a cookie for stickiness, the current template of OpenShift 4's HAProxy will ignore it.
    • If the application sets the same cookie name as HAProxy, the current template of OpenShift 4's HAProxy will overwrite it.

    As mentioned above, HAProxy will not use the application's cookie; therefore, neither the application cookie nor the Serverless cookie can be passed through in the case of externalization, for instance in Data Grid communication with an external client.

    The Red Hat OpenShift Serverless Operator can be installed via OperatorHub. After its installation, the user creates the following via its APIs: the Knative Serving, Knative Eventing, and Knative Kafka Custom Resources. Example:

    $ oc get kservice -o yaml
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: showcase
      namespace: knative-serving
      annotations:
        serving.knative.dev/creator: cluster-admin
        serving.knative.dev/lastModifier: cluster-admin
    spec:
      template:
        metadata:
          annotations:
            queue.sidecar.serving.knative.dev/resourcePercentage: "40" <---- 40% of the user-container will be queue-proxy

    Therefore, using Serverless (Knative) routing to configure session stickiness to Data Grid via OpenShift's Ingress route is not possible. For example, creating a Knative Service will declare a Route, unless the user sets the serving.knative.openshift.io/disableRoute: "true" annotation on the Serverless Service Custom Resource, as sketched below. However, that's not the main object of this discussion.
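
    A minimal sketch of that annotation, reusing the showcase Service from the example above:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: showcase
      namespace: knative-serving
      annotations:
        serving.knative.openshift.io/disableRoute: "true"   # do not create an OpenShift Route for this Service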

    Moreover, if the application sets the same cookie name as HAProxy, then HAProxy will overwrite it. Consequently, at the moment it is not possible to use the same cookie name. For that to work, RFE-3724 is required, which aims to allow a custom HAProxy configuration.

    Scenario 3: SNI imposition

    HAProxy will impose SNI.

    As explained in the solution DG 8 TLS SNI usage in Client server communication, when a client outside OpenShift Container Platform connects to a Data Grid server inside OpenShift Container Platform via the OpenShift Container Platform router, SNI becomes a limitation when using two servers (a main server and a backup server), given that there is currently a limitation in the Hot Rod client, which only allows one SNI to be set. After SNI is fixed in the client, it would be possible to use routes with direct access to each node (one route per node, configured as its external host).

    In terms of alternatives, the options are:

    • Deploy everything in OpenShift Container Platform, which avoids the Route/NodePort/LoadBalancer hop and would improve performance.
    • Expose the service without a route, for instance via LoadBalancer or NodePort (see the sketch below).

    In summary, SNI is required so that HAProxy knows which endpoint the TLS connection is for without decrypting it and inspecting the HTTP Host header; therefore, SNI is imposed even on passthrough routes.
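
    As a rough sketch of the second alternative (names, namespace, and labels are placeholders; the selector must match your Data Grid pods), a LoadBalancer Service for the Hot Rod port could look like this:

    apiVersion: v1
    kind: Service
    metadata:
      name: datagrid-hotrod-lb        # hypothetical name
      namespace: datagrid             # hypothetical namespace
    spec:
      type: LoadBalancer
      selector:
        app: infinispan-pod           # assumption: label carried by your Data Grid pods
      ports:
      - name: hotrod
        port: 11222                   # default Data Grid single-port endpoint
        targetPort: 11222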

    Future developments and improvements

    As previously mentioned, the current OpenShift 4.18 default implementation:

    • Ignores the application cookie, JBoss EAP cookie, or serverless cookie.
    • Overwrites any cookie with the same name.
    • Does not allow configuration changes such as preserve.

    However, in the future, RFE-3724 will allow modifications to the default HAProxy configuration. This applies to both scenarios above and will allow the user to set the HAProxy configuration to preserve, rather than overwrite, an application-provided cookie.

    This article briefly described HAProxy and some troubleshooting steps, followed by the scenarios where HAProxy settings have an impact, namely JBoss EAP 7/8 externalization and Serverless cookies. Finally, we discussed future developments, how they can impact these specific scenarios, and how they extrapolate.

    To learn more about container awareness, read Java 17: What’s new in OpenJDK's container awareness.

    For any other specific inquiries, please open a case with Red Hat support. Our global team of experts can help you with any issues. 

    Special thanks to Alexander Barbosa, Will Russell, Courtney Ruhm, and Jacob Shivers for the excellent work we have done together over the last several years on OpenShift Container Platform networking, in collaboration with the Middleware team.
