Red Hat OpenShift is the leading hybrid cloud application platform, built on top of Kubernetes, bringing together a comprehensive set of tools and services that streamline the entire application lifecycle.
Among the embedded features it provides is the Ingress route, which is used to expose applications for external access (e.g., Data Grid access via a Route, or JBoss EAP via HTTP). The Ingress route provides HaProxy load-balancing features for distributing the load. It is installed by default, and no extra setting is required from the user.
This article briefly explains how to use these features and how HaProxy route settings affect middleware applications such as:
- Red Hat JBoss Enterprise Application Platform 8 (JBoss EAP), which is a server for middleware applications that delivers enterprise-grade security, performance, and scalability in any environment.
- Red Hat OpenShift Serverless, which simplifies the development of hybrid cloud applications by eliminating the complexities associated with Kubernetes and the infrastructure that applications are developed and deployed on.
- Red Hat Data Grid 8, which is a high-performance distributed caching solution that boosts application performance, provides greater deployment flexibility, and minimizes the overhead of standing up new applications.
OpenShift Container Platform Ingress route and HaProxy
OpenShift 4 provides the Route as the main option for external access to an application, alongside alternatives such as the NodePort and LoadBalancer service types.
As a very brief summary, this means that for routing requests, OpenShift relies on HaProxy to handle internal route distribution to all backends on the cluster. Default ingress handling routes incoming requests for *.apps.cluster.domain on ports 443 and 80 to the infra nodes, and on to the running router pod scheduled on that host node, which translates the request directly to the exposed route/service and the linked backend pods, as defined in a constantly updated config file: haproxy.config.
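To see the router pods that handle this traffic (assuming the default Ingress Controller, whose pods live in the openshift-ingress namespace), you can list them along with the nodes they are scheduled on:
$ oc -n openshift-ingress get pods -o wide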
Load balancing
The router is the Red Hat OpenShift Container Platform 4 feature analogous to the Ingress object in Kubernetes. By default, OpenShift Container Platform 4 ships the HaProxy load balancer with some default configurations, such as the default balance algorithm:
haproxy.router.openshift.io/balance: Sets the load-balancing algorithm. Available options are random, source, roundrobin, and leastconn. The default value is source for TLS passthrough routes. All other routes default to random, where the most "fair" distribution would be roundrobin. However, random balancing can be adequate as well in terms of distribution.
See the HaProxy documentation on balance for more on the specifics of these algorithm types.
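For example, to switch a route to round-robin balancing (a hypothetical command, reusing the mta route from the troubleshooting example below; note that the annotation value is spelled roundrobin):
$ oc annotate route mta haproxy.router.openshift.io/balance=roundrobin --overwrite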
Although not the main point of this discussion, all OCP routes that go through the Ingress Controller have a default wildcard fully qualified domain name (FQDN): *.apps. This can be changed on the default Ingress Controller, or via another Ingress Controller plus a custom certificate.
Configuring TLS security profiles
A quick note about TLS security for the Ingress Controller: it is not required to set the Ingress Controller to the unmanaged state in order to use a different TLS security profile. Quite the opposite: one can use a TLS (Transport Layer Security) security profile on the Ingress Controller to define which TLS ciphers are allowed. See more details here. This can be relevant, for example, for applications using a re-encrypt route, where the router pod sits in the middle of the communication, so its ciphers will play a role.
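As a minimal sketch of how such a profile is applied (assuming the default Ingress Controller and one of the predefined profiles, here Intermediate):
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  tlsSecurityProfile:
    type: Intermediate
    intermediate: {}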
Session stickiness and cookies
The default options in the HaProxy configuration set the following value:
cookie ID insert indirect nocache httponly secure attr SameSite=None
This means the preserve option is not used. The resulting behavior, as explained in the HaProxy Configuration Manual, is: "If the server sets such a cookie itself, it will be removed, unless the 'preserve' option is also set."
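You can observe the router-inserted cookie from outside the cluster (an illustrative check, reusing the example hostname from the troubleshooting section below; the exact cookie name is a hash generated by the router):
$ curl -sk -D - -o /dev/null https://mta-openshift-mta.apps.example.com | grep -i set-cookie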
In terms of controlling stickiness, HaProxy uses cookies for session stickiness control. In other words, HaProxy's cookies determine which endpoint the traffic is sent to. If those cookies are disabled, HaProxy falls back on the configured load-balancing algorithm, which defaults to random for edge and re-encrypt routes.
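If you prefer to rely purely on the balance algorithm, the stickiness cookie can be disabled per route (again a hypothetical command against the mta route):
$ oc annotate route mta haproxy.router.openshift.io/disable_cookies=true --overwrite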
Troubleshooting
For a deep-dive investigation, it is useful to pull a haproxy-gather so you can see the backends for each pod and confirm the balance algorithm. Refer to this solution: How to review HaProxy router pod statistics on OpenShift 4.x.
Secondly, you can access the haproxy.config configuration as follows:
$ oc cp <router-default-podname>:haproxy.config -n openshift-ingress ./haproxy.config
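In the copied file, each route maps to a backend whose name encodes the termination type, namespace, and route name (for example, be_edge_http:<namespace>:<route-name> for edge routes), so you can inspect its server entries and balance setting directly:
$ grep -A10 'backend be_edge_http:openshift-mta:mta' ./haproxy.config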
Finally, some items can be quickly verified, for instance, the annotations in the Route YAML. In the example below, MTA creates a route (see the owner reference to the Tackle CR):
$ oc get route
NAME   HOST/PORT                            PATH   SERVICES   PORT    TERMINATION     WILDCARD
mta    mta-openshift-mta.apps.example.com          mta-ui     <all>   edge/Redirect   None
...
$ oc get route mta -o yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/timeout: 300s <---
    openshift.io/host.generated: "true" <---
  creationTimestamp: "2024-07-26T18:12:17Z"
  labels:
    app: mta
    app.kubernetes.io/component: route
    app.kubernetes.io/name: mta-ui
    app.kubernetes.io/part-of: mta
  name: mta <----------------------- name
  namespace: openshift-mta <-------- namespace
  ownerReferences:
  - apiVersion: tackle.konveyor.io/v1alpha1 <---
    kind: Tackle <--------
    name: tackle <--------
spec:
  host: mta-openshift-mta.apps.example.com
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: edge
  to:
    kind: Service
    name: mta-ui
    weight: 100
  wildcardPolicy: None
To verify annotations such as the balance algorithm, get the route, export it to YAML, and then inspect the spec, labels, and annotations, as done in the example above with oc get route/<name> -o yaml. Let's understand its output:
- The route is created by MTA's Tackle, given ownerReferences.
- The route binds the pods in the mta-ui Service.
- It has termination edge, not passthrough or reencrypt.
- insecureEdgeTerminationPolicy: Redirect means that insecure traffic is redirected instead of disabled (None) or allowed (Allow).
Regarding annotations:
- openshift.io/host.generated: "true": a generated route does not specify route.spec.host at creation and lets OpenShift generate the hostname for the route. The generated route then carries this annotation.
- haproxy.router.openshift.io/timeout: 300s sets a server-side timeout for the route (TimeUnits). In this example, HaProxy will wait up to 300 seconds for responses from the MTA application; if a response takes longer than 300 seconds, it closes the connection.
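To adjust this value on an existing route (a hypothetical command, reusing the mta route; TimeUnits such as s, m, and h are accepted):
$ oc annotate route mta haproxy.router.openshift.io/timeout=300s --overwrite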
Implications and scenarios
There are a few implications of HaProxy's configuration as currently set: the default configuration cannot be modified, HaProxy will ignore the application's cookies, and HaProxy will overwrite cookies with the same name.
Let's examine the scenarios below.
Scenario 1: JBoss EAP externalization to Data Grid 8
JBoss EAP 8 (and JBoss EAP 7) provides the externalization feature that allows sessions to be stored in Data Grid 8, combining JBoss EAP's strength in handling requests with Data Grid's performance in handling cache entries.
In technical terms, externalization is the act of sending JBoss EAP sessions (which are tracked via cookies) to Data Grid, for reasons such as memory footprint and high availability. There are two main approaches to doing that:
Option 1: Infinispan Session Manager (ISM)
Infinispan Session Manager uses an embedded Infinispan cache rather than a remote cache, so its characteristics depend on how the embedded cache is configured (e.g., replication, distribution, invalidation plus a shared cache store, or local). One of the embedded cache modes is the invalidation-cache, where there are no client events and there is no near-cache; it assumes JBoss EAP is in a cluster. ISM is the embedded-cache version, not a remote cache. Generally, you configure it with a shared persistent cache store, and the invalidation-cache acts as an L1, so it already serves this purpose with no need for near-caching.
Option 2: HotRod Session Manager (HSM)
HotRod Session Manager assumes the JBoss EAP pods are not in a cluster, and it has a near-cache. As the name suggests, HSM is the HotRod session manager, i.e., remote caching using HotRod.
What matters for this article is that JBoss EAP 7/JBoss EAP 8 sends cookies for session control, and HaProxy ignores the cookie provided by JBoss EAP when externalizing sessions from JBoss EAP to Data Grid; therefore, the optimization that the cookie provides is lost.
On this matter, it is not possible to set the HaProxy template to cookie sessionID indirect preserve in order to preserve the original JBoss EAP cookie. One option, although not supported, would be to override the default template with a custom router deployment using a modified haproxy.config.
Scenario 2: Application sending cookies
Similar to the previous scenario, any application sending its own cookie (e.g., an OpenShift Serverless application sending a cookie to Data Grid) is subject to the following:
- If the application sends a cookie for stickiness, the current template of OpenShift 4's HaProxy will ignore it.
- If the application sets the same cookie name as HaProxy, the current template of OpenShift 4's HaProxy will overwrite it.
As mentioned above, HaProxy will not use the application's cookie, whether it is a plain application cookie or a Serverless cookie (so it cannot be passed through), in the case of externalization, for instance, Data Grid communication with an external client.
The Red Hat OpenShift Serverless Operator can be installed via OperatorHub. After its installation, the user creates the following via its APIs: the Knative Serving, Knative Eventing, and Knative Kafka custom resources. Example:
$ oc get kservice -o yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: showcase
  namespace: knative-serving
  annotations:
    serving.knative.dev/creator: cluster-admin
    serving.knative.dev/lastModifier: cluster-admin
spec:
  template:
    metadata:
      annotations:
        queue.sidecar.serving.knative.dev/resourcePercentage: "40" <---- 40% of the user-container will be queue-proxy
Therefore, using Serverless (Knative) routing to configure session stickiness to Data Grid via OpenShift's Ingress route is not possible. For example, creating a Serverless Serving will declare a Route, unless the user sets the serving.knative.openshift.io/disableRoute: "true" annotation in the Serverless Serving custom resource, as shown below. However, that's not the main subject of this discussion.
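As a sketch of that annotation in place (the container image shown is illustrative; the disableRoute annotation is the point here):
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: showcase
  annotations:
    serving.knative.openshift.io/disableRoute: "true"
spec:
  template:
    spec:
      containers:
      - image: quay.io/openshift-knative/showcase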
Moreover, if the application sets the same cookie name as HaProxy, then HaProxy will overwrite it. Consequently, it is currently not possible to use the same cookie name. For that to work, RFE-3724, which aims to allow a custom haproxy-configuration, is required.
Scenario 3: SNI imposition
HaProxy will impose SNI.
As explained in the solution DG 8 TLS SNI usage in Client server communication, when a client outside OpenShift Container Platform connects to a Data Grid server inside OpenShift Container Platform via the OpenShift Container Platform router, SNI becomes a limitation when using two servers (a main server and a backup server). Currently there is a limitation in the Hot Rod client, which only allows one SNI to be set. After SNI is fixed in the client, it would be possible to use routes with direct access to each node (one route per node, configured as its external host).
In terms of alternatives, the options are:
- Deploy everything in OpenShift Container Platform, which avoids the Route/NodePort/LoadBalancer hop and improves performance.
- Expose the service not via a Route but, for instance, via LoadBalancer or NodePort, as in the sketch below.
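For example, with the Data Grid Operator, the exposure type is set on the Infinispan custom resource (a sketch, assuming the operator's spec.expose field; NodePort works the same way):
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
spec:
  replicas: 2
  expose:
    type: LoadBalancer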
Future developments and improvements
As previously mentioned, the current OpenShift 4.18 default implementation:
- Ignores the application cookie, JBoss EAP cookie, or serverless cookie.
- Overwrites any cookie with the same name.
- Does not allow configuration changes such as preserve.
However, in the future, RFE-3724 will allow customization of the default haproxy-configuration. This is applicable to the scenarios above and will allow the user to set the HaProxy configuration to preserve, rather than overwrite, an application-provided cookie.
This article briefly explained HaProxy and the related troubleshooting steps, followed by the scenarios where the HaProxy settings have an impact, namely JBoss EAP 7/8 externalization, serverless cookies, and SNI. Finally, we discussed future developments, how they can affect these specific scenarios, and how they extrapolate beyond them.
To learn more about container awareness, read Java 17: What’s new in OpenJDK's container awareness.
For any other specific inquiries, please open a case with Red Hat support. Our global team of experts can help you with any issues.
Special thanks to Alexander Barbosa, Will Russell, Courtney Ruhm, and Jacob Shivers for the excellent work we did in these last several years with OpenShift Container Platform networking in collaboration with the Middleware team.