Red Hat Data Grid is a stable product for distributed in-memory caching solutions. In Red Hat OpenShift Container Platform, Data Grid 8 can be deployed either with the Data Grid Operator or with Helm Charts; both approaches deploy containers running Data Grid whose pods then join together to form a cluster, providing clustering capabilities.
Given a Data Grid cluster deployed as described above and with the pods running, the next step is to access the Data Grid cluster.
When access happens within the OpenShift Container Platform cluster, that is, when the application accessing Data Grid runs inside OpenShift Container Platform, the communication can go through the internal service. However, if the application is outside OpenShift Container Platform, Data Grid must expose a service or route to enable the connection, which can be done in a few ways: via Route, LoadBalancer, and NodePort.
Below, these options are described in the context of the Data Grid Operator and the Data Grid Helm charts, respectively. Parts of this article are derived from this solution: Data Grid 8 Operator Exposition Route vs NodePort vs LoadBalancer in OpenShift 4
Application inside OpenShift Container Platform
The internal service can be used if a Hot Rod application is deployed in OpenShift Container Platform, either in the same namespace or in another namespace. The internal service won't be accessible outside the OpenShift Container Platform environment.
The options are to deploy with the automatically generated certificate, to use a custom certificate, or to use no certificate at all.
Example:
When deploying any basic cluster, the internal services will be created:
$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example-infinispan ClusterIP 172.??.???.??? <none> 11222/TCP 7m28s
example-infinispan-admin ClusterIP None <none> 11223/TCP 7m28s
example-infinispan-ping ClusterIP None <none> 8888/TCP 7m28s
infinispan-operator-controller-manager-service ClusterIP 172.??.???.??? <none> 443/TCP 22m
The internal service's fully qualified domain name can then be fetched as follows:
oc get service example-infinispan -o go-template --template='{{.metadata.name}}.{{.metadata.namespace}}.svc.cluster.local{{println}}'
For more details and an excellent example, see this blog post by Alexander Barbosa.
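As an illustrative sketch (not part of the original article), a Hot Rod client running inside OpenShift Container Platform could use that fully qualified service name directly. The namespace mynamespace and the developer credentials below are assumptions; if endpoint encryption is enabled, the client would additionally need to trust the service certificate:
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class InternalServiceClient {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        // Internal service FQDN, as returned by the oc get service command above (namespace is assumed)
        builder.addServer()
               .host("example-infinispan.mynamespace.svc.cluster.local")
               .port(11222);
        // Application credentials for the Data Grid cluster (assumed values)
        builder.security().authentication()
               .username("developer")
               .password("changeme");
        try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
            System.out.println("Connected: " + cacheManager.isStarted());
        }
    }
}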
Application outside OpenShift Container Platform
Data Grid Operator setup
Given a Hot Rod application deployed outside OpenShift Container Platform, Data Grid must be exposed through the Infinispan custom resource (CR), and the exposure can be of three types: Route, a service of type LoadBalancer, or a service of type NodePort.
Route
By setting the Infinispan CR to expose type Route via spec.expose.type: Route, a Route will be created, along with another service to interact with the application:
Example:
kind: Infinispan
...
spec:
  ...
  expose:
    type: Route
  service:
    type: DataGrid
Once the route is created, the user can retrieve it via oc get route:
$ oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
dg-cluster-route-external example.openshift.org dg-cluster-route 11222 passthrough None
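As a hedged sketch (not from the original article), a Hot Rod client outside the cluster typically connects to the route hostname on port 443 with TLS, since the passthrough route imposes SNI. The trust store path and credentials below are assumptions:
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class RouteClient {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        // Route host from 'oc get route'; passthrough routes are reached on port 443 (hostname assumed)
        builder.addServer()
               .host("example.openshift.org")
               .port(443);
        // The route imposes SNI, so encryption must be configured on the client side
        builder.security().ssl()
               .enable()
               .sniHostName("example.openshift.org")
               .trustStoreFileName("/path/to/truststore.p12")   // trust store containing the server certificate (assumed path)
               .trustStorePassword("secret".toCharArray());
        // Application credentials for the Data Grid cluster (assumed values)
        builder.security().authentication()
               .username("developer")
               .password("changeme");
        try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
            System.out.println("Connected: " + cacheManager.isStarted());
        }
    }
}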
LoadBalancer
By setting the Infinispan CR to expose type LoadBalancer via spec.expose.type: LoadBalancer, a service of type LoadBalancer is created. The OpenShift Container Platform infrastructure must provide the load balancer, as not all providers offer this feature.
Example:
kind: Infinispan
...
spec:
  ...
  expose:
    type: LoadBalancer
  service:
    type: DataGrid
Once the service is created, the user can retrieve it via oc get svc:
$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dg-cluster-lb ClusterIP 127.0.0.1 <none> 11222/TCP 30m
dg-cluster-lb-admin ClusterIP 127.0.0.1 <none> 11223/TCP 30m
dg-cluster-lb-external LoadBalancer 127.0.0.1 example.amazonaws.com 11222:30436/TCP 30m
dg-cluster-lb-ping ClusterIP None <none> 8888/TCP 30m
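As a brief sketch under the same assumptions as the Route example, a client outside the cluster would point at the load balancer's external hostname on the Hot Rod port 11222:
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class LoadBalancerClient {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        // EXTERNAL-IP of the dg-cluster-lb-external service from the output above
        builder.addServer()
               .host("example.amazonaws.com")
               .port(11222);
        // Authentication and encryption, if enabled, are configured as in the earlier examples
        try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
            System.out.println("Connected: " + cacheManager.isStarted());
        }
    }
}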
If the provider does not have that feature, for example, when running OpenShift on bare metal without a LoadBalancer implementation available, the following error message will appear:
LoadBalancer expose type is not supported on the target platform
NodePort
By setting the Infinispan CR to expose type NodePort via spec.expose.type: NodePort, a service of type NodePort is created.
Example:
kind: Infinispan
...
spec:
  ...
  expose:
    type: NodePort
  service:
    type: DataGrid
Once the service is created, the user can retrieve it via oc get svc:
$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dg-cluster-lb ClusterIP 127.0.0.1 <none> 11222/TCP 30m
dg-cluster-lb-admin ClusterIP 127.0.0.1 <none> 11223/TCP 30m
dg-cluster-lb-external NodePort 127.0.0.1 <none> 11222:30436/TCP 30m
dg-cluster-lb-ping ClusterIP None <none> 8888/TCP 30m
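A similar hedged sketch for NodePort: the client targets any reachable worker node's address (the hostname below is an assumption) and the node port allocated by the service, 30436 in the output above:
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class NodePortClient {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        // Any reachable cluster node plus the allocated node port (node hostname is assumed)
        builder.addServer()
               .host("worker-0.example.com")
               .port(30436);
        // Authentication and encryption, if enabled, are configured as in the earlier examples
        try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
            System.out.println("Connected: " + cacheManager.isStarted());
        }
    }
}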
Data Grid Helm Charts
For the Data Grid Helm charts, the options are the same; the exposure type is set through deploy.expose.type in the chart's values.
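For example, a minimal values snippet passed to the chart could look like the following (a sketch; the surrounding values depend on the rest of your deployment):
deploy:
  expose:
    type: Route   # or LoadBalancer, or NodePort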
Summary
The table below summarizes and compares each option.
Comparison Table
| Type of expose | How to set it up in Data Grid Operator | How to set it up in Data Grid Helm Charts | Implications |
| --- | --- | --- | --- |
| NodePort | spec.expose.type: NodePort | deploy.expose.type: NodePort | A service of type NodePort is created |
| LoadBalancer | spec.expose.type: LoadBalancer | deploy.expose.type: LoadBalancer | A service of type LoadBalancer is created |
| Route | spec.expose.type: Route | deploy.expose.type: Route | A service of type ClusterIP and a Route (backed by HAProxy) are created |
Conclusion
Above, I described the approaches to access Data Grid deployed in OpenShift Container Platform 4, which include NodePort, LoadBalancer, and Route exposition. Each method provides certain features and functionality but has its disadvantages as well.
The usage of Route, for example, although very simple in OpenShift Container Platform 4 and providing load balancing, requires respecting the limitations and boundaries of OpenShift Container Platform 4's Ingress Controller and, therefore, of HAProxy.
This is important because, in Data Grid externalization scenarios, HAProxy might overwrite the cookies created by an application externalizing sessions to Data Grid, since its default template does not preserve the original cookie. HAProxy is usually configured via route annotations, and not all features have annotations readily available. Customizing haproxy-config.template is currently not possible in OpenShift Container Platform 4, but it may be added later.
Moreover, the fact that HAProxy is the middleman has implications not only for settings but also for SNI validation: although the OpenShift Container Platform 4 route created by Data Grid is passthrough, the Route will still impose SNI, and a default certificate is in use, so encryption settings might need to be considered.
On the other hand, a service of type LoadBalancer does not provide full load balancing capabilities like HAProxy does, and it requires the underlying OpenShift Container Platform infrastructure to provide this feature. NodePort is basic and provides no load balancing; it simply gets external traffic directly to the Data Grid cluster.
Finally, in terms of deployment, both the Data Grid Operator and the Helm Charts provide an easy way to deploy the chosen exposure method with a simple setting:
- The Infinispan custom resource, in the case of a Data Grid Operator deployment
- The values YAML, in the case of a Helm Charts deployment
Additional resources
To learn more about Data Grid 8.4, see its main page. For the Data Grid Operator, see its main guide. Finally, Data Grid 8 Operator Exposition Route vs NodePort vs LoadBalancer in OpenShift 4 covers other interesting topics about externalization using Route vs LoadBalancer vs NodePort.
For any other specific inquiries, please open a case with Red Hat support. Our global team of experts can help you with any issues.
Special thanks to Alexander Barbosa for his input and contributions to Data Grid over the years.