
Externalized HTTP Session in an OpenShift 3.9 Environment

May 4, 2018
Mauro Vocale
Related topics: Java, Kubernetes
Related products: Red Hat OpenShift Container Platform


    In this article, I will show how you can implement a common use case that often happens when you migrate a classic Java EE application into a Red Hat OpenShift environment.

    Scenario

    Usually, a classic Java EE application stores a user's information, such as profile configuration, in the HTTP session. In a typical production scenario, several application server instances form a cluster to provide high availability, failover, and load balancing. To make sure that stateful information is preserved across the application server instances, you must distribute your session as described in the Java EE 7 specification, section EE.6.4, "Servlet 3.1 Requirements."

    But how can you do that in an xPaaS environment, such as OpenShift, where you usually deploy a stateless application?

    The recommended approach is to revisit your application and make it stateless; that way, it scales far more easily. However, a common request when a customer evaluates an xPaaS environment is to migrate an application without modifying it.

    Solution

    The solution to this problem is to distribute the application inside a cluster of Red Hat JBoss Enterprise Application Platform (EAP) instances. But you can improve your architecture through the externalization of the HTTP session from JBoss EAP to Red Hat JBoss Data Grid.

    JBoss Data Grid can be used as an external cache container for application-specific data in JBoss EAP, such as HTTP sessions. This allows scaling of the data layer independently of the application, and enables different JBoss EAP clusters, which may reside in various domains, to access data from the same JBoss Data Grid cluster. Using it, you could preserve your stateful data in scenarios like these:

    • Pod crashes caused by unpredictable problems
    • Deployment of a new version of your application into the xPaaS environment

    Furthermore, when sessions hold large objects, an external JBoss Data Grid cluster keeps the individual JBoss EAP nodes light and reduces their heap usage.

    Implementation

    For the implementation of this solution, I will use JBoss EAP and JBoss Data Grid inside OpenShift.

    Environment Configuration

    In this article, I will implement this architecture using:

    • Minishift v1.15.1 for my local environment
    • JBoss EAP 7.1.2
    • JBoss Data Grid 7.1.2
    • OpenShift 3.9
    • Apache Maven 3.3.9
    • Git 2.16.3

    Create OpenShift Node

    The first step is to create the OpenShift machine, using Minishift, in order to have an xPaaS environment. I will use Minishift v1.15.1 and OpenShift 3.9.

    Minishift

    If you don't have Minishift installed on your machine, you can download the Red Hat Container Development Kit.

    Then, check your Minishift installation:

    $ minishift version

    You should obtain output like this:

    minishift v1.15.1+f19ac09
    CDK v3.4.0-2
    

    Then create your OpenShift environment using the minishift start command as shown in the example below. The command is interactive and will prompt you for your Red Hat Subscription Management username and password (the same credentials you used to register for the Red Hat Developer program).

    $ minishift start --cpus 4 --memory 6144

    Then you will be able to connect to the web console (in my environment, at https://192.168.64.9:8443) using the following credentials:

    • Username: developer
    • Password: developer

    Openshift console
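
    If you prefer working from the command line, you can also log in to the same cluster with the oc client. The following is a minimal sketch that uses the console URL and credentials shown above (the IP address depends on your Minishift environment):

    $ oc login https://192.168.64.9:8443 -u developer -p developer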

     

    Create the Project

    Now you are able to create the project:

    $ oc new-project http-session-externalization --display-name="HTTP Session Externalization into JDG" --description="Project to demonstrate how to externalize EAP HTTP sessions into a remote JDG cluster"

    Install JBoss Data Grid and Scale It to a Cluster Size of Three

    Install JBoss Data Grid using the template and create the cache to store HTTP session information:

    $ oc new-app --template=datagrid71-basic -p CACHE_NAMES=http-session-cache -p MEMCACHED_CACHE=memcached

    Note: In order to avoid an issue related to JBoss Data Grid version 7.1.2, you must set MEMCACHED_CACHE to a value that is different from the default proposed by the template. In the command above, I set the MEMCACHED_CACHE parameter to memcached, but you can choose any value except the default.

    Scale up to three nodes:

    $ oc scale --replicas=3 dc datagrid-app

    The three instances create a cluster using the openshift.DNS_PING protocol, and the service datagrid-app-ping is responsible for managing the communication between JBoss Data Grid instances. The service is built through the template datagrid71-basic, which is used to create the JBoss Data Grid app.
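
    Before moving on, you can quickly confirm from the command line that the scale-up succeeded; a minimal check using the deployment configuration created by the template above:

    #### The deployment configuration should report 3 desired and 3 current replicas ####
    $ oc get dc datagrid-app
    #### All three datagrid-app pods should be in the Running state ####
    $ oc get pods | grep datagrid-app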

    Verify the JBoss Data Grid cluster

    To verify the cluster composition, go to Applications → Pods and select one of the pods named datagrid-app-x-xxxxx.

    Openshift console pods list

    Then click Open Java Console.

    Pod Details

    In the JMX tree, choose jboss.datagrid-infinispan → CacheManager → clustered → CacheManager and verify the value of Cluster size; it should be 3.

    Pod JMX Data Grid Console
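
    If you prefer the command line over the Java console, a rough alternative is to search a pod's logs for the cluster view that JGroups reports when a member joins; this is only a sketch, and the exact log wording may differ between JBoss Data Grid versions (replace the pod name with one of your datagrid-app pods):

    $ oc logs datagrid-app-1-xxxxx | grep -i "view"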

    Deploy Your Application in the JBoss EAP App

    Now it's time to deploy the application. You have two strategies to build and deploy it:

    • Using one of the JBoss EAP templates and a Git source repository
    • Using a binary build, the WAR artifact, and the official JBoss EAP container image

    I will show both implementations.

    JBoss EAP Template and Git Source Repository

    In order to create the app, execute the following command:

    $ oc new-app --template=eap71-basic-s2i -p SOURCE_REPOSITORY_URL=https://github.com/mvocale/http-session-counter-openshift.git -p SOURCE_REPOSITORY_REF= -p CONTEXT_DIR= -e JGROUPS_PING_PROTOCOL=openshift.DNS_PING -e OPENSHIFT_DNS_PING_SERVICE_NAME=eap-app-ping -e OPENSHIFT_DNS_PING_SERVICE_PORT=8888 -e CACHE_NAME=http-session-cache

    This launches a build of the source code stored in the Git repository, creates an application image, and finally deploys your application using the environment variables defined above.
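
    You can follow the build and the subsequent deployment from the command line; a minimal sketch, assuming the template's default application name, eap-app:

    #### Follow the S2I build logs until the application image is built and pushed ####
    $ oc logs -f bc/eap-app
    #### Check that the application pod is running once the deployment completes ####
    $ oc get pods | grep eap-app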

    Binary Build and JBoss EAP Container Image

    Sometimes you must work in an environment with restrictions that make operations such as Maven builds difficult. In that case, you should compile your project locally to produce a deployable artifact and then use it to release your application.

    First, clone the example Git project:

    $ cd ~/Projects
    $ git clone https://github.com/mvocale/http-session-counter-openshift.git

    Then compile the project using Maven:

    $ mvn clean package

    After that, create a directory that stores the artifact and the JBoss EAP configuration file (use the standalone-openshift.xml file that you can find in the configuration directory of the project):

    #### Change to the user home directory ####
    $ cd
    #### Create a deploy_dir directory ####
    $ mkdir deploy_dir
    #### Create a configuration directory inside it ####
    $ mkdir deploy_dir/configuration
    $ cd deploy_dir/
    #### Copy the artifact ####
    $ cp ~/Projects/http-session-counter-openshift/target/http-session-counter.war .
    #### Copy the JBoss EAP configuration file ####
    $ cp ~/Projects/http-session-counter-openshift/configuration/standalone-openshift.xml configuration

    Now deploy the application into OpenShift:

    #### Create a new build based on binary strategy ####
    $ oc new-build registry.access.redhat.com/jboss-eap-7/eap71-openshift --binary=true --name=eap-app
    #### Return to the user home directory, where the deploy_dir directory was previously created ####
    $ cd
    #### Start a new build ####
    $ oc start-build eap-app --from-dir=~/deploy_dir
    #### Create a new app and set the cluster attribute for JBoss EAP and the remote cache name where I want to store HTTP session information ####
    $ oc new-app eap-app -e JGROUPS_PING_PROTOCOL=openshift.DNS_PING -e OPENSHIFT_DNS_PING_SERVICE_NAME=eap-app-ping -e OPENSHIFT_DNS_PING_SERVICE_PORT=8888 -e JGROUPS_CLUSTER_PASSWORD=myPwd$!! -e CACHE_NAME=http-session-cache

    Then you should patch the deployment configuration in order to expose the Jolokia port in the eap-app service created by the new-app operation. The easiest way to do this is to use the web console: go to Applications → Deployments and select the eap-app deployment configuration.

    Openshift Deployment config list
    Then select Actions → Edit YAML in order to update it.

    Openshift deployment config edit YAML

    Now update the ports section of the deployment configuration using this code:

    ports:
      - containerPort: 8080
        name: http
        protocol: TCP
      - containerPort: 8443
        name: https
        protocol: TCP
      - containerPort: 8778
        name: jolokia
        protocol: TCP
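
    Alternatively, the same change can be applied from the command line with oc patch. The following is a sketch that assumes the JBoss EAP container is the first (and only) container in the pod template and that only the Jolokia port (8778) is missing; if other ports are missing, add them the same way:

    $ oc patch dc/eap-app --type=json -p '[{"op":"add","path":"/spec/template/spec/containers/0/ports/-","value":{"containerPort":8778,"name":"jolokia","protocol":"TCP"}}]'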

    Now create and expose the service needed for the JBoss EAP cluster configuration:

    $ oc expose dc eap-app --port=8888 --name=eap-app-ping --cluster-ip=None

    Finally, create the route needed to access the application:

    $ oc expose svc/eap-app
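
    A quick check that both objects exist (the route created by oc expose defaults to the service name, eap-app):

    $ oc get svc eap-app-ping
    $ oc get route eap-app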

    Scale Up the Application to a Cluster Size of Two

    After the deployment of your application is completed, regardless of the chosen deployment strategy, you should be able to scale up your application using this command:

    $ oc scale --replicas=2 dc eap-app

    Verify the JBoss EAP Cluster

    To verify the cluster composition, you should perform the same steps that you did for the JBoss Data Grid. The only difference is that you need to go to Applications → Pods and select one of the pods named eap-app-x-xxxxx.

    Then, once you have opened the Java console, choose org.wildfly.clustering.infinispan → CacheManager → web → CacheManager and verify the value of Cluster size; it should be 2.

    Pod JMX EAP Console

     

    The two instances create a cluster using the openshift.DNS_PING protocol, and the service eap-app-ping is responsible for managing the communication between JBoss EAP instances. If you want to use KUBE_PING as the protocol to build a cluster, you must perform the following steps:

    #### Create a service account of the name eap-service-account for EAP 7 ####
    $ oc create serviceaccount eap-service-account -n http-session-externalization
    
    #### Assign a view permission to the service account in the current project namespace ####
    $ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):eap-service-account -n $(oc project -q)
    
    #### Also assign the default service account the view access ####
    $ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)

    Then change or add the following environment variables in the deployment configuration:

     JGROUPS_PING_PROTOCOL=openshift.KUBE_PING
     OPENSHIFT_KUBE_PING_NAMESPACE=http-session-externalization (the value is the project name)
     OPENSHIFT_KUBE_PING_LABELS=eap-app (the value is the application name)
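
    Rather than editing the deployment configuration by hand, you can also set these variables with oc set env; a sketch for the eap-app deployment configuration created above (the change triggers a new deployment):

    $ oc set env dc/eap-app JGROUPS_PING_PROTOCOL=openshift.KUBE_PING OPENSHIFT_KUBE_PING_NAMESPACE=http-session-externalization OPENSHIFT_KUBE_PING_LABELS=eap-app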

    Test the HTTP Session Externalization

    For testing, I will use the Firefox browser and its Developer Tools, especially the Network component.

    First of all, execute oc get routes to determine the route URL of the JBoss EAP application.

    $ oc get routes

    The output should be something similar to this:

    eap-app eap-app-http-session-externalization.192.168.64.9.nip.io eap-app 8080-tcp None
    

    Copy the URL to a web browser and append the context path /http-session-counter; the counter is set to 1 in the session.

    Application screenshot 1

    Enable the Network tool in the Developer Tools and then refresh the same URL; the counter is now set to 2, as shown in the image below:

    Application screenshot 2

    Now select the request in the Network tool and choose Copy → Copy as cURL.

    Paste the result into a terminal and run it. The counter should continue to increase:

    $ curl 'http://eap-app-http-session-externalization.192.168.64.9.nip.io/http-session-counter/' -H 'Host: eap-app-http-session-externalization.192.168.64.9.nip.io' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:59.0) Gecko/20100101 Firefox/59.0' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Accept-Language: it-IT,it;q=0.8,en-US;q=0.5,en;q=0.3' --compressed -H 'Cookie: JSESSIONID=TvkZgWtyxDIW-Cxr63CyzbsO2G4i3UKaCsq3pY_m.eap-app-1-49d68' -H 'Connection: keep-alive' -H 'Upgrade-Insecure-Requests: 1'

    The output will be something like this:

    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <title>Http Session Counter</title>
    
    </head>
    <body>
     Counter is set at : 3 <br/>
     The request arrived at node : eap-app-1-49d68
    </body>
    </html>

    Now scale down the application in order to simulate the kinds of problems that could cause pods to be destroyed and re-created:

    #### This will shut down all the JBoss EAP pods ####
    $ oc scale --replicas=0 dc eap-app

    Then scale up the cluster again:

    #### This will create two fresh new JBoss EAP pods ####
    $ oc scale --replicas=2 dc eap-app
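
    Before repeating the test, you may want to wait until the new pods are ready, for example:

    #### Wait for the new JBoss EAP pods to be rolled out and become ready ####
    $ oc rollout status dc/eap-app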

    If you perform the same tests, both from the browser and from the terminal via curl, you should notice that the counter value still increases.

    Key Points of the Architecture

    The key points of this implementation are the following:

    1) You must mark your application as distributable through the web.xml file:

    <?xml version="1.0" encoding="UTF-8"?>
    <web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd" version="3.1">
        ...
        <distributable/>
        ...
    </web-app>

    2) The JBoss EAP instances must build a cluster in order to activate the HTTP session replication feature.

    3) The communication between JBoss EAP and JBoss Data Grid is defined in this section of standalone-openshift.xml: infinispan subsystem -> web cache container -> remote-store:

    <subsystem xmlns="urn:jboss:domain:infinispan:4.0">
        ...
        <cache-container name="web" default-cache="repl" module="org.wildfly.clustering.web.infinispan">
            ...
            <replicated-cache name="repl" mode="ASYNC">
                <locking isolation="REPEATABLE_READ"/>
                <transaction mode="BATCH"/>
                <remote-store remote-servers="remote-jdg-server"
                     cache="${env.CACHE_NAME}" socket-timeout="60000"
                     preload="true" passivation="false" purge="false" shared="true"/>
            </replicated-cache>
            ...
        </cache-container>
    </subsystem>

    4) The remote-jdg-server is an outbound socket connection that is defined in this section of the standalone-openshift.xml: socket-binding-group -> outbound-socket-binding:

    <socket-binding-group name="standard-sockets" default-interface="public" port-offset="0">
        ...
        <outbound-socket-binding name="remote-jdg-server">
            <remote-destination host="${env.DATAGRID_APP_HOTROD_SERVICE_HOST:127.0.0.1}" port="${env.DATAGRID_APP_HOTROD_SERVICE_PORT:11222}"/>
        </outbound-socket-binding>
    </socket-binding-group>

    The values of env.DATAGRID_APP_HOTROD_SERVICE_HOST and env.DATAGRID_APP_HOTROD_SERVICE_PORT are automatically provided to the pods by the datagrid-app-hotrod service. In OpenShift, each service automatically injects environment variables of the form ${SVCNAME}_SERVICE_HOST and ${SVCNAME}_SERVICE_PORT, where ${SVCNAME} is the upper-cased service name (with dashes converted to underscores).
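
    You can verify this from inside one of the JBoss EAP pods; a quick sketch (replace the pod name with one of your eap-app pods):

    #### Print the Hot Rod service coordinates injected by OpenShift ####
    $ oc rsh eap-app-1-xxxxx env | grep DATAGRID_APP_HOTROD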

    Conclusion

    You have seen how to preserve classic Java EE application behaviour in an xPaaS environment such as OpenShift. This can be useful for porting stateful applications to cloud environments without changing them. With this implementation, you move the stateful part of your application from the application server to the data layer, which you can consider a first step toward a stateless, cloud-ready application.

    Last updated: March 24, 2023
