One of the common requirements for Java-based applications on OpenShift is to have these workloads connect back out to an enterprise database that resides outside of the OpenShift infrastructure. While OpenShift natively supports a variety of relational databases (including Postgres and MySQL) as Docker-based deployments within the platform, many large organizations prefer connecting to an existing enterprise database infrastructure for a variety of reasons, including:
- Inherent confidence in traditional databases due to in-house experience developing and managing them
- Ability to leverage existing backup/recovery procedures around these databases
- Technical limitations that prevent these databases from being deployed in a containerized model
One of the strengths of the OpenShift platform is its ability to accommodate these "traditional" workloads. Middleware operations can take advantage of the benefits and efficiencies gained from Dockerized applications, while development teams get a platform to start designing and architecting applications that fit a more microservice-based pattern, leveraging a datastore such as MongoDB or MySQL that OpenShift supports.
Another common workflow in many organizations is to externalize the database connection information so that the application can be migrated from environment to environment (e.g., Dev to QA to Prod) with the appropriate database connection information for each environment. These teams also typically promote the application binary (.war, .ear, .jar) between environments, as opposed to Docker-based images.
In this article, I will walk through an example implementation for achieving this. A sensitive aspect of this migration process is the credentials to the database, since storing credentials in clear text is frowned upon. I will cover a variety of strategies for dealing with this in a follow-on article. For this example, I will be using the following project, which contains the source code covered in this article.
Let's get started!
Detailed Configuration
Customize the JWS 3/Tomcat Builder Image
The main goal in this exercise is to ensure that the application archive can be built and deployed in various environments with the appropriate database configuration. From the application's perspective, this should not matter: it uses a standard JNDI lookup for the DataSource, so as long as the naming convention is consistent across environments, the application should work.
- Create a Kubernetes template that allows specific configuration parameters to be captured from the application deployer (Developer, QA Tester, Production Operator) as input to configure the remote database connection. The template also allows us to define appropriate characteristics for the input parameters to aid with the data collection process. These fields can then be entered either via the OpenShift UI or the CLI, which I will cover later.
```json
{
    "description": "Database JNDI name used by application, e.g. jboss/datasources/postgresqlDS",
    "name": "DB_JNDI",
    "value": "jdbc/PostgreSQLDS",
    "required": false
},
{
    "description": "Database Host Name",
    "name": "DB_HOST",
    "value": "127.0.0.1",
    "required": true
},
{
    "description": "Database Port",
    "name": "DB_PORT",
    "value": "5432",
    "required": true
},
{
    "description": "Database name",
    "name": "DB_DATABASE",
    "value": "parksdb",
    "required": true
}
```
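To see how these defaults behave when a deployer omits a parameter, here is a small bash sketch of a `find_env`-style helper (the name comes from the assemble script used later in this article; this re-implementation is an assumption, not the actual S2I helper):

```shell
# Hypothetical re-implementation of a find_env-style helper: return the
# value of the named environment variable, or a default when it is unset.
find_env() {
  local var_name="$1"
  local default_value="$2"
  # ${!var_name} is bash indirect expansion: the value of the variable
  # whose name is stored in var_name.
  echo "${!var_name:-$default_value}"
}

# Simulate a deployer supplying only some of the template parameters.
export DB_HOST="172.30.90.178"
echo "host: $(find_env DB_HOST 127.0.0.1)"     # supplied value wins
echo "jndi: $(find_env DB_JNDI jdbc/PostgreSQLDS)"  # falls back to default
```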
- Next, let's add these parameters as environment variables to the Build Configuration so they can be referenced during the image build phase. Please refer to the actual template for the complete configuration.
```json
...
"env": [
    { "name": "DB_JNDI", "value": "${DB_JNDI}" },
    { "name": "DB_HOST", "value": "${DB_HOST}" },
    { "name": "DB_PORT", "value": "${DB_PORT}" },
    { "name": "DB_USERNAME", "value": "${DB_USERNAME}" }
]
...
```
- Modify the Tomcat builder image to read the appropriate environment properties and generate the corresponding datasource configuration in OpenShift. For this purpose I will leverage a custom .sti/bin/assemble script to perform this configuration. For more information on the S2I process, and for instructions on how to leverage custom S2I behavior, please see this documentation. Here is the code snippet that does this.
```shell
# Get Env Variables and Generate Config
driver="org.postgresql.Driver"
jndi=$(find_env "DB_JNDI")
password=$(find_env "DB_PASSWORD")
username=$(find_env "DB_USERNAME")
database=$(find_env "DB_DATABASE")
url="jdbc:postgresql://$(find_env "DB_HOST"):$(find_env "DB_PORT")/$(find_env "DB_DATABASE")"

datasources="$(generate_datasource $jndi $username $password $driver $url)\n\n"
# Escape forward slashes so the generated XML survives the sed replacement
C=$(echo $datasources | sed 's/\//\\\//g')
# Insert the datasource configuration just before the closing </Context> tag
sed -i "/<\/Context>/ s/.*/${C}\n&/" $JWS_HOME/conf/context.xml
```
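To make the injection step concrete, here is a self-contained sketch of the same technique run against a minimal context.xml. The `<Resource>` attributes are an assumption based on Tomcat's standard JDBC resource format, and this `generate_datasource` is a stand-in for the helper the assemble script actually calls:

```shell
# Stand-in for the generate_datasource helper: emit a Tomcat JDBC
# <Resource> element (attribute set assumed from Tomcat's standard format).
generate_datasource() {
  echo "<Resource name=\"$1\" auth=\"Container\" type=\"javax.sql.DataSource\" username=\"$2\" password=\"$3\" driverClassName=\"$4\" url=\"$5\"/>"
}

# A minimal context.xml to patch, standing in for $JWS_HOME/conf/context.xml.
cat > /tmp/context.xml <<'EOF'
<Context>
</Context>
EOF

datasources=$(generate_datasource "jdbc/PostgreSQLDS" "admin" "passw0rD1" \
  "org.postgresql.Driver" "jdbc:postgresql://172.30.90.178:5432/jpetstore")

# Escape forward slashes so the XML survives the sed replacement below.
C=$(echo "$datasources" | sed 's/\//\\\//g')

# Insert the datasource element just before the closing </Context> tag.
sed -i "/<\/Context>/ s/.*/${C}\n&/" /tmp/context.xml
cat /tmp/context.xml
```

After running this, /tmp/context.xml contains the `<Resource>` element immediately before `</Context>`, which is exactly what the builder image does to its own context.xml at build time.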
- That's it! Now let's see how we can leverage this to do environment-specific configuration builds.
Verifying the Setup
The first step in this process is to set up a standalone Postgres database. This database can be created in a separate project within OpenShift or it can live outside OSE. For simplicity, I will start with a Postgres 9.4 database deployed within OSE.
Once it is created, please capture the credentials as well as the location of the Postgres instance that we will connect to from the sample application.
- Create a new project to host the remote Postgres database.
```shell
oc new-project remotedb --description="Remote Postgres Database Version 9.4" --display-name="Remote Postgres DB"
```
- Create the actual database with the OpenShift provided Postgres image.
```shell
oc new-app -e POSTGRESQL_USER=admin,POSTGRESQL_PASSWORD=passw0rD1,POSTGRESQL_DATABASE=jpetstore registry.access.redhat.com/rhscl/postgresql-94-rhel7
```
- Now get the IP address of the pod running the database that's exposed by the Service layer within OpenShift.
```shell
Gunas-MacBook-Pro:jpetstore gvijayar$ oc describe service | grep IP
Type:       ClusterIP
IP:         172.30.90.178
```
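If you would rather script this lookup than eyeball the output, the parsing is easy to automate. A sketch, run here against the sample output captured above (on a live cluster you would pipe `oc describe service` directly, as the comment shows):

```shell
# Parse the ClusterIP out of `oc describe service`-style output.
# On a live cluster: DB_IP=$(oc describe service | awk '/^IP:/ {print $2}')
# Here we run the same awk against captured sample output instead.
sample='Type:       ClusterIP
IP:         172.30.90.178'
DB_IP=$(printf '%s\n' "$sample" | awk '/^IP:/ {print $2}')
echo "$DB_IP"
```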
We should have all the information that we need to connect to this database from the Petstore application.
Now we are going to install the JWS reference application, along with the necessary database connection parameters (defined in the previous step), to connect our OpenShift application to the remote Postgres database.
- Create a new project to host the JWS 3/Tomcat 8 based JPetstore reference application.
```shell
oc new-project jpetstore --description="jpetstore on jws 3" --display-name="JPetstore application deployed on a JWS/Tomcat image connecting to a remote database"
```
- Now install the application template in the project namespace. This template includes the environment specific database connection details which will be collected at runtime and subsequently used to build the runtime configuration.
```shell
oc create -f https://raw.githubusercontent.com/gvijayar/jpetstore-tomcat/master/jpetstore-ose3-externaldb.json
```
- Create the application by passing in the necessary database connection information as environment variables. These variables will be used by the S2I process to dynamically build the configuration in the Tomcat pod to connect to the external database.
```shell
oc new-app jws3-tomcat8-postgresql-custom-s2i -p DB_HOST=172.30.90.178,DB_PORT=5432,DB_DATABASE=jpetstore,DB_USERNAME=admin,DB_PASSWORD=passw0rD1
```
The reference application will initialize the database schema and pre-populate it with data. In order to verify that the application was able to connect with the remote database, we can log in and verify the table set for the database. In the example where the Postgres instance is deployed on OpenShift, go to the terminal window for the pod and enter the following:
```shell
postgres=# \c jpetstore
You are now connected to database "jpetstore" as user "postgres".
jpetstore=# select table_schema, table_name from information_schema.tables where table_schema='public';
 table_schema |  table_name
--------------+-------------
 public       | signon
 public       | account
 public       | profile
 public       | bannerdata
 public       | orders
 public       | orderstatus
 public       | lineitem
 public       | category
 public       | inventory
 public       | sequence
 public       | product
 public       | supplier
 public       | item
(13 rows)
```
That's all there is to it!
Last updated: February 23, 2024