
Welcome to the second part of this series introducing the Ansible collection for JCliff. This new extension is designed for fine-tuning WildFly or Red Hat JBoss Enterprise Application Platform (JBoss EAP) configurations using Ansible. In Part 1, we installed JCliff and its Ansible collection and prepared our environment, setting up a minimal, working playbook that installs JCliff on the target system. In this article, we will focus on configuring a few of our WildFly server’s subsystems.

Configuring a new data source

Java applications often need to consume data originating from a database. To connect to a database, WildFly uses a driver. Multiple applications hosted on WildFly can even access a data source at the same time. In this section, you will learn how to configure the data source subsystem using Ansible.

WildFly includes an embedded in-memory database called H2, whose Java Database Connectivity (JDBC) driver is bundled with the server and ready to use. The following data source configuration example is already defined in the server’s configuration:

<subsystem xmlns="urn:jboss:domain:datasources:6.0">
            <datasources>
                <datasource jndi-name="java:jboss/datasources/ExampleDS" pool-name="ExampleDS" enabled="true" use-java-context="true" statistics-enabled="${wildfly.datasources.statistics-enabled:${wildfly.statistics-enabled:false}}">
                    <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE</connection-url>
                    <driver>h2</driver>
                    <security>
                        <user-name>sa</user-name>
                        <password>sa</password>
                    </security>
                </datasource>
...

Now, let's say we need another H2 database instance for testing purposes. To set up this new instance, we'll use the datasources: element of the jcliff: module:

---

- hosts: localhost
  gather_facts: true
  vars:
    jboss_home: "{{ lookup('env','JBOSS_HOME') }}"
  collections:
    - wildfly.jcliff
  roles:
    - jcliff
  tasks:
    - jcliff:
        wfly_home: "{{ jboss_home }}"
        subsystems:
          - system_props:
              - name: jcliff.enabled
                value: 'enabled.plus'
          - datasources:
              - name: H2DS4Test
                use_java_context: 'true'
                jndi_name: java:jboss/datasources/H2DS4Test
                connection_url: "jdbc:h2:mem:test2;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE"
                driver_name: h2

Here again, the configuration is straightforward. We only had to add the required connection information. When we run the playbook, a new data source is added. We can verify this by checking WildFly's log file:

...
09:53:03,542 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-6) WFLYJCA0001: Bound data source [java:jboss/datasources/H2DS4Test]
...

Setting up a new JDBC driver

Now, we can go further by using Ansible to add a new JDBC driver to WildFly. First, we’ll need to download the driver and deploy it inside a JBoss module. We set up the deployment by creating the required directory structure under the ${JBOSS_HOME}/modules directory and adding the XML descriptor to it. The example below shows how to use Ansible to add the JDBC driver to WildFly.

...

vars:
  jboss_home: "{{ lookup('env','JBOSS_HOME') }}"
  psql_module_home: "{{ jboss_home }}/modules/org/postgresql/main/"
  jdbc_driver_version: 9.2-1002-jdbc4
  jdbc_driver_jar_filename: "postgresql-{{ jdbc_driver_version }}.jar"
collections:
  - wildfly.jcliff
roles:
  - jcliff

tasks:

  - name: "Set up module dir for Postgres JDBC Driver"
    file:
      path: "{{ psql_module_home }}"
      state: directory
      recurse: yes

  - name: "Ensures WildFly Postgres Driver is present"
    uri:
      url: "https://repo.maven.apache.org/maven2/org/postgresql/postgresql/{{ jdbc_driver_version }}/{{ jdbc_driver_jar_filename }}"
      dest: "{{ psql_module_home }}"
      creates: "{{ psql_module_home }}/{{ jdbc_driver_jar_filename }}"

  - name: "Deploy module.xml for Postgres JDBC Driver"
    template:
      src: templates/pgsql_jdbc_driver_module.xml.j2
      dest: "{{ psql_module_home }}/module.xml"
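
The playbook references templates/pgsql_jdbc_driver_module.xml.j2 without showing its contents. A minimal sketch of what that template might look like, assuming the standard JBoss module descriptor format and the jdbc_driver_jar_filename variable defined above (the dependency list is an assumption; adjust it to your driver's needs):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.1" name="org.postgresql">
    <resources>
        <!-- rendered by Ansible to the downloaded jar's filename -->
        <resource-root path="{{ jdbc_driver_jar_filename }}"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
```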

With three tasks, we have managed to fully automate the installation of a new JDBC driver for WildFly. We still need to update the server’s configuration to make the new driver available to our hosted applications. We’ll do that in a moment. First, let's run this part of the playbook to ensure everything is working as expected:

$ ansible-playbook playbook.yml
...
TASK [wildfly.jcliff.jcliff : Test if package jcliff is already installed] *******************************************************************************************************************
ok: [localhost]

TASK [wildfly.jcliff.jcliff : Install Jcliff using standalone binary] ************************************************************************************************************************
skipping: [localhost]

TASK [Set up module dir for Postgres JDBC Driver] ********************************************************************************************************************************************
changed: [localhost]

TASK [Ensures WildFly Postgres Driver is present] ********************************************************************************************************************************************
ok: [localhost]

TASK [Deploy module.xml for Postgres JDBC Driver] ********************************************************************************************************************************************
changed: [localhost]

TASK [jcliff] ********************************************************************************************************************************************************************************
ok: [localhost]

PLAY RECAP ***********************************************************************************************************************************************************************************
localhost                  : ok=10   changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

Everything we did to install our driver used Ansible's existing primitives. The new part comes now: thanks to JCliff, we can activate the driver within the server:

   - jcliff:
       wfly_home: "{{ jboss_home }}"
       subsystems:
         - system_props:
             - name: jcliff.enabled
               value: 'enabled.plus'
         - drivers:
             - driver_name: postgresql
               driver_module_name: org.postgresql
               driver_class_name: org.postgresql.Driver
               driver_xa_datasource_class_name: org.postgresql.xa.PGXADataSource  
         - datasources:
             - name: H2DS4Test
               use_java_context: 'true'
               jndi_name: java:jboss/datasources/H2DS4Test
               connection_url: "jdbc:h2:mem:test2;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE"
               driver_name: h2
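
With the PostgreSQL driver activated, a data source backed by it could be declared in the same datasources: list, reusing only the fields already shown above. This is a sketch: the host, port, and database name in the connection URL, and the PostgresDS name, are hypothetical placeholders.

```yaml
- datasources:
    - name: PostgresDS
      use_java_context: 'true'
      jndi_name: java:jboss/datasources/PostgresDS
      # hypothetical local PostgreSQL instance and database
      connection_url: "jdbc:postgresql://localhost:5432/testdb"
      driver_name: postgresql
```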

After running the playbook, we can see that WildFly has properly deployed the driver. Checking the server's log file verifies it:

...
13:24:25,474 INFO  [org.jboss.as.connector.subsystems.datasources] (management-handler-thread - 1) WFLYJCA0005: Deploying non-JDBC-compliant driver class org.postgresql.Driver (version 9.2)
13:24:25,474 INFO  [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-1) WFLYJCA0018: Started Driver service with driver-name = postgresql
13:24:27,464 INFO  [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-2) WFLYJCA0019: Stopped Driver service with driver-name = postgresql
...

Deploying an application with JCliff

Until now, we have used JCliff only to update WildFly’s configuration. But it can go even further: we can also use JCliff to deploy applications on WildFly. In this section, we will deploy the Jenkins CI web application on our WildFly server.

First, we need to add a Yum repository so that we can install the required files on the target system. Add the following to the Ansible playbook before the call to the jcliff: module:

...
   - name: Jenkins Yum Repository
     yum_repository:
         name: jenkins
         description: Jenkins
         baseurl: http://pkg.jenkins.io/redhat
         gpgcheck: 1

   - name: Install Jenkins
     yum:
         name: jenkins
         state: present
...

Note: This Yum repository requires you to import a GNU Privacy Guard (GPG) key, so be sure to do that before running the playbook. Enter the following to import the key:

$ rpm --import https://pkg.jenkins.io/redhat/jenkins.io.key
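
Alternatively, the yum_repository module accepts a gpgkey option, which records the key URL in the repository definition so that yum can fetch and verify it at install time. A variant of the repository task above using that option might look like this:

```yaml
- name: Jenkins Yum Repository
  yum_repository:
    name: jenkins
    description: Jenkins
    baseurl: http://pkg.jenkins.io/redhat
    gpgcheck: 1
    gpgkey: https://pkg.jenkins.io/redhat/jenkins.io.key
```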

When the playbook runs successfully, the RPM package associated with Jenkins will be installed, and the WAR file that we use to deploy the application will be available on the target system:

$ yum whatprovides "*/jenkins*.war"
… 

jenkins-2.253-1.1.noarch : Jenkins Automation Server
Repo        : @jenkins
Matched from:
Filename    : /usr/lib/jenkins/jenkins.war

Now, we just need to deploy the application on the WildFly server. To do that, we’ll update the jcliff: module configuration to add the deployments: element:

   - jcliff:
       wfly_home: "{{ jboss_home }}"
       subsystems:
         - system_props:
             - name: jcliff.enabled
               value: 'enabled.plus'
         - drivers:
             - driver_name: postgresql
               driver_module_name: org.postgresql
               driver_class_name: org.postgresql.Driver
               driver_xa_datasource_class_name: org.postgresql.xa.PGXADataSource  
         - datasources:
             - name: H2DS4Test
               use_java_context: 'true'
               jndi_name: java:jboss/datasources/H2DS4Test
               connection_url: "jdbc:h2:mem:test2;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE"
               driver_name: h2
         - deployments:  
             - name: jenkins
               path: /usr/lib/jenkins/jenkins.war

Once again, we only need to add the path to the WAR file and the name of the application to be deployed. Ansible and JCliff handle the rest.

The playbook runs successfully, and it’s easy to verify that Jenkins has been deployed. Here again, checking the server’s log file confirms it:

...
14:06:09,148 INFO  [org.jboss.as.repository] (management-handler-thread - 2) WFLYDR0001: Content added at location /opt/wildfly-20.0.1.Final/standalone/data/content/6b/30899e9a28a361241d19561c965977aea70d7f/content
14:06:09,171 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-5) WFLYSRV0027: Starting deployment of "jenkins" (runtime-name: "jenkins")
14:06:09,840 WARN  [org.jboss.as.server.deployment] (MSC service thread 1-5) WFLYSRV0274: Excluded dependency com.fasterxml.jackson.core.jackson-core via jboss-deployment-structure.xml does not exist.
14:06:09,841 WARN  [org.jboss.as.server.deployment] (MSC service thread 1-5) WFLYSRV0274: Excluded dependency org.jboss.resteasy.resteasy-jackson-provider via jboss-deployment-structure.xml does not exist.
14:06:09,841 WARN  [org.jboss.as.server.deployment] (MSC service thread 1-5) WFLYSRV0274: Excluded dependency com.fasterxml.jackson.core.jackson-databind via jboss-deployment-structure.xml does not exist.
14:06:09,841 WARN  [org.jboss.as.server.deployment] (MSC service thread 1-5) WFLYSRV0274: Excluded dependency com.fasterxml.jackson.core.jackson-annotations via jboss-deployment-structure.xml does not exist.
14:06:10,189 INFO  [org.infinispan.PERSISTENCE] (MSC service thread 1-7) ISPN000556: Starting user marshaller 'org.wildfly.clustering.infinispan.marshalling.jboss.JBossMarshaller'
14:06:10,209 INFO  [org.infinispan.CONTAINER] (MSC service thread 1-7) ISPN000128: Infinispan version: Infinispan 'Turia' 10.1.8.Final
14:06:10,501 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 78) WFLYCLINF0002: Started client-mappings cache from ejb container
14:06:10,557 INFO  [org.jboss.as.server] (management-handler-thread - 2) WFLYSRV0010: Deployed "jenkins" (runtime-name : "jenkins")
...

Conclusion

That’s all for Part 2! You've learned the basics of using the jcliff: module in a playbook, but there are still a few advanced use cases to explore. We'll do that in Part 3, the final installment in this series.

Acknowledgment

Special thanks to Andrew Block for reviewing this series of articles.

Last updated: December 23, 2020