A developer-centered approach to application development

Do you dream of a local development environment that’s easy to configure and works independently from the software layers that you are currently not working on? I do!

As a software engineer, I have suffered the pain of starting projects that were not easy to configure. Reading the technical documentation does not help when much of it is outdated, or even worse, missing many steps. I have lost hours of my life trying to understand why my local development environment was not working.

An ideal scenario

As a developer, you have to meet a few prerequisites before contributing to a project. For instance, you must agree to the version-control requirements, and you need to know how to use the project IDE, how to use a package manager, and so on.

But nothing more. You don’t need to learn a poorly documented, made-in-house framework just to satisfy the ego of an architect who wanted to reinvent the wheel. You don’t need to run an external virtual machine to emulate the production environment. As a developer, you are free to invest your time in improving the code and adding value to the product.

My goal with this article is to describe strategies for building an Angular 8 application in a way that puts the developer experience at the center.

Continue reading “A developer-centered approach to application development”

Install Apache Tomcat and deploy a Java web application on Red Hat OpenShift

If you are new to OpenShift, then you might want to install Apache Tomcat on top of it for simpler experimentation. This article guides you through installing Apache Tomcat from a Docker image and then using it to deploy a Java web app on Red Hat OpenShift. I also show you how to access the Tomcat management console on OpenShift.

To follow the examples, you must have an OpenShift account. We will use the OpenShift command-line interface (CLI) for this demonstration, so be sure to install the CLI (oc) before you begin.

A note about the sample application: You will need a Java web application to use for the deployment example. I am using the Sample Java Web Application from the OpenShift Demos GitHub repository. It is a simple application that is useful for understanding basic concepts. You may use the provided sample or choose your own application to work with.
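
The article walks through the full procedure; as a rough sketch of the overall flow (the cluster URL, project name, and image tag here are placeholders, and a stock Tomcat image may need extra permissions to run on OpenShift), it looks something like this:

# Log in to your cluster and create a project for the experiment
$ oc login https://api.yourcluster.example.com:6443
$ oc new-project tomcat-demo
# Deploy Tomcat directly from a Docker image
$ oc new-app --docker-image=docker.io/library/tomcat:9.0 --name=tomcat
# Expose it with a route and find the URL to open in your browser
$ oc expose service/tomcat
$ oc get route tomcat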

Continue reading “Install Apache Tomcat and deploy a Java web application on Red Hat OpenShift”

Develop Eclipse MicroProfile applications on Red Hat JBoss Enterprise Application Platform Expansion Pack 1.0 with Red Hat CodeReady Workspaces

This article builds on my previous tutorial, Enable Eclipse MicroProfile applications on Red Hat JBoss Enterprise Application Platform 7.3. To follow the examples, you must have Eclipse MicroProfile enabled in your Red Hat JBoss Enterprise Application Platform Expansion Pack (JBoss EAP XP) 1.0.0.GA installation, via Red Hat CodeReady Studio. See the previous article for installation instructions.

In this article, we will use the installed MicroProfile-enabled image to set up a JBoss EAP XP quickstart project in Red Hat CodeReady Workspaces (CRW). You can also apply what you learn from this article to develop your own applications using CodeReady Workspaces.

Note: For more examples, be sure to see the video demonstration at the end of the article.

Continue reading “Develop Eclipse MicroProfile applications on Red Hat JBoss Enterprise Application Platform Expansion Pack 1.0 with Red Hat CodeReady Workspaces”

Kourier: A lightweight Knative Serving ingress

Until recently, Knative Serving used Istio as its default networking component for handling external cluster traffic and service-to-service communication. Istio is a great service mesh solution, but it can add unwanted complexity and resource use to your cluster if you don’t need it.

That’s why we created Kourier: To simplify the ingress side of Knative Serving. Knative recently adopted Kourier, so it is now a part of the Knative family! This article introduces Kourier and gets you started with using it as a simpler, more lightweight way to expose Knative applications to an external network.
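
As a rough idea of what the switch looks like (a sketch only: download kourier.yaml from the net-kourier release that matches your Knative version, and note that the exact config key has changed between Knative releases, so check the docs for yours), you install the Kourier controller and then point Knative Serving’s networking ConfigMap at it:

# Install the Kourier controller from the release manifest you downloaded
$ kubectl apply -f kourier.yaml
# Tell Knative Serving to use Kourier as its ingress implementation
$ kubectl patch configmap/config-network --namespace knative-serving \
    --type merge --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'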

Let’s begin with a brief overview of Knative and Knative Serving.

Continue reading “Kourier: A lightweight Knative Serving ingress”

Migrating a namespace-scoped Operator to a cluster-scoped Operator

Within the context of Kubernetes, a namespace divides cluster resources, policies, and authorization, and provides a boundary for cluster objects. In this article, we cover two different types of Operators: namespace-scoped and cluster-scoped. We then walk through an example of how to migrate from one to the other, which illustrates the difference between the two.

Namespace-scoped and cluster-scoped

A namespace-scoped Operator is defined within the boundary of a namespace, with the flexibility to handle upgrades without impacting other namespaces. It watches objects within that namespace and maintains a Role and RoleBinding for the role-based access control (RBAC) policies that govern access to its resources.

Meanwhile, a cluster-scoped Operator promotes reusability and manages resources across the whole cluster. It watches all namespaces in a cluster and maintains a ClusterRole and ClusterRoleBinding for the RBAC policies that authorize access to cluster objects. Two examples of cluster-scoped Operators are istio-operator and cert-manager. The istio-operator can be deployed as cluster-scoped to manage the service mesh for an entire cluster, while cert-manager issues certificates for an entire cluster.

Both installation types are supported; which one you choose depends on your requirements. Upgrading a cluster-scoped Operator can impact resources managed by that Operator across the entire cluster, whereas upgrading a namespace-scoped Operator is easier because it only affects the resources within its own namespace.
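
To make the scoping concrete, here is a small kubectl sketch using hypothetical names (memcached-operator, memcached-ns); the exact verbs and resources depend on what your Operator actually manages. A namespace-scoped Operator is backed by a Role and RoleBinding in its own namespace, while a cluster-scoped one needs a ClusterRole and ClusterRoleBinding:

# Namespace-scoped: RBAC confined to a single namespace
$ kubectl create role memcached-operator --verb=get,list,watch,create,update,delete \
    --resource=deployments,services --namespace=memcached-ns
$ kubectl create rolebinding memcached-operator --role=memcached-operator \
    --serviceaccount=memcached-ns:memcached-operator --namespace=memcached-ns

# Cluster-scoped: RBAC that spans every namespace in the cluster
$ kubectl create clusterrole memcached-operator --verb=get,list,watch,create,update,delete \
    --resource=deployments,services
$ kubectl create clusterrolebinding memcached-operator --clusterrole=memcached-operator \
    --serviceaccount=memcached-ns:memcached-operator

Note that the RBAC scope is only half the story: which namespaces the Operator actually watches is typically controlled separately (for example, through a WATCH_NAMESPACE environment variable in Operator SDK projects), and the two need to agree.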

Continue reading “Migrating a namespace-scoped Operator to a cluster-scoped Operator”

Introducing the Red Hat build of the OpenJDK Universal Base Images—now in Red Hat Enterprise Linux 8.2

With the recent release of Red Hat Enterprise Linux 8.2, we also added the first Red Hat build of OpenJDK Universal Base Images. These General Availability (GA) images for OpenJDK 8 and OpenJDK 11 set a new baseline for anyone who wants to develop Java applications that run inside containers in a secure, stable, and tested manner.

In this article, we introduce the new OpenJDK Universal Base Images and explain their benefits for Java developers. Before we do that, let’s quickly review what we know about UBIs in general.

About Universal Base Images

Red Hat Universal Base Images (UBIs) are:

OCI-compliant container base operating system images with complementary runtime languages and packages that are freely redistributable. Like previous base images, they are built from portions of Red Hat Enterprise Linux (RHEL). UBI images can be obtained from the Red Hat container catalog and be built and deployed anywhere.

In other words, UBIs help application developers reach the secure, stable, and portable world of containers. These images are accessible using well-known tools like Podman/Buildah and Docker. Red Hat Universal Base Images also allow users to build and distribute their own applications on top of enterprise-quality bits that are supportable on Red Hat OpenShift and Red Hat Enterprise Linux.
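
As a quick sketch of what that looks like in practice (the repository name ubi8/openjdk-11 is what I’d expect for the OpenJDK 11 image; check the Red Hat container catalog for the exact repositories and tags available to you), pulling and running an OpenJDK UBI takes two commands with Podman or Docker:

# Pull the OpenJDK 11 UBI from the Red Hat registry
$ podman pull registry.access.redhat.com/ubi8/openjdk-11
# Run it briefly to confirm which Java version the image ships
$ podman run --rm registry.access.redhat.com/ubi8/openjdk-11 java -version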

Continue reading “Introducing the Red Hat build of the OpenJDK Universal Base Images—now in Red Hat Enterprise Linux 8.2”

What enterprise developers need to know about security and compliance

One of the luxuries of my job is that I get to speak to and work with a range of IT people employed by U.S. federal and state government agencies. That range includes DevOps engineers, developers, sysadmins, database administrators, and security professionals. Everyone I talk to, even security professionals, says that IT security and compliance can be imprecise, subjective, overwhelming, and variable—especially in the federal government.

The plethora of policies, laws, and standards can be intimidating in aggregate. Here is a short list:

  • Authorization to Operate (ATO)
  • Federal Information Security Management Act (FISMA)
  • Federal Risk and Authorization Management Program (FedRAMP)
  • Department of Defense Cloud Computing Security Requirements Guide (DoD SRG)
  • Section 508 Compliance

Additionally, what it means to be secure or compliant changes from agency to agency, and even between authorizing officers within a single agency, based on a range of factors.

Many developers ask how newer technologies and behaviors like infrastructure as code (IaC), continuous integration/continuous delivery (CI/CD), containers, and a range of cloud services map to compliance frameworks and federal security laws. As a developer, you’re often wondering what your security responsibility is right from the beginning of the software development lifecycle, and you might still be wondering all the way into production. Trust me when I say that you’re not the only one who is wondering.

This article is dedicated to helping you, the developer, understand more of the standards so that less is unknown and variable. The ultimate goal is to make security more precise, establish known responsibility (and gaps in responsibility), and incorporate security into your daily workflow—even when the requirements and how they are interpreted change from project to project.

Share responsibility and inherit when you can

Start by establishing that it’s not your job to cover it all (or, to put it another way, don’t try to boil the ocean). As a developer in the enterprise, you cannot single-handedly deliver secure code, configure operating system images, monitor the network, and scan filesystems. If you try to do all of that, you’ll spend far less time securing the components that you are responsible for—source code, application dependencies, and testing—and you’ll spread yourself too thin.

The number of platforms and vendors that offer reliable, reputable services and products has never been higher, and it is still growing. Infrastructure cloud providers supply resources like compute, network, and storage, while software vendors typically provide supported container images for their products.

The lesson here is to inherit the responsibility that these platforms and vendors assumed for securing an information system. Doing that allows you to check off a significant number of security controls during the ATO process. In the case of patches or responding to security incidents, these providers (not just you) are on the hook to deliver.

This doesn’t mean that you should stop talking to ops and infosec, or that you can place all of the blame on them if an internal, enterprise service they’ve provided fails. If anything, it means you should work with providers of all types that welcome your collaboration. Intentionally share the responsibility for managing risk. Having an established RACI matrix supports clear communication and assignments of responsibility between developers, operations, and security teams. It enforces the behaviors necessary to generate a successful DevSecOps culture.

Embrace the world of continuous security

If you’re still wondering when it’s important to focus on security in the areas that you are responsible for, the answer is: all the time. Or rather, continuously.

Authors Gene Kim, Nicole Forsgren, and Jez Humble did everyone involved in enterprise software delivery a big favor with their book, Accelerate: Building and Scaling High Performing Technology Organizations, which measures the behaviors that lead to overall organizational performance. Using data collected from enterprises of all types, the book shows that behaviors like continuous delivery and continuous security lead to a more collaborative culture and improved organizational performance overall. As the book states:

“Teams that build security into their work also do better at continuous delivery.”

In other words, your organization will reach its goals and deliver better software if you build security into your daily work. Continuously performing security activities and behaviors forces you to learn as you go. It also forces you out of bad behaviors like leaving SSH keys lying around in a repository.

The DevSecOps Reference Design, authored by the U.S. Department of Defense, is an excellent resource for identifying which security steps to apply during specific software development lifecycle phases to support continuous security. Among other things, the guide recommends the following (a brief command-line sketch of a few of these checks appears after the list):

  • IDE security plugins that scan source code as the developer writes it.
  • Static source code tools that scan code before commits.
  • Source code repository security plugins to check for things like authorization tokens, SSH keys, and passwords.
  • Container registries with security-scanning frameworks that can identify vulnerabilities in container images.
  • Dependency checks for applications using potentially vulnerable open source libraries.
  • Dynamic and interactive tools that scan applications after they’ve been built.
  • Manual penetration tests that mimic an attacker and might discover vulnerabilities not uncovered by another tool.
  • Continuous monitoring solutions that identify threats at runtime and persist event logging for analysis.
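
Here is the minimal command-line sketch promised above. The tools shown (OWASP Dependency-Check, Trivy, and gitleaks) are common open source options standing in for whatever scanners your organization has approved, the image name is a placeholder, and the commands are illustrative rather than a prescribed toolchain:

# Dependency check: scan a Maven project's libraries for known vulnerabilities
$ mvn org.owasp:dependency-check-maven:check

# Container image scan: look for known CVEs in a built image
$ trivy image quay.io/example/myapp:latest

# Secret scan: look for credentials accidentally committed to the repository
$ gitleaks detect --source .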

Get involved in choosing your tools

Telling a developer to be picky about tools kind of goes without saying. Nevertheless, I’ve had conversations with too many developers who think that information security is someone else’s job. It’s everyone’s job.

Because developers are responsible for security, it’s important to have a choice about the security products used during the development, building, and testing phases.  In Accelerate, authors Kim, Forsgren, and Humble argue that choosing the correct tool is technical work. “When the tools provided actually make life easier for the engineers who use them, they will adopt them of their own free will.”

Security will be more precise if it’s surrounded by the precision of your existing work, such as the security plugins for your favorite IDEs or the scanning tools that are just another component in your CI/CD pipeline. If a tool gives you faster feedback, if it takes less time to use than remediating a vulnerability later in the delivery cycle, then you are more apt to use it. Choose the right tools to incorporate security into your daily work.

Tech evolves, but security principles don’t

Container orchestration, serverless platforms, intelligent security tooling, and automated deployments might be changing how we scan for vulnerabilities or patch an application in production. That doesn’t change the fact that security is still more about the process than the products, or that defense in depth still reigns supreme for securing an information system. There is a good reason that the National Institute of Standards and Technology’s special publication on security controls includes 18 different control families. Everything from how you encrypt sensitive information to where you put the fire extinguishers matters to information security. New technologies for cryptography and fire suppression are surfacing all the time.

If anything, the best practice is to have an open mind to new technologies that could help you address security concerns in your organization. Automated configuration management, container orchestration, and immutable infrastructure all make it easier to consistently and repeatably configure information systems. IDE security plug-ins and container registry security scanning help to shift security “left” by enabling developers to identify vulnerabilities early rather than during a security review that occurs downstream.

These tools don’t remove the need for standard security practices like security configuration management and vulnerability scanning; rather, they allow you to focus your attention on where you most need to analyze and manage risk.

Education is the silver bullet

Just like software security and delivery, learning should be continuous. New tools help to manage security prevention and response, but attackers are also continuously innovating and learning. There are plentiful resources for delivering software securely, especially in the context of Agile and DevOps. As a former product manager, I cannot overstate the power of threat modeling as an exercise for scrum teams to identify potential risks early in the software development lifecycle.

I also recommend borrowing what’s worked before and replicating outcomes. Read studies about how others have delivered security continuously, like the National Geospatial-Intelligence Agency’s success with Red Hat OpenShift. In that program, applications were able to inherit ~90% of their security controls from a hardened CI/CD pipeline and container platform, which not only significantly reduced the developers’ responsibility for addressing controls within their applications, but also reduced the time to compliance (application ATOs in a single Agile sprint!). Even better, information security organizations that pursue “continuous authorizations,” like the U.S. Air Force, can complement and even accelerate teams that are conducting continuous security in parallel (i.e., all boats rowing in the same direction).

Reversing the formula, I would also look at security incidents from the past, such as the Apache Struts statement on the Equifax security breach from 2017. Use these past incidents to identify where users and enterprises went wrong. Examine them carefully to understand the consequences of not sharing responsibility and adopting continuous security.

The recommendations from the Apache statement apply to all types of software delivery, not only to projects using Struts, and certainly not only to Equifax. Everyone is building software today, including government departments and agencies of all types. For software to be delivered continuously and securely, security needs to be everyone’s job. This type of behavior leads to lower costs and faster implementation times, which invariably leads to more rapid approvals for compliance and security authorization audits. Let’s make it an integral part of each phase of the software development lifecycle.

IoT Developer Survey takes a turn to edge computing: Deadline June 26 2020

The Eclipse IoT Working Group has launched the annual IoT Developer Survey. This survey is in its sixth edition and is the largest developer survey in the Internet-of-Things (IoT) open source industry. The deadline to submit your responses is June 26, 2020.

About the IoT Developer Survey

This year’s IoT Developer Survey will provide valuable insights into current trends in the IoT-industry landscape and the requirements and challenges that IoT communities are facing. The survey will also highlight how these trends are shaping and impacting businesses and enterprise strategies for software vendors, hardware manufacturers, service providers, original equipment manufacturers (OEMs), enterprises of all sizes, and individual developers.

Continue reading “IoT Developer Survey takes a turn to edge computing: Deadline June 26 2020”

Red Hat JBoss Enterprise Application Platform expansion pack 1.0 released

Red Hat recently released the first Red Hat JBoss Enterprise Application Platform expansion pack (JBoss EAP XP) version 1.0. This version enables JBoss EAP developers to build Java microservices using Eclipse MicroProfile 3.3 APIs while continuing to also support Jakarta EE 8. This article goes into detail on the nature of this new offering and shows an easy way to get started.

Introduction to JBoss EAP expansion packs and Eclipse MicroProfile

Organizations that have already embarked on—or are thinking about starting—a digital transformation journey are assessing and looking for ways to leverage their Java EE/Jakarta EE expertise. IT development and operations have built Java expertise over years, and there is a challenge to balance their existing skill base with new technologies, such as microservices, APIs, container-based architectures, and reactive programming. Eclipse MicroProfile is an open source project and one of those technologies that enables and optimizes the development of microservices while using familiar Java EE technologies and APIs.

You can think of MicroProfile as a minimal standard profile for Java microservices. As with Jakarta EE, MicroProfile implementations across different vendors are fully interoperable. You can read more about MicroProfile in the free e-book Enterprise Java microservices with Eclipse MicroProfile.

By using this expansion pack with Red Hat JBoss Enterprise Application Platform, which is part of Red Hat Runtimes, developers can use JBoss EAP as a MicroProfile-compliant platform. This release simplifies the inherent complexity of developing cloud-native applications on JBoss EAP with MicroProfile. The expansion pack is a separate downloadable distribution that can be applied on top of existing JBoss EAP servers, or you can use the container images available for use with Red Hat OpenShift when deploying JBoss EAP on OpenShift.

Test driving a sample app

As outlined in the documentation, the expansion pack is distributed as a patch, which is applied using the JBoss EAP XP Patch Manager. To quickly see how this works, let’s take a test drive. You’ll need a few developer tools like OpenJDK 11, Git, Maven, a text editor, and utilities like curl, along with having your Red Hat Developer credentials ready for the first step:

  • Download JBoss EAP 7.3.0: (You will need your Red Hat Developer credentials for this step.) Save it to your local desktop and unzip it into any folder you like, under which you’ll find a new folder called jboss-eap-7.3 (the folder referenced in the commands below). I’ll extract the zip file to the /tmp folder for brevity:
    $ unzip -d /tmp ~/Downloads/jboss-eap-7.3.0.zip
  • Download the JBoss EAP 7.3 Update 01 Patch: We’ll use this file to patch our 7.3.0 to 7.3.1 (the required version for EAP XP). I’ll save it to /tmp/jboss-eap-7.3.1-patch.zip.
  • Download JBoss EAP XP 1.0.0 Manager: It’s a JAR file. I’ll also save this to /tmp/jboss-eap-xp-1.0.0-manager.jar to match the command below.
  • Download the JBoss EAP XP 1.0.0 patch: This is the expansion pack itself, distributed as a patch file. I’ll save it to /tmp/jboss-eap-xp-1.0.0-patch.zip.

With our downloads complete, let’s apply the patches:

  1. Apply the patch to take EAP from 7.3.0 to 7.3.1 using the following command:
    $ /tmp/jboss-eap-7.3/bin/jboss-cli.sh "patch apply /tmp/jboss-eap-7.3.1-patch.zip"
  2. Set up the JBoss EAP XP Patch Manager:
    $ java -jar /tmp/jboss-eap-xp-1.0.0-manager.jar setup --jboss-home=/tmp/jboss-eap-7.3
  3. Apply the patch for JBoss EAP XP 1.0:
    $ /tmp/jboss-eap-7.3/bin/jboss-cli.sh "patch apply /tmp/jboss-eap-xp-1.0.0-patch.zip"
  4. Start JBoss EAP using the MicroProfile configuration that was installed as part of the patch, and enable metrics on the server:
    $ /tmp/jboss-eap-7.3/bin/standalone.sh -Dwildfly.statistics-enabled=true -c=standalone-microprofile.xml

With our new JBoss EAP plus MicroProfile server started, let’s deploy a sample app. Open a separate terminal and:

  1. Use Git to clone the Quickstarts repository to your local machine (I’ll put it in /tmp as well):
    $ git clone https://github.com/jboss-developer/jboss-eap-quickstarts /tmp/jboss-eap-quickstarts
  2. Build and deploy the sample helloworld-rs (a simple RESTful app using JAX-RS) to the running JBoss EAP:
    $ mvn clean install wildfly:deploy -f /tmp/jboss-eap-quickstarts/helloworld-rs

Now that our sample app is deployed, let’s try to access the MicroProfile Metrics API to gather metrics about our app and the server:

$ curl -s http://localhost:9990/metrics

# HELP jboss_undertow_request_count_total The number of requests this listener has served
# TYPE jboss_undertow_request_count_total counter
jboss_undertow_request_count_total{https_listener="https",server="default-server",microprofile_scope="vendor"} 0.0
jboss_undertow_request_count_total{http_listener="default",server="default-server",microprofile_scope="vendor"} 3.0
jboss_undertow_request_count_total{deployment="helloworld-rs.war",servlet="org.jboss.as.quickstarts.rshelloworld.JAXActivator",subdeployment="helloworld-rs.war",microprofile_scope="vendor"} 0.0

You’ll see lots of metrics in the OpenMetrics format that JBoss EAP exposes. This output could later be hooked up to Prometheus if you wanted to set up alerts on different metrics. We can filter based on scope and metric name to only show a subset for our app:

$ curl -s http://localhost:9990/metrics/vendor/jboss_undertow_request_count | grep helloworld-rs.war

jboss_undertow_request_count_total{deployment="helloworld-rs.war",servlet="org.jboss.as.quickstarts.rshelloworld.JAXActivator",subdeployment="helloworld-rs.war",microprofile_scope="vendor"} 0.0

This output shows us the metrics from the Undertow subsystem for our helloworld-rs app, showing that there have been zero requests.

Now access the actual app itself one time:

$ curl http://localhost:8080/helloworld-rs/rest/json

{"result":"Hello World!"}

Let’s access the MicroProfile Metrics again, expecting that the request count should be 1.0:

$ curl -s http://localhost:9990/metrics/vendor/jboss_undertow_request_count | grep helloworld-rs.war

jboss_undertow_request_count_total{deployment="helloworld-rs.war",servlet="org.jboss.as.quickstarts.rshelloworld.JAXActivator",subdeployment="helloworld-rs.war",microprofile_scope="vendor"} 1.0

Indeed it is (look at the end of the line: 1.0, where previously it was 0.0). Note that in order for metrics to be generated, you must remember to enable statistics on the server using -Dwildfly.statistics-enabled=true.

Using MicroProfile APIs

In the previous example, we didn’t actually write any code that uses the MicroProfile APIs in the helloworld-rs application. Instead, the core MicroProfile capabilities of JBoss EAP XP were reporting on standard metrics. Let’s now use one of the MicroProfile APIs, MicroProfile OpenAPI, in our app.

JBoss EAP XP can generate OpenAPI documentation for applications even when they don’t use any MicroProfile OpenAPI annotations. Run this curl command to see what it generates for our app:

$ curl http://localhost:8080/openapi
---
openapi: 3.0.1
info:
  title: helloworld-rs.war
  version: "1.0"
servers:
- url: /helloworld-rs
paths:
  /rest/json:
    get:
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                type: string
  /rest/xml:
    get:
      responses:
        "200":
          description: OK
          content:
            application/xml:
              schema:
                type: string

This outputs the OpenAPI-formatted documentation for our example REST APIs. While this is useful, let’s improve it with MicroProfile OpenAPI!

Before we do that, we’ll need to add the dependencies to our pom.xml. First, add a <dependencyManagement> element to pull in the MicroProfile Bill of Materials (BOM). Add this to the pom.xml in the quickstart’s base directory—in my case, /tmp/jboss-eap-quickstarts/helloworld-rs/pom.xml—right before the existing <dependencies> block:

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.jboss.bom</groupId>
                <artifactId>jboss-eap-xp-microprofile</artifactId>
                <version>1.0.0.GA</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

Next, inside the existing <dependencies> block, add a new dependency for MicroProfile OpenAPI:

<dependency>
  <groupId>org.eclipse.microprofile.openapi</groupId>
  <artifactId>microprofile-openapi-api</artifactId>
  <scope>provided</scope>
</dependency>

With our dependencies specified, we can now use the MicroProfile APIs. In the src/main/java/org/jboss/as/quickstarts/rshelloworld/HelloWorld.java file, look for the getHelloWorldJSON() method, and add the following line above the @GET annotation:

@Operation(description = "Get Helloworld as a JSON object")

This adds a simple OpenAPI annotation that will add the description in the /openapi output. You will also need to add a new import statement at the top of the file:

import org.eclipse.microprofile.openapi.annotations.Operation;

Save the file, and re-deploy the app with this command:

$ mvn clean install wildfly:deploy -f /tmp/jboss-eap-quickstarts/helloworld-rs

With the new OpenAPI-annotated app in place, access the OpenAPI endpoint once again:

$ curl  http://localhost:8080/openapi
---
openapi: 3.0.1
info:
  title: helloworld-rs.war
  version: "1.0"
servers:
- url: /helloworld-rs
paths:
  /rest/json:
    get:
      description: Get Helloworld as a JSON object
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                type: string
  /rest/xml:
    get:
      responses:
        "200":
          description: OK
          content:
            application/xml:
              schema:
                type: string

You can see the new description added in the docs for the /rest/json endpoint. You can further enhance/complete your OpenAPI documentation by adding additional MicroProfile OpenAPI annotations. You will need to rebuild/redeploy for those changes to be reflected in the OpenAPI document.

There are many other MicroProfile APIs you can use to enhance your applications, including fault tolerance, security with JWT, REST clients, and more. The JBoss EAP XP Quickstarts illustrate how each is used to create Java microservices on JBoss EAP. Users of CodeReady Studio can also use MicroProfile APIs, as outlined in this article.

Using JBoss EAP XP on OpenShift

JBoss EAP XP is also available through OpenShift S2I images, just like JBoss EAP itself. Let’s deploy an example. First, you’ll need an OpenShift 4.x cluster with access to registry.redhat.io and the oc command-line tool, and you must be logged in to your cluster. Then:

    1. Create a new project to house our app:
      $ oc new-project eap-demo
    2. Import the ImageStream definitions for XP on OpenJDK 11 (this requires cluster-admin privileges):
      $ oc replace --force -n openshift -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-openshift-templates/eap-xp1/jboss-eap-xp1-openjdk11-openshift.json
    3. Import the Templates that define how apps are deployed (this requires cluster-admin privileges):
      $ oc replace --force -n openshift -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-openshift-templates/eap-xp1/templates/eap-xp1-basic-s2i.json
    4. Create the app from the template:
      $ oc new-app --template=eap-xp1-basic-s2i -p APPLICATION_NAME=eap-demo \
      -p EAP_IMAGE_NAME=jboss-eap-xp1-openjdk11-openshift:1.0 \
      -p EAP_RUNTIME_IMAGE_NAME=jboss-eap-xp1-openjdk11-runtime-openshift:1.0 \
      -p IMAGE_STREAM_NAMESPACE=openshift \
      -p SOURCE_REPOSITORY_URL=https://github.com/jboss-developer/jboss-eap-quickstarts \
      -p SOURCE_REPOSITORY_REF=7.3.x \
      -p CONTEXT_DIR="helloworld-rs"
    5. Two builds will run, one after the other, to build the app. Follow their logs with:
      $ oc logs -f bc/eap-demo-build-artifacts && oc logs -f bc/eap-demo
    6. After both builds complete, watch the rollout with:
      $ oc rollout status -w dc/eap-demo

      The build and rollout will take some time to finish.

    7. Once the app is done building and deploying, access the same OpenAPI endpoint as before:
      $ curl -k https://$(oc get route eap-demo -o jsonpath="{.spec.host}")/openapi

You can also see the deployed app on the OpenShift Topology view in the OpenShift Console:

OpenShift Topology View with JBoss EAP Application

Documentation

The new Using Eclipse MicroProfile in JBoss EAP guide can be found within the JBoss EAP documentation. The guide covers important details about MicroProfile and how developers can quickly get started using MicroProfile APIs in their JBoss projects. You can also find important information about the release itself in the Red Hat JBoss Enterprise Application Platform Expansion Pack 1.0 Release Notes.

Getting support for JBoss EAP XP

Support for expansion packs is available to Red Hat customers through a subscription to Red Hat Runtimes. Contact your local Red Hat representative or Red Hat Sales for details on how you can enjoy world-class support offered by Red Hat and its worldwide partner network.

JBoss Enterprise Application Platform expansion pack (JBoss EAP XP or EAP XP) is subject to a separate support policy and life cycle, which allows closer alignment with the Eclipse MicroProfile specification’s release cadence. JBoss EAP server instances with EAP XP set up are covered in their entirety by the new EAP XP policy and life cycle.

By setting up EAP XP, your server will be subject to the EAP XP support and life cycle policy. Please refer to the JBoss Enterprise Application Platform expansion pack Life Cycle page for more details.

Enterprise Kubernetes development with odo: The CLI tool for developers

Kubernetes conversations rarely center on the developer’s perspective. As a result, doing our job in a Kubernetes cluster often requires building complicated YAML resource files, writing custom shell scripts, and understanding the countless options available in kubectl and docker commands. On top of all of that, we have the learning curve of understanding Kubernetes terminology and using it the way that operations teams do.

To address these challenges, the Red Hat Developer Tools team created odo (OpenShift Do), a command-line interface (CLI) tool built for developers and designed to prioritize the things that developers care about. In this article, I will use a hands-on example to introduce you to the benefits of using odo in conjunction with Kubernetes.

Improving the developer workflow

First, let’s consider a typical workflow for a developer whose team has adopted Kubernetes. The workflow starts with local development activities and finishes with containers deployed and code running in one or more Kubernetes clusters. To help visualize this flow, you can think of it in terms of an inner loop and an outer loop. The inner loop consists of local coding, building, running, and testing the application—all activities that you, as a developer, can control. The outer loop consists of the larger team processes that your code flows through on its way to the cluster: code reviews, integration tests, security and compliance, and so on. The inner loop could happen mostly on your laptop. The outer loop happens on shared servers and runs in containers, and is often automated with continuous integration/continuous delivery (CI/CD) pipelines. Usually, a code commit to source control is the transition point between the inner and outer loops. Figure 1 illustrates the interplay of these loops in a Kubernetes development process.

Figure 1. A flow diagram of the inner and outer loops in a Kubernetes development process.

Notice that, while you code, you are constantly iterating through various development activities: You code, build, deploy locally, and debug—and you keep going until you achieve a degree of feature completeness. At some point, you will be ready to transition from inner to outer, right? Not so fast.

Deploying from the inner loop

You might think that your job stops at local testing and a Git pull request (or a git push)—but that’s not usually the case. You will still need to ensure that your code functions correctly in containers, runs in the cluster, and plays nicely with other containerized components. Therefore, you will want some iterations of your inner loop to deploy and debug directly into the Kubernetes cluster.

Here’s a list of steps you might typically follow to deploy from the inner loop:

  1. Describe how to configure the OS for your container:
    • Write a Dockerfile to set up Linux.
  2. Describe how to package your app into a container image:
    • Update the Dockerfile.
  3. Create a container image:
    • Issue the commands docker build and docker tag.
  4. Upload the container image to a registry:
    • Issue a docker push.
  5. Write one or more Kubernetes or OpenShift resource files:
    • Write lots of YAML.
  6. Deploy your app to the cluster:
    • Issue the command: kubectl apply -f my_app.yaml.
  7. Deploy other services to the cluster:
    • Issue the command: kubectl apply -f svc*.yaml.
  8. Write the config (or set ENV) to allow apps to work together:
    • Issue a kubectl create configmap.
  9. Configure apps to work together correctly:
    • Issue a kubectl apply -f my_configmap.yaml.

That’s a lot of steps!

Enter odo

Red Hat OpenShift’s oc CLI tool can help make many of those steps easier; however, oc is operations focused. Using it requires a deep understanding of Kubernetes and OpenShift concepts. Odo, on the other hand, was designed to be simple and concise:

  • Its syntax and design center on concepts familiar to developers, such as projects, applications, and components.
  • It automates the creation of deployment configurations, build configurations, service routes, and other OpenShift elements.
  • It is designed for quick iterations—as an example, it detects changes to local code and deploys to the cluster automatically, giving developers instant feedback to validate changes in real time.
  • It is completely client-based, so no server-side-component setup is required.

Odo also offers:

  • Red Hat support for Node.js and Java components.
  • Compatibility with other languages such as Ruby, Perl, PHP, and Python.
  • Status updates for components and services on the cluster.

Odo works from any terminal on the Windows, macOS, and Linux operating systems, and it supports autocompletion for bash and zsh command-line shells.

That’s enough overview. Let’s see odo in action.

Hands-on development with odo

If you want to follow along with this example, start by downloading odo for your platform of choice.

For macOS, the command is:

> curl -L https://mirror.openshift.com/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64 -o /usr/local/bin/odo && chmod +x /usr/local/bin/odo

For Linux, it’s:

> curl -L https://mirror.openshift.com/pub/openshift-v4/clients/odo/latest/odo-linux-amd64 -o /usr/local/bin/odo && chmod +x /usr/local/bin/odo

Next, clone the example source code:

> git clone https://github.com/RedHatGov/openshift-workshops.git
> cd openshift-workshops/dc-metro-map

If you aren’t already logged in to your cluster with oc, run this and enter your login info:

> odo login https://api.yourcluster.com:6443

Alternatively, you could use the following link to get a token-based login (note that you must update the URL with your cluster’s domain name): https://oauth-openshift.apps.yourcluster.com/oauth/token/display.

We now have a setup for a sample Node.js application. In the next sections, I’ll show you how to use odo to deploy the app to a Kubernetes cluster; configure and connect the app to other services; and update an environment variable and verify the changes in a web browser. I’ll conclude by showing you how to do a simple code change and quickly iterate through the development process before propagating your local code back into the Kubernetes cluster.

Part 1: Deploy the app

The first thing you’ll do is set up a new project and deploy it on a Kubernetes cluster.

  1. Create a project that only you can work in by entering a command similar to the one below:
    > odo project create jasons-odo
    

    You should see output similar to mine below:

    ✓ Project 'jasons-odo' is ready for use
    ✓ New project created and now using project: jasons-odo
    
  2. Create a Node.js component for the new project:
    > odo create nodejs
    

    The output should look something like this:

    ✓ Validating component [61ms]
    Please use `odo push` command to create the component with source deployed
    
  3. Push the changes—in this case, a new component and the example application code—to the cluster:
    > odo push
    

    You should see something like this:

    Validation
     ✓  Checking component [116ms]
    
    Configuration changes
     ✓  Initializing component
     ✓  Creating component [336ms]
    
    Pushing to component nodejs-dc-metro-map-zvff of type local
     ✓  Checking files for pushing [2ms]
     ✓  Waiting for component to start [1m]
     ✓  Syncing files to the component [7s]
     ✓  Building component [32s]
     ✓  Changes successfully pushed to component
    

The code is now running in a container on the cluster. But we also want to create a URL route into the code so that we can view the running application in a web browser. Next steps:

  1. Expose an HTTP route into your Node.js app:
    > odo url create --port 8080

    Check the output:

    ✓  URL nodejs-dc-metro-map-zvff-8080 created for component: nodejs-dc-metro-map-zvff
    To create URL on the OpenShift Cluster, please use `odo push`
    
  2. Push the new URL change to the cluster:
    > odo push
    

    Check the output:

    Validation
     ✓  Checking component [88ms]
    
    Configuration changes
     ✓  Retrieving component data [107ms]
     ✓  Applying configuration [107ms]
    
    Applying URL changes
     ✓  URL nodejs-dc-metro-map-zvff-8080: http://nodejs-dc-metro-map-zvff-8080-app-jasons-odo.apps.yourcluster.com created
    
    Pushing to component nodejs-dc-metro-map-zvff of type local
     ✓  Checking file changes for pushing [7ms]
     ✓  No file changes detected, skipping build. Use the '-f' flag to force the build.
    

To verify that the deployment has worked, locate the URL in the command output just shown (or run odo url list) and try opening it in your web browser. You should see something like the map in Figure 2.

Figure 2. A map of transit stops in Washington D.C.’s Federal Triangle.

Part 2: Configure and connect the app to other services

Next, you’ll use odo to add a database dependency to your Node.js app. For this to work, your cluster will need to have both OpenShift Service Catalog and Template Service Broker installed.

  1. Create the database and pass in the defaults for the config variables:
    > odo service create mongodb-persistent --plan default --wait \
    -p DATABASE_SERVICE_NAME=mongodb -p MEMORY_LIMIT=512Mi \
    -p MONGODB_DATABASE=sampledb -p VOLUME_CAPACITY=1Gi
    

    Here’s the output:

    Deploying service mongodb-persistent of type: mongodb-persistent
    
     ✓  Deploying service [55ms]
     ✓  Waiting for service to come up [3m]
     ✓  Service 'mongodb-persistent' is ready for use
    

    Optionally, link mongodb-persistent to your component by running: odo link.

  2. Provide your Node.js app with the database credentials and other secrets needed to configure and connect to the database:
    > odo link mongodb-persistent
    

    You should see something like the following output:

    ✓  Service mongodb-persistent has been successfully linked to the component nodejs-dc-metro-map-zvff
    
    The below secret environment variables were added to the 'nodejs-dc-metro-map-zvff' component:
    
    admin_password
    database_name
    password
    uri
    username
    
    You can now access the environment variables from within the component pod, for example:
    $uri is now available as a variable within component nodejs-dc-metro-map-zvff
    

Part 3: Update the environment variables

Let’s say you need to update some env vars for your containerized Node.js app. Doing that with odo is really straightforward.

  1. Tell odo what env var to add or update:
    > odo config set --env BEERME=true
    

    You should see something like the following output:

     ✓  Environment variables were successfully updated
    Run `odo push --config` command to apply changes to the cluster.
    
  2. Push the changes with the new env var to the cluster:
    > odo push --config
    

    You should see something like this:

    Validation
     ✓  Checking component [84ms]
    
    Configuration changes
     ✓  Retrieving component data [96ms]
     ✓  Applying configuration [40s]
    
    Applying URL changes
     ✓  URL nodejs-dc-metro-map-zvff-8080 already exists
    

Now refresh the page in your web browser. You’ll see that the new env var has taken effect. Your map icons should now look like pint glasses, as shown in Figure 3.

Figure 3. The updated map icons verify that changing the environment variable worked.

Part 4: Iterate the inner loop

In this last part, I’ll show you how to do a simple code change with odo. I’ll also demonstrate how iterating on your inner loop easily propagates local code into the cluster deployment.

  1. Edit the local file public/assets/stations.geojson to add a new transit stop. Append it to the bottom of the file, right after Ronald Reagan Washington National Airport:
    > vim public/assets/stations.geojson
       {
          "type": "Feature",
          "properties": {
            "name": "Presidential Metro Stop",
            "marker-color": "#ffd700",
            "marker-symbol": "rail-metro",
            "line": "blue"
          },
          "geometry": {
            "type": "Point",
            "coordinates": [
              -77.0365,
              38.8977
            ]
          }
        }
    
  2. Push changes to the cluster:
    > odo push

    You should see the following output:

    Validation
     ✓  Checking component [86ms]
    
    Configuration changes
     ✓  Retrieving component data [96ms]
     ✓  Applying configuration [114ms]
    
    Applying URL changes
     ✓  URL nodejs-dc-metro-map-zvff-8080 already exists
    
    Pushing to component nodejs-dc-metro-map-zvff of type local
     ✓  Checking file changes for pushing [3ms]
     ✓  Waiting for component to start [23ms]
     ✓  Syncing files to the component [1s]
     ✓  Building component [3s]
     ✓  Changes successfully pushed to component
    

Now, refresh the web page. You should see that there’s a new transit stop for the White House, as shown in Figure 4.

Figure 4. The updated map shows that code changes have been successfully pushed to the deployed cluster.

Conclusion

In this article, I showed you how to use odo for a variety of day-to-day development activities (what I call the inner loop of a Kubernetes-based development process). I also showed you how to deploy and debug iterations of your inner loop directly into the Kubernetes cluster.

We completed all of the tasks required to develop and deploy the example application without writing any YAML, without bash scripts, and without needing to understand the deep concepts of Kubernetes operations. Instead, we used the CLI and just a handful of commands—odo, project, create, push, service, url, link, config.

Odo can do a few things I didn’t cover in this article. See the official odo documentation to learn more about its full capabilities.

Also, if you liked the concepts in this article but really don’t like using a CLI, Red Hat has you covered. We’ve embedded odo into a VS Code plugin and a JetBrains plugin, so that you can get the same capability directly in an IDE.

Odo is just one of the awesome tools that Red Hat has been working on to make it easier for developers to build modern applications with open source software. Stay tuned for more articles introducing these tools that are tailored just for developers.
