In Open Liberty 20.0.0.1, you can configure the Social Login feature to use Red Hat OpenShift's OAuth server for authentication. There is also a new MicroProfile Metrics metric, processCpuTime, for measuring the CPU time used by the server process. This release also offers faster application startup with Liberty annotation caching, and an updated JavaServer Faces feature.
Run your apps using 20.0.0.1
If you're using Maven, here are the coordinates:
<dependency>
    <groupId>io.openliberty</groupId>
    <artifactId>openliberty-runtime</artifactId>
    <version>20.0.0.1</version>
    <type>zip</type>
</dependency>
If you're using Gradle:
dependencies {
    libertyRuntime group: 'io.openliberty', name: 'openliberty-runtime', version: '[20.0.0.1,)'
}
Or, if you're using Docker:
FROM open-liberty
Use Liberty Social Login with Red Hat OpenShift
The Social Login feature (socialLogin-1.0) can now be configured to use OpenShift's built-in OAuth server and the OAuth Proxy sidecar as authentication providers. The Social Login feature has several pre-configured providers (e.g., Google, GitHub, and Facebook), but you can also configure additional providers (e.g., Instagram). OpenShift's OAuth server and OAuth Proxy sidecar can now be configured as additional providers too. The first is a standard OAuth Authorization Code flow, where a web browser accessing an app running on Liberty is redirected to the OpenShift OAuth server to authenticate. The second accepts an inbound token from the OpenShift OAuth Proxy sidecar or obtains one from an OpenShift API call. This second approach requires less cluster-specific configuration.
Most people using this feature will run Liberty in a pod. However, in the Authorization Code flow, Liberty can run outside the OpenShift cluster. In either mode, an optional JWT can be created for propagation to downstream services.
Using OpenShift as a provider differs slightly from other OAuth providers, in that it requires a service account token to obtain information about the OAuth tokens. Once the client ID, secret, and token have been obtained from OpenShift, Liberty can be configured.
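A minimal sketch of obtaining the service account token with the oc CLI is shown below. The service account name is hypothetical, and the client ID and secret for the Authorization Code flow come from an OAuth client registered separately with the OpenShift OAuth server:

# Hypothetical service account name; run this in the project where Liberty is deployed.
oc create sa liberty-client

# Allow the service account to call the TokenReview API that the userApi attribute points at.
oc adm policy add-cluster-role-to-user system:auth-delegator -z liberty-client

# Print the token to supply to Liberty as the serviceAccountToken variable.
oc sa get-token liberty-client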
To enable your server to use an OpenShift OAuth server, add it to the server.xml file like this:
<server description="social"> <!-- Enable features --> <featureManager> <feature>appSecurity-3.0</feature> <feature>socialLogin-1.0</feature> </featureManager> <logging traceSpecification="com.ibm.ws.security.*=all=enabled" maxFiles="8" maxFileSize="200"/> <httpEndpoint id="defaultHttpEndpoint" host="*" httpPort="8941" httpsPort="8946" > <tcpOptions soReuseAddr="true" /> </httpEndpoint> <!-- specify your clientId, clientSecret and userApiToken as liberty variables or environment variables --> <oauth2Login id="openshiftLogin" scope="user:full" clientId="${myclientId}" clientSecret="${myclientSecret}" authorizationEndpoint="https://oauth-openshift.apps.papains.os.fyre.ibm.com/oauth/authorize" tokenEndpoint="https://oauth-openshift.apps.papains.os.fyre.ibm.com/oauth/token" userNameAttribute="username" groupNameAttribute="groups" userApiToken="${serviceAccountToken}" userApiType="kube" userApi="https://api.papains.os.fyre.ibm.com:6443/apis/authentication.k8s.io/v1/tokenreviews"> </oauth2Login> <keyStore id="defaultKeyStore" password="keyspass" /> <!-- more application config would go here --> </server>
In the sidecar scenario, the configuration changes to accept an inbound token from the sidecar. To set up your server to use an OAuth proxy sidecar:
<!-- Specify your userApiToken as a Liberty variable or environment variable -->
<!-- Note that no clientId or clientSecret are needed -->
<oauth2Login id="openshiftLogin"
    scope="user:full"
    userNameAttribute="username"
    groupNameAttribute="groups"
    userApiToken="${serviceAccountToken}"
    userApiType="kube"
    accessTokenHeaderName="X-Forwarded-Access-Token"
    accessTokenRequired="true"
    userApi="https://kubernetes.default.svc/apis/authentication.k8s.io/v1/tokenreviews">
</oauth2Login>
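For context, the sidecar is typically the OpenShift oauth-proxy image running in the same pod as Liberty, configured to forward the access token in the X-Forwarded-Access-Token header that the configuration above reads. The following container snippet is only an illustrative sketch; the image tag, service account, ports, and secret names are assumptions:

# Illustrative oauth-proxy sidecar container in the same pod as the Liberty container.
- name: oauth-proxy
  image: openshift/oauth-proxy:latest
  args:
  - --provider=openshift
  - --openshift-service-account=liberty-client   # service account acting as the OAuth client
  - --upstream=http://localhost:9080             # Liberty's HTTP port inside the pod
  - --https-address=:8443
  - --tls-cert=/etc/tls/private/tls.crt
  - --tls-key=/etc/tls/private/tls.key
  - --cookie-secret=SECRET
  - --pass-access-token                          # forwards the token as X-Forwarded-Access-Token
  ports:
  - containerPort: 8443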
Using HTTPS communication requires one of two things. You can either give your server a key signed by a well-known certificate authority, which Liberty can trust automatically, or you can add the server's public key to the Liberty trust store. OpenShift does not come with CA-signed keys by default, so the Red Hat OpenShift OAuth server's public key needs to be added. The most convenient way to do this is to specify an environment variable in server.env, like so:
# server.env

# OAuth sidecar scenario: causes the Kubernetes default certificate that is
# pre-installed in pods to be added to the Liberty trust store.
cert_defaultKeyStore=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# OAuth server scenario: causes the public keys from /tmp/trustedcert.pem
# (obtained separately) to be added to the Liberty trust store.
cert_defaultKeyStore=/tmp/trustedcert.pem
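For the OAuth server scenario, one way to obtain the PEM file referenced above is to pull the OAuth server's certificate with openssl; the host name here is the one from the earlier example and will differ on your cluster:

# Illustrative: save the OAuth server's certificate for the Liberty trust store.
openssl s_client -connect oauth-openshift.apps.papains.os.fyre.ibm.com:443 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > /tmp/trustedcert.pem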
Monitor the process CPU time (MicroProfile Metrics 2.0)
A new metric, processCpuTime, returns the CPU time used by the process on which the JVM is running. The MicroProfile Metrics feature provides information for monitoring an application, such as CPU time used, memory heap usage, and servlet response time.
The new processCpuTime metric provides a more accurate CPU load percentage on cloud platforms via Grafana. Previously, the CPU load percentage was shown with the processCpuLoad metric. However, that load percentage was calculated using the total number of cores allocated to the deployment, so if the deployment is restricted to fewer cores, processCpuLoad shows a plateau on Grafana when the maximum number of allowed cores is reached. For example, on a deployment with 32 cores allocated but restricted to four cores, the processCpuLoad graph plateaus at 12.5% when all four cores are fully used.
The new processCpuTime metric can be manipulated on Grafana to create a more accurate representation of CPU use. The PromQL query rate(processCpuTime[1m]) shows the average rate of increase in CPU time over one minute. Dividing this result by the total number of CPU cores gives a more accurate percentage of CPU use that takes the core constraints into account.
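As an illustrative example, if processCpuTime increases by 120 CPU-seconds over a one-minute window in a pod limited to four cores, rate() reports an average of two CPU-seconds per second; dividing by the four available cores gives 50% CPU utilization.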
The new processCpuTime metric is displayed on the /metrics endpoint with the MicroProfile Metrics 2.0 and 2.2 features. On the dashboard, a new panel can be created with the following PromQL query:
(rate(base:cpu_process_cpu_time[1m])/1e9) / base:cpu_available_processors{app=~[[app]]}
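Outside Grafana, you can check that the metric is being emitted by querying the metrics endpoint directly. The port and credentials below are assumptions based on a default configuration; the endpoint requires an authenticated user unless authentication is disabled on the mpMetrics element:

# Illustrative: list the base-scope metrics, which include the new process CPU time metric.
curl -k -u admin:adminpwd https://localhost:9443/metrics/base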
Start your applications faster with Liberty annotation caching
Application startup is now faster due to the addition of annotation caching to the core class and annotation scanning function. Depending on application characteristics, startup times are reduced by 10% to more than 50%. Applications with many JAR files, or that use CDI or JAX-RS functions, see the biggest improvements, as shown in Figure 1.
Note: Good news! Annotation caching is enabled by default.
Annotation cache data is stored in the server's work area. The cache of application class data is cleared when you perform a clean server start (starting the server with the --clean option). In normal operations, clearing the cache data is not necessary, since cache data is automatically regenerated for changed application classes.
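If you do need a clean start, it looks like this (the server name is hypothetical):

# Clears cached data, including the annotation cache, before starting the server.
bin/server start defaultServer --clean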
In container environments, for annotation caching to be effective, the server image must be warmed when the container image is created. Warming the server can be done by starting and stopping the server during the container build. Warming the image moves the annotation scan into the container build, meaning that you get optimal startup for container deployments. Using the configure.sh script in the base open-liberty Docker images causes the server to be started and stopped during the container build.
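A minimal sketch of a warmed image build is shown below; the application name and copied files are assumptions, while the /config locations and configure.sh follow the base image's conventions:

FROM open-liberty

# Copy the server configuration and a hypothetical application into the image.
COPY --chown=1001:0 server.xml /config/
COPY --chown=1001:0 target/myapp.war /config/apps/

# configure.sh starts and stops the server during the build, which warms the annotation cache.
RUN configure.sh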
Bug fixes in JavaServer Faces 2.3
The JavaServer Faces 2.3 feature has been updated to load Apache MyFaces 2.3.6, which contains bug fixes. The jsf-2.3 feature pulls in the Apache MyFaces implementation and integrates it into the Liberty runtime. View the Apache MyFaces 2.3.6 release notes for more information about the fixes it contains.
To use JSF 2.3, enable the jsf-2.3 feature to leverage the latest Apache MyFaces 2.3.x release. For more information about the JavaServer Faces implementation, view the Apache MyFaces website.
Try Open Liberty 20.0.0.1 in Red Hat Runtimes now
Open Liberty is part of the Red Hat Runtimes offering. If you're a Red Hat Runtimes subscriber, you can try Open Liberty now.
To learn more about deploying Open Liberty applications to OpenShift, take a look at our Open Liberty guide: Deploying microservices to OpenShift.