
MicroProfile 3.3 now available on Open Liberty 20.0.0.4, brings updated features, yum/apt-get support, pattern tracking

 

April 24, 2020
yasmin-aumeeruddy
Related topics: Linux, Kubernetes, Microservices
Related products: Developer Tools


    Open Liberty 20.0.0.4 provides support for MicroProfile 3.3, which includes updates to MicroProfile Rest Client, Fault Tolerance, Metrics, Health, and Config. The release also improves the developer experience with support for yum/apt-get installs and the ability to track usage patterns with JAX-RS 2.1.

    The updates for Open Liberty 20.0.0.4 include:

    • MicroProfile 3.3
      • Determine HTTP headers on outgoing requests easily (MicroProfile Rest Client 1.4)
      • Monitor microservice applications (MicroProfile Metrics 2.3)
      • Provide health check procedures (MicroProfile Health 2.2)
      • Monitor faults in your microservices (MicroProfile Fault Tolerance 2.1)
      • Configure your microservices externally (MicroProfile Config 1.4)
    • Track usage patterns and performance of services (JAX-RS 2.1)
    • Support for yum/apt-get installations
    • Automatically compress HTTP responses
    • Open Liberty Grafana dashboard now available at grafana.com
    • Support OpenShift service account credentials for authentication

    View the list of fixed bugs in 20.0.0.4.

    Run your applications using 20.0.0.4

    If you're using Maven, here are the coordinates:

    <dependency>
        <groupId>io.openliberty</groupId>
        <artifactId>openliberty-runtime</artifactId>
        <version>20.0.0.4</version>
        <type>zip</type>
    </dependency>

    Or for Gradle:

    dependencies {
        libertyRuntime group: 'io.openliberty', name: 'openliberty-runtime', version: '[20.0.0.4,)'
    }

    Or if you're building an image:

    FROM open-liberty

    MicroProfile 3.3 support

    MicroProfile 3.3 improves developer experience with updates to the Rest Client, Fault Tolerance, Metrics, Health, and Config features.

    Determine HTTP headers on outgoing requests (MicroProfile Rest Client 1.4)

    MicroProfile Rest Client 1.4 adds injection into ClientHeadersFactory instances. When executing a Rest Client inside a JAX-RS request, it can be useful to pull data from the JAX-RS request's context or from CDI in order to determine which HTTP headers to send on the outgoing request. With MicroProfile Rest Client 1.4, this is now possible.

    To enable MicroProfile Rest Client 1.4, add this feature to your server.xml:

    <feature>mpRestClient-1.4</feature>

    CDI and JAX-RS injection into your ClientHeadersFactory helps you to do things like this:

    @ApplicationScoped
    public class MyCustomClientHeadersFactory implements ClientHeadersFactory {
    
        @Context
        private UriInfo uriInfo;
    
        @Inject
        private Foo foo;
    
        @Override
        public MultivaluedMap<String, String> update(MultivaluedMap<String, String> incomingHeaders, MultivaluedMap<String, String> clientOutgoingHeaders) {
            MultivaluedMap<String, String> myHeaders = new MultivaluedHashMap<>();
            myHeaders.putSingle("X-HEADER_FROM_CUSTOM_CLIENTHEADERSFACTORY", "456");
    
            URI uri = uriInfo.getAbsolutePath();
            myHeaders.putSingle("X-INJECTED_URI_INFO", uri == null ? "null" : uri.toString());
    
            myHeaders.putSingle("X-INJECTED_FOO", foo.getWord());
    
            return myHeaders;
        }
    }
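
    To wire the factory into a client, register it on the Rest Client interface with @RegisterClientHeaders. The following is a minimal sketch; the interface name, base URI, and resource path are hypothetical:

    import java.util.List;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import org.eclipse.microprofile.rest.client.annotation.RegisterClientHeaders;
    import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

    // Hypothetical client interface; the headers produced by
    // MyCustomClientHeadersFactory are added to every outgoing request.
    @RegisterRestClient(baseUri = "http://localhost:9080/data")
    @RegisterClientHeaders(MyCustomClientHeadersFactory.class)
    public interface DataServiceClient {

        @GET
        @Path("/items")
        List<String> getItems();
    }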

    Learn more about the MicroProfile Rest Client here.

    Track REST resource use with Simple Timer metric and runtime (MicroProfile Metrics 2.3)

    MicroProfile Metrics 2.3 introduces a new metric type, the Simple Timer (annotated with @SimplyTimed), along with runtime-provided metrics that track REST resource method usage and are backed by the new Simple Timer metric. The Simple Timer is a lightweight alternative to the existing Timer metric: it only tracks the total timing duration and counts the number of invocations. The Timer metric, by contrast, is more expensive because it continually calculates duration and throughput statistics, resulting in 14 values.

    The new REST stat metrics are gathered from REST resource method usage (for example, GET, POST, PUT, DELETE, OPTIONS, PATCH, and HEAD) and, being backed by the Simple Timer metric, track total time duration and total invocation count. This functionality is enabled when mpMetrics-2.3 is used in combination with the jaxrs-2.1 feature. All REST stat metrics use the REST.request metric name and are tagged with their fully qualified class name and method signature.

    To enable the feature, include the following in the server.xml:

    <feature>mpMetrics-2.3</feature>

    The monitor-1.0 feature is activated along with the mpMetrics-2.3 feature, which means vendor metrics are also enabled. To filter this data, add the following to the server.xml and choose the stats you want to see:

    <monitor filter="JVM,ThreadPool,WebContainer,Session,ConnectionPool,REST"/>

    Alternatively, disable all monitor-1.0 stats:

    <monitor filter=" "/> <!-- space required -->

    To use the new SimpleTimer metric programmatically:

    @Inject
    MetricRegistry metricRegistry;

    // create the metric
    Metadata metadata = Metadata.builder().withName("sampleSimpleTimer").build();
    SimpleTimer simpleTimer = metricRegistry.simpleTimer(metadata);

    // retrieve the simple timer context (starts timing)
    SimpleTimer.Context simpleTimerContext = simpleTimer.time();

    doLogic();
    // stop the simple timer
    simpleTimerContext.close();

    To use the SimpleTimer metric with annotations:

    @SimplyTimed(name = "sampleSimpleTimer")
    public void doSomething() {
        doLogic();
    }

    Here is the resulting OpenMetrics output:

    # TYPE application_sampleSimpleTimer_total counter
    application_sampleSimpleTimer_total 12
    # TYPE application_sampleSimpleTimer_elapsedTime_seconds gauge
    application_sampleSimpleTimer_elapsedTime_seconds 12.3200000

    With the mpMetrics-2.3 and jaxrs-2.1 features enabled, a REST resource like the following automatically produces REST stat metrics:

    package org.eclipse.microprofile.metrics.demo;

    @ApplicationScoped
    public class RestDemo {

      @POST
      public void postMethod(String[] s, Object o){
          ...
      }
    }

    The resulting OpenMetrics-formatted REST stat metrics would be:

    # TYPE base_REST_request_total counter
    base_REST_request_total{class="org.eclipse.microprofile.metrics.demo.RestDemo",method="postMethod_java.lang.String[]_java.lang.Object"} 1
    # TYPE base_REST_request_elapsedTime_seconds gauge
    base_REST_request_elapsedTime_seconds{class="org.eclipse.microprofile.metrics.demo.RestDemo",method="postMethod_java.lang.String[]_java.lang.Object"} 1.000

    Provide your own health check procedures (MicroProfile Health 2.2)

    MicroProfile Health Check 2.2 lets you provide your own health check procedures, which are invoked by Open Liberty to verify the health of your microservice. In the mpHealth-2.2 feature, all of the supported Qualifiers (Liveness and Readiness) now have annotation literals added in the specification. These ease programmatic lookup and support for inline instantiation of the qualifiers, which was not supported in the previous versions.

    Also, for better integration with third-party frameworks—like the MicroProfile Rest Client—the HealthCheckResponse class declaration was changed from an abstract class to a concrete class, with constructors allowing for direct instantiation on the consuming end. To enable the feature, include the following in the server.xml:

    <feature>mpHealth-2.2</feature>

    Applications are expected to provide health check procedures by implementing the HealthCheck interface with the @Liveness or @Readiness annotations. These annotations are used by Open Liberty to verify the application's Liveness or Readiness, respectively. Add your health checks in the call() method and return the HealthCheckResponse object, by using the simple up()/down() methods from the API:

    // Liveness check
    @Liveness
    @ApplicationScoped
    public class AppLiveCheck implements HealthCheck {
    ...
        @Override
        public HealthCheckResponse call() {
            ...
            return HealthCheckResponse.up("myCheck");
        }
    }
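
    A readiness check follows the same pattern with the @Readiness qualifier. The following is a minimal sketch; the check name and the isDatabaseReachable() helper are hypothetical:

    import javax.enterprise.context.ApplicationScoped;
    import org.eclipse.microprofile.health.HealthCheck;
    import org.eclipse.microprofile.health.HealthCheckResponse;
    import org.eclipse.microprofile.health.Readiness;

    @Readiness
    @ApplicationScoped
    public class AppReadyCheck implements HealthCheck {

        @Override
        public HealthCheckResponse call() {
            // Hypothetical helper that tests a downstream dependency.
            if (isDatabaseReachable()) {
                return HealthCheckResponse.up("readyCheck");
            }
            return HealthCheckResponse.down("readyCheck");
        }

        private boolean isDatabaseReachable() {
            // Replace with a real connectivity test.
            return true;
        }
    }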

    To view the status of each health check, access either the http://<host>:<port>/health/live or http://<host>:<port>/health/ready endpoint.

    Monitor faults in your microservices (MicroProfile Fault Tolerance 2.1)

    MicroProfile Fault Tolerance allows developers to easily apply strategies for mitigating failure to their code. It provides annotations that developers can add to methods to use bulkhead, circuit breaker, retry, timeout and fallback strategies. In addition, MicroProfile Fault Tolerance provides an annotation that causes a method to be run asynchronously.

    MicroProfile Fault Tolerance 2.1:

    • Adds the new parameters applyOn and skipOn to @Fallback, and adds skipOn to @CircuitBreaker, to give the user more control over which exceptions should trigger these strategies (see the sketch after this list). For example:
    @Fallback(applyOn=IOException.class, skipOn=FileNotFoundException.class, fallbackMethod="fallbackForService")
    public String readTheFile() {
        ...
    }
    • Ensures that the CDI request context is active during the execution of methods annotated with @Asynchronous.
    • Adds more detail into the Javadoc and makes minor clarifications to the specification.
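
    For the @Fallback example above, the named fallback method must be defined on the same class with a signature matching the annotated method. A minimal sketch, with a hypothetical method body:

    public String fallbackForService() {
        // Returned when readTheFile() fails with an IOException
        // other than FileNotFoundException.
        return "default file contents";
    }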

    For more information on MicroProfile Fault Tolerance:

    • Get an introduction to MicroProfile Fault Tolerance:
      • Failing fast and recovering from errors.
      • Preventing repeated failed calls to microservices.
    • Reference the Javadoc.
    • Reference the full specification including the 2.1 release notes.
    • Report any issues on Github.

    Configure your microservices externally with updated converters (MicroProfile Config 1.4)

    The MicroProfile Config 1.4 feature provides an implementation of the Eclipse MicroProfile Config 1.4 API, which mainly had changes to the built-in and implicit converters. The Open Liberty implementation already supported byte/Byte and short/Short, but char/Character has now been added.

    Suppose the following properties are available in a ConfigSource:

    byte1=127
    short1=5
    char1=\u00F6

    You can inject those properties into your application, either as primitives or as their boxed equivalents:

    @Dependent
    public class MyBean {
    
        @Inject
        @ConfigProperty(name = "byte1")
        private Byte property1;
    
        @Inject
        @ConfigProperty(name = "byte1")
        private byte property2;
    
        @Inject
        @ConfigProperty(name = "short1")
        private Short property3;
    
        @Inject
        @ConfigProperty(name = "short1")
        private short property4;
    
        @Inject
        @ConfigProperty(name = "char1")
        private Character property5;
    
        @Inject
        @ConfigProperty(name = "char1")
        private char property6;
    }

    To enable the feature, include the following in the server.xml:

    <feature>mpConfig-1.4</feature>

    Previously, in MicroProfile Config 1.3, the String constructor would have been used for the implicit conversion. In version 1.4, the parse(CharSequence) method is used instead. Notice that the parse method can use a simple cache: if the same raw String (String extends CharSequence) is converted twice, the same instance of the target type (MyType in the example below) is returned. This result would not be possible with a String constructor.
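
    To illustrate, here is a minimal sketch of an implicitly convertible type; MyType and its internal caching are hypothetical:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class MyType {

        // Hypothetical cache so that converting the same raw String twice
        // returns the same MyType instance.
        private static final Map<String, MyType> CACHE = new ConcurrentHashMap<>();

        private final String value;

        private MyType(String value) {
            this.value = value;
        }

        // MicroProfile Config 1.4 looks for this static method when
        // performing implicit conversion.
        public static MyType parse(CharSequence raw) {
            return CACHE.computeIfAbsent(raw.toString(), MyType::new);
        }

        public String getValue() {
            return value;
        }
    }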

    We have also made a notable internal change to the Open Liberty implementation. In versions prior to 1.4, our implementation included a background update thread that frequently scanned through the available ConfigSources and cached the results. This made calls to the Config API very fast. However, because the size and complexity of user-provided ConfigSources is unknown, the scanning could be expensive to run in the background.

    The background update thread has been replaced with an expiry process. The first request for a property might be a little slower because it might need to go through all of the available ConfigSources to find a value. Once found, the value is cached and a timer is started to expire the cache. If a second request for that property arrives before the cache expires, the cached value is returned quickly.

    To maintain the dynamic characteristics of the previous versions, the expiry time is set to only 500ms. This value can be increased by setting the microprofile.config.refresh.rate system property; 500ms is the minimum expiry time allowed, and if the property is set to zero or less, caching is disabled.
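
    For example, a property can be read programmatically through the Config API; repeated reads within the expiry window are served from the cache. This is a minimal sketch, and the property name is hypothetical:

    import org.eclipse.microprofile.config.Config;
    import org.eclipse.microprofile.config.ConfigProvider;

    public class PollingExample {

        public void readTwice() {
            Config config = ConfigProvider.getConfig();

            // The first read may scan all ConfigSources; the result is cached.
            String first = config.getValue("app.greeting", String.class);

            // A second read within the expiry window returns the cached value.
            String second = config.getValue("app.greeting", String.class);

            System.out.println(first + " / " + second);
        }
    }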

    For more information, see changes to the API since 1.3.

    Track use patterns and service performance (JAX-RS 2.1)

    The JAX-RS 2.1 auto-feature is enabled whenever the jaxrs-2.0 (or jaxrs-2.1) features are specified within the server.xml, along with the monitor-1.0 feature. This auto-feature introduces the ability to collect statistics related to the execution of an application's RESTful resource methods (specifically the number of invocations and the cumulative execution time). This data is useful for designing, debugging, and monitoring purposes.

    RESTful metrics can be accessed via the monitor-1.0 feature in combination with the mpMetrics-2.3 feature. This information is also accessible via JMX (JConsole, for example). Include the following in the server.xml for JMX/PMI access:

    <feature>jaxrs-2.0</feature> <!-- or jaxrs-2.1 -->
    <feature>monitor-1.0</feature>

    The JMX/PMI data collected is per-method and is aggregated to the class and web-module level. For example, suppose that a server has two web modules, each with identically named classes containing two resource methods. The REST_Stats results in JConsole will look like Figure 1.

    Figure 1: Examine your RESTful metrics with JAX-RS 2.1.
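
    If you prefer programmatic access from within the same JVM (for example, from a diagnostic servlet), the platform MBeanServer can be queried for these MBeans. This is a minimal sketch; the WebSphere:type=REST_Stats ObjectName pattern is an assumption based on the JConsole view in Figure 1, so verify the actual domain and keys in your environment:

    import java.lang.management.ManagementFactory;
    import java.util.Set;
    import javax.management.MBeanAttributeInfo;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class RestStatsLister {

        public static void listRestStats() throws Exception {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();

            // Assumed ObjectName pattern for the REST stats MBeans.
            ObjectName pattern = new ObjectName("WebSphere:type=REST_Stats,*");

            Set<ObjectName> names = mbs.queryNames(pattern, null);
            for (ObjectName name : names) {
                System.out.println(name);
                for (MBeanAttributeInfo attr : mbs.getMBeanInfo(name).getAttributes()) {
                    System.out.println("  " + attr.getName() + " = " + mbs.getAttribute(name, attr.getName()));
                }
            }
        }
    }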

    Install Open Liberty with yum/apt-get

    Open Liberty is now available as a native Linux .deb or .rpm package, so you can now use native OS tools (yum/apt) to manage your Open Liberty installations. To access Open Liberty rpms/debs, you'll have to configure your machine to use the Open Liberty repository.

    Configure your machine

    On Ubuntu systems:

    1. Append the following line to the file /etc/apt/sources.list:
    deb https://public.dhe.ibm.com/ibmdl/export/pub/software/openliberty/runtime/os-native-packages/deb/ /
    2. Add the repository's public key with this command:
    $ wget -O - http://public.dhe.ibm.com/ibmdl/export/pub/software/openliberty/runtime/os-native-packages/public.key | sudo apt-key add -
    3. Run the command:
    $ sudo apt-get update
    4. Install the latest version of Open Liberty from the repository by running:
    $ sudo apt-get install openliberty

    On Red Hat Systems, create a file named /etc/yum.repos.d/openliberty.repo that contains:

    [olrepo]
    name=olrepo
    baseurl=http://public.dhe.ibm.com/ibmdl/export/pub/software/openliberty/runtime/os-native-packages/rpm/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://public.dhe.ibm.com/ibmdl/export/pub/software/openliberty/runtime/os-native-packages/public.key

    Then, install the latest version of Open Liberty with:

    $ sudo yum update
    $ sudo yum install openliberty

    Use Open Liberty

    Whether you're on Ubuntu or Red Hat, after the openliberty .deb or .rpm package is installed, an empty defaultServer is created and configured to run as a service. Here are the details:

    • Open Liberty services will run as user openliberty.
    • The server is located in /var/lib/openliberty/usr/servers/defaultServer.
    • Logs will be stored in /var/log/openliberty/defaultServer.
    • The PID for the server is in /var/run/openliberty/defaultServer.pid.

    Once this process is complete, you can use the following standard Linux service commands:

    • systemctl status openliberty@defaultServer.service
    • systemctl start openliberty@defaultServer.service
    • systemctl restart openliberty@defaultServer.service
    • systemctl stop openliberty@defaultServer.service

    Automatically compress HTTP responses

    You can now try out HTTP response compression. Prior to this feature, Open Liberty only considered compression through the use of the $WSZIP private header, and there was no way to configure the compression of response messages. Support now mainly consists of using the Accept-Encoding request header in conjunction with the Content-Type header to determine whether the response message can be compressed.

    This allows the Liberty server to compress response messages when possible, reducing network traffic and decreasing exchange times between clients and Liberty servers. A new compression element is available within the httpEndpoint configuration so you can opt in to compression support.

    The optional types attribute allows the user to configure a comma-delimited list of content types that should or should not be considered for compression. This list supports the use of the plus (+) and minus (-) characters to add (or remove) content types to (and from) the default list. Each content type contains a type and a subtype separated by a slash (/) character. A wild card (*) character can be used as the subtype to indicate all subtypes for a specific type.

    The default value of the types optional attribute is text/*, application/javascript.

    When configuring the optional serverPreferredAlgorithm attribute, the configured value is verified against the Accept-Encoding header values. If the client accepts the configured value, it is set as the compression algorithm to use. If the client does not accept the configured value, or if the configured value is set to "none", the client-preferred compression algorithm is chosen by default:

    <httpEndpoint id="defaultHttpEndpoint"
            httpPort="9080"
            httpsPort="9443">
        <compression types="+application/pdf, -text/html" serverPreferredAlgorithm="gzip"/>
    </httpEndpoint>

    Open Liberty supports the following compression algorithms: gzip, x-gzip, deflate, zlib, and identity (no compression).
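
    To see the negotiation from the client side, a request can send an Accept-Encoding header and decompress a gzip response. This is a minimal sketch using the JDK HTTP client; the host, port, and resource path are assumptions matching the endpoint configured above:

    import java.io.InputStream;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.zip.GZIPInputStream;

    public class CompressionClient {

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:9080/myApp/resource"))
                    .header("Accept-Encoding", "gzip")
                    .build();

            HttpResponse<InputStream> response =
                    client.send(request, HttpResponse.BodyHandlers.ofInputStream());

            // If the server compressed the response, decompress it before reading.
            String encoding = response.headers().firstValue("Content-Encoding").orElse("identity");
            InputStream body = "gzip".equals(encoding)
                    ? new GZIPInputStream(response.body())
                    : response.body();

            System.out.println(new String(body.readAllBytes()));
        }
    }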

    The HTTP response compression functionality was designed under Open Liberty epic #7502, which outlines the design in more detail. The basic flow of the design is shown in Figure 2.

    Figure 2: The initial design for HTTP response compression functionality.

    Open Liberty Grafana dashboard now available at grafana.com

    The Grafana dashboard provides a wide range of MicroProfile Metrics data time-series visualizations, such as CPU, Servlet, Connection Pool, and Garbage Collection metrics. This dashboard is powered by a Prometheus datasource, which is configured to ingest data from one or more Liberty servers' /metrics endpoints, enabling us to view the results in Grafana in near real-time.

    This new dashboard works with Liberty instances outside of Red Hat OpenShift. For Liberty servers running on OpenShift Container Platform, use the Grafana dashboards published here. The new Grafana dashboard is intended for Open Liberty servers with mpMetrics-2.x that are not running on OCP.

    You can use this dashboard to spot performance issues when running your applications in Open Liberty. For instance, metrics such as servlet response times, or CPU or heap usage, could be indicative of an underlying performance issue or memory leak when seen as a time-series on Grafana.

    To configure the dashboard, first add the mpMetrics-2.3 feature. This will automatically enable the monitor-1.0 feature:

    <featureManager>
        <feature>mpMetrics-2.3</feature>
    </featureManager>
    
    <mpMetrics authentication="false" />

    Run the server using the following command:

    $ ./server run DashboardTest

    Then, download Prometheus. Once unpackaged, it should contain a startup script called prometheus alongside a configuration file: prometheus.yml. Within prometheus.yml, append one of the following jobs to scrape_configs, depending on whether you are dealing with a secure endpoint or an insecure one.

    For mpMetrics on an insecure endpoint:

    - job_name: 'liberty'
      scrape_interval: 5s
      static_configs:
        - targets: ['localhost:9080']

    For mpMetrics on a secure endpoint:

    - job_name: 'liberty-secure'
      scrape_interval: 5s
      static_configs:
        - targets: ['localhost:9443']
      basic_auth:
        username: "<your-username>"
        password: "<your-password>"
      tls_config:
        insecure_skip_verify: true
      scheme: "https"

    Start the prometheus script and visit http://localhost:9090/targets, where you should see your Open Liberty server listed as one of the targets as shown in Figure 3.

    Figure 3: Prometheus showing the Open Liberty server listed as a target.

    Then, download Grafana and, once it is running, visit http://localhost:3000. In Grafana, click the gear icon and select Data Sources. Add a new Prometheus data source with the URL http://localhost:9090, and then click Save & Test.

    To import the dashboard:

    1. Click the plus icon.
    2. Select Import.
    3. Paste the dashboard ID 11706.
    4. When prompted in the dropdown menu, link it to the new data source you have just created.
    Figure 4: The Open Liberty MicroProfile Metrics dashboard displayed in Grafana.

    For more information, check out the dashboard found on Grafana's website and Using Prometheus to create your own custom visualizations.

    Support for Red Hat OpenShift service account authentication credentials

    The socialLogin-1.0 feature can now be configured to use Red Hat OpenShift service accounts to authenticate and authorize protected resource requests. This option allows server administrators to secure, for example, monitoring and metrics endpoints that might produce sensitive information but require repeated access by an automated process or non-human entity. The new behavior allows service accounts to authenticate themselves by providing in the request a service account token that was created within the OpenShift cluster.

    A new configuration element, okdServiceLogin, is now provided to support this behavior. The socialLogin-1.0 feature must be enabled to gain access to this element. The minimum configuration requires only that an okdServiceLogin element be specified in the server.xml:

    <server>
    
    <!-- Enable features -->
    <featureManager>
      <feature>appSecurity-3.0</feature>
      <feature>socialLogin-1.0</feature>
    </featureManager>
    
    <okdServiceLogin />
    
    </server>

    The minimum configuration assumes that the Liberty server is packaged and deployed within an OpenShift cluster. By default, the okdServiceLogin element is used to authenticate all protected resource requests that the Liberty server receives.

    Incoming requests to protected resources must include a service account token. The token must be specified as a bearer token in the request's Authorization header. The Liberty server will use the service account token to query information about the associated service account from the OpenShift cluster. The OpenShift project that the service account is in will be used as the group for the service account when making authorization decisions. The OpenShift project name is concatenated with the name of the service account to create the username.
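
    For example, an automated client might call a protected endpoint with a service account token in the Authorization header. This is a minimal sketch; the endpoint URL is a placeholder, and the token is assumed to be supplied via an environment variable:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ServiceAccountClient {

        public static void main(String[] args) throws Exception {
            // Placeholder: obtain the real token from the OpenShift cluster,
            // for example from the service account's token secret.
            String serviceAccountToken = System.getenv("SA_TOKEN");

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://liberty.example.com/metrics"))
                    .header("Authorization", "Bearer " + serviceAccountToken)
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }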

    If the Liberty server is not deployed within an OpenShift cluster, the userValidationApi attribute should be configured and set to the value for the appropriate User API endpoint in the OpenShift cluster:

    <okdServiceLogin userValidationApi="https://cluster.domain.example.com/apis/user.openshift.io/v1/users/~" />

    Multiple okdServiceLogin elements can be configured as long as each has a unique id attribute. In those cases, authentication filters should also be configured to ensure that the appropriate endpoints are protected by each instance.

    More information about OpenShift service accounts can be found in the OpenShift documentation for Understanding and creating service accounts.

    Get Liberty 20.0.0.4 now

    Open Liberty 20.0.0.4 is available through Maven, Gradle, Docker, and as a downloadable archive.

    Try Open Liberty 20.0.0.4 in Red Hat Runtimes now

    Open Liberty is part of the Red Hat Runtimes offering and is available to Red Hat Runtimes subscribers. To learn more about deploying Open Liberty applications to OpenShift, take a look at our Open Liberty guide: Deploying microservices to OpenShift.

    Last updated: February 6, 2024
