In this article we're going to discuss some new patterns for concurrent systems that are enabled by the new virtual threads feature from Java 21 and some related new features that "follow on" from virtual threads—​specifically Structured Concurrency (JEP 453) and Scoped Values (JEP 446).

Note that as of JDK 21, both Structured Concurrency and Scoped Values are in Preview state and so cannot really be used in production applications.

With this limitation in mind, let's start by looking at the patterns that are enabled by virtual threads alone—​which can, therefore, be used without restriction in Java 21 applications.

Virtual thread concurrency patterns

Virtual threads were introduced in Java 21 as one of the major outputs from Project Loom and are, perhaps, most similar to goroutines from the Go language.

The aim of the project was to solve a scalability problem with Java's concurrency model, which was formerly based solely on operating system (a.k.a. platform) threads. This is achieved by introducing a new type of thread, known as a virtual thread (or vthread for short).

Vthreads are managed by the JVM and are not permanently associated with a platform thread. They only use a platform thread (called the carrier) when they are actually running, and they are much cheaper to create and destroy than platform threads.

To make this work, vthreads automatically unbind from their carrier whenever they encounter a blocking I/O call.

One side effect of this, and our first new pattern, is that vthreads should completely remove the need for developers to use the non-blocking form of the NIO APIs directly. Instead, for each task, programs can create a dedicated virtual thread that uses the blocking API and let the runtime sort it out.

The intent is that this is essentially the same thing as using non-blocking I/O in terms of performance while providing a simpler programming model.
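
For example, a task that reads from a socket can simply run on its own dedicated virtual thread and use the ordinary blocking stream API. Here is a minimal sketch (the method name and the processing logic are illustrative, not part of any standard API):

void readOnVirtualThread(Socket socket) throws InterruptedException {
    Thread vt = Thread.ofVirtual().start(() -> {
        try (var in = socket.getInputStream()) {
            byte[] data = in.readAllBytes();  // ordinary blocking read; the vthread
                                              // unmounts from its carrier while it waits
            // ... process data ...
        } catch (IOException e) {
            // handle the failure for this task only
        }
    });
    vt.join();  // only needed if the caller wants to wait for completion
}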

In fact, avoiding more complex programming models that display asynchronous contagion (such as async-await or colored functions) was a major design goal for Project Loom. (Colored functions were named as such in the blog post What Color is Your Function?, although the author incorrectly assumes that Java is planning to support async-await.)

At the same time, bringing explicit "reactive approaches" into the JDK was a definite non-goal. The end result was the form of virtual threads that we see in Java 21.

This great new language feature opens the door to some interesting new patterns. We've already discussed the possible sunsetting of the direct use of non-blocking I/O, but let's look at some others, starting with one of the most obvious:

Just replace some of your program's threads with vthreads.

The carrier threads for vthreads come from a ForkJoinPool executor and will yield on (most) blocking operations. This means that, for threads that do at least some I/O, there is a potential performance benefit in switching them to virtual.

Remember, though, that performance depends on many factors, and the main advantage of vthreads is programmer convenience and maintainability. So, if we hope to obtain a performance boost by converting some threads to virtual, we have to test the change in a real-world, complete system to ensure that we actually realize the expected benefit.

Also, remember that Java 21 does not automatically make existing code use virtual threads—unless you explicitly construct a virtual thread, you will always get a platform thread.
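
For instance, here is a minimal sketch contrasting the two: the familiar Thread constructor and the ofPlatform() builder still produce platform threads, and only the ofVirtual() builder (or the virtual-thread executor discussed next) produces virtual threads:

void demo() throws InterruptedException {
    Runnable task = () -> System.out.println("Running on " + Thread.currentThread());

    Thread platform = new Thread(task);                      // a platform thread, exactly as before
    platform.start();

    Thread alsoPlatform = Thread.ofPlatform().start(task);   // also a platform thread
    Thread virtual = Thread.ofVirtual().start(task);         // only this one is virtual

    System.out.println(virtual.isVirtual());                 // prints true

    platform.join();
    alsoPlatform.join();
    virtual.join();
}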

As well as manual creation of virtual threads, there is also a new executor type, which we can get from Executors.newVirtualThreadPerTaskExecutor(). As the name suggests, rather than relying upon a traditional threadpool that is reused for multiple tasks, this executor creates a new virtual thread for each task that is submitted.

To accommodate this new executor type, the ExecutorService interface is now AutoCloseable—so it can be used in try-with-resources blocks.

This is a great example of a new pattern that is designed specifically for virtual threads. Executors for platform threads are typically long-lived objects—​because they create threads at startup, which is an expensive operation. Therefore, it doesn't make sense to create them as local objects within a method—​they're much more likely to be seen as (possibly static) fields.

Virtual threads, on the other hand, are very cheap to create—​they're just Java objects without any permanently allocated native resources. Creating an executor for virtual threads is similarly cheap, and so creating a locally scoped executor avoids a possible performance penalty.

This leads to code like this example showing the bare bones of a web server, which makes use of the block-scoped virtual thread executor:

    private volatile boolean isShutdown = false;

    void handle(Socket socket) {
        // Handle incoming request
    }

    void serveVT(ServerSocket serverSocket) throws IOException, InterruptedException {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            try {
                while (!isShutdown) {
                    var socket = serverSocket.accept();
                    executor.submit(() -> handle(socket));
                }
            } finally {
                // If there's been an error, or we're interrupted, we stop accepting
                executor.shutdown();
            }
        }
    }

    public void shutdown() {
        isShutdown = true;
    }

The server socket is passed into the main serveVT() method, which handles each incoming request by starting a new virtual thread.

Every request is isolated from every other request, so there is no need to share data or context, and the requests will all complete in bounded time (and require network I/O). We can call this type of operation fire-and-forget, and this pattern is very suitable for implementing a simple web server using vthreads.

Also note that this code uses the Volatile Shutdown pattern to provide a controlled shutdown of the server: another thread calls the shutdown() method, and the volatile nature of the boolean ensures that no more incoming requests will be processed.
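
For illustration, a caller might run the accept loop on its own virtual thread and trigger the shutdown later from elsewhere. This is only a sketch, assuming the methods above live in a class we'll call Server (the class name, method name, and port are illustrative):

void runAndStop() throws IOException {
    var server = new Server();
    var serverSocket = new ServerSocket(8080);

    // Run the accept loop on a virtual thread so the caller is not blocked
    Thread.ofVirtual().start(() -> {
        try {
            server.serveVT(serverSocket);
        } catch (IOException | InterruptedException e) {
            // log and exit
        }
    });

    // ... later, typically from another thread (e.g., a management endpoint):
    server.shutdown();      // the loop observes the volatile flag once the current accept() returns
    serverSocket.close();   // also unblocks a pending accept() so the shutdown is prompt
}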

Let's move on to meet the new Preview APIs that arrived in Java 21 and complement virtual threads.

Structured Concurrency

The first of the two new APIs is known as Structured Concurrency. This is an API for thread handling that allows a group of cooperating tasks (often running on virtual threads) to be treated and managed as a single unit of subtasks.

One class of problem in concurrent programming is known as data-parallel problems. These are problems where the same operation is applied to a large amount of data, and the operations are (more or less) independent of each other. This is the domain of applicability of such things as Amdahl's Law, a well-known constraint on the ability of parallel computing to speed up computation.
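
(As a reminder, Amdahl's Law says that if a fraction p of the work can be parallelized across N processors, the overall speedup is bounded by 1 / ((1 - p) + p/N), so the serial fraction quickly becomes the limiting factor.)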

By contrast, Structured Concurrency is designed for task-parallel problems, which involve different but related subtasks that need to proceed in parallel. The API's affinity with virtual threads means it is primarily useful for tasks involving some amount of I/O (especially calls to remote services). However, the approach is much less useful for operations that act solely (or mostly) on in-memory data, because the virtual threads will simply contend with each other for CPU time.

The general flow for a structured concurrency task looks something like this:

  1. Create a task scope—​the creating thread owns the scope.

  2. Fork concurrent subtasks in the scope (each is a virtual thread).

  3. Scope owner joins the scope (all subtasks) as a unit.

  4. Scope’s join() method blocks until all subtasks have completed.

  5. After joining, owner handles any errors in forks and processes results.

  6. Owner closes the scope.

It's worth pointing out that the version of Structured Concurrency that shipped in Java 21 included some minor API changes over Java 20. The main one is that fork() now returns a Subtask (which implements Supplier) instead of a bare Future (as it was in Java 20).

The reason for this new interface, rather than just reusing Future, is that results are only queried after a join(), because Structured Concurrency treats multiple subtasks as a single unit of work. As a result, neither blocking calls to get() nor checked exceptions from subtasks are useful, which makes Future something of an awkward fit (Subtask is a checked-exception-free interface).

Developers who have previously used the older previews of Structured Concurrency should be aware of this change when upgrading.

Let's see this in action in an example using the calculation of a stock tip, a record class that we'll define like this:

record StockTip(String symbol, double sentiment, double delta24) {}

We'll assume that the strength of the market's attitude to the stock (the sentiment) and the possible change in price over the next 24 hours (the delta24) are to be calculated by some external process. These elements might take some time to compute, and this is likely to involve network traffic.

We can therefore use structured subtasks to compute them, like this (where s is the stock symbol):

try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    Callable<Double> getSentiment = () -> getSentiment(s);
    Subtask<Double> fSentiment = scope.fork(getSentiment);

    Callable<Double> getDelta = () -> getDelta24(s);
    Subtask<Double> fDelta = scope.fork(getDelta);

    scope.join();
    scope.throwIfFailed();

    return new StockTip(s, fSentiment.get(), fDelta.get());
} catch (ExecutionException | InterruptedException e) {
    throw new RuntimeException(e);
}

This follows the general flow for structured concurrency that we established above. Note that closing the scope is handled implicitly via the try-with-resources block—​this shuts down the scope and waits for any straggling subtasks to complete.

There are a couple of other points worth noting.

Firstly, a join() that is in progress can be canceled by calling the scope's shutdown() method. Secondly, there is also a timed variant of join(), called joinUntil(), which accepts a deadline (as an Instant parameter).

There are two built-in shutdown policies for the scope (and custom shutdown policies are also supported):

  • Cancel all subtasks if one of them fails (ShutdownOnFailure)

  • Cancel all subtasks if one of them succeeds (ShutdownOnSuccess)

We met the first of these inbuilt options in our first example, and so let's move on to introduce the other.

Consider a library method where multiple subtasks are launched (possibly multiple copies of the same subtask), and the first result (from any of the subtasks) will do. The tasks are racing each other to complete, and the rest of the virtual threads should be shut down as soon as the first success occurs, so we should use the ShutdownOnSuccess policy, like this:

<T> T race(List<Callable<T>> tasks, Instant deadline)
        throws InterruptedException, ExecutionException, TimeoutException {

    try (var scope = new StructuredTaskScope.ShutdownOnSuccess<T>()) {
        for (var task : tasks) {
            scope.fork(task);
        }
        return scope.joinUntil(deadline)
                    .result();  // Throw if none of the subtasks completed successfully
    }
}

This has an obvious dual, i.e., when all tasks must run to completion (and where a failure of any subtask should cancel the entire task, so we'll use ShutdownOnFailure again):

<T> List<T> runAll(List<Callable<T>> tasks)
        throws InterruptedException, ExecutionException {

    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
        List<? extends Subtask<T>> handles =
            tasks.stream().map(scope::fork).toList();

        scope.join()
             .throwIfFailed();  // Propagate exception if any subtask fails

        // Here, all tasks have succeeded, so compose their results
        return handles.stream().map(Subtask::get).toList();
    }
}

Note that this version of the code rematerializes the results into a List, but it's also possible to imagine a version with a different terminal operation that reduces the results to a single value.
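
For instance, a minimal sketch of such a variant might simply sum numeric results (a hypothetical sumAll() method, still using the ShutdownOnFailure policy):

double sumAll(List<Callable<Double>> tasks)
        throws InterruptedException, ExecutionException {

    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
        List<? extends Subtask<Double>> handles =
            tasks.stream().map(scope::fork).toList();

        scope.join()
             .throwIfFailed();

        // Reduce the results to a single value rather than rebuilding a List
        return handles.stream()
                      .mapToDouble(Subtask::get)
                      .sum();
    }
}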

We can build more complex structures as well—​the subtasks that we created using forks can themselves create scopes (subscopes). This naturally induces a tree structure of scopes and subtasks, which is useful when we want to condense a final value out of a tree of subtasks.
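
To illustrate, here is a minimal sketch in which each outer fork opens its own inner scope, giving a small tree of subtasks whose results are condensed into a single value. The method names are illustrative, and fetchSite() stands in for some remote call:

double fetchRegion(String region)
        throws InterruptedException, ExecutionException {
    // Inner scope: this subtask is itself the owner of a subscope
    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
        var east = scope.fork(() -> fetchSite(region + "-east"));
        var west = scope.fork(() -> fetchSite(region + "-west"));
        scope.join().throwIfFailed();
        return east.get() + west.get();
    }
}

double fetchAll()
        throws InterruptedException, ExecutionException {
    // Outer scope: each fork creates a subscope of its own via fetchRegion()
    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
        var us = scope.fork(() -> fetchRegion("us"));
        var eu = scope.fork(() -> fetchRegion("eu"));
        scope.join().throwIfFailed();
        return us.get() + eu.get();
    }
}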

If, however, the main point of our code is to operate via side effects, then it is possible to use a StructuredTaskScope<Void>—i.e., use a task scope that returns void, such as in this example:

void serveScope(ServerSocket serverSocket) throws IOException, InterruptedException {
    try (var scope = new StructuredTaskScope<Void>()) {
        try {
            while (true) {
                final var socket = serverSocket.accept();
                Callable<Void> task = () -> {
                    handle(socket);
                    return null;
                };
                scope.fork(task);
            }
        } finally {
            // If there's been an error or we're interrupted, we stop accepting
            scope.shutdown();  // Close all active connections
            scope.join();
        }
    }
}

However, this is arguably often better handled using a fire-and-forget pattern, such as the executor returned by newVirtualThreadPerTaskExecutor(). There are also some small wrinkles with the generics here, such as needing to explicitly return null from the Callable<Void>.

One recurring theme in all the patterns that we have met so far is that using these techniques requires applying design thinking and knowledge of the domain and context of the problem being solved. There is no software tool that can tell with 100% accuracy whether a thread is a good candidate for being converted to a vthread—​that is a task for a human software engineer.

Likewise, the restructuring of a task into subtasks and the definition of the relevant scopes requires the programmer to have a good understanding of the domain and any data dependencies between the subtasks.

Let's move on to look at the second of the new APIs that we want to discuss.

Scoped Values

In addition to Structured Concurrency, the new Scoped Values API arrived in Java 21 as Preview. Red Hat contributes to many projects within OpenJDK, including Project Loom, and we are the main contributors to the Scoped Values API. Andrew Haley, one of our Distinguished Engineers, is the overall lead for the Scoped Values JEP.

The Scoped Values API is based on a new class, ScopedValue<T>, in java.lang, which represents a binding of a value to a variable within a specific scope. The value is written once and is then immutable on a per-scope basis.

The bound value can be retrieved at any point down any call chain within the scope, but only within the scope in which it was set—​this provides robustness and a form of encapsulation. In particular, there is no need to explicitly pass the scoped value down the call chain.

Scoped values can be thought of as invisible parameters that go into every method and are, in turn, passed to every method they invoke. We sometimes say they are implicitly available, but this is a much more controlled (and more Java-ish) form than, say, Scala's implicit method parameters.

The Scoped Values API can also be thought of as a modern alternative to thread-local variables, but with a number of enhancements, such as immutability. This means that there is no set() method to let faraway code change a scoped value. This also enables possible future runtime optimizations, as the runtime can be certain that a scoped value cannot change.

Some goals of the API are:

  • Sharing data both within a thread and with child threads

  • Controlled and bounded lifetimes for values

  • Lifetimes that are visible from the structure of the code

  • Immutability, which allows a value to be shared by many threads

  • Immutability and an explicit lifetime, which are often a better fit than mutable thread-local state

It is not necessary for programmers to move away from ThreadLocal, but scoped values combine well with virtual thread patterns, such as fire-and-forget. It therefore seems quite likely that, as scoped values are adopted, ThreadLocal will gradually be replaced for almost all use cases.
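
As a rough contrast, here is a sketch (the field names, methods, and doWork() call are purely illustrative): the thread-local version needs explicit set() and remove() calls, while the scoped-value version binds the value only for the dynamic extent of run():

// ThreadLocal: mutable, and the binding must be cleaned up manually
private static final ThreadLocal<String> USER_TL = new ThreadLocal<>();

void handleWithThreadLocal(String user) {
    USER_TL.set(user);
    try {
        doWork();             // anything called from here can read USER_TL.get()
    } finally {
        USER_TL.remove();     // easy to forget, and risky with pooled threads
    }
}

// ScopedValue: immutable, bound only while run() is executing
private static final ScopedValue<String> USER_SV = ScopedValue.newInstance();

void handleWithScopedValue(String user) {
    ScopedValue.where(USER_SV, user)
               .run(() -> doWork());   // anything called from here can read USER_SV.get()
}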

Let's rewrite the virtual thread web server to use scoped values:

public class ServerSV {
    public final static ScopedValue<Socket> SOCKETSV = ScopedValue.newInstance();

    void serve(ServerSocket serverSocket) throws IOException, InterruptedException {
        while (true) {
            var socket = serverSocket.accept();
            ScopedValue.where(SOCKETSV, socket)
                        .run(() -> handle());
        }
    }

    private void handle() {
        var socket = SOCKETSV.get();
        // handle incoming traffic
    }
}

Note that the handle() method no longer takes a parameter; instead, the socket is accessed via the scoped value. This is the implicit availability we discussed above.

This example is very simple, as all we're really doing is replacing parameter passing with a scoped value, which is an almost trivial application. The real power of scoped values is that the call chains and the scoping and subscoping can be arbitrarily complex, and the scoped value will still be available.

Overall, the intent of scoped values is to provide a dynamic scope, a concept that has not been seen in Java before. This approach to scopes is similar to that found in some other languages—​such as shells, Lisp dialects, and Perl.

We can contrast it with the traditional Java form of scoping—​usually called lexical scoping. This is where the scope of a variable is determined by the structure of the code, usually defined by a matching pair of curly braces.

Our dynamic scoping example shows a key pattern in action:

  • Declaring the ScopedValue instance as a static final field, i.e., in class scope, to act as the holder

  • Creating the dynamic scope within a method (e.g., with where() followed by run(), or with the runWhere() convenience method)

  • Using a lambda to define the scope body (where the call chains will live)

Scoped values are intended to be very useful for passing values like transaction contexts and other examples of ambient context data.

Scoped values interact well with structured concurrency, as they can be bound in a scope and then rebound by subscopes. Any values that are not rebound are inherited by the subscope. This technique allows for patterns such as adjusting access levels, as in this example, where we will consider two security access levels:

enum SecurityLevel { USER, ADMIN }

We'll use a scoped value to hold the current security level and another to hold the current request number:

    private static final ScopedValue<SecurityLevel> securitySV = ScopedValue.newInstance();
    private static final ScopedValue<Integer> requestSV = ScopedValue.newInstance();

    private final AtomicInteger req = new AtomicInteger();

    public void run() {
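        // level() (not shown here) supplies the requested SecurityLevel for this run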
        ScopedValue.where(securitySV, level())
                .where(requestSV, req.getAndIncrement())
                .run(() -> process());
    }

To demonstrate rebinding, let us assume that admin privileges are not available, and so any attempt to use them will result in a fallback to user privileges:

    private void process() {
        var level = securitySV.get();
        if (level == SecurityLevel.USER) {
            System.out.println("User privileges granted for "+ requestSV.get() +" on: "+ Thread.currentThread());
        } else {
            System.out.println("Admin privileges requested for "+ requestSV.get() +" on: "+ Thread.currentThread());
            System.out.println("System is in lockdown. Falling back to user privileges");
            ScopedValue.where(securitySV, SecurityLevel.USER)
                    .run(() -> process());
        }
    }

To conclude this article, we should also point out that the classes that represent continuations, and the other low-level building blocks for virtual threads and related components, do exist in Java 21. However, they are in the jdk.internal.vm package and so are not intended for direct use by Java programmers as of this release.