Are App Servers Dead in the Age of Kubernetes? (Part 2)
Welcome to the second in a series of posts on Kubernetes, application servers, and the future. Part 1, Kubernetes is the new application operating environment, discussed Kubernetes and its place in application development. In this part, we explore application servers and their role in relation to Kubernetes.
You may recall from Part 1 that we were exploring the views put forth in Why Kubernetes is The New Application Server and thinking about what those views mean for Java EE, Jakarta EE, Eclipse MicroProfile, and application servers. Is it a curtain call for application servers? Are we seeing the start of an imminent decline in their favor and usage?
Before answering that, we need to discuss the use case for application servers. Then we can decide whether that use case is still valid.
What is an application server good for?
Why do we have application servers? What’s their purpose? These are some of the questions we could ask when thinking about their problem space.
Application servers didn’t spring up out of nowhere for no reason. They evolved from the CORBA and DCOM models, which offered separate services for security, transactions, messaging, and so on. Because these services were separate, they required a lot of communication among them.
Application servers were born out of the need to bring all these services into a single process. In addition to co-locating the services, they provided a framework on top that made it easier to combine the services’ capabilities.
Sound familiar? Indeed it does. CORBA and DCOM are architecturally similar to what we call microservices today.
Is this the death of application servers?
Kubernetes is not the death knell for application servers as we know them today. Application servers have always evolved—and will continue to evolve—as hardware and software improve, with continual gains in developer productivity. Kubernetes, Docker, and now service meshes are another step in that evolution, one that necessitates a shift in application servers. It doesn’t make them irrelevant.
Application servers reborn
If anything, the advent of Kubernetes as an operating environment will lead to another transformation of application servers. Like a phoenix from the ashes, application servers will transform themselves, as they have many times before.
The first post talked about taking into account both what Kubernetes provides and what our application or microservice needs. It noted that Kubernetes is a fine application server for deployments that don’t interact with lots of other services. Conversely, most Java EE and MicroProfile applications are not so isolated that they can be treated in such a manner.
Let’s take a look at why Java EE and MicroProfile applications are usually less isolated. Whether applications are in a single process or they are distributed across a network, they consist of many services working together towards a business objective—though they likely didn’t start that way.
Applications created with a single business objective quickly grow as business needs change over time. Or an application developed by one person needs to be altered for wider usage. There are many reasons applications need to grow and adjust. Complicating matters, we usually don’t know at the outset that an application will grow.
Kubernetes as an application server
Planning a simple application with Kubernetes as its application server quickly leads to problems as the application expands beyond its initial goals. The application then requires frameworks to integrate with and to provide functionality beyond what Kubernetes itself offers.
Applications usually interact with other applications, services, and systems, or with pretty much anything outside themselves. Doing so all but guarantees an application will need things such as:
- Integration with messaging systems for asynchronous or offline processing
- Transactions within itself and across other invocations
- Fine-grained security controls, as opposed to coarse-grained security controls
These are all crucial things that application servers and fat JARs provide for applications and microservices today, whether you’re deploying to WildFly or building a fat JAR with Thorntail. These concerns aren’t going away, and they’re not offered by Kubernetes or by other projects that build on top of it, such as Red Hat OpenShift and Istio.
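To make the idea concrete, here is a minimal, self-contained sketch of the kind of plumbing a container takes off the developer’s plate. This is a toy illustration in plain Java, not a real application-server API: the `MiniContainer` class and its methods are hypothetical stand-ins for what a container does when it interposes security checks and transaction demarcation around a bean method.

```java
import java.util.Set;
import java.util.concurrent.Callable;

// Toy sketch (hypothetical, not a real app-server API): an application
// server co-locates services such as transactions and security in one
// process and wraps business logic with them, so the developer doesn't
// hand-code this plumbing on every call.
public class MiniContainer {
    private final Set<String> allowedRoles;

    public MiniContainer(Set<String> allowedRoles) {
        this.allowedRoles = allowedRoles;
    }

    // Combines a role check and transaction demarcation around a unit
    // of business logic, roughly the way a container interposes on an
    // annotated bean method.
    public <T> T invoke(String callerRole, Callable<T> businessLogic) throws Exception {
        if (!allowedRoles.contains(callerRole)) {
            throw new SecurityException("role not permitted: " + callerRole);
        }
        beginTransaction();
        try {
            T result = businessLogic.call();
            commit();
            return result;
        } catch (Exception e) {
            rollback();
            throw e;
        }
    }

    // Placeholders for the real transaction service the container hosts.
    private void beginTransaction() { System.out.println("tx: begin"); }
    private void commit()           { System.out.println("tx: commit"); }
    private void rollback()         { System.out.println("tx: rollback"); }

    public static void main(String[] args) throws Exception {
        MiniContainer container = new MiniContainer(Set.of("admin"));
        String out = container.invoke("admin", () -> "order placed");
        System.out.println(out); // prints "order placed" after "tx: begin" and "tx: commit"
    }
}
```

In a real Java EE or MicroProfile application, the equivalent behavior comes declaratively from annotations such as `@Transactional` and `@RolesAllowed`, with the application server supplying the interposition shown above.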
Coming in Part 3
The final part of this series will bring our analysis to a close. It will answer the following question about Kubernetes and application servers: can they co-exist?
To learn more, join the Red Hat Developer Program (it’s free) and get access to related cheat sheets (e.g., containers), books (e.g., microservices), and product downloads that can help you with your microservices and/or container application development.