
Serverless is a cloud deployment model that runs business services on demand, allowing enterprises to significantly reduce infrastructure costs. A key benefit of serverless is that applications can be designed and developed as abstract functions, regardless of programming language. This article describes how the serverless and function models have evolved since AWS Lambda unleashed them upon the world, and what to look forward to with Red Hat OpenShift Serverless Logic.

The 3 phases of serverless evolution

As serverless technologies evolve, we at Red Hat created an evolutionary scale to help our customers better understand how serverless has grown and matured over time. The three phases of serverless evolution are as follows:

  • Serverless 1.0 

At the beginning of the serverless era, the 1.0 phase, serverless was thought of as functions: tiny snippets of code running on demand for a short period. AWS Lambda made this paradigm popular, but it had limitations in terms of execution time, protocols, and runtimes.

  • Serverless 1.5

With the increase in popularity of Kubernetes running microservices on container platforms, the serverless era also moved forward to the 1.5 phase. This phase augmented serverless traits and benefits by deploying polyglot runtimes and container-based functions. The serverless 1.5 phase also delivered an abstraction layer to manage serverless applications using the open Kubernetes serverless community project, Knative. Red Hat was one of the founding members of the Knative project and continues to be one of the top contributors to that community. But this was not the end of the journey.

  • Serverless 2.0

We are now approaching the new serverless 2.0 phase. This phase involves more complex orchestration and integration patterns combined with some level of state management. Serverless functions are often thought of as stateless applications, but serverless workflows are designed for complex orchestrations of multiple services and functions, typically while preserving state. The adoption of serverless increases as organizations perform more complex orchestrations. Consequently, the OpenShift Serverless team is implementing serverless workflows, drawing on our expertise in the business process automation space.

The future of serverless workflows

The Serverless Workflow project is an open source specification that enables developers to design workflows that run serverless functions using a standard domain-specific language (DSL). For increased flexibility, Serverless Workflow also allows developers to define business logic triggered by events and integrated with services through standards such as CloudEvents, OpenAPI, AsyncAPI, GraphQL, and gRPC.
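To give a flavor of the DSL, here is a minimal sketch of a workflow definition in the Serverless Workflow JSON format. The workflow name, the function name, and the OpenAPI operation path (`specs/greeting.json#greet`) are illustrative placeholders, not part of any real service; the structure (`functions`, `states`, an `operation` state with a `functionRef`) follows the CNCF specification.

```json
{
  "id": "greeting",
  "version": "1.0",
  "specVersion": "0.8",
  "name": "Greeting Workflow",
  "start": "Greet",
  "functions": [
    {
      "name": "greetFunction",
      "operation": "specs/greeting.json#greet"
    }
  ],
  "states": [
    {
      "name": "Greet",
      "type": "operation",
      "actions": [
        {
          "functionRef": {
            "refName": "greetFunction",
            "arguments": { "name": "${ .name }" }
          }
        }
      ],
      "end": true
    }
  ]
}
```

Here the `operation` field points at an OpenAPI document and operation ID, so the workflow engine, rather than hand-written glue code, is responsible for invoking the service; event-driven triggers are declared in a similar way with an `events` section.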

Red Hat is one of the project maintainers of the Serverless Workflow project. We have been actively involved in innovation and contribution since the earliest days of the Cloud Native Computing Foundation (CNCF) project.

We are about to release a new feature of OpenShift Serverless called Serverless Logic, in developer preview. This feature allows developers to design workflows with serverless deployment and function development capabilities based on Knative and Kogito. Kogito adds function orchestration and automation to implement serverless workflows at scale on Red Hat OpenShift.


Stay tuned to the official @rhdevelopers Twitter stream and this blog for more details about our exciting new capability as we approach the release of Red Hat OpenShift Serverless Logic. We will also provide tutorials.

In the meantime, learn more about OpenShift Serverless, and try it out by setting up your free and easy-to-use Red Hat Sandbox environment.

Last updated: October 31, 2023