In today's fast-paced business environment, efficient and agile application development is critical for success. One tool that can help achieve this is Red Hat OpenShift Serverless. In this article, we will explore what OpenShift Serverless is and how it can simplify application development for microservices and functions.
All major hyperscalers (AWS, Azure, Google Cloud Platform, IBM) offer serverless computing as Function-as-a-Service (FaaS), with broadly similar functionality. These offerings have enabled a variety of use cases, but they fall short of enterprise computing needs in several ways:
- Limited in terms of CPU, memory, and execution time.
- Limited to no orchestration.
- Cold start latency.
- Vendor lock-in.
Then comes the serverless container, which aims to solve most of these shortcomings. Serverless containers, as the name suggests, bring together the best of both paradigms:
- Serverless: abstracts the application from the underlying infrastructure, helping enterprises innovate faster.
- Containers: applications are packaged as OCI-compliant containers that can run anywhere, removing vendor lock-in.
Serverless computing continues to gain momentum: the global serverless computing market crossed USD 7.6 billion in 2020 and is anticipated to grow at a CAGR of 22.7%, reaching USD 21.1 billion by the end of 2026.
In the serverless container world, OpenShift Serverless plays a significant role by bringing together the benefits of serverless computing with the containerization technology provided by Kubernetes. It is an add-on to the OpenShift platform that takes development efficiency to the next level by providing a streamlined experience for creating, deploying, and scaling microservices, functions, and event-driven applications.
OpenShift Serverless architecture
OpenShift Serverless is based on the upstream Knative project that provides primitives to create, build, and deploy microservices and functions. Figure 1 shows the architecture of OpenShift Serverless.
The main components of OpenShift Serverless architecture are:
- Knative Serving: Enables developers to create cloud-native applications using a serverless architecture. It provides custom resource definitions (CRDs) that developers can use to deploy serverless containers, scale the number of pods automatically, and so on.
- Knative Eventing: Provides the infrastructure for building and deploying event-driven applications. It enables developers to define event sources and sinks, and provides a mechanism for routing events to functions, applications, or other event sinks.
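To make the Serving CRDs concrete, a minimal Knative Service manifest looks roughly like the following sketch. The service name, namespace, and container image are illustrative placeholders, not values from this walkthrough:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello            # placeholder service name
  namespace: serverless-demo
spec:
  template:
    spec:
      containers:
        # Placeholder image; any OCI-compliant container image works here.
        - image: ghcr.io/knative/helloworld-go:latest
          env:
            - name: TARGET
              value: "OpenShift Serverless"
```

Applying this manifest with `oc apply -f` gives the same result as deploying a serverless container through the CLI: Knative creates a revision, a route, and scales pods up and down on demand.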
Execution
The first step is to install the Red Hat OpenShift Serverless Operator on the cluster. From the OperatorHub, select the OpenShift Serverless Operator and install it with the default options. The status of the installed Operator can be seen under Operators → Installed Operators in the OpenShift console. See Figures 2 and 3.
Next, we have to install Knative Serving and Knative Eventing CRDs.
Procedure
Below are the steps to install Knative Serving by using the default settings in the KnativeServing custom resource (CR):
- In the Administrator perspective of the OpenShift Container Platform web console, we can navigate to Operators → Installed Operators.
- Set the Project drop-down at the top of the page to Project: knative-serving.
- Click Create Knative Serving and, on the Create Knative Serving page, click Create to install Knative Serving with the default settings.
We can install the Knative Eventing CR in the same way, using the default settings. The only change is the project, which is set to Project: knative-eventing, and from the provided APIs we select the Create Knative Eventing option to install Knative Eventing.
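The same default custom resources can also be applied from the command line instead of the console. The following is a sketch of the two CRs with the operator's default settings (no fields customized):

```yaml
# Default KnativeServing CR in the knative-serving namespace.
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
---
# Default KnativeEventing CR in the knative-eventing namespace.
apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
```

Saving this to a file and running `oc apply -f <file>` is equivalent to clicking Create in the console with the defaults.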
The next step is to install the Knative CLI (kn) on the bootstrap machine that we are using to access the Red Hat OpenShift Container Platform cluster. The Knative CLI also supports interactions with OpenShift Container Platform. We can follow the official link to download and install the Knative CLI. To verify the installation, we can run:
$ kn
Now we can create our first serverless container in Node.js using the Knative CLI, as shown below:
$ kn func create -c
The Knative CLI will then prompt for a few details about the function we are going to create:
Function name: serverless-demo-func
Runtime: node
Trigger: http
Output:
root@ip-XX-X-X-XX:/home/ubuntu/shubhayu/serverless-demo# kn func create -c
? Function Path: serverless-demo-func
? Language Runtime: node
? Template: http
Command:
kn -l node
Created node function in /home/ubuntu/shubhayu/serverless-demo/serverless-demo-func
After providing the above information, the CLI auto-generates the code for a serverless container project named serverless-demo-func in the given path.
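The scaffolded project contains an index.js exporting the function's handler. A simplified sketch of such an HTTP handler is shown below; this is illustrative of the pattern, not the exact generated template, which may differ between versions:

```javascript
// Simplified sketch of a Knative function HTTP handler (illustrative;
// the actual scaffolded index.js may differ between template versions).
function handle(context, body) {
  // context carries request metadata (method, headers, query parameters);
  // body is the parsed request payload for POST requests.
  const name = (body && body.name) || 'world';
  return { message: `Hello, ${name}!` };
}

module.exports = { handle };
```

The handler simply receives the invocation context and request body and returns a response object, which the function runtime serializes back to the HTTP caller.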
Once done, we'll first create a namespace in OpenShift Container Platform and then use the following commands to build and deploy the serverless application to the newly created namespace:
$ oc new-project serverless-demo
$ kn func build
$ kn func deploy -c
Output:
root@ip-XX-X-X-XX:/home/ubuntu/shubhayu/serverless-demo# oc new-project serverless-demo
Now using project "serverless-demo" on server "https://api.ocp-ai-cloud.ocp-ai-cloud.nextzlabs.com:6443".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app rails-postgresql-example
to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:
kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.43 -- /agnhost serve-hostname
After deployment, we can log in to the OpenShift console and switch to the serverless-demo project created earlier. Then, from the Developer → Topology view, we should see the serverless application, as shown in Figure 4.
If we hit the route URL, it triggers the autoscaler to instantiate the pod and serve the request. The pod will terminate after 30 seconds of inactivity. See Figure 5.
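The scaling behavior can be tuned per service through Knative autoscaling annotations on the revision template. The following is a sketch with example values (not defaults); the image reference is a placeholder for the image produced by `kn func build`:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: serverless-demo-func
  namespace: serverless-demo
spec:
  template:
    metadata:
      annotations:
        # Allow scaling down to zero pods when idle.
        autoscaling.knative.dev/min-scale: "0"
        # Cap the number of pods at peak load (example value).
        autoscaling.knative.dev/max-scale: "10"
        # Target concurrent requests per pod before scaling out (example value).
        autoscaling.knative.dev/target: "50"
    spec:
      containers:
        - image: <image-built-by-kn-func-build>  # placeholder
```

With min-scale set to 0, Knative terminates idle pods and cold-starts a new one on the next request, exactly the behavior observed when hitting the route URL.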
This is how we can run a serverless application on the OpenShift Serverless platform. Beyond autoscaling for HTTP requests, serverless containers can be triggered by a variety of events, such as Kafka messages, file uploads to storage, timers for recurring jobs, and 100+ event sources including Salesforce, ServiceNow, email, and more.
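As one example of an event-driven trigger, a Knative Eventing PingSource can send a CloudEvent to the function on a cron schedule. The following is a sketch; the source name is hypothetical, while the sink points at the service deployed in this walkthrough:

```yaml
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-demo          # hypothetical source name
  namespace: serverless-demo
spec:
  # Cron schedule: fire once every minute.
  schedule: "* * * * *"
  contentType: "application/json"
  data: '{"name": "timer"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: serverless-demo-func
```

Each tick delivers the JSON payload to the function as an HTTP CloudEvent, waking it from zero if necessary, with no HTTP client involved.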
Conclusion
To conclude, below are the key features of the OpenShift Serverless platform:
- Any programming language or runtime can be chosen, including Java, Python, Go, Node.js, etc.
- Immutable revisions: New features can be rolled out via canary, A/B, or blue-green testing with gradual traffic shifting, effortlessly and following best practices.
- Automatic scaling: No need to configure the number of replicas. Applications scale to zero when not in use and autoscale to thousands of instances during peaks, with built-in reliability and fault tolerance.
- Built for hybrid cloud: Portable serverless applications can run anywhere OpenShift runs, whether on-premises or on any public cloud.
- Event-driven architectures: Build loosely coupled, distributed applications by connecting to a variety of built-in event sources.