Knative Cookbook: Building Effective Serverless Applications with Kubernetes and OpenShift
Serverless architecture has recently taken center stage in cloud-native application deployment: enterprises have started to see the benefits that serverless applications bring them, such as agility, rapid deployment, and resource cost optimization. As with any new technology, there are multiple ways to approach and employ serverless, such as Function-as-a-Service (FaaS) and Backend-as-a-Service (BaaS); that is, running your applications as ephemeral containers that can scale up and down automatically.
Knative was started with the simple goal of having a Kubernetes-native platform to build, deploy, and manage your serverless workloads. Kubernetes solves a lot of cloud-native application problems, but with a fair bit of complexity, especially from the perspective of deployment. To make a simple service deployment with Kubernetes, a developer has to write a minimum of two YAMLs (a Deployment and a Service) and then perform the necessary plumbing work to expose the service to the outside world. This complexity causes application developers to spend more time crafting YAMLs and handling other core platform tasks rather than focusing on the business need.
Let me explain this issue with an example. Say that I want to deploy a hello-world kind of application and expose it as a service. First, I need to create a Deployment, so here I create a Deployment called myboot:
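A minimal Deployment manifest for this step might look like the following sketch; the container image reference and labels are illustrative assumptions, not taken from the book:

```yaml
# Deployment for the example myboot application
# (image name quay.io/example/myboot is an assumed placeholder)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myboot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myboot
  template:
    metadata:
      labels:
        app: myboot
    spec:
      containers:
        - name: myboot
          image: quay.io/example/myboot
          ports:
            - containerPort: 8080
```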
Next, I need to expose this Deployment as a Service named myboot (in effect, an Application-as-a-Service):
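The matching Service manifest could be sketched as follows; the port numbers are illustrative assumptions:

```yaml
# Service exposing the myboot Deployment inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: myboot
spec:
  selector:
    app: myboot        # must match the Deployment's pod labels
  ports:
    - port: 8080
      targetPort: 8080
```

Even with both manifests applied, the service is only reachable inside the cluster; exposing it externally still requires additional plumbing such as an Ingress (or a Route on OpenShift).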
But wait, do I need to write these elaborate YAMLs every time I want to deploy my application as a service? Unfortunately, yes, I did, until Knative was born. Knative solves these Kubernetes problems by providing all the essential middleware primitives via a simpler deployment model. On Knative you can deploy any modern application workload, such as monolithic applications, microservices, or even tiny functions. Knative can run on any cloud platform that runs Kubernetes, which gives enterprises more agility and flexibility in running their serverless workloads without relying on cloud vendor-specific features.
Let me show you how much simpler your deployment is when you deploy the same
myboot as a Knative service:
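With Knative Serving, a single resource replaces the Deployment, the Service, and the external-exposure plumbing. A minimal sketch, again assuming the same placeholder image, might look like this:

```yaml
# Knative Service: one resource handles deployment, routing,
# and autoscaling (including scale-to-zero) for myboot
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myboot
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/myboot
```

Applying this one manifest gives you a routable URL and autoscaling out of the box, with no separate Service or Ingress objects to write.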
Though the simpler deployment model is a key feature of Knative, there is much more to Knative than that. This is where we, as part of the Red Hat Developer Program, thought it would be awesome to help developers dive into Knative quickly and easily, which gave birth to the Knative Tutorial. This tutorial not only provides a getting-started experience for Knative but also prepares you for the next level.
The fact that there are many ways to do serverless has caused confusion among developers, who immediately raise questions such as:
- What implementation should I choose: FaaS or BaaS?
- What is the quickest way to get started?
- What are the use cases for which I can apply serverless technology?
- How do I measure the benefits?
- What tools should I use to develop serverless applications?
We had the same set of questions when we started exploring serverless technology while writing the Knative Tutorial. The problems and challenges we faced during that research became the crux of the Knative Cookbook.
As the Knative project picked up momentum, it became obvious that there should be a guide providing practical applications of Knative and typical how-to scenarios. That need is what led Burr Sutter and me to create the Knative Cookbook, which covers the following topics:
- Installing Knative into your Kubernetes cluster.
- Auto-scaling to zero.
- Scaling up to handle request spikes.
- Responding to external event stimuli in a serverless way.
- Using Apache Kafka with Knative Eventing.
- Using Kubernetes, Knative, Kafka, and Kamel for 4K cloud-native computing.
Download the Knative Cookbook here, and we hope that this ebook will be of great assistance on your Knative journey!