The evolution of serverless and FaaS: Knative brings change
Are serverless and Function as a Service (FaaS) the same thing?
No, they’re not.
Wait. Yes, they are.
Frustrating, right? With terms being thrown about at conferences, in articles (I’m looking at myself right now), conversations, etc., things can be confusing (or, sadly, sometimes misleading). Let’s take a look at some aspects of serverless and FaaS to see where things stand.
What is serverless anyway?
That’s simple: It’s computing without a server. It’s magic.
Actually, no; that’s not true. Of course, there is a server (or many servers) involved. One of the tenets of serverless is that the developer need not be concerned with server “stuff.” No need to fuss over RAM usage, scaling, and so on. Simply (now that is a loaded word, “simply”) deploy your code and it works.
Here’s the official definition from the Cloud Native Computing Foundation:
“Serverless computing refers to the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.”
An important point is the last sentence about executing, scaling, and billing on demand. Until recently—that is, until Knative appeared on the scene—a microservice ran 24×7 and was not, by the above definition, serverless.
Since serverless and FaaS have traditionally been used as interchangeable terms, they were considered one and the same. This much was constant: They both did not describe a typical microservice that is available all the time. In fact, discussions (arguments?) surfaced about the use of FaaS and/or microservices, with some even going so far as to claim that all microservices should, in fact, be serverless functions. While that may seem extreme to some, it has almost become a moot point. Why?
Enter Knative serving
When Google announced the availability of Knative, a Kubernetes-based platform, FaaS offerings were typically based on small pieces of source code that ran as functions, often without even a compilation step. OpenWhisk—an open source FaaS platform—uses the command wsk action create... to turn, for example, a Node.js file into a function (called an “Action” in OpenWhisk parlance). One small file and one command and you have a function. You don’t even need to code any HTTP handlers or routes; it’s all built into the platform. And there’s one and only one route: the function has one entry point and one exit point; it’s not a complex RESTful service with multiple URIs.
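To make that concrete, here is a sketch of what such an Action can look like: a single Node.js file exposing a main function. The file name, parameter, and greeting are hypothetical; the shape (one JSON object in, one JSON object out, no HTTP code anywhere) is the point.

```javascript
// hello.js — a minimal OpenWhisk-style Action: one entry point, one exit point.
// Deployed (hypothetically) with: wsk action create hello hello.js
function main(params) {
  // Parameters arrive as a single JSON object; the platform handles all HTTP.
  const name = params.name || "world";
  // The returned object becomes the Action's JSON response.
  return { greeting: `Hello, ${name}!` };
}

// Export main so the OpenWhisk Node.js runtime can find it.
exports.main = main;
```

Notice there is no route registration, no listener, no port: the platform supplies all of that around your function.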
Knative disrupted all that by making any service available as a function: Knative allows a service to scale to zero after a configured period of idleness. In other words, the service stops running, which means no CPU cycles, no disk activity, and no billing during its idle time. That’s not a small thing. It is the essence of serverless functions, and suddenly a RESTful service that handles, say, four different routes could be considered a serverless function.
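As a sketch, this is roughly what a Knative Service manifest looks like; scale-to-zero is Knative Serving’s default behavior, and the service name, image, and window value here are hypothetical:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter            # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # "0" allows scale to zero (the default); "1" would keep a pod warm.
        autoscaling.knative.dev/min-scale: "0"
        # How long the autoscaler averages traffic before scaling down.
        autoscaling.knative.dev/window: "60s"
    spec:
      containers:
        - image: example.com/greeter:latest   # hypothetical container image
```

Any ordinary containerized HTTP service deployed this way gets request-driven scaling, including down to zero, without code changes.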
True, scaling to zero has its own challenge: how to minimize restart time. That’s another article altogether.
So, by definition, any service that can scale to zero and respond on demand is serverless (or FaaS or serverless function).
But wait, there’s more
Knative also brings other functionality to the developer: building, eventing, and serving are the three parts of Knative. I briefly discussed serving, but building and eventing are important as well.
Eventing, for instance, allows you to fire off services by using events. Put events into a queue and you have a truly event-driven application. If you’ve ever built apps on a message-based platform (for example, Windows desktop applications), you’re familiar with the idea of events and messages “flying all around,” making a system work. When done right, it’s a beautiful symphony of harmonious code.
(OK, that last part was a bit over the top, but you get the idea.)
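In Knative Eventing terms, you wire a service to events with a Trigger, which filters events from a broker and delivers matches to a subscriber. A sketch, with hypothetical names and a hypothetical CloudEvents type:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-created-trigger      # hypothetical trigger name
spec:
  broker: default
  filter:
    attributes:
      # Only events of this (hypothetical) type reach the subscriber.
      type: com.example.order.created
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-handler          # hypothetical Knative Service
```

The subscriber can itself be a scale-to-zero Knative Service, so the handler only runs (and bills) when matching events arrive.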
Knative leverages advanced and fast technologies, including the Istio service mesh and gRPC. Although a developer probably won’t need to be aware of these things, someone does, and it does matter. In short, Knative is more of a platform than simply an implementation. It gives you a broad and robust foundation for your own implementation of functions.
Which one should I use?
There are reasons for both Knative and, say, something like OpenWhisk. Knative is speeding the evolution (and, likely, the adoption) of serverless/FaaS solutions, while existing technologies such as OpenWhisk remain useful. Further, it remains to be seen if and how traditional FaaS platforms embrace Knative.
As with any technology, it’s up to you to determine what mix of the two is best for you. Armed with knowledge and enabled with opportunities to test these technologies for zero cost, you’re in a good position to choose and move forward. As the technology evolves, so will your solutions.
To learn more, join the Red Hat Developer Program (it’s free) and get access to related cheat sheets (e.g., containers), books (e.g., microservices), and product downloads that can help with your microservices and/or container application development.