Until recently, Knative Serving used Istio as its default networking component for handling external cluster traffic and service-to-service communication. Istio is a great service mesh solution, but it can add unwanted complexity and resource use to your cluster if you don't need it.
That's why we created Kourier: to simplify the ingress side of Knative Serving. Knative recently adopted Kourier, so it is now a part of the Knative family! This article introduces Kourier and gets you started with using it as a simpler, more lightweight way to expose Knative applications to an external network.
Let's begin with a brief overview of Knative and Knative Serving.
What is Knative?
Knative is a Kubernetes-based platform for deploying and managing serverless workloads. It is split into two projects:
- Knative Serving focuses on triggering containers using HTTP traffic.
- Knative Eventing focuses on triggering containers using events.
This article addresses how Kourier works with Knative Serving.
Knative Serving
Knative Serving provides three main capabilities: simplified deployment, autoscaling, and resource conservation. On the deployment side, Knative Serving simplifies how we deploy applications to Kubernetes and adds the concept of revisions. A revision is an immutable snapshot of a service's configuration, taken each time the service is created or modified. Revisions let you quickly roll back changes and enable advanced traffic management, such as blue-green deployments and traffic mirroring.
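For example, here is a minimal sketch of a traffic block that splits requests between a pinned revision and the latest one (the revision name is hypothetical; it depends on your deployment):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: docker.io/jmprusi/helloworld-go
  traffic:
    # 90% of requests go to a pinned (older) revision...
    - revisionName: helloworld-go-00001
      percent: 90
    # ...and 10% go to the latest revision, for a canary-style rollout.
    - latestRevision: true
      percent: 10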
For autoscaling, Knative Serving automatically scales your containers based on the maximum number of concurrent requests defined for the service. For example, if the desired maximum concurrency is set to one, a new Kubernetes pod is spun up for each concurrent request.
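As a sketch, the concurrency limit can be expressed either as a soft target through an autoscaling annotation or as a hard limit with containerConcurrency; the values here are illustrative:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    metadata:
      annotations:
        # Soft target: the autoscaler aims for ~1 in-flight request per pod.
        autoscaling.knative.dev/target: "1"
    spec:
      # Hard limit: never send more than 1 concurrent request to a pod.
      containerConcurrency: 1
      containers:
        - image: docker.io/jmprusi/helloworld-go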
When it comes to resource conservation, Knative Serving scales dormant applications down to zero pods and scales them back up when the next request arrives.
Here is an example of a simple application deployment using Knative Serving:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: docker.io/jmprusi/helloworld-go
          env:
            - name: TARGET
              value: "Go Sample v1"
As you can see, this is a "Hello, world" application. We only need to define the container image and the required environment variables. We do not need to define the service, deployment, or selectors. See the Knative API specification for more details.
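To try it, save the manifest to a file and apply it; the filename below is illustrative:

$ kubectl apply --filename helloworld-go.yaml

# Knative assigns the service a URL; check it with the ksvc short name:
$ kubectl get ksvc helloworld-go --namespace default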
What is Kourier?
Unlike Istio, Kourier is a lightweight ingress based on the Envoy proxy and introduces no additional custom resource definitions (CRDs). It is composed of two parts:
- The Kourier gateway is Envoy running with a base bootstrap configuration that connects back to the Kourier control plane.
- The Kourier control plane handles Knative ingress objects and keeps the Envoy configuration up to date.
How Kourier works
Kourier does the following:
- Reads the ingress objects created by Knative Serving.
- Transforms these objects into an Envoy configuration.
- Exposes the configuration to the Envoys that it manages.
Figure 1 shows in more detail how Kourier works with Knative Serving to expose Knative applications to the network.
When a new service is deployed, Knative Serving creates an Ingress object that describes how the service should be exposed. An ingress object includes the following elements (a simplified example follows the list):
- Hosts, paths, and headers: These elements are matched against the same elements included in incoming requests. When there's a match, we know that the request should be proxied to the Knative service associated with the ingress object.
- Splits: We use splits to divide incoming traffic between different revisions of a deployed service.
- Visibility: Defines whether the service should be accessible from within the cluster or from outside.
- Transport Layer Security (TLS): Specifies the Kubernetes secret that contains the certificate and the key needed to expose the service with HTTPS.
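Knative Serving creates these objects for you, so you never write one by hand. For illustration only, here is an abridged sketch of what such an object can look like; the API group is networking.internal.knative.dev at the time of writing, the hostnames, revision names, and secret name are hypothetical, and the exact fields depend on your Knative version:

apiVersion: networking.internal.knative.dev/v1alpha1
kind: Ingress
metadata:
  name: helloworld-go
  namespace: default
spec:
  rules:
    - hosts:
        - helloworld-go.default.example.com
      visibility: ExternalIP        # or ClusterLocal for cluster-only access
      http:
        paths:
          - splits:
              # Send 90% of matching traffic to one revision...
              - serviceName: helloworld-go-00001
                serviceNamespace: default
                servicePort: 80
                percent: 90
              # ...and 10% to another.
              - serviceName: helloworld-go-00002
                serviceNamespace: default
                servicePort: 80
                percent: 10
  tls:
    - hosts:
        - helloworld-go.default.example.com
      secretName: helloworld-go-cert
      secretNamespace: default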
Kourier subscribes to changes in the ingresses managed by Knative Serving, so it is notified every time an ingress is created, deleted, or modified. When that happens, Kourier analyzes the information in the ingress and transforms it into objects in an Envoy configuration. Envoy configurations can be complicated, but clusters and routes are two of the objects they include. A cluster is a collection of IP addresses that belong to the same service. A route contains all of the information used to match a given request (the host, path, headers, and so on) and specifies which cluster the request should be proxied to when there is a match.
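For illustration, here is a heavily abridged fragment of the kind of Envoy configuration this produces, with one cluster and one route; the names, address, and port are made up, and in practice the configuration is generated programmatically and served over xDS rather than written by hand:

clusters:
  - name: default/helloworld-go-00001
    connect_timeout: 5s
    type: STATIC
    load_assignment:
      cluster_name: default/helloworld-go-00001
      endpoints:
        - lb_endpoints:
            # One entry per pod IP backing the revision.
            - endpoint:
                address:
                  socket_address:
                    address: 10.0.0.12
                    port_value: 8012

route_config:
  virtual_hosts:
    - name: helloworld-go
      domains:
        - helloworld-go.default.example.com
      routes:
        # Requests whose Host header matches the domain above are
        # proxied to the cluster for that revision.
        - match:
            prefix: "/"
          route:
            cluster: default/helloworld-go-00001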
After an ingress has been transformed into objects in an Envoy configuration, we can use the Envoy xDS APIs to expose that configuration to the Envoys in the cluster that Kourier manages.
When Knative scales up the number of pods in a service, Kourier automatically selects the IP of the newest pod and starts proxying traffic to it. Similarly, when the number of pods is reduced, Kourier is notified and stops sending requests to pods that are scheduled to be deleted.
It is important to note that throughout this process, Kourier does not create any custom resources that only it can understand. Instead, Kourier only works with objects managed by Knative Serving (the ingresses) and objects managed by Kubernetes, such as endpoints and secrets (the ones that include the TLS configuration). This is why we say that Kourier is a "Knative-native" ingress.
Kourier's integration with Knative
As well as being Knative-native, Kourier is a Knative-conformant ingress. By that, we mean that all of the features of Knative Serving work well when using Kourier. These include traffic splitting between different revisions of a service, TLS, timeouts, retries, automatic endpoint discovery when a service is scaled, and more.
To ensure that Kourier is conformant and that it supports every new feature added to Knative Serving, we have configured a continuous integration (CI) system that runs the Knative Serving conformance test suite. As you can see from the test grid, Kourier is one of the few Knative ingress implementations that consistently passes all of the tests in the suite.
Using the Envoy external authorization filter with Kourier
It is possible to configure Kourier to use the Envoy external authorization filter for traffic authorization. For every incoming request, Kourier will contact an external service to check whether the request should be authorized. If it is authorized, Kourier will proxy the request to the appropriate service. If it is not, it will return an error to the caller. One way to use this feature would be to build a service based on the Open Policy Agent (OPA) framework. OPA supports Envoy's external authorization protocol and allows us to define authorization rules using a high-level language designed specifically for writing authorization policies.
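As a sketch, the external authorization service is configured through environment variables on the Kourier control plane deployment. The variable names, deployment name, and namespace below are assumptions based on the net-kourier project's documented settings at the time of writing and may differ in your version; the OPA address is hypothetical:

# Point Kourier at an external authorization service (for example, an OPA
# deployment exposing Envoy's ext_authz gRPC API). Names are illustrative.
$ kubectl set env deployment/net-kourier-controller \
    --namespace knative-serving \
    KOURIER_EXTAUTHZ_HOST=opa.opa-system.svc.cluster.local:9191 \
    KOURIER_EXTAUTHZ_FAILUREMODEALLOW=false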
Get started with Kourier
If you are using OpenShift, you can find the Red Hat OpenShift Serverless Operator in your Operator catalog. The Serverless Operator automatically installs Kourier. (For installation details, see Installing the OpenShift Serverless Operator.)
Alternatively, you can install Kourier in a cluster that is already running Knative Serving. In this case, just enter:
$ kubectl apply --filename https://github.com/knative/net-kourier/releases/download/v0.15.0/kourier.yaml
Note: Replace the version number (in this case, v0.15.0) with the version that you want to install.
Next, configure Knative Serving to use Kourier as its ingress:
$ kubectl patch configmap/config-network \
    --namespace knative-serving \
    --type merge \
    --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'
That should be enough to get started. For more complex installations, see the installation instructions in the Knative documentation.
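Once the patch is applied, a quick sanity check is to confirm that the Kourier pods are running and that the gateway service has an external address. The kourier-system namespace and the kourier service name come from the upstream kourier.yaml manifest and may differ if you customized the installation:

# Kourier control plane and gateway pods
$ kubectl get pods --namespace kourier-system

# The gateway is exposed through a LoadBalancer service named "kourier"
$ kubectl get service kourier --namespace kourier-system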
Conclusion
As we mentioned at the beginning of this article, Kourier was recently adopted by Knative, so it is now one of the officially supported ingress implementations for Knative Serving. Check out Kourier's GitHub repository under the Knative project.