So now we have a service that does everything we want and we have a really nice front end for testing that service any old time. That’s great, but we’re here to talk serverless. That means we need to deploy the service to Knative. Knative will monitor the demand on the service and scale it down to zero or up to whatever is necessary to handle the load.
If you prefer video to text, here's a detailed explanation of what we'll do here:
Background
We assume you've been through the first two videos in this series, but that's not required. When we deploy the service to Knative, we use a pre-built Docker image of the code, so you don't have to build it yourself or even really understand what it does. But you'll need to clone the repos if you haven't already. Here's the repo for the image manipulation code:
And you can find Don Schenck's React front end here:
Kubernetes, Istio, and Knative
We’ll use three things to do serverless computing: Kubernetes, which provides the base cluster we use for everything else; Istio, a service mesh that provides routing, among many other things; and Knative, which manages our service.
Knative has three parts:
Knative Build, which, as you would expect, builds your code
Knative Eventing, which lets you configure events that should trigger your code
Knative Serving, which manages the number of pods running your service and routes traffic to it as appropriate.
For our purposes here, we’re supplying a pre-built container image of the service code, so we don’t need Knative Build. And we’re going to invoke the service directly instead of setting up an event infrastructure, so Knative Eventing won’t be in the picture either. We’ll focus on Knative Serving.
So.
All you need to do is install a major piece of infrastructure inside of another major piece of infrastructure inside of another major piece of infrastructure. Easy, right? Well, thanks to our friend Kamesh Sampath (like Don, awesome, talented, and extraordinarily helpful), it actually is easy.
For a Kubernetes environment, we’ll use Red Hat’s Container Development Kit (CDK). The CDK provides minishift, a single-node version of Red Hat OpenShift, an open-source, enterprise-class Kubernetes implementation. If you’re a member of the Red Hat Developer Program (if you’re not, you should be, it’s free), you can download a copy at developers.redhat.com/products/cdk/overview.
Along with minishift, we’ll be using the oc command to control the cluster and work with Istio and Knative.
With the CDK installed, it’s time to get to know Kamesh’s wonderful Knative tutorial. Be aware that the tutorial is currently under development (Knative is at v0.3.0, after all), but we’ll keep the instructions here up-to-date as things change. Go through the following steps to get everything you need on your machine:
- Open redhat-developer-demos.github.io/knative-tutorial in a new browser tab. This contains the text of Kamesh’s tutorial. The complete tutorial with all of the code is available at bit.ly/knative-tutorial, if you'd like to start there.
- Get Kamesh’s code via git clone https://github.com/redhat-developer-demos/knative-tutorial
- Define the knative-tutorial directory as TUTORIAL_HOME (export TUTORIAL_HOME=~/Developer/knative-tutorial, for example)
- Switch to the knative-tutorial/work directory
- Get the install script via git clone https://github.com/openshift-cloud-functions/knative-operators
- Switch to the knative-operators directory
Inside that directory there is a wondrous script at ./etc/scripts/install-on-minishift.sh. Believe it or not, running this script installs everything you need. But first, you need to edit the file and change or delete this line:
minishift config set openshift-version v3.11.0
You can either change the version to v3.11.43 or make it simpler by deleting the line altogether. The CDK is currently on OpenShift v3.11.43, so either approach keeps everything in sync. Just be aware that if you’re using the latest CDK, requiring v3.11.0 will fail.
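If you’d rather not edit the script by hand, here’s one way to delete that line from the knative-operators directory. This is just a convenience of mine, not part of Kamesh’s instructions:
# Remove the OpenShift version pin so the script uses the CDK's version
# (keeps a .bak copy of the original script, just in case).
sed -i.bak '/minishift config set openshift-version/d' ./etc/scripts/install-on-minishift.sh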
With the script updated, run ./etc/scripts/install-on-minishift.sh. This script takes care of just about everything, including getting Kubernetes / OpenShift up and running, installing Istio, and installing Knative Build, Knative Eventing, and Knative Serving. Be patient; installing this much stuff takes a while, and for a number of steps, all of the components of the previous step must be up and running before the step can even begin. Just be aware that Kamesh’s script is at least one, maybe two orders of magnitude faster than you or I could type the dozens of commands required to install all this stuff.
Once the script is done, run the following commands:
eval $(minishift oc-env)
eval $(minishift docker-env)
oc login -u admin -p admin
This points oc to your cluster and puts it on your system path, sets up the Docker environment used by minishift, and logs you in as the admin.
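If you want a quick sanity check that everything took effect (optional, not part of the original steps):
# oc should now be on your path and logged in as admin.
oc whoami
# And the cluster itself should be running.
minishift status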
Deploying your service to Knative
Your cluster is up and running and you’re logged in as the admin, so run these three commands:
oc new-project knativetutorial
oc adm policy add-scc-to-user privileged -z default
oc adm policy add-scc-to-user anyuid -z default
These commands create a new project named knativetutorial and add the privileged and anyuid Security Context Constraints (SCCs) to the project's default service account.
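One more optional sanity check before deploying anything: oc new-project should also have switched you to the new project.
# Prints the current project; it should be knativetutorial.
oc project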
At a command line, switch to the image-overlay directory you cloned earlier. It contains the file service.yaml:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: overlayimage
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/dougtidwell/imageoverlay:v1
This YAML file does two important things: 1) it gives our service a name, overlayimage, and 2) it tells Knative where to find the container image that includes the code for the service. Here’s how to deploy our service to Knative:
oc apply -n knativetutorial -f service.yaml
This applies the file service.yaml to the project (namespace) knativetutorial. Simple as that. If you go to the console, you’ll see your new service in action. Run the console command:
minishift console
This opens the cluster’s console in a new browser tab. You’ll get an error that the site is using a self-signed certificate; ignore that, of course, and go ahead to the page. Next you’ll be asked to log in. Use admin as your username and password. Once you see the main panel, click the small View All link beneath the Create Project button in the upper right-hand corner:
As you would expect, there’s a project called knativetutorial:
Click the project to see its resources:
overlayimage-00001 is the application we deployed via the YAML file.
Now click the Applications menu and choose Pods:
Now you’ll see all the pods associated with the deployed application. Unfortunately, I dragged my feet in capturing the screen, because by the time I went looking for pods, they were already being terminated:
The pods were being terminated because I hadn’t invoked the service. Which is as it should be. As it monitors the system, Knative scales the number of pods to zero if there’s no demand for the service. When someone does invoke the service, the number of pods will be nonzero.
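If you’d rather watch that scaling from the command line than from the console, this optional command streams pod changes in the knativetutorial project. Invoke the service from another terminal and you’ll see pods appear and then, after a while with no traffic, terminate:
# Watch Knative scale the service's pods up and down.
oc get pods -n knativetutorial -w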
Invoking your service from the command line
Now let’s invoke the service via curl and check the results. Keep in mind that what we’re doing here is invoking the containerized version of our code as managed by Knative, running alongside Istio, and running inside a Kubernetes cluster. But first, we need to get the IP address of the service. If you’ve worked with Istio before, you know that’s complicated. Istio has an ingress gateway that controls who can access the services it manages, and we have to find the IP address of the gateway, the port, and other details before we can get to our service.
So here we go. Don’t try to type this, just cut and paste:
IP_ADDRESS="$(minishift ip):$(kubectl get svc istio-ingressgateway --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')/overlayImage"
$(minishift ip) simply returns the IP address of the cluster, and /overlayImage is the endpoint of the service. The kubectl command in the middle is what gets the port number. On my system, $IP_ADDRESS is 192.168.99.100:31380/overlayImage. Your mileage, of course, may vary.
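A quick echo confirms the variable is what you expect:
# Should print something like 192.168.99.100:31380/overlayImage
echo $IP_ADDRESS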
There’s one more complication, and it’s a big one: Knative uses the HTTP Host header to route requests to its services. The value of that header is typically the name of the service (overlayimage), followed by the name of the namespace (knativetutorial), and example.com. So Knative looks at this header:
Host: overlayimage.knativetutorial.example.com
and determines that this request should be routed to the service named overlayimage in the namespace knativetutorial. That’s the default value of the Host header, anyway. To know for sure what the value should be, run the command oc -n knativetutorial get services.serving.knative.dev:
doug@dtidwell-mac:~/Developer/image-overlay $ oc -n knativetutorial get services.serving.knative.dev
NAME           DOMAIN                                      LATESTCREATED        LATESTREADY          READY   REASON
overlayimage   overlayimage.knativetutorial.example.com    overlayimage-00001   overlayimage-00001   True
As you can see, the domain for the overlayimage service is overlayimage.knativetutorial.example.com. Now we finally have our service deployed and we have all the information we need to invoke it via curl. We have to specify the same parameters we used earlier, with the addition of -H "Host: overlayimage.knativetutorial.example.com" and with $IP_ADDRESS instead of http://localhost:8080/overlayImage. Here’s the complete command:
curl -X "POST" -H "Host: overlayimage.knativetutorial.example.com" -H "Content-Type: application/json" -H "Accept: application/json" -d @sampleInput.txt $IP_ADDRESS
Just as you did earlier, you’ll want to redirect the output from this command to a file, edit the JSON to leave behind only the contents of the imageData field, then use the base64 command to decode it and look at the modified image. It’s a hassle, but it lets you make sure the code works.
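If you have the jq utility installed, here’s a sketch of that workflow without the hand editing. It assumes the imageData field holds a plain base64 string, the output file name is just an example, and the decode flag for base64 varies by platform (-d, -D, or --decode):
# Save the response, extract the imageData field, and decode it into an image file.
curl -s -X "POST" -H "Host: overlayimage.knativetutorial.example.com" \
     -H "Content-Type: application/json" -H "Accept: application/json" \
     -d @sampleInput.txt $IP_ADDRESS > response.json
jq -r '.imageData' response.json | base64 --decode > modifiedImage.jpg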
Invoking your service from the React front end
You should be thrilled by what you’ve accomplished so far. You took some code and deployed it to Knative running inside a Kubernetes cluster. Knative manages that code without requiring it to be deployed on its own server.
But it would be really cool to invoke the service from Don's React front end code, the web application we looked at in part 2. Using React, you can make sure the service works without having to deal with command lines and hand-editing files and all of that tedium.
However, this is where things get complicated. (Well, actually, this is where they get more complicated.) Don’s code uses an XMLHttpRequest, an XHR, to invoke the image manipulation service without forcing a reload of the page. That’s how a modern web application works, after all. Unfortunately, an XHR can’t set the value of the HTTP Host header. In an XHR the value of Host will always be the URL where the request is being sent. Perfectly reasonable behavior, but it prevents us from invoking the service.
The elegant way to fix this (as we understand it) is to set up some Istio wildcard routing rules, some DNS settings, and basic Kubernatorial magic so that the default Host value matches the domain of the service. Unfortunately, yr author wasn’t able to get this to work. I’ll keep working on it and will publish a video and an article when I get it working.
We’re all programmers here, so I turned to the time-honored tradition of abandoning elegance for now and writing a hack that gets around the problem. No shame in that. (Not much, anyway.) What I did was write a proxy that can fix the Host header. Don’s React application (or any application that uses XHRs) sends the request to my proxy, which sets the Host header correctly and submits the request to the service. When the results come back, the proxy sends them back to whoever made the request.
The knative-proxy project
First, clone the proxy repo:
Let’s take a look at the code. The first things I had to handle were the CORS (cross-origin resource sharing) issues inherent in a web app loading content from a third-party site. The flow there is that Don’s code uses the HTTP OPTIONS verb to check the attributes of our service. If the headers returned by the proxy are what Don’s code is looking for, it responds with the POST request. Here’s what the code for the OPTIONS verb looks like:
reassureTheClient = function(req, res) {
  res.header('Access-Control-Allow-Origin', req.header('Origin'));
  res.header('Access-Control-Allow-Methods', 'POST, OPTIONS');
  res.header('Access-Control-Allow-Headers',
    'Content-Type, Authorization, Content-Length, X-Requested-With');
  res.sendStatus(200);
}
This tells the caller that the allowed origin for the request is the caller. We set the value for that header to be the value of the Origin header from the requester. In other words, if jane.example.com asks who is allowed to request data from this function, we respond with an Access-Control-Allow-Origin header with the value jane.example.com. That tells the caller, “We have nothing but love for jane.example.com.” That makes Jane happy, so she responds with the POST request.
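Once the proxy is running (we’ll get it running below), you can watch that preflight exchange yourself with a quick diagnostic. The Origin value here is just an example standing in for wherever Don’s code is served from:
# Send a preflight OPTIONS request to the proxy and inspect the CORS headers it returns.
curl -i -X OPTIONS -H "Origin: http://localhost:3000" http://localhost:8888/overlayImage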
The code that handles the POST request is pretty straightforward, but it needs to be configured with two values: the URL of the service and the required value of the Host header. Here’s how we handle those parameters in the code:
const url = process.env.PROXY_URL ||
  'http://192.168.99.100:31380/overlayImage';
const headerValue = process.env.PROXY_HOST ||
  'overlayimage.knativetutorial.example.com';
In other words, if the PROXY_URL and PROXY_HOST variables are set in the current environment, we use those values. Otherwise, there are two default values that look suspiciously like the values we use in this example.
With those variables set, we use the popular node-fetch library to invoke the service:
nodeFetch(url, {
  timeout: 90000,
  method: 'post',
  headers: {
    'Content-Type': 'application/json',
    'Accept': 'application/json',
    'Host': headerValue
  },
  body: JSON.stringify(req.body)
})
  .then(response => response.json())
  .then((responseJson) => {
    res.set({'Access-Control-Allow-Origin': req.header('Origin'),
             'Cache-Control': 'no-cache, no-store, must-revalidate'})
       .json(responseJson)
       .status(200);
  });
First of all, we set the timeout value to 90000 milliseconds. We’d like the proxy to give Knative plenty of time to get a new pod up and running if necessary. Next we set the headers of the request. As we saw with curl, we need to set the Content-Type, Accept, and Host headers. The values of the first two never change (we always use JSON data), and we get the value of the third from the configuration.
The body of the request to the Knative service is the body of the request from Don’s code or whoever called the proxy. From there we have a couple of promises to handle the response. We set the Access-Control-Allow-Origin header again, we tell the caller not to cache the results, and we respond with a status code of 200.
The last little bit is to tell Express which functions handle the POST and OPTIONS verbs for the /overlayImage endpoint and then start the proxy:
app.route('/overlayImage')
  .post(modifyImage)
  .options(reassureTheClient);

app.listen(port);
Go to the terminal window where you’re running the proxy. Set the two variables as appropriate:
export PROXY_URL=http://192.168.99.100:31380/overlayImage
export PROXY_HOST=overlayimage.knativetutorial.example.com
Run npm install if you haven’t before, then npm start to get the proxy up and running.
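In other words, from the knative-proxy directory:
# Install the proxy's dependencies (first time only), then start it.
npm install
npm start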
By default the proxy runs at http://localhost:8888/overlayImage, so we need to configure Don’s code to send requests to that URL. Switch to the terminal window where you’re running Don’s code and set the REACT_APP_OVERLAY_URL environment variable:
export REACT_APP_OVERLAY_URL=http://localhost:8888/overlayImage
Now type npm start to restart Don’s code. This should open a browser tab with the Compile Driver application running in it. If it doesn’t, open http://localhost:3000.
The picture of success
Look into the camera and click the “Take picture” button. Here’s what success looks like:
You can see the modified image at the bottom. At the top of the page you see a very happy programmer. It works! (And if you look over my left shoulder, you’ll see the model of the Compile Driver from the first video.)
The JavaScript console is on the right side. The error messages have to do with markup issues (a <div> can’t be a child of a <p>, for example). You can also see traces of the interactions between Don’s code and the proxy.
What's next?
That brings us to the end of our discussion of the Compile Driver. There are several things you can do to keep growing your knowledge of serverless computing and Knative:
- First and foremost, follow Kamesh's fantastic Knative tutorial at bit.ly/knative-tutorial. Kamesh is actively developing and improving the tutorial all the time. He is a committer on Knative and other open source projects, and his tutorial lets you take advantage of his expertise and experience.
- If you didn't go through Part 2 of this tutorial, it's worth your time.
- Keep your eyes peeled for more serverless content here at the Red Hat Developer Program.
- Finally, if for some unimaginable reason you haven't joined the Red Hat Developer Program yet, by all means drop everything and do that now. It's free and it gives you access to our products. (The CDK, for example.)
We hope you enjoyed these materials; we had a lot of fun putting them together. And we’d love to hear your comments, suggestions, even PRs for our repos. We’re coderland@redhat.com: let us hear from you!