Serverless computing in action

Now that we've explained the scenario, let's take a look at the service itself. We'll cover the image manipulation code and how it works, test it, and then use it with Don Schenck's React front end.

The image manipulation code

The image manipulation code is at the heart of the Compile Driver Photo Booth. We nailed a high-definition webcam to a pole and stuck it next to the ride:

[Photo: the webcam mounted on a pole beside the ride]

As riders fall past the camera, it takes a picture of the thrilled parkgoers and sends that data to the Coderland Swag Shop. The Swag Shop sends the data to Knative, which invokes the service and returns the results. Guests can then purchase a copy of the modified photo.

The code itself is a Spring Boot application, as we'll see in a minute. Spring Boot makes it easy to build web applications. Because of the volume of data the camera sends, we'll use a POST request to get data to and from the service. Spring Boot makes that easy with annotations that define a method to handle POST requests, along with its endpoint and data formats. We'll use other annotations to handle data marshaling and other necessary chores.

Serverless code typically uses JSON as its input and output format. Since we’re obviously dealing with binary data here, we’ll have to base 64 encode and decode that data as it moves along. The other architectural issue is that we need to work with JSON in a Java application. For that we’ll use the Jackson JSON library, and we’ll set up a helper class that lets us completely ignore any issues related to parsing or creating JSON data.
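The encode/decode half of that is just a couple of calls to Java's built-in Base64 class. Here's a minimal, standalone sketch (not code from the service) showing the round trip:

```java
import java.util.Arrays;
import java.util.Base64;

public class Base64RoundTrip {
    public static void main(String[] args) {
        // A few bytes standing in for binary image data (the start of a JPEG header)
        byte[] imageBytes = {(byte) 0xFF, (byte) 0xD8, (byte) 0xFF, (byte) 0xE0};

        // Encode the binary data so it can travel inside a JSON string
        String encoded = Base64.getEncoder().encodeToString(imageBytes);
        System.out.println(encoded); // prints "/9j/4A=="

        // Decode it on the other side to get the original bytes back
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(Arrays.equals(decoded, imageBytes)); // prints "true"
    }
}
```

The service does exactly this, just with a real photo's worth of bytes on each side of the JSON boundary.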

Building and running the code

The service we deploy to Knative is a Spring Boot application, and the front end is a React application that requires Node.js. Once the code is built, the simplest way to test it is with curl on the command line. (You can test the code in other ways, but they're more involved.) You'll need to install Java, Maven, Node.js, and curl before you can build and run the code. With those prerequisites out of the way, we'll run the code before we dissect it.

The first step, of course, is to clone the repo:

git clone <URL of the image-overlay repo>

Next, we need to build the code. That’s easy: just switch to the image-overlay directory and run this command:

mvn clean package

Maven builds the code, runs some tests, and packages everything up as a JAR file in the target directory. With the JAR file built, run it with the java command:

java -jar target/imageOverlay-1.0.0.jar

This starts the code on localhost:8080/overlayImage. Now it’s time to run curl. Switch to another terminal prompt and get ready for some typing, because we need to specify a lot of things on the curl command:

  • This is a POST request

  • We’re sending JSON (Content-Type: application/json)

  • We only accept JSON in response (Accept: application/json)

  • We’re sending in a data file that’s included in the repo (sampleInput.txt)

  • The URL of the service, localhost:8080/overlayImage

In addition, you’ll want to redirect the output of the command to a file so you can work with the results, so add something like > results.txt to the end. Brace yourselves; here’s the complete command:

curl -X "POST" -H "Content-Type: application/json" -H "Accept: application/json" -d @sampleInput.txt localhost:8080/overlayImage > results.txt

For the options here, -X specifies the type of HTTP request, -H defines an HTTP header, and -d defines the input data, with the at sign indicating that the value is a filename.

So run the curl command, then edit results.txt. Delete everything except the value of the imageData field: no double quotes, no field names, no curly braces. When you're done, the file should contain nothing but the base 64 string. Save it as results.txt.

Finally, run base64 to decode the image:

base64 -D -i results.txt -o results.jpg

This decodes the results.txt file and writes the output of the conversion to the file results.jpg. (That's the macOS syntax; on Linux, base64 -d results.txt > results.jpg does the same thing.) If you’re comfortable with JSON data, edit sampleInput.txt, change the greeting field at the end of the file, go through this process again, and you’ll see a new image with a different greeting at the top.
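If hand-editing the response gets tedious, you can script the extraction. This little standalone program (not part of the repo) pulls the imageData value out of the response with plain string searching, decodes it, and writes results.jpg; pass it the path to results.txt:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class ExtractImage {
    // Pull the value of the "imageData" field out of a JSON string.
    // Crude string searching, but fine here: it assumes the field appears
    // as "imageData":"..." with no spaces, which is how Jackson serializes
    // it by default.
    static String extractImageData(String json) {
        String key = "\"imageData\":\"";
        int start = json.indexOf(key) + key.length();
        int end = json.indexOf('"', start);
        return json.substring(start, end);
    }

    public static void main(String[] args) throws Exception {
        String json = new String(Files.readAllBytes(Paths.get(args[0])));
        byte[] imageBytes = Base64.getDecoder().decode(extractImageData(json));
        Files.write(Paths.get("results.jpg"), imageBytes);
    }
}
```

Run it with java ExtractImage results.txt and you get results.jpg in one step.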

The code of the service

Now that you know how to build and run the code, we’ll go through how it works. Here’s the basic flow of the application:

  • Use various Spring Boot annotations to set up the application.

  • Get the data from the POST request. Again, we use the Jackson library to do that for us.

  • Create a BufferedImage from the data, then create a canvas (a Graphics2D object) for it.

  • Draw the BufferedImage onto the canvas.

  • Get the Coderland logo. It’s stored as a resource inside the application’s JAR file, so it’s easy to find and load.

  • Once we have the logo, we draw it onto the canvas on top of the original image.

  • Get the message from the JSON data and use Font and FontMetrics objects to figure out how much space the message needs.

  • Draw the message text centered on the canvas.

  • Get the date stamp, figure out how much space it needs, and draw it centered on the Canvas as well.

  • Convert the finished image on the canvas to binary data.

  • Convert the binary data into base 64.

  • Put the data into the JSON structure and return it to the caller.

(You could argue that any list with that many items shouldn't be called a "basic flow." That's a good point, but the code really isn't that complicated. As you'll see.) 
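Condensed to its essentials, that flow looks something like this sketch. (It's a toy stand-in, not the service's code: it draws on a blank generated image instead of real webcam data, and skips the logo and date stamp.)

```java
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Base64;
import javax.imageio.ImageIO;

public class OverlaySketch {
    // Draw a greeting onto an image and return the result as a base 64 JPEG string,
    // mirroring the decode -> draw -> encode flow described above.
    static String overlay(BufferedImage baseImage, String greeting) throws IOException {
        Graphics2D canvas = baseImage.createGraphics();
        canvas.setFont(new Font(Font.SANS_SERIF, Font.BOLD, 48));
        canvas.setColor(Color.WHITE);
        canvas.drawString(greeting, 50, 100);
        canvas.dispose();

        // Convert the finished image to binary data, then to base 64
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(baseImage, "jpg", out);
        return Base64.getEncoder().encodeToString(out.toByteArray());
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the decoded webcam image
        BufferedImage baseImage = new BufferedImage(640, 480, BufferedImage.TYPE_INT_RGB);
        String encoded = overlay(baseImage, "Welcome to Coderland!");
        System.out.println(encoded.substring(0, 20) + "...");
    }
}
```

Every base 64 JPEG starts with "/9j/" (the encoding of the JPEG magic bytes), which is a handy sanity check on the output.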

Here are the first annotations we use. This is code from ImageOverlayApplication.java in the com.example.imageOverlay package:

@SpringBootApplication
public class ImageOverlayApplication {

 @RestController
 class ImageOverlayController {

We define this as a @SpringBootApplication and say that the ImageOverlayController class is a @RestController. That makes it easy for us to say that our code handles POST requests for a particular endpoint:

   @PostMapping(path = "/overlayImage",
                consumes = "application/json",
                produces = "application/json")
   public Image incomingImage(@RequestBody Image image)
     throws IOException {

Here we’re saying that the method incomingImage handles POST requests at the /overlayImage endpoint. As we discussed before, it uses JSON as its input and output format. And we use the @RequestBody annotation to say that an Image object (more on that class next) is the body of the POST request.

Before we get into the actual image processing, let’s look at the Image class. It has six member fields:

  • imageData - The base 64-encoded data of the image

  • imageType - Either JPG or PNG, case-insensitive

  • greeting - The message to write at the top of the image

  • language - The language to use for the date string (en for English, for example)

  • country - The country to use for the date string (US for the United States)

  • dateFormatString - A string that defines the date format. “MMMM d, yyyy” is the default value.
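Putting those fields together, a request body for the service looks something like this (the imageData value here is a made-up fragment; a real payload runs to thousands of characters):

```json
{
  "imageData": "/9j/4AAQSkZJRgABAQ...",
  "imageType": "JPG",
  "greeting": "Welcome to Coderland!",
  "language": "en",
  "country": "US",
  "dateFormatString": "MMMM d, yyyy"
}
```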

The first three fields are the most useful, while the last three are just hardcoded parameters that we moved into the JSON data. Changing the last three fields to be pl, PL, and "d MMMM yyyy" generates the string "17 lutego 2019." (Dzień dobry to our friends in Poland, btw.) You’re far more likely to change the greeting field.
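You can watch the locale machinery work with a few lines of standard Java, separate from the service itself (the exact date printed depends on when you run it):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class DateStampDemo {
    public static void main(String[] args) {
        Date rideDate = new Date(); // the service stamps the current date

        // The service's defaults: language en, country US, format "MMMM d, yyyy"
        SimpleDateFormat us = new SimpleDateFormat("MMMM d, yyyy", new Locale("en", "US"));
        System.out.println(us.format(rideDate)); // something like "February 17, 2019"

        // The Polish variant from the example above
        SimpleDateFormat pl = new SimpleDateFormat("d MMMM yyyy", new Locale("pl", "PL"));
        System.out.println(pl.format(rideDate)); // something like "17 lutego 2019"
    }
}
```

Swapping the three JSON fields swaps the locale and pattern handed to SimpleDateFormat; everything else stays the same.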

Note: We’re aware that com.example.imageOverlay.Image is a terrible name for a class and we feel just awful about it. But not so awful that we spent the time to think of a better name.

Those are the member fields of the Image class. But why have the class in the first place? It’s because we can use the Jackson JSON library with it. When we set up the Image class with @JsonProperty annotations, Jackson automatically converts our JSON input into a Java object and later automatically converts our Java object into the JSON output we need. We don’t have to parse anything. Here’s the annotation for the imageData field:

@JsonProperty("imageData")
public String getImageData() {
    return imageData;
}

public void setImageData(String imageData) {
    this.imageData = imageData;
}

This tells the Jackson library that when it’s parsing JSON data to create an Image object, the JSON field imageData maps to the member field imageData in the Java class. The same mapping works in reverse when we ask Jackson to convert the Java object back into JSON. (The names don’t have to be the same, but it’s a lot less confusing if they are.)

So that sets up the basics for working with JSON data and handling POST requests as they come in from the network. Now it’s time to move on to the actual image manipulation code. We’ll go through this pretty quickly; building a REST application and handling JSON data are skills you’ll use in lots of places, while working with images is probably less common. We start by creating a BufferedImage object from the imageData:

BufferedImage baseImage =
       ImageIO.read(Base64.getDecoder().
                    wrap(new StringBufferInputStream(imageData)));
 
     int imageTypeCode = imageType.equalsIgnoreCase("png") ?
       BufferedImage.TYPE_INT_ARGB : BufferedImage.TYPE_INT_RGB;
 
     BufferedImage targetImage =
       new BufferedImage(baseImage.getWidth(),
                         baseImage.getHeight(),
                         imageTypeCode);
       
     Graphics2D canvas = (Graphics2D) targetImage.getGraphics();
     canvas.drawImage(baseImage, 0, 0, null);
     AlphaComposite alphaChannel =
       AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 1.0f);
     canvas.setComposite(alphaChannel);

Once we have the BufferedImage, we have to look at the imageType field to see if this image has an alpha channel for transparency. (PNGs have one, JPEGs do not.) Next we create targetImage. This is the output image. We get the canvas for this image (a Graphics2D object) and draw the original image (baseImage) on the blank canvas. Now we’re ready to draw things on top of the original image, so we set up the alpha channel.

From here, we load the overlay image that contains the Coderland logo:

BufferedImage logoImage =
       ImageIO.read(ImageOverlayApplication.class.
               getResourceAsStream("/static/images/overlay.png"));

The location of the image is based on the classpath, which is set inside the JAR file generated by Maven. The actual location of the file in the repo is src/main/resources/static/images/overlay.png, but the root of the path you use in Java starts with the static directory. At any rate, if you wanted to replace the Coderland logo with something else, you’d replace that image, rebuild the JAR, and run it.

Drawing the logo over the target image is as simple as calling the drawImage() method. However, we do a little math to make sure the logo is vertically centered on the image, whatever its dimensions are:

int centerX = 0;
     int centerY = (baseImage.getHeight() -
                    (int) logoImage.getHeight()) / 2;
     canvas.drawImage(logoImage, centerX, centerY, null);

Now it’s time to draw the text of the greeting and the date stamp. To do that, we’ll set the font of the canvas and then use Font and FontMetrics objects to figure out exactly how much space the text needs. That way we can horizontally center the text on the image. As a final touch, we draw the text in black two pixels down and two pixels to the right of where the text should go, then draw the text in white on top of the black text. This gives us a nice shadow effect that makes the text easier to read if the background of the photo is lighter. Here’s the code:

canvas.setFont(new Font(Font.SANS_SERIF, Font.BOLD, 48));
     FontMetrics fontMetrics = canvas.getFontMetrics();
     Rectangle2D rect =
       fontMetrics.getStringBounds(greeting, canvas);
 
     centerX = (baseImage.getWidth() - (int) rect.getWidth()) / 2;
     centerY = 100; 

     canvas.setColor(Color.BLACK);
     canvas.drawString(greeting, centerX + 2, centerY + 2);
     canvas.setColor(Color.WHITE);
     canvas.drawString(greeting, centerX, centerY);

Some opportunities for improvement here:

  • I originally wrote this code to use Overpass, Red Hat’s official font. If the code couldn’t find that font, it used Font.SANS_SERIF as a fallback. However, when you deploy a service to Knative, you deploy it as a container image. That means the container image has to contain the Overpass font. If you figure out how to modify the Dockerfile in the repo to add Overpass to the image, we’d love to see how you did it. (We’d really love it if you sent us a PR.)

  • No matter the size of the image, the margins and offsets for the text and the size of the logo never change. That could be more elegant.  

  • As you can see in the listing above, we set the font size to 48, which may be too big for some images or greetings. If the greeting is the first few lines of the Gettysburg Address, for example, it’s not going to fit:

[Screenshot: a greeting far too long for the photo, running off both edges of the image]
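One way to fix that, sketched here as a hypothetical helper rather than code from the repo, is to measure the greeting with FontMetrics and step the font size down until it fits:

```java
import java.awt.Font;
import java.awt.FontMetrics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class FitText {
    // Shrink the font until the string fits the given width,
    // never going below a still-readable minimum size.
    static Font fitFont(Graphics2D canvas, String text, int maxWidth) {
        int size = 48; // the size the service starts with
        Font font = new Font(Font.SANS_SERIF, Font.BOLD, size);
        FontMetrics metrics = canvas.getFontMetrics(font);
        while (size > 12 && metrics.stringWidth(text) > maxWidth) {
            font = new Font(Font.SANS_SERIF, Font.BOLD, --size);
            metrics = canvas.getFontMetrics(font);
        }
        return font;
    }

    public static void main(String[] args) {
        BufferedImage image = new BufferedImage(640, 480, BufferedImage.TYPE_INT_RGB);
        Graphics2D canvas = image.createGraphics();
        Font fitted = fitFont(canvas, "Four score and seven years ago our fathers...", 600);
        System.out.println(fitted.getSize()); // smaller than 48 for a greeting this long
        canvas.dispose();
    }
}
```

You'd call fitFont() with the image width (minus a margin) before the setFont() call in the listing above.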

It would have been clever of me to get the width of the text and shrink the font size until the requested text fits on the image. We’ll call it an exercise for the reader. (You’ll likely call it laziness on the part of the programmer.) Moving right along, the next step is to get the date using the locale (the language and country fields) and the dateFormatString specified in the JSON data:

SimpleDateFormat sdf =
       new SimpleDateFormat(dateFormatString,
                            new Locale(language, location));
     String dateString = sdf.format(new Date());

Once we have the date string, we draw it at the bottom of the image just like we drew the greeting at the top.

At last we have the updated image. The Coderland logo, the greeting, and the date stamp are all overlaid on the image, so it’s time to create some JSON and send it back. To do that, we put the data in a ByteArrayOutputStream, use Java’s Base64 utility classes to encode it, then create a new Image object. (That’s our Image class that represents the JSON data. As we said, it’s an awful name for a class.)

Finally, we return that Image object.

ByteArrayOutputStream overlaidImage =
       new ByteArrayOutputStream();
     ImageIO.write(targetImage, imageType, overlaidImage);
     overlaidImageData = (Base64.getEncoder().
                encodeToString(overlaidImage.toByteArray()));
 
     Image updatedImage =
       new Image(overlaidImageData, "JPG", greeting,
                 language, location, dateFormatString);
     return updatedImage;

Notice that throughout the proceedings, no JSON data was seen. Jackson does a great job of handling all the details for us, letting us focus on image processing instead of JSON parsing.

Now you know how to build and run the code and you understand how it works. You also no doubt noticed that it’s really tedious to send the code some data, take the response apart, and see what the generated image looks like. Fortunately, we’ve got a much, much, much simpler way to test the code.

The React front end

I was talking with The Awesome Don Schenck as I was putting this material together, and he had a great idea. (To make things less verbose, I’ll use “Don” as an abbreviation for “The Awesome etc.”) Why not create a React web application that accesses the webcam on your machine, then sends the image from the webcam to the service whenever you click a button? That, of course, is a splendid notion, and we’re lucky Don offered to write the web app for us.

To get started, switch to another terminal window and clone Don’s repo:

git clone <URL of Don’s photo booth repo>

Next, type npm install and then npm start to set up and run Don’s code. It should open a tab in your system’s default browser; if it doesn’t, go to localhost:3000. You’ll see a display something like this:

[Screenshot: the photo booth web app capturing a live image from the laptop webcam]

As you can see from the poorly lit image above, the browser is capturing my image through my laptop’s webcam. (Sorry, it was late and I didn’t feel like making a pretty picture for you.) Make sure the image manipulation service is running, click the “Take Picture” button, and you’ll see the transformed image at the bottom of the page:

The image captured from the webcam has been modified and is now displayed at the bottom of the screen. (You can also see what I meant earlier when I said it would be nice to change the margins and text size based on the size of the image.)

Sadly, yr author is woefully unqualified to discuss the wonders of Don’s code. With characteristic humility, Don said the code wasn’t any big deal, just basic React plus the node-fetch and react-webcam libraries. I do believe that anyone with React skills will find the code straightforward. Here’s hoping Don will have the time to explain his code in more detail soon. We’ll be thrilled to replace this paragraph with that content.

What's next

The obvious next step is to move on to Part 3. In that article you'll take the code and deploy it to Knative. Then you'll see how to monitor the service in the OpenShift console, how to invoke it from the command line, and finally how to use Don's front end with the serverless code managed by Knative.

And have we mentioned it brings us joy when you send us your comments and questions? Reach out to us at coderland@redhat.com; we'd love to hear from you.

Last updated: January 12, 2024