
If your organization is adopting event-driven and serverless architectures, you're probably evaluating the Apache Kafka and Knative projects to help create your next generation of applications. Apache Kafka provides a robust, high-performance, high-availability event-streaming platform. Knative provides a platform for managing serverless workloads on Kubernetes.

Applications built as Knative services are designed to scale up in response to incoming events that are delivered via HTTP. A serverless application that scales based on incoming HTTP requests may seem like it would be incompatible with the persistent connection model that Apache Kafka uses. Knative provides a neat solution to this problem via event sources, specifically the KafkaSource for Apache Kafka.

In this article, you'll learn how to create an event-driven architecture that uses Apache Kafka and Knative. You can use the free quota provided by the Developer Sandbox for Red Hat OpenShift and Red Hat OpenShift Streams for Apache Kafka to follow along. There is no need to provision your own Kubernetes and Kafka infrastructure when using these services.

A sample application for Kafka and Knative

My previous articles on Apache Kafka used Shipwars, a browser-based game that can be run on Red Hat OpenShift, to demonstrate key concepts. This game features a bonus round where players can rapidly click to score extra points. It's possible to cheat in this round using Chrome's JavaScript DevTools. In this article's sample application, you'll use Knative to analyze events generated by the Shipwars backend to spot potential cheaters.

Figure 1. Architecture for the Shipwars cheat detection feature.

Figure 1 is a high-level architectural diagram for this example. The game server sends events to a managed Kafka cluster. A separate auditing service processes the events in real time to flag suspicious bonus payloads. These flagged payloads can be published to a Knative broker as new events for downstream services to process, whether to issue a ban or to email a support team and trigger an investigation. A more detailed view, showing the specific Knative resource types used in the example, is shown in Figure 2.

Figure 2. Knative resources used to implement the demo.

In the next few sections, you'll get a brief overview of the instructions for deploying the sample application. You can find detailed deployment instructions in the application's GitHub repository.

Create a managed Kafka instance and topic

Before getting started, make sure you've signed up for a Red Hat account so you can try the Red Hat OpenShift Streams for Apache Kafka service for free. Next, download the Red Hat OpenShift Application Services command-line interface (rhoas) and use the following commands to create a Kafka instance:


rhoas login 
rhoas kafka create --name shipwars --wait

Your managed Kafka instance should be ready within five minutes. Once it's ready, select it and create a topic named shipwars-bonuses using the command-line interface (CLI).


rhoas kafka use
rhoas kafka topic create --name shipwars-bonuses --partitions 3

Link your Kafka instance and OpenShift project

Sign in to the Developer Sandbox using your Red Hat account to gain access to your own dedicated OpenShift environment. Find the OpenShift CLI download and login command, as shown in Figure 3.

Figure 3. OpenShift CLI installation and login instructions in the Developer Sandbox.

Install the OpenShift CLI (oc) and log in to the cluster from the CLI using the command provided by the Copy Login Command link at the top of the page:


oc login --token=<your-token> --server=<sandbox-api-url>

Now that you're logged into the OpenShift cluster, you can create a connection between your OpenShift project and managed Kafka instance using the rhoas CLI. Enter the following command and follow the prompts to complete the process:


rhoas cluster connect --service-type kafka

The managed Kafka instance will be shown in the Topology view once the command completes, as illustrated in Figure 4.

Figure 4. The OpenShift Topology view showing the Kafka connection resource.

Configure Kafka topics and ACLs

Prior to deploying applications that connect to your managed Kafka instance, you need to configure your instance's access control lists (ACLs) with the following commands:


# Login to your OpenShift cluster
oc login --token=<your-token> --server=<your-cluster-api-url>

# Login to RHOAS
rhoas login

# Select the Kafka cluster that's connected to your OpenShift environment
rhoas kafka use

A service account was created when you used the rhoas cluster connect command earlier. This account will be used by containers running in your OpenShift project to authenticate with your managed Kafka instance using the Simple Authentication and Security Layer (SASL) framework. You'll grant the service account produce and consume permissions for Kafka topics whose names start with the shipwars prefix.


# Obtain the service account ID from the OpenShift cluster
export CLIENT_ID=$(oc get secret rh-cloud-services-service-account -o jsonpath='{.data.client-id}' | base64 --decode)

# Grant produce and consume permissions for topics prefixed with
# "shipwars" to this service account; consumers must belong to the
# "knative-consumer" consumer group

rhoas kafka acl grant-access --producer --consumer \
--service-account $CLIENT_ID --topic-prefix shipwars --group knative-consumer

You can review the permissions for your Kafka instance using the rhoas CLI, or, as shown in Figure 5, in the OpenShift Streams for Apache Kafka UI.

Figure 5. Reviewing permissions in the OpenShift Streams for Apache Kafka UI.

Deploy a Knative broker and service

A Knative broker provides an event mesh for collecting and routing events to your serverless applications. The sample application's cheat-detection service will post results for each processed bonus event, in CloudEvent format, to the Knative broker. A Knative trigger can be used to route these results to other serverless applications for further processing within the Kubernetes cluster that has Knative installed. Red Hat OpenShift Serverless is based on Knative, and OpenShift Developer Sandbox has the OpenShift Serverless Operator installed and configured, so we can use this functionality.

Deploy a broker using the YAML included in the sample application's GitHub repository:


oc apply -f openshift/broker.yml
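The repository's openshift/broker.yml is the source of truth for this step, but a minimal Broker definition generally looks like the following sketch (the metadata.name shown here is an assumption; the repository's file may differ):

```yaml
# Minimal Knative Broker sketch. The actual definition used by the
# sample application lives in openshift/broker.yml in its repository.
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default   # assumed name; check the repository's YAML
```

A Broker with no additional configuration uses the cluster's default broker class, which is typically sufficient for routing CloudEvents between services in a single namespace.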

You can verify that the broker has entered the ready state using oc's get brokers command. Once the broker is ready, deploy the serverless cheat detection application using the following command:


# The cheat detection service will HTTP POST results to this URL
export BROKER_URL=$(oc get brokers -o jsonpath='{.items[0].status.address.url}')

# Deploy the serverless function
oc process -f openshift/knative.service.cheats.yml \
-p BROKER_URL=$BROKER_URL | oc create -f -

Verify that the components in your OpenShift Topology view match those in Figure 6. The Knative service will initially scale up some pods, but will scale back to zero after 30 seconds of inactivity.

Figure 6. Topology view showing a Knative broker and a Knative service.
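The scale-to-zero behavior comes from Knative Serving's autoscaler. The full Service definition is in openshift/knative.service.cheats.yml; the sketch below shows the relevant shape, with the image reference and annotation values left as assumptions rather than copied from the repository:

```yaml
# Sketch of a Knative Service that scales to zero when idle.
# The real definition is in openshift/knative.service.cheats.yml.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: cheat-detection
spec:
  template:
    metadata:
      annotations:
        # "0" permits scale-to-zero; this is also Knative's default
        autoscaling.knative.dev/minScale: "0"
    spec:
      containers:
        - image: <cheat-detection-image>   # placeholder image reference
          env:
            - name: BROKER_URL
              value: <broker-url>   # injected via the template parameter
```

With minScale set to 0, the autoscaler removes all pods after the configured idle window elapses, and creates new pods on demand when the KafkaSource delivers an event.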

Sourcing events

A KafkaSource creates a persistent connection to the Kafka cluster that will fetch records from a Kafka topic and HTTP POST the records to a chosen URI or Knative service.

Create a KafkaSource to send records from the shipwars-bonuses topic to the cheat-detection service:

  1. Click the +Add link on the left in the OpenShift Developer Sandbox.
  2. Select Eventing > Event Source > Kafka Source.
  3. Complete the Form view by entering the following information:
    • Bootstrap Servers: Auto-complete using your linked managed Kafka bootstrap URL.
    • Topics: shipwars-bonuses.
    • Consumer Group: knative-consumer.
    • SASL: Enable SASL and use the client ID and the client secret from rh-cloud-services-service-account as the User and Password.
    • TLS: Enable TLS. Leave the other TLS options at the defaults.
    • Sink Resource: Choose the cheat-detection Knative service.
  4. Submit the form to create the KafkaSource.
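The form steps above correspond roughly to a KafkaSource resource like the following sketch. The field layout follows the Knative Eventing KafkaSource API; the resource name, bootstrap server placeholder, and the client-secret key name are assumptions, while the secret name rh-cloud-services-service-account and its client-id key match the earlier ACL step:

```yaml
# Sketch of the KafkaSource the form creates; verify field values
# against your own environment before applying.
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: shipwars-bonuses-source   # assumed name
spec:
  bootstrapServers:
    - <your-kafka-bootstrap-url>:443   # placeholder
  topics:
    - shipwars-bonuses
  consumerGroup: knative-consumer
  net:
    sasl:
      enable: true
      user:
        secretKeyRef:
          name: rh-cloud-services-service-account
          key: client-id
      password:
        secretKeyRef:
          name: rh-cloud-services-service-account
          key: client-secret   # assumed key name
    tls:
      enable: true
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: cheat-detection
```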

Your Topology view will update to reflect that the KafkaSource is bridging events from your managed Kafka instance to the cheat-detection service, as shown in Figure 7.

Figure 7. The KafkaSource consumes messages from Kafka and sends them via HTTP POST to the cheat-detection service.

Testing your application

The GitHub repository for this example includes an application that can generate bonus payloads and send them to Kafka. Deploy this service on the Developer Sandbox using oc:


# Build and deploy the Node.js producer
oc new-app https://github.com/evanshortiss/shipwars-cheat-detection \
-l "app.openshift.io/runtime=nodejs" \
--context-dir=bonus-producer \
--name bonus-producer

# Expose an HTTP endpoint to access the producer
oc expose svc/bonus-producer

While the application deployment is in progress, drag the arrow from the Node.js application to the Kafka connection shown in the Topology view, as illustrated in Figure 8. This injects the Kafka connection details into the Node.js application container.

Figure 8. Use drag and drop to create a service binding that injects the Kafka connection details into the application.

Make an HTTP request to the application's /bonus HTTP endpoint. Here's an example using cURL:


export BONUS_HOST=$(oc get route bonus-producer -o jsonpath='{.spec.host}')
curl http://$BONUS_HOST/bonus

A cheat-detection container will start immediately after the HTTP request is complete, as shown in Figure 9. This is because the KafkaSource receives the bonus payload from the shipwars-bonuses topic and forwards it via HTTP to the cheat-detection service.

Figure 9. The complete architecture showing the data flow between the services.

You can use the oc logs command to view the new CloudEvents generated by the cheat-detection pod and sent to the Knative broker. You should get output that looks like the listing below, showing that these consist of a set of HTTP headers and an HTTP body that are sent to the Knative broker you created earlier.


oc logs -l app=cheat-detection-00001 -c user-container -f

{
    body: '{
      "match": "daWmkm97WwSlFs94AbDl9",
      "game": "daWmkm97WwSlFs94AbDl9",
      "by": {
          "username": "Silent Iguana",
          "uuid": "bOAcrCh3kN8GMa00a7PPL"
      },
      "shots": 21,
      "human": false
    }',
    headers: {
      "content-type": "application/json; charset=utf-8",
      "ce-id": "b18e59cd-ef1f-4914-a039-cdeb87a6fcd2",
      "ce-time": "2022-01-26T01:06:27.285Z",
      "ce-type": "audit.pass.bonus",
      "ce-source": "cheat-detector",
      "ce-specversion": "1.0"
    }
}

Conclusion

Knative provides an excellent platform for creating event-driven serverless architectures on Kubernetes. It provides a plethora of source connectors out of the box that simplify integration with existing systems. Another aspect of Knative worth mentioning is the Knative CLI, or kn; in this article we used oc to apply pre-defined Knative resource definitions, but kn provides an excellent experience when exploring and developing applications with Knative.

You used Red Hat's managed cloud services to build this article's example application. Hopefully, this has shown you how these services provide a cloud-native platform that you can use to build and deploy applications for which Red Hat takes care of the infrastructure.

If you want to dig deeper, take a look at the example application's GitHub repository and try deploying the included email alerting service, along with the triggers that will subscribe it to fraudulent bonus payloads.
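Such a subscription is expressed as a Knative Trigger. The sketch below shows the general shape: a Trigger filters events on the broker by CloudEvent attributes and delivers matches to a subscriber. The resource name, the audit.fail.bonus event type, and the email-alerts service name are assumptions for illustration; consult the repository's trigger definitions for the real values:

```yaml
# Sketch of a Trigger routing flagged bonus events to an alerting
# service. Names and the filter type value are assumptions.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: email-alerts-trigger
spec:
  broker: default   # assumed broker name
  filter:
    attributes:
      type: audit.fail.bonus   # assumed CloudEvent type for cheaters
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: email-alerts
```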
