Getting started with Dapr on Kubernetes and Red Hat OpenShift

Dapr (Distributed Application Runtime) provides an event-driven, portable runtime for building distributed microservices. It is useful for both stateless and stateful applications in the cloud and at the network edge. A new open source project from Microsoft, Dapr embraces a diversity of languages and development frameworks, and it is a natural fit for Kubernetes and Red Hat OpenShift. This article shows you how to install Dapr and walks you through the process of building a sample application on Kubernetes.

Dapr components

The basic architecture of Dapr consists of building blocks, which in turn contain components (Figure 1). The building blocks offer distributed system capabilities such as publish and subscribe messaging, state management, resource bindings, and distributed tracing. Each building block exposes a public API that is called from your code. Table 1 describes the different types of Dapr building blocks.

Dapr offers APIs through building blocks, each containing multiple components that implement the APIs.
Figure 1: Components in a Dapr building block.
Table 1. Dapr building blocks.
Building block Function
Service-to-service invocation Perform direct, secure, service-to-service method calls.
State management Create long-running stateless and stateful services.
Publish and subscribe Provide secure and scalable messaging between services.
Resource bindings and triggers Trigger code through events from a large array of inputs, and send output to external resources such as databases and queues.
Actors Encapsulate code and data in reusable actor objects, a common microservice design pattern.
Observability See and measure message calls across components and network services.
Secrets Securely access secrets from your applications.

Components encapsulate the implementations for a building block's API. Components include Ceph, PostgreSQL, MySQL, Redis, and MongoDB. Many of the components are pluggable, so that one implementation can be swapped out for another. Each component has an interface definition.
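To make the building-block idea concrete, the sketch below calls the state management building block's public HTTP API through the local Dapr sidecar. It assumes a sidecar listening on `DAPR_HTTP_PORT` (3500 by default) and a state component named `statestore`; the helper function names are illustrative, not part of Dapr itself:

```javascript
// Sketch: calling a building block's public API (state management) through
// the local Dapr sidecar over HTTP. Requires Node 18+ for global fetch;
// the port and the "statestore" component name are assumptions.
const daprPort = process.env.DAPR_HTTP_PORT || 3500;
const stateUrl = `http://localhost:${daprPort}/v1.0/state/statestore`;

// Dapr's state API accepts an array of { key, value } records.
function toStateRecords(key, value) {
  return [{ key, value }];
}

// Persist a value under a key via the sidecar.
async function saveState(key, value) {
  const res = await fetch(stateUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(toStateRecords(key, value)),
  });
  if (!res.ok) throw new Error(`Failed to save state: ${res.status}`);
}

// Read a value back by key.
async function getState(key) {
  const res = await fetch(`${stateUrl}/${key}`);
  if (!res.ok) throw new Error(`Failed to get state: ${res.status}`);
  return res.json();
}
```

Because the application only talks to the sidecar's API, swapping the underlying component (for example, Redis for PostgreSQL) requires no code change.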

The building blocks integrate with application code and different cloud service providers, as shown in Figure 2. The architecture for Dapr on Red Hat OpenShift and Kubernetes is shown in Figure 3.

Dapr can work with multiple platforms.
Figure 2: Dapr architecture with cloud providers.
Dapr integrates as pods with Kubernetes and OpenShift.
Figure 3: Dapr components with Kubernetes and Red Hat OpenShift.

How to install Dapr

Follow the steps in this section to install Dapr in your OpenShift cluster.

First, download and run the Dapr CLI install script from GitHub:

$ wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash

Log in to your OpenShift cluster as an administrator. Check your login status:

$ oc whoami

If you are using Helm to install Dapr, you need to add the Dapr chart repository:

$ helm repo add dapr https://daprio.azurecr.io/helm/v1/repo

"dapr" has been added to your repositories

Then you can update the Helm repository to get the latest changes:

$ helm repo update

Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "dapr" chart repository
Update Complete. ⎈ Happy Helming!⎈

Create a new namespace in OpenShift:

$ oc create namespace dapr-system
namespace/dapr-system created

Install Dapr to the new namespace that you just created:

$ helm install dapr dapr/dapr --namespace dapr-system

NAME: dapr
LAST DEPLOYED: Mon Apr 6 12:48:40 2020
NAMESPACE: dapr-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing Dapr: High-performance, lightweight serverless runtime for cloud and edge

Your release is named dapr.

To get started with Dapr, we recommend using our samples page:
https://github.com/dapr/samples
For more information on running Dapr, visit:
https://dapr.io

The following Dapr pods will be created:

  • dapr-operator: Manages components and services endpoints for Dapr (state stores, pub-subs, etc.).
  • dapr-sidecar-injector: Injects Dapr into annotated pods.
  • dapr-placement: Used for actors only. Creates mapping tables that map actor instances to pods.
  • dapr-sentry: Manages transport layer security and acts as a certificate authority.
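The dapr-sidecar-injector only acts on pods that opt in through annotations on the pod template. As a sketch (the annotation names match Dapr releases from around this time; the label and values shown are from the sample deployment), a Deployment's pod template enables injection like this:

```yaml
# Illustrative pod template metadata: dapr.io/id names the app for Dapr,
# and dapr.io/port is the port the app's HTTP server listens on.
template:
  metadata:
    labels:
      app: node
    annotations:
      dapr.io/enabled: "true"
      dapr.io/id: "nodeapp"
      dapr.io/port: "3000"
```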

Check your pods' status to make sure that they are in the running state:

$ oc get pods -n dapr-system -w

NAME                                     READY   STATUS    RESTARTS   AGE
dapr-operator-7d9668fd8b-skmbh           1/1     Running   0          68s
dapr-placement-7dbcc6bf59-zxx4s          1/1     Running   0          68s
dapr-sentry-756f7799fd-xm57l             1/1     Running   0          68s
dapr-sidecar-injector-7849f77c4b-l789t   1/1     Running   0          68s

Build a sample application in Node.js

This section shows how to get Dapr running in Red Hat OpenShift and deploy a Node.js application that handles retail orders. Another application, written in Python, generates messages containing the orders. The Node.js application subscribes to messages and persists them, as shown in Figure 4.

The example application offers GET and POST endpoints, mediated by Dapr, and persistent state stores.
Figure 4: GET and POST endpoints, mediated by Dapr, and persistent state stores.

Figure 5 shows the Dapr components for the producer and consumer applications.

The Dapr APIs communicate with the example's two apps.
Figure 5: Dapr and the example's two apps.

The implementation uses Redis as a state store for data persistence.

To access the code, clone the following repository:

$ git clone https://github.com/dapr/samples.git

$ cd samples/1.hello-world

The JavaScript code for the order POST endpoint follows. The application persists the order information in Redis:

// Setup near the top of the sample's app.js, shown here so the handler
// is self-contained:
const express = require('express');
const bodyParser = require('body-parser');
require('isomorphic-fetch');

const app = express();
app.use(bodyParser.json());

const daprPort = process.env.DAPR_HTTP_PORT || 3500;
const stateUrl = `http://localhost:${daprPort}/v1.0/state/statestore`;

app.post('/neworder', (req, res) => {
  const data = req.body.data;
  const orderId = data.orderId;
  console.log("Got a new order! Order ID: " + orderId);
  const state = [{
    key: "order",
    value: data
  }];
  fetch(stateUrl, {
    method: "POST",
    body: JSON.stringify(state),
    headers: {
      "Content-Type": "application/json"
    }
  })
  .then((response) => {
    if (!response.ok) {
      throw "Failed to persist state.";
    }
    console.log("Successfully persisted state.");
    res.status(200).send();
  })
  .catch((error) => {
    console.log(error);
    res.status(500).send({message: error});
  });
});

The code for the order GET endpoint follows. The application retrieves the latest order information from Redis:

app.get('/order', (_req, res) => {
  fetch(`${stateUrl}/order`)
  .then((response) => {
    if (!response.ok) {
      throw "Could not get state.";
    }
    return response.text();
  }).then((orders) => {
    res.send(orders);
  }).catch((error) => {
    console.log(error);
    res.status(500).send({message: error});
  });
});
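The Python producer's code isn't reproduced in this article. As a rough sketch of what it does, translated to JavaScript for consistency with the rest of the sample, it posts a new order through Dapr's service invocation API via its own sidecar (the `nodeapp` app ID comes from the deployment; the one-second loop and helper names are illustrative):

```javascript
// Sketch of the producer side: invoke another service's method through
// the local Dapr sidecar. Requires Node 18+ for global fetch; the loop
// details are illustrative, not the sample's exact code.
const daprPort = process.env.DAPR_HTTP_PORT || 3500;

// Dapr's service invocation URL: /v1.0/invoke/<app-id>/method/<method>.
function invokeUrl(appId, method) {
  return `http://localhost:${daprPort}/v1.0/invoke/${appId}/method/${method}`;
}

// Post one order to the Node app's /neworder endpoint.
async function sendOrder(orderId) {
  await fetch(invokeUrl("nodeapp", "neworder"), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ data: { orderId } }),
  });
}

// Send an order with an increasing ID every second:
// let n = 1;
// setInterval(() => sendOrder(n++), 1000);
```

Note that the producer addresses the consumer by app ID, not by a Kubernetes service name; Dapr resolves the target and routes the call between sidecars.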

Deploy the application on Red Hat OpenShift

Install Redis in OpenShift from a Helm chart as follows:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

$ helm install redis bitnami/redis
NAME: redis
LAST DEPLOYED: Mon Apr 6 12:58:12 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

Extract the secret from the default namespace for Redis:

$ oc get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode
pbOe0CgPsu

Now update the redis.yaml file in the deployment directory. Change redisHost to redis-master:6379 and redisPassword to the value extracted in the previous step:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: redis-master:6379
  - name: redisPassword
    value: pbOe0CgPsu

Create the Redis resource in OpenShift:

$ oc apply -f deploy/redis.yaml
component.dapr.io/statestore created

Then, create the Node.js and Python applications:

$ oc apply -f deploy/node.yaml
service/nodeapp created
deployment.apps/nodeapp created

$ oc apply -f deploy/python.yaml
deployment.apps/pythonapp created

To examine your Dapr pods in OpenShift, run the following command:

$ oc get pods -n dapr-system -w
NAME                                     READY   STATUS    RESTARTS   AGE
dapr-operator-7c6799878d-v4stm           1/1     Running   14         54m
dapr-placement-76c99b79bb-n6sfw          1/1     Running   0          54m
dapr-sentry-5644b86cf9-8fjpv             1/1     Running   0          54m
dapr-sidecar-injector-84c5578f8d-d6dp4   1/1     Running   0          54m
nodeapp-548959b4b9-5rdnl                 2/2     Running   0          21m
pythonapp-79c9b55c8f-p7gng               2/2     Running   0          18m
redis-master-0                           1/1     Running   0          52m
redis-slave-0                            1/1     Running   1          52m
redis-slave-1                            1/1     Running   0          49m

You can test the Dapr application by viewing the logs from the Node.js pod, which show the messages being consumed:

$ oc logs pod/nodeapp-548959b4b9-5rdnl -c node

Node App listening on port 3000!
Got a new order! Order ID: 1
Successfully persisted state
Got a new order! Order ID: 2
Successfully persisted state
Got a new order! Order ID: 3
Successfully persisted state

You can also expose a route from the nodeapp:

$ oc expose svc nodeapp
route.route.openshift.io/nodeapp exposed

Finally, invoke the nodeapp order endpoint to confirm the successful persistence:

$ curl nodeapp-default.apps-crc.testing/order
{"orderId":"42"}

You have just successfully deployed a Dapr application on OpenShift. To adapt the sample code to your own scenario, fork the GitHub repository and update it from there.

Conclusion

Dapr with Kubernetes or Red Hat OpenShift enables the easy development of event-driven, stateful microservices. Dapr also provides consistency and portability via standard open APIs. It is an open source project that works well with numerous programming languages and development frameworks.
