Summit 2020

Accelerating Applications: Developing Multi-Microservice Solutions

Intermediate | Application development

Chris Bailey

Talk Text

Chris Bailey (00:04):

Hi. In this session, we're going to talk about what it takes to develop multi-microservice solutions. So, rather than single microservices and how to package those and get those running in OpenShift, we're going to be talking about some of the new technology that's available to help multiple microservices, combined with services like databases, connect to each other and build multi-microservice solutions.

 

Chris Bailey (00:30):

So, cloud-native development itself is not straightforward, but it's fairly well understood. It's about building applications that follow 12-factor principles, adding things like liveness and readiness checks so Kubernetes and OpenShift will restart it or take it out of the load-balancing rotation if it's not ready. It's about adding metrics and observability, packaging it in a container, adding Kubernetes YAML, and then deploying it to OpenShift.

 

Chris Bailey (01:02):

But when we talk about cloud-solution development, we have to go beyond the single microservice. That involves how to do things like service discovery: how you get two microservices to bind to each other dynamically so they locate where the URL is and how to use that second service. It's also about how you add configuration that spans multiple microservices, whether that's shared capabilities like Kafka or a messaging engine, or configuration that applies across the entire application at the application level. It's also about how you do change control and multi-environment support if you need to deal with a development environment, an integration environment, preproduction, and production. And then how you have combined observability that's not about one microservice but that's actually at the application level.

 

Chris Bailey (01:57):

So, let's start by actually looking at what it takes to build and deploy a single microservice. So, let's assume we're building a Node.js application using the Express framework. So, you have the Express framework. You'd add Connect middlewares and route handlers, and then you'd write your Node.js application code. And that gives you a Node.js Express application. You then need to add things like liveness and readiness probes, as I said, so that OpenShift will restart it or take it out of load balancing if it's not ready. You then add metrics and observability, so maybe a /metrics endpoint for Prometheus and Grafana. And you'd add things like JSON-format logging, so it can be used in the logging frameworks. Then you Dockerize it: You add a Dockerfile and a Docker container so that you've got a Node.js runtime. And finally you have microservice A, so you have your microservice.

 

Chris Bailey (03:00):

Now, the next thing you need to do is add a set of Kubernetes YAML to deploy it to Kubernetes or OpenShift. And that's probably a deployment, a service, an ingress (so you can actually connect to it externally), maybe horizontal pod autoscaling, and a service monitor, which is how you register it with Prometheus and Grafana for metrics. And once you've done that, you can deploy it onto the OpenShift platform, and you have running microservice A. And because of some of the things you've added, that's going to show up in the OpenShift topology viewer. You're going to get application metrics: hopefully things like open tracing and logging. So, you've got visibility of that deployed microservice. But as I said, that's just one microservice. And we want to talk about how you build multi-microservice solutions. So, there are a few emerging technologies that are going to help you do that.

 

Chris Bailey (03:58):

The first of those is the Runtime Component Operator, which is currently in beta and available from the OperatorHub. So, what the Runtime Component Operator does is—the first thing it will do for you is simplify all of that Kubernetes YAML. So, rather than having to worry about deployments and services and ingresses and auto-scaling and service monitors independently, you can actually, with the Runtime Component Operator installed into the platform, worry about just one file. Let's call that app-deploy.yaml. So, what the app-deploy.yaml does is, it provides an overall definition of your deployment, and the Runtime Component Operator itself will then create all of those additional configuration files for the deployment, the service, the ingress, etc. for you. So, if we look at it, first of all: "expose." If you set "expose" to "true," that will set up an ingress for you.
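As a sketch, an app-deploy.yaml for microservice A might look like the following. The apiVersion, kind, and field names are assumptions based on the beta Runtime Component Operator of the time, and the image name is hypothetical — check them against the CRD actually installed in your cluster.

```yaml
# app-deploy.yaml — one file describing the whole deployment.
# Sketch only: apiVersion, kind, and fields are assumptions against the
# beta Runtime Component Operator CRD; the image reference is hypothetical.
apiVersion: app.stacks/v1beta1
kind: RuntimeComponent
metadata:
  name: microservice-a
spec:
  applicationImage: registry.example.com/storefront/microservice-a:1.0
  service:
    port: 3000        # the default Node.js Express port
  replicas: 2
  expose: true        # creates an Ingress — or, on OpenShift, a Route
```

From this single resource, the operator generates the deployment, service, ingress/route, and related objects for you.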

 

Chris Bailey (04:59):

If you're on OpenShift, it'll also set up a route for you. So, it detects which platform it's on, and it creates the right resources in order to expose that service. You can also set up a liveness and a readiness probe. That's done declaratively, and that sets the URL, how frequently it should be checked, and so on. You can also set up monitoring. So, by saying, "I want monitoring, and I'm just going to label myself," this creates that service-monitor definition that Prometheus and Grafana are looking for. And between all of those, that sets up everything that's required for it to integrate with all of the visibility and observability that's in OpenShift. But it doesn't actually stop there. You can also turn on Knative serving. And if you do this, it'll deploy it as a serverless Knative Serving service. And that actually generates a completely different set of YAML definitions under the covers: rather than creating a deployment and a service, it creates a Knative Service and so on.
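Continuing the sketch, the probes, monitoring, and Knative options described here would be additional fields on the same spec (again, the exact field names are assumptions against the beta CRD):

```yaml
# Additional fields on the same RuntimeComponent spec (sketch; field names
# are assumptions against the beta Runtime Component Operator CRD).
spec:
  livenessProbe:
    httpGet:
      path: /live
      port: 3000
    periodSeconds: 10          # how frequently it should be checked
  readinessProbe:
    httpGet:
      path: /ready
      port: 3000
  monitoring:
    labels:
      app-monitoring: "true"   # generates the ServiceMonitor Prometheus looks for
  createKnativeService: true   # deploy as a serverless Knative Service instead
```

Flipping that last flag is the single-setting switch between a conventional replicated deployment and a serverless one.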

 

Chris Bailey (06:01):

So, whether you want a server with a number of replicas or you want serverless just becomes a single flag that's applied in the configuration. But again, at this point, we're still talking about a single microservice. And we want to talk about multi-microservice solutions. So, let's expand the view and go from having just microservice A to having microservice B as well and have both of those connect to a back-end microservice called microservice C. So, for this to happen, the two front-end microservices A and B need to know where microservice C is in order to connect to it. Now, if we look at the default URLs you'll get for each of these, microservice A and microservice B will be found at the name of the microservice, dot the namespace (in this case storefront-dev), dot svc (for service), dot cluster.local, and then the port number that the service is exposed on. In this case, that's 3000, because they're Node.js, and that's the default Node.js Express port.

 

Chris Bailey (07:11):

So, if you wanted to connect to those services, you'd need to know that URL. Now, the back-end service has the same format, but this one runs on port 9080. And that's because in this example, it's a Java service, and that's the default port for that Java framework. So, for A and B to connect to C, they need to know the address, microservice-c.storefront-dev.svc.cluster.local:9080. And that's not something that should be baked into the service itself. So, what we do is, we simplify that and make it dynamic using microservice binding inside the Runtime Component Operator. So, to add microservice C so that it can be connected to by something else, you add a set of configuration to the file, which is "provides." So, you say, "This service provides an endpoint of type OpenAPI" (it's really a REST endpoint), "and its context is /."

 

Chris Bailey (08:16):

So, you want to connect just on the root URL on that service. And that's all you need to provide, because it already knows about the port that it's on and its location. Now, what that does is, it ends up creating the secrets inside the deployment for microservice C. And it takes that URL address, microservice-c.storefront-dev.svc.cluster.local:9080, and it embeds it into the secret. And that means there's now a secret that can be consumed by another microservice. So, the next thing that happens is—let's look at what we would do in the deployment definition for microservice A to consume it. Well, you add a set of configuration called "consumes" that says it wants to consume a service of type OpenAPI, so a REST service, called microservice C. And that's all that's required.
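Putting the two halves of the binding together, a sketch of the relevant fragments of the two deployment definitions might look like this (the exact nesting of provides/consumes is an assumption against the beta operator's schema):

```yaml
# Fragment of microservice C's app-deploy.yaml — it declares what it provides.
# Sketch; field nesting is an assumption against the beta operator schema.
spec:
  service:
    port: 9080
    provides:
      category: openapi   # a REST endpoint
      context: /          # bind on the root URL
---
# Fragment of microservice A's app-deploy.yaml — it declares what it consumes.
spec:
  consumes:
    - category: openapi
      name: microservice-c
```

The operator turns the "provides" side into a secret containing the address, and the "consumes" side into an injection of that secret into microservice A.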

 

Chris Bailey (09:12):

What that's going to do is, then it's going to take the secret from microservice C and inject it into microservice A, so dynamic configuration is injected into microservice A to know where microservice C is. And that means if we move the services so that address changes or microservice C chooses to change which port it runs on, microservice A doesn't need to know about this, because it's all dynamically configured.
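On the consuming side, the injected configuration typically surfaces as environment variables. A small hedged sketch in Node.js — the variable name MICROSERVICE_C_URL is hypothetical, since the operator derives the real names from the injected secret's keys:

```javascript
// Sketch of how microservice A might pick up the injected binding.
// MICROSERVICE_C_URL is a hypothetical name: the operator derives the actual
// environment-variable names from the consumed service's secret keys.
function backendUrl(env = process.env) {
  // Fall back to a local address for development outside the cluster.
  return env.MICROSERVICE_C_URL || 'http://localhost:9080/';
}
```

Because the code only ever reads the injected value, moving namespaces or changing microservice C's port requires no change to microservice A.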

 

Chris Bailey (09:41):

So, that is important for a few reasons. And one of those is about having multi-namespace support as you move through those environments: maybe development, integration, preproduction, production. So, let's assume we want to have storefront-staging. So, I want to move my application from dev to staging. Well, the effect of that is microservice C's address changes. It's still called microservice C, but that middle value, storefront-dev, changes to storefront-staging, because it's now in the staging namespace. And previously microservice A would need to know about that address change. But what happens here is, the secret is created for microservice C with that new address. And that new address is then consumed by microservice A and microservice B. So, there are no code changes required. There are no configuration changes required. Just deploying the same thing into a different namespace will automatically work, and they'll automatically discover each other and provide that connection between the microservices.

 

Chris Bailey (10:49):

Now, that's just microservices talking to microservices. But what about talking to actual backing services like Postgres? Well, that's where the Service Binding Operator comes in, which is currently in alpha. And again, you can install it from OperatorHub and use it inside OpenShift. So, what the Service Binding Operator does is very, very similar. So, I install the Service Binding Operator, and with a binding-enabled service like Postgres, what you can do is, you can create a service-binding-request definition. And what that's doing is, it's saying, "Find the resource called psql" (so, Postgres) "of kind Database" (so it looks for a deployment of this type) and basically request that a secret is generated that contains its location and credentials. So, it takes the location and puts it into the secret. It also takes any username or password credentials that are going to be required by the consumer and also puts those into the secret.

 

Chris Bailey (12:02):

And then finally we tell it where to put that secret. So, we set an application selector that says, "I want that to be injected into the deployment for microservice C." And that's how all of those credentials are then automatically injected into microservice C so that it will dynamically bind to the Postgres database. And the effect of this is that we have the same ability to move between environments and not break our configuration to the database. So, let's look at the next part of that. So, we've talked about how we can dynamically bind microservices, and we can dynamically bind services and that allows us to move between environments without anything breaking.
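A sketch of what such a ServiceBindingRequest might look like with the alpha Service Binding Operator. The group/version values and the Database kind are illustrative assumptions drawn from typical alpha-era examples, not guaranteed identifiers — verify them against the operator version you have installed:

```yaml
# Sketch of an alpha-era ServiceBindingRequest. All group/version/kind values
# here are assumptions for illustration; check your installed CRDs.
apiVersion: apps.openshift.io/v1alpha1
kind: ServiceBindingRequest
metadata:
  name: microservice-c-postgres
spec:
  backingServiceSelector:      # "find the resource called psql, of kind Database"
    group: postgresql.example.dev
    version: v1alpha1
    kind: Database
    resourceRef: psql
  applicationSelector:         # inject the generated secret into microservice C
    group: apps
    version: v1
    resource: deployments
    resourceRef: microservice-c
```

The operator collects the database's location and credentials into a secret and injects it into the selected deployment, so microservice C binds to Postgres without any hardcoded configuration.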

 

Chris Bailey (12:47):

Now, the way we control moving between environments, and control overall application environments, is through GitOps with Kustomize. Again, this is something that we're working on; tooling and conventions for OpenShift are being built, and you can see that work in the Git projects below. What GitOps does is—GitOps is about having a Git repository that is the single source of truth for your application and everything that's deployed into the namespace. It provides the application-level configuration, so where you put configuration that isn't specific to one microservice but is actually at the application or the environment level.

 

Chris Bailey (13:30):

It deals with how you move applications and promote applications between different environments. So, again, in the sense of development, integration, preproduction, production. It has change control, because Git is a source-control repository. And again, as I said, it has that promotion of how we take the new version of a microservice that the development team has just built and put it into development. Once it passes in development, and you're happy to move it to integration or preproduction, how you do that promotion, and how you eventually decide that we want to put this into production.

 

Chris Bailey (14:08):

So, this is designed to be owned by the operations team, and it's designed such that developers only work on their own microservice projects. Those microservice projects will get built, and at the end of the build, it will promote the fact that there is a new built microservice into the GitOps repository. And from the GitOps repository, there is then continuous deployment, so as changes occur in the GitOps repository, those are then deployed into our OpenShift environment.

 

Chris Bailey (14:40):

Now, source gets built. That gets propagated into the GitOps repository. That then gets deployed into OpenShift. So, there's a continuous flow: changes are checked into source repositories, get successfully built, get promoted, and then get deployed into development. Now, if I wanted to have multiple spaces—so, I wanted to have development and staging and production—and I wanted this ability to promote through those environments, then what I would actually do is, I would have three repositories. Each repository represents an environment that you want to deploy into. So, to have three environments, I have a GitOps repository for dev, one for staging, one for production. And each of those has a continuous deploy to its own target namespace. Because we want to automatically put things into development when those individual microservices are built and a new version is available, those automatically get promoted into the dev GitOps repository, and from there, they get deployed into the dev namespace.

 

Chris Bailey (15:52):

Now, the way that I decide that I want to move something from dev to staging is using a CLI tool that's part of that project. And this CLI lets me do a promote, from dev to staging, of a named service; in the actual usage of the CLI, it's the dev and staging repository URLs that you would use. But what this will do is, it will take microservice A as deployed in the dev space and say, "I want to move that so that it can now be in staging as well and deploy it into the staging space." So, the effect of that is the microservice is moved from dev to staging, and then it's deployed into the staging space.

 

Chris Bailey (16:37):

This gives us the ability to continually promote services, while having everything that developers do show up in dev immediately. Now let's look at the format of the GitOps repository. So, the GitOps repository is structured to be effectively a hierarchy. At the top of the hierarchy, you have the overall environment. And this is where you can have environment-level configuration, and it uses a Kubernetes technology called Kustomize. What Kustomize does is, it lets you have overlays. So, you can overlay configuration that is specific to this environment. And the first thing you do in an overlay is set the namespace for all of the resources. So, the overlay for dev sets the namespace to "storefront-dev," and that overlays the namespace used by all of the resources inside the GitOps repository.
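The namespace overlay described here is standard Kustomize. A sketch of the dev overlay's kustomization.yaml (the directory layout is an assumption for illustration):

```yaml
# overlays/dev/kustomization.yaml — sketch of the dev environment overlay.
# The ../../base path is an assumed directory layout.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: storefront-dev   # overlays this namespace onto every resource below
bases:
  - ../../base              # the shared application and service definitions
```

Running `kustomize build overlays/dev` then emits all of the base resources with the storefront-dev namespace applied.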

 

Chris Bailey (17:36):

Now let's look at the next bit, which is the application. So, there's an application entry, and this is where we have application-level configuration. So, for example, this is the Application configuration from the Kubernetes SIG Apps project that describes how you declare which Kubernetes resources form part of a logical application. And that goes in at the app level. And then finally you have your microservices themselves. And these are going to be those instances of our applications. But you can also have an overlay patch, and this is what Kustomize is providing for us. So, in this case, we're overlaying a service called Catalog to say, "I want to have two replicas rather than the one that's set by the developer in the developer's configuration."
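The replica overlay is a standard Kustomize strategic-merge patch. A sketch, assuming the Catalog service is deployed as a Deployment named catalog and the patch file is listed under patchesStrategicMerge in the overlay's kustomization.yaml:

```yaml
# overlays/dev/catalog-replicas.yaml — sketch of a strategic-merge patch that
# overrides the developer's replica count for this environment.
# Assumes Catalog is a Deployment named "catalog".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog
spec:
  replicas: 2   # two replicas here, rather than the developer's default of one
```

Only the fields present in the patch are changed; everything else in the developer's definition is left as checked in.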

 

Chris Bailey (18:29):

So, that's a set of technologies that you can use together to build solutions. The Runtime Component Operator makes it easier for you to deploy a single microservice and connect to other microservices. Then you have the Service Binding Operator, and that lets you connect your microservices to services. And then you have GitOps as a way of configuring and controlling all of those components together. Now what I'm going to show you is a tech-preview technology called Solution Blueprint Builder that actually brings all of that together and makes it easy for you to set up a new multi-microservice and service solution deployment.

 

Chris Bailey (19:15):

So, Solution Builder is actually a UI tool that provides a palette of components that you can use to build a solution. So, you can build, say, a catalog microservice—so, I'm defining that I want a catalog microservice—and I'm going to connect that to a back-end database called Elastic. I'm then going to create a microservice that's going to sit in front of it called a web BFF, a web back end for front end. And that's going to bind to the catalog service. So, at the moment I've got two microservices and a database. And I can actually declare that I want as many back-end services as I want and the things that they connect to. And I could decide I wanted a mobile BFF as well. And from that mobile BFF, I connect to all the back-end services, so that's my mobile back end, and I have a web back end that connects to all of those services.

 

Chris Bailey (20:12):

And from that I can then say, "I would like to generate and deploy and set up this solution." To do that, what I do is, I set the properties. So I give it an application name, and I give it a version and so on. And I then set up the Git repository organization that I want my Git repositories to be in. And then I set it off to build. And this is actually going to set up a multi-microservice deployment for me and get it running in OpenShift to the point that developers can now start developing. Now I'll show you a demo of that working.

 

Chris Bailey (20:53):

So, this is the Solution Builder UI with the palette that I referred to. Now, this palette provides reactive microservices, REST microservices, mobile applications, Postgres databases, and so on—so, a set of components that you can use. So, I could start off by saying, "I want to have one microservice, which I'm going to call front end." And that front-end microservice I'm going to declare as being something that I want to create using, say, Node.js. Once I have my Node.js front end declared, I can then say I want to have a second microservice. And that second microservice is going to sit behind it, and I'll call that back end. So, I have a front-end microservice and a back-end microservice, and I'm going to make back end Node.js as well. I can then declare I want to add a binding between the two and just wire the two together.

 

Chris Bailey (22:03):

And this says that the front end is now connected to the back end. Finally, I'm going to say, "I want to have a database." I'm going to call that back-end DB. And I'm going to say that I need back end to connect to back-end DB. So, I've got a two-microservice solution with a database. And that's a very, very simple example of a solution. I also have the option of saying, "Rather than creating from a blank canvas, I want to create from existing ones." This is an example of a reactive microservice, or I can do the same thing with REST. And that's the example I showed you before in the slides. Now I can start by editing it. I can remove the mobile pieces because I don't want them. I can take my web BFF, and I can say my web BFF is going to be Node.js, whereas all of my back-end microservices are going to be Open Liberty, so they're running Java.

 

Chris Bailey (23:02):

I can then apply that project-level configuration to call it storefront 1.0, and I'm going to set the Git organization that I would like all of my source projects and my GitOps repository to be set into. And I'm going to declare that I want to have a dev environment set up ahead of time, but I won't have staging or production. So, just a dev environment to begin with. I can then give it my Git user ID and token and hit "generate." This is now going to set up a set of Git source projects and a GitOps project that represents everything that I just declared. So, it's starting to build that, and it will track the process of building those and checking those into the Git organization for me.

 

Chris Bailey (23:53):

Now, once that's happened, I have my Git organization prepopulated with a set of projects, each of which has template source code that's ready to be built and deployed. And it includes the deployment configuration. So, the web BFF is just Node.js, and the other microservices are Java. So, this is Java with Maven using a framework called Open Liberty. Those have all been set up, with the configuration inside them for the bindings that they need to their dependencies. And I also have a GitOps project that represents the overall deployment. Now, in OpenShift, we have a set of pipelines for this, and these pipelines can automatically start to build those microservices, because they're real microservices. All of the template code is there, and the binding is there. As each of these builds—so, the web BFF has just built—that kicks off a GitOps deploy pipeline, because at the end of the web BFF build pipeline, it automatically promotes the change into the GitOps project.

 

Chris Bailey (25:03):

So, if I go into the services directory, I can see that the web BFF was updated 26 seconds ago, and that's because there was a new web BFF instance that I wanted deployed into the dev space. So, the web BFF is now running in OpenShift, having updated the dev GitOps repository to say there was a new version available. And as each of these other microservices builds, the same process will happen. So, the order microservice is now built. That's kicking off the GitOps pipeline again, to do continuous deploy. And we can see the order microservice definition is updated in the GitOps repository. Once all of these have finished building and deploying, we will actually have a full deployment up and running in the dev space in OpenShift that represents that full definition that I created in Solution Blueprint Builder. So, we're now almost done, just the last deployment to happen.

 

Chris Bailey (26:06):

And now we should have all of those five microservices built. If we look in the GitOps repository, they've all been promoted from the build-push-promote pipeline. And all of those new versions are now in GitOps and deployed into OpenShift. And we can go to the OpenShift topology viewer, and this gives us a visual representation of what has been deployed. And you'll also notice that you have the arrows that show those dependencies. So, the web BFF has arrows that point to each of the back-end Java microservices. And that happens because when we set up the binding between the web BFF and each of the back ends using the Runtime Component Operator, we also set up a set of labels, which OpenShift uses to represent that in the UI. And the Service Binding Operator is doing the same thing between each of the backing microservices and the Postgres databases. So, this gives us our full deployment.

 

Chris Bailey (27:13):

Now that we have all of this, I can select one of those individual microservices, and it will show me what's actually deployed and configured for that microservice. And this is just part of the standard OpenShift UI. I can also go to the application navigator. So, the Kubernetes Application Navigator provides a set of capabilities that looks at the logical application level. So, this is using that Application CR that we had in the GitOps repository. And it knows that all of these resources and components are part of the storefront application. So, I can select one of them, and it knows it's running Java Open Liberty, and it knows the source repository for that particular microservice and the commit level that it's built from. So, I can click on that, and I can go back to the original source code for what's deployed into the OpenShift cluster. And I can also take a set of actions.

 

Chris Bailey (28:22):

So, it has the ability to run actions. And I can look at the number of replicas, or I can look at the source-code project, or I can look at the information for the application stack that it's running on. And I can also have a dashboard, because each of these is already set up with metrics for Prometheus and Grafana. I can already have an application dashboard that knows about all of those components. And because it's just a Grafana dashboard, I can start to add my own queries and my own visualizations of the data that I would like to monitor as an application owner. Now, all of this has been set up as part of that definition in Solution Builder. It's set up the source repositories; it's set up the GitOps repositories. You then have build-and-deploy pipelines. You have visualizations of this in dashboards. All of this can be done end to end as part of setting up the project.

 

Chris Bailey (29:19):

And at this point, we have the two things that people need to start doing real development. You have the GitOps repository, which allows an operations team to apply operational configuration and overlays for things like the number of replicas, and to promote to staging and production. And our developers who are working on, say, the web BFF can start by checking out the web BFF project. And from there, they can then start doing their development, checking things in. But all of the build-and-deploy pipelines have already been set up. All of the configuration and the service binding has already been set up.

 

Chris Bailey (29:59):

So, that provides an overview of a set of emerging technologies that we are providing that make it very, very easy not just to build an individual microservice but actually a multi-microservice solution, and one that starts to use real services behind it as well. Hopefully this was all very interesting. Thank you for watching.


The move to microservice architectures brings new levels of agility and enables the rapid development and evolution of individual microservices. At the same time, this introduces new complexity, as applications become a composition of multiple components potentially spread across multiple development squads. This particularly affects initial setup of the multiple projects and infrastructure for CI/CD and deploy environments, project and cross-component configuration for integration, and the enablement of application-level observability and operations.

In this session you’ll learn about the tools and capabilities that IBM and Red Hat are providing to remove the complexity of building multi-microservice solutions on top of Red Hat OpenShift.

 


Chris Bailey

IBM

Chief Architect, Cloud Native Solutions, IBM Cloud Paks for Applications at IBM