You’ve heard of microservices. You’ve heard of OpenShift. You’ve heard of Kubernetes. Actually, you may already have considerable experience with each of these three concepts and tools.

But do you know how to combine all of them to deploy microservices effectively? If not, this article is for you. Below, I’ll explain how microservices, OpenShift and Kubernetes fit together, and provide an overview of how you can leverage the orchestration tools that OpenShift and Kubernetes provide to build and deploy microservices at scale.

What are microservices?

First things first: Let’s define what "microservice" means for our purposes.

Different people use this term to refer to different things. Some, especially in the media, seem to treat microservices as merely a synonym for containers. But that’s a simplistic interpretation.

Or, if you’re old school, you may think of the monolithic kernel vs. microkernel debate (made famous by the Torvalds-Tanenbaum flame war of 1992) when you hear someone mention microservices. Microkernels reflect a similar idea, breaking functionality into small, modular pieces, but they’re not what microservices are about today.

Personally, I think Martin Fowler’s definition of microservices sums them up best. In his telling, a microservice is a type of service that has five main features:

  • Each instance of the service is independently deployable.
  • Each part of the service is scalable.
  • The service is composed of modular parts.
  • Each part of the service can be implemented in different ways (using different programming languages, for instance), but all parts are compatible with one another.
  • Each service can be managed by a different team.

These characteristics distinguish microservices from monolithic architectures, in which multiple software functions are fused into a single process that is neither modular, nor independently scalable, nor easy to deploy across multiple hosts.

How does OpenShift handle services?

To understand how microservices fit into OpenShift, you have to understand the basics of the OpenShift architecture, and how OpenShift manages the apps running on it.

In essence, an OpenShift cluster is composed of the following parts, which all interact with one another in various ways:

  • Nodes: These are the physical or virtual servers that host OpenShift. You can (and should, if you want scalability and high reliability) have multiple nodes in your OpenShift cluster.
  • Pods: Just like on Kubernetes, OpenShift pods are groups of containers (or a single container, in some cases) that work together. Each pod has a unique internal IP address and can expose multiple ports on that address.
  • Containers: These are the things that run inside pods. You can have multiple containers of different types inside a single pod; for example, you could have a web server container running alongside a data container that stores information for the web server. Each container can listen on a different port of its pod’s IP address.
  • Services: These group multiple pods behind a single, stable internal endpoint so that, together, they deliver a single app, such as a web server complete with both logic and data functions. (The sketch just after this list shows a pod and a service being created.)
  • Routes: Each OpenShift service receives an internal IP address and port number. If you want to expose a service to the external network or the public Internet, you use routes, which translate between your service’s internal and external networking configuration. Routes are kind of like good old NAT servers, if you want to think of them that way. (A sketch of creating one appears at the end of this section.)
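
To make pods, containers, and services a little more concrete, here is a minimal sketch that creates a two-container pod and a service for it, using the official Kubernetes Python client. The namespace ("demo"), image names, ports, and labels are illustrative assumptions, not anything OpenShift prescribes:

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster
    core = client.CoreV1Api()

    # A pod whose two containers share one internal IP but listen on different ports.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
        spec=client.V1PodSpec(containers=[
            client.V1Container(name="httpd", image="httpd:2.4",  # placeholder web server image
                               ports=[client.V1ContainerPort(container_port=80)]),
            client.V1Container(name="data", image="redis:6",  # placeholder data container
                               ports=[client.V1ContainerPort(container_port=6379)]),
        ]),
    )
    core.create_namespaced_pod(namespace="demo", body=pod)

    # A service: one stable internal endpoint in front of every pod labeled app=web.
    svc = client.V1Service(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1ServiceSpec(
            selector={"app": "web"},
            ports=[client.V1ServicePort(port=80, target_port=80)],
        ),
    )
    core.create_namespaced_service(namespace="demo", body=svc)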

These are the essentials that you need to understand about the OpenShift architecture to get started with microservices. For the longer version of all of the above, check out the OpenShift documentation.
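
Routes are an OpenShift-specific resource (route.openshift.io/v1), so the stock Kubernetes Python client has no typed model for them. One way to create one from Python is through the generic CustomObjectsApi; the service name and namespace here are carried over from the sketch above and remain assumptions:

    from kubernetes import client, config

    config.load_kube_config()
    route = {
        "apiVersion": "route.openshift.io/v1",
        "kind": "Route",
        "metadata": {"name": "web"},
        # Forward external traffic for this route to the internal "web" service.
        "spec": {"to": {"kind": "Service", "name": "web"}},
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="route.openshift.io", version="v1",
        namespace="demo", plural="routes", body=route,
    )

From the command line, oc expose service web achieves the same result.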

How Kubernetes makes microservices easy on OpenShift

OK, so where does Kubernetes come in?

The short answer is quietly. By that I mean that Kubernetes does most of its work in the background, without requiring much effort on the part of administrators. That’s the beauty of using Kubernetes to manage your OpenShift microservices.

More specifically, Kubernetes automatically performs several key functions, which ensure that your microservices run smoothly. Those functions include:

  • Load balancing. Kubernetes automatically decides, based on policies that you define ahead of time, which of your OpenShift nodes should have pods placed on them.
  • High availability. Kubernetes automatically detects when a node fails and reschedules that node’s pods onto healthy nodes. The result is automatic failover, which keeps your apps highly available even when parts of your infrastructure are hit with problems like a network outage or a loss of power.
  • Scaling. By automating the placement of pods according to policies set by a replication controller, Kubernetes also ensures that your services remain scalable as demand fluctuates. (See the sketch after this list.)
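
As a rough illustration of such a policy, here is a sketch that declares a desired replica count and per-container resource requests with the Kubernetes Python client. It uses a Deployment, the modern successor to the replication controller mentioned above, and all the names, images and numbers are assumptions:

    from kubernetes import client, config

    config.load_kube_config()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # Kubernetes keeps three pods running, replacing any that fail
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="httpd",
                        image="httpd:2.4",  # placeholder image
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "100m", "memory": "128Mi"},
                        ),
                    ),
                ]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="demo", body=deployment)

The scheduler uses those resource requests, plus any node selectors or affinity rules you add, to decide which node each replica lands on.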

On the scaling note, it’s also worth bearing in mind that Kubernetes lets you scale different parts of your services independently. For example, if you have a container-based web server running on OpenShift and you need more instances of the web server itself, but not of the data containers that run alongside it, Kubernetes can scale up just the web server part of the service.

To put it another way, the scalability of your OpenShift cluster is granular. That reduces complexity and optimizes resource usage.
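
In code, that granularity can come down to a single call. This sketch resizes only the hypothetical "web" deployment from the earlier examples, leaving everything else in the namespace alone:

    from kubernetes import client, config

    config.load_kube_config()
    # Scale just the web tier to five replicas; the data tier keeps its current size.
    client.AppsV1Api().patch_namespaced_deployment_scale(
        name="web", namespace="demo",
        body={"spec": {"replicas": 5}},
    )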

Conclusion

By now, the advantages of pairing microservices with a Kubernetes-powered OpenShift cluster should be clear. You get an infrastructure that is agile from top to bottom. You also maximize resource efficiency, and minimize the amount of administrative effort required to keep things running.

About Christopher Tozzi

Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, agile infrastructure and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO.

Last updated: March 16, 2018