The world is moving to microservices, where applications are composed of a complex topology of components orchestrated into a coordinated whole.

Microservices have become increasingly popular because they increase business agility and reduce the time it takes to ship changes. On top of this, containers make it easier for organizations to adopt microservices.

Increasingly, containers are the runtime used for composition, and many excellent solutions have been developed to handle container orchestration: Kubernetes/OpenShift; Mesos and its many frameworks, such as Marathon; and Docker Compose, Swarm and SwarmKit, which are also trying to address these issues.

But at what cost?

We’ve all experienced that moment when we’ve been working long hours and think “yes, that feature is ready to ship”. We release it into our staging environment and bang - nothing works, and we don’t really know why. What if you could consistently take the same topology you ran in your development workspace, run it in other, enterprise-grade environments such as staging or production, and expect it to always JUST WORK?

We have a simple vision: developers should have a development runtime with an application topology that is identical to production. We need environmental fidelity between all stages of the deployment pipeline. This sounds quite generic, so let’s start by defining what we mean.

The application topology defines how each component of the application interacts with the other components, as well as with external services and users. In a microservice application that typically means (a short sketch follows the list):

  • network services - defined as protocols (e.g. HTTP) exposed on an IP/port, and their consumers, allowing the service to connect to other microservices, databases, distributed caches, messaging services, etc.
  • volumes that the components/services read from and write to
  • environment variables that configure the component/service
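
To make those three elements concrete, here is a hypothetical, compose-style sketch of a two-service topology. The service names, images and values are illustrative only and are not a proposal for any particular format:

```yaml
# Hypothetical, compose-style description of an application topology.
version: "2"
services:
  web:
    image: example/web:1.0
    ports:
      - "8080:80"                  # network service: HTTP exposed on a port
    environment:
      - DB_HOST=db                 # configuration via environment variables
      - DB_PORT=5432
    volumes:
      - web-assets:/usr/share/app/static     # volume the service reads from
  db:
    image: postgres:9.6
    volumes:
      - db-data:/var/lib/postgresql/data     # volume the service writes to
volumes:
  web-assets: {}
  db-data: {}
```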

Ideally, from the perspective of a component, the environment (e.g. the dev team’s server, the QA system with mock data, staging with real data and horizontal scaling, production fully scaled out to meet real end-user SLAs) provides a perfect abstraction of the underlying system on which the component will execute. This allows you to move the component between systems without experiencing any functional (i.e. broken user experience) or non-functional (i.e. unexpected performance characteristics) changes. Environmental fidelity simply means that the environments are similar enough that, from the perspective of the component, they appear to be identical.

How do we achieve this?

Each container orchestration technology tries to solve the problem differently and has its own tools and workflow. This is fine until you realise that different orchestration platforms are being used at different phases of your software development lifecycle. For instance, the developers in your organisation use Docker Compose to describe the application topology in the development workspace. However, the operations team uses Kubernetes/OpenShift, or a Mesos and Marathon combination, in production because Kubernetes/OpenShift are based on learnings from Google’s Borg and Omega, which have a solid decade of production experience behind them and are exceptionally fit for the job. This is a very real use case and a very real problem, and it is the motivation behind this article.

The Details

To elaborate on the problem: the developer writes a containerized component that depends on various services in the application, and does whatever is necessary to develop and test it. The tooling around the container platform fetches or builds container images and then runs the containers, and thus the services.

Once the developer commits the code and it passes various smoke tests, it goes through the formal QA verification process (on a system that looks like production, running e.g. OpenShift, Kubernetes or Mesos) before finally being passed to the team handling the production systems, who will deploy the application on their production cluster. Of course, there should be a high degree of automation to take care of this deployment. But someone has to create and maintain the various artifacts required by Kubernetes/OpenShift/Mesos.
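
To make the duplication concrete, here is the same hypothetical service described twice - once as a compose file for the development workspace, and once as the kind of Kubernetes artifacts (an rc.yml and a service definition) that operations would maintain. The names, image and ports are illustrative only:

```yaml
# docker-compose.yml - maintained by the developer
version: "2"
services:
  web:
    image: example/web:1.0
    ports:
      - "8080:80"
```

And roughly the same topology expressed for Kubernetes:

```yaml
# rc.yml - a ReplicationController describing the same container
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 1
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0
          ports:
            - containerPort: 80
---
# svc.yml - the Service that exposes it inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 8080
      targetPort: 80
```

Both descriptions say essentially the same thing about the web component, yet each has to be written and kept in sync by hand.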

What if there was a way to describe your application topology such that you only have to describe it once? A file that describes container clusters in a way that can be consumed by various orchestration tools. Such a file eliminates the need to maintain multiple artifacts (compose file, pod.yml, rc.yml, etc.). The ideal goal would be to develop the specification format in the open and in an inclusive fashion. Building such a format, though demanding, is worth the dividends it will pay in the long run.

For the sake of moving forward with this discussion, let's just say that we have a way to describe application topology. What then? How will this be consumed?

The technical goal should be to make the topology consumable by tools at various stages of the application lifecycle. Developers of microservices should be able to code and debug their application using their IDE. CD pipelines should be able to bring up the application to build, test or deploy the microservice(s). Similarly, operations should be able to use this specification file to bring up a production cluster, e.g. a Kubernetes/OpenShift or Mesos-Marathon cluster.

Obviously, such a solution will require a great deal of thought and engineering effort to build tooling around it. It will also require buy-in from the various stakeholders: Kubernetes; OpenShift; Mesos-Marathon; Docker, Inc.; etc.

Design Time

A lot of tools have cropped up recently around containers - from Kubernetes to Atomic App - and you may be wondering why we need a new one!

We find it very helpful, when thinking about this problem, to distinguish between distribution concerns and design time considerations (the latter being what concerns us in this blog post). Things like the ports you need to expose, the volumes you need to mount, and the environment variables your component needs set in order to work are what you define at design time. At deployment time you care, for example, about the values of the environment variables, what gets mounted into the volumes, and how you wire the exposed ports on the container into the network. Kubernetes’ Helm, Docker Inc’s Distributed Application Bundles and, of course, Atomic App are all aimed at helping you solve the distribution problem.
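
As a purely hypothetical illustration of that split (the notation below is invented for this post, not a concrete proposal), a design-time description declares what the component needs, while a deployment-time configuration supplies the values for a particular environment:

```yaml
# Design time: what the component needs (hypothetical notation)
component: web
expects:
  ports:
    - name: http                    # it serves HTTP on some port
  volumes:
    - mountPath: /data              # it needs something mounted here
  environment:
    - name: DB_URL                  # it needs this variable set
---
# Deployment time: the values supplied for one environment, e.g. staging
bind:
  ports:
    http: 443                       # wired to the edge load balancer
  volumes:
    /data: nfs://storage.example.com/exports/web
  environment:
    DB_URL: postgres://staging-db:5432/app
```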

When we’re thinking about design time, we have a couple of guiding principles:

  1. In order to gain wide adoption by tools, it’s important that the descriptor file format doesn’t change too frequently - you have to live with what you have defined until the next revision arrives.
  2. The descriptor file should be flexible enough to accommodate unforeseen possibilities while remaining backward compatible.

The rest of the post will try to adhere to these two principles.

Further, we think that storing the config in the code base is very important - meaning that we’re going to be checking this file in, running diffs on it and doing code review against it. We think it’s best to select a human-readable format. YAML is a great choice for this.

Common Container Orchestration Concepts

The goal is to orchestrate containers efficiently across various orchestration providers. There are many commonalities between providers, but there are also some differences.

Here is a non-exhaustive list of concepts/directives that are common to different container orchestration systems (a sketch combining them follows the list):

  • image : The image from which the container will start.
  • build : Builds a container based on the given input.
  • environment : The environment variables to be set in a container.
  • service : A group of multiple containers that define a single microservice and possibly share resources.
  • command : The command to override the default command in the container.
  • ports : The ports on the containers that need to be exposed for the relevant protocol (TCP, UDP), and their mappings to the host machine.
  • volumes : The shared volumes mounted inside the container.

A complete list of mappings needs to be worked out and thought through.

What do we have now?

We have a very minimalist proof-of-concept (PoC) specification that we would like to share with the community and get some feedback on how we can improve upon this. We call it OpenCompose.

Here's the OpenCompose repository. We also have a PoC implementation that can consume the spec file and produce complete Kubernetes or OpenShift artifacts. The PoC is based on the Kompose tool.

Services can be defined in a file called "services.yml". The services can then be deployed using Kompose's OpenCompose implementation.

https://gist.github.com/surajssd/be637bc69ed6460e7f1c829ec0faf34c

Points to Ponder

The diversity in orchestration providers means that there are some concepts that cannot be mapped completely between the various tools. Storage volumes are a great example. Both the Docker project and Kubernetes support various types of storage volumes, e.g. Flocker, Amazon EBS, etc. One has to use (or create) Docker volume plugins to get Docker to work with an external storage system.

Kubernetes has built-in support for various types of storage as well. What if one of the types of storage volume is not supported by all the orchestration platforms? What should the cluster specification look like in such a case? What should the implementation tools do? Should the tools ignore unsupported directives on the respective platforms?

Also, there may be no mapping for some of the concepts from one orchestration platform to another. Should the unmapped concepts be ignored by the respective platforms in that case? One potential solution is to have extension capabilities in the specification.

What does that mean? The specification can support extensions of a known type, where the type is one of the orchestration platforms. Tools for the respective platforms can choose to do something useful with this or completely ignore it. The extension capability has to be given a lot of thought and importance because it will ensure that the specification is flexible and portable across various platforms, as per the principles mentioned above. Again, this needs to be thought through because it can lead to bloat.
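
One hypothetical shape for such extensions (again, purely illustrative) would be a per-platform section that tools for other platforms simply skip over:

```yaml
services:
  cache:
    image: redis:3.2
    ports:
      - "6379"
    # Platform-specific extensions: a tool targeting another platform ignores these.
    extensions:
      kubernetes:
        nodeSelector:
          disktype: ssd                  # only meaningful to a Kubernetes backend
      marathon:
        constraints:
          - ["hostname", "UNIQUE"]       # only meaningful to a Marathon backend
```

The trade-off is exactly the bloat mentioned above: every extension used makes the file a little less portable, so extensions should be an escape hatch rather than the default.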

Open Questions

Here are a few more points that need to be thought through whilst coming up with the specification (a sketch illustrating some of them follows the list). The specification should:

  1. Allow the developer to express that containers should be co-located: In order to ensure the correct functioning of a service, or to ensure that it performs and is scalable, it is necessary that certain containers are co-located and co-scheduled. In pre-container terms, the processes would have executed on the same VM or bare metal. The developer needs a way to express the colocation of services. Docker Compose does not provide this capability, instead requiring the use of filters in Swarm. Kubernetes uses pods to express colocation.
  2. Allow the developer to express minimum capabilities required to run: In order to ensure correct functioning of a service the developer may want to specify certain required/recommended capabilities (e.g. memory, CPU, disk space). For example, a Java process may require a minimum heap size. Docker Compose allows you to pass options through to the container for memory and CPU. Kubernetes implements CPU and memory limits for pods and resource quotas.
  3. Allow specification file format innovation: In order to allow implementations and users to innovate, the specification should allow additional elements to be added to the file so that an implementation may expose additional capabilities, such as features not covered by the specification, new features proposed for a later revision of the specification, or instructions passed to an extension model.
  4. Allow the developer to express replication capabilities: The developer needs to be able to indicate that a container can be replicated 0 … N times for horizontal scaling and still have the application function correctly.
  5. Allow the developer or application delivery team to overlay additional elements: Different stages of the software development lifecycle may be handled by different people, or by the same person wearing different hats. In order to support this we should support an overlay model.
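
To make points 1, 2 and 4 concrete, a hypothetical fragment (illustrative only, not a proposed syntax) might express co-location, minimum capabilities and replication like this:

```yaml
services:
  web:
    replicas: 3                       # point 4: replicated 0 … N times for horizontal scaling
    containers:                       # point 1: these containers are co-located and co-scheduled
      - name: app
        image: example/app:1.0
        resources:                    # point 2: minimum capabilities, e.g. heap room for a JVM
          memory: 512Mi
          cpu: 0.5
      - name: log-shipper
        image: example/log-shipper:1.0
        volumes:
          - logs:/var/log/app
volumes:
  logs: {}
```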

Conclusion

There is a definite need for a solution to address this problem space. A standard specification, developed in the open in an inclusive fashion, that addresses the orchestration problem is a potential solution and perhaps a necessity. A solution that is flexible, portable and extensible; that works for various orchestration platforms; and that works at various stages of development - from development to build to test to production - will be beneficial to all.

Such a specification will require a good amount of thought and engineering effort, and it is necessary for the various stakeholders - Docker, Kubernetes, OpenShift, Mesos, … - to collaborate to make it successful.

Let’s collaborate here! Also, we will be at KubeCon, so let's talk there and take this forward.
