I've been working with some of our teams recently on microservices and how we can assist our customers and communities with best practices and recommendations, whether they're Java EE developers, Vert.x coders, Node.js application writers or something else. If you've read any of my previous articles then you'll know I have a few thoughts on microservices, and yet there are many things I still feel I need to get straight in my own head. That's why I love talking with the teams we have, because they're always challenging my thought processes and pushing the frontiers of where our industry needs to go.
It was during a few of these conversations that something I hadn't realised had been bothering me started to become a little clearer. For a long time I've been thinking about microservices and how they relate to SOA, distributed systems, DevOps, etc. As I mentioned at the start, we have a lot of projects and products which can be used to assist in the development of a (micro) service based architecture. So far, most of what I've been reading outside of Red Hat has been about building microservices, and applications from collections of them, from scratch. It's also true to say that this has probably been the focus of much of our development work: greenfield development, re-architecting systems and building them up from scratch.
Yet microservices didn't start out that way. If you go and read some of the original literature, especially from Netflix, the idea of microservices (or "fine grained SOA" as Adrian Cockcroft put it originally) was about taking existing systems and refactoring them into components (services) which could be independently developed, versioned and released. The idea was that you do this work of building microservices in situ (brownfield development). And whilst I've known and understood this, I've always assumed that the processes, tools, approaches etc. to building microservices in this manner were identical to greenfield. Until recently, that is.
It's fair to say that you have to start somewhere, and some of our teams have been looking at the brownfield approach to microservices - it makes a lot of sense. It's pragmatic because most people will be coming at them from the perspective of existing applications, just like Netflix. It doesn't require you to boil the ocean and you can take things one step at a time. But - and this is what I hadn't really appreciated until now - it also allows for some significant simplifications to the infrastructure you have to develop in order to support these microservices.
In the examples we've been looking at, you've got existing components in a monolithic application. Since the team were coming at this from a Java EE perspective, these are currently represented as multiple WAR (Web Archive) or JAR (Java Archive) files shipped in a single EAR (Enterprise Archive); however, the example is relevant beyond any single programming language or framework. The aim would be to separate the individual components out into microservices so that we don't have to rebuild and redeploy the entire EAR each time something changes in a single WAR.
So far so good, and not really any different from how you'd approach this from a greenfield perspective. What changes, or at least can be simplified, is what we need to do in the newly distributed case. In this kind of scenario we're not trying to build a generic distributed system. We don't necessarily need a name service (or something like it) to locate arbitrary services. We don't necessarily need SLAs and rich contract languages; in fact, because we already know the API for the service(s) and we're very specifically trying to improve the agility of the development process, we could hard code the API into the "client" (the rest of the application). The realisation, which is pretty obvious in hindsight (and one that others in the team already had), is that much of this could simply be hard coded. Binding could be delegated to the underlying network: use REST/HTTP with a URL as the service name and address, and let DNS do the binding.
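To make the idea concrete, here's a minimal sketch in Java of what that hard-coded binding might look like. Everything specific here is hypothetical - the service name, port and `/api/stock` endpoint are invented for illustration - and the JDK's built-in `com.sun.net.httpserver.HttpServer` stands in for the extracted microservice so the example is self-contained. The point is simply that the "client" (the rest of the monolith) reaches the service through nothing more than a fixed URL:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class HardCodedClient {
    // Hypothetical hard-coded service address. In a real deployment this might be
    // "http://inventory.internal.example.com/api/stock", with DNS resolving the
    // host name - no separate name service or discovery infrastructure required.
    static final String SERVICE_URL = "http://localhost:8085/api/stock";

    public static void main(String[] args) throws Exception {
        // Stand-in for the extracted microservice, so the sketch is runnable on its own.
        HttpServer service = HttpServer.create(new InetSocketAddress(8085), 0);
        service.createContext("/api/stock", exchange -> {
            byte[] body = "{\"sku\":\"ABC-123\",\"count\":42}".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        service.start();

        // The client binds to the service by URL alone: the API and address are
        // known in advance, so they're simply baked in.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(SERVICE_URL)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());

        service.stop(0);
    }
}
```

In a real deployment the URL would name a DNS-resolvable host rather than localhost, but the client-side code wouldn't change - which is exactly the simplification being described.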
Yeah, yeah. I know this only goes so far and at some point you will need some of the complexities we have in traditional distributed systems, but we're not there yet. The whole point of the exercise from the team was precisely this: "Where do I (a developer) start embracing microservices, and how can I do it easily without having to develop or install too much additional infrastructure to make this a success?" And I think this short cutting / hard coding / relying on the infrastructure works to a degree.
I think the team delivered on the remit. But what got me noodling on this was whether they were developing a new category of microservices. Much of what we read about microservices talks about centralised logging, event-driven approaches, and orchestration and deployment technologies such as Kubernetes. Yet none of these is essential if you're looking at microservices just to increase agility in a very specific application with pre-defined components/services. You could do it all with hard coding and a little bit more automation.
Does it constitute a new category of microservice, as I had originally thought? I don't think so. It's an evolutionary approach, no different really to pre-CORBA or pre-Java EE applications, which were often written by hand with hard coded addresses, interfaces, etc. Then, as the complexity of distributed applications grew, developers needed more help from the infrastructure and tools. So they're definitely microservices - they still tick the right boxes, such as making teams more agile and giving services independent lifecycles - they're just a lot more focussed, perhaps a lot more pragmatic.