Istio Dark Launch: Secret Services

“Danger is my middle name” is great for spies and people of mystery, but when it comes to deploying software, boring is better. By using Istio with OpenShift and Kubernetes to ease your microservices into production, you can make deployment really, really boring. That’s good.

[This is part seven of a ten-week series about Istio, Service Mesh, OpenShift, and Kubernetes. Part six can be found here.]

Boring Is Good

Not to worry, dear DevOps person; there are some exciting things in store for you. It’s just that the end result, thankfully, is boring. You want the fun of setting things in motion and then the routine of watching it just work.

When it comes to deploying software, anything you can do to minimize the risk is worth evaluating. Running parallel is a very powerful and proven way to test your next version, and Istio allows you to use “secret services”—an unseen version of your microservice—without interfering with production. The cool-sounding term for this is “Dark Launch” (which is enabled by another cool-sounding idea, “traffic mirroring”). Feeling mysterious yet?

Notice I’ve used the term “deploy” instead of “release.” You should be able to deploy, and use, your microservice as often as you wish. It should be able to accept and process traffic, produce results, and contribute to logging and monitoring. Yet, it does not necessarily need to be released into production. Deploying and releasing software are not always the same. Deploy as wanted; release when it’s ready.

But Learning This Is Exciting

Consider the following Istio route rule, which directs all HTTP requests to version 1 of the “recommendation” microservice while mirroring those requests to version 2 (note: all examples are from our Istio Tutorial GitHub repo):

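The exact rule lives in the tutorial repo; a sketch of what it looks like, assuming a VirtualService for the recommendation host with version-v1 and version-v2 subsets (subset names as defined in the tutorial’s DestinationRule), is:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation           # the Kubernetes service name for the microservice
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v1   # 100 percent of live traffic is answered by v1
    mirror:
      host: recommendation
      subset: version-v2     # each request is also copied, fire-and-forget, to v2

Applied with oc apply -f (or kubectl apply -f), every request continues to be served by v1 while v2 quietly receives an identical copy.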
Notice the mirror: tag near the bottom. This defines the request mirroring. Yes, it’s really that simple. Now, while your production system (v1) is processing the requests, mirrored requests (exact duplicates) are asynchronously sent to v2. This allows you to see v2 in action, with real-world data and real-world volume, without disrupting production: an exciting way to get a (hopefully) boring result.

A Little Drama

Note that any mirrored requests that affect a data store need to be considered in your v2 code; because v2 receives exact duplicates of production requests, you must make sure it doesn’t also duplicate their side effects. While the request mirroring itself is transparent and easy to implement, how you handle it is still of concern. I guess there’s a bit of drama after all.

Short and Sweet

This is the shortest blog post in this ten-part series because, well, it’s so easy to implement. Notice, once again, we can implement this feature—Dark Launch/Request Mirroring—without any changes to our source code.

What If?…

What if, instead of mirroring your requests, you could intelligently route just some of them (perhaps one percent, or a certain group of users) to v2? You could see if it works before, maybe, expanding the percentage of requests it handles. That’d be great; if it failed, you could quickly bail out and return to v1. If it succeeded, you could continue to shift more and more of the workload to v2 until it reached 100 percent of the requests. Kind of like, oh, I dunno … a canary in a coal mine?
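Without giving too much away, such a rule would simply swap the mirror: block for weighted destinations. A rough sketch, reusing the same hypothetical subsets as above:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v1
      weight: 99             # the bulk of the traffic stays on v1
    - destination:
        host: recommendation
        subset: version-v2
      weight: 1              # a trickle goes to v2; the weights must total 100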

That’s a mystery until next week.

 

To learn more, visit our Linux containers or microservices Topic pages.

Join the Red Hat Developer Program (it’s free) and get access to related cheat sheets (e.g., containers), books (e.g., microservices), and product downloads that can help you with your microservices and/or container application development.

For more information about Red Hat OpenShift and other related topics, visit: OpenShift, OpenShift Online.