"Danger is my middle name" is great for spies and people of mystery, but when it comes to deploying software, boring is better. By using Istio with OpenShift and Kubernetes to ease your microservices into production, you can make deployment really, really boring. That's good.
[This is part seven of my ten-week Introduction to Istio Service Mesh series about Istio, Service Mesh, Red Hat OpenShift, and Kubernetes. My previous article was Part 6: Istio Chaos Engineering: I Meant to Do That.]
Boring Is Good
Not to worry, dear DevOps person; there are some exciting things in store for you. It's just that the end result, thankfully, is boring. You want the fun of setting things in motion and then the routine of watching it just work.
When it comes to deploying software, anything you can do to minimize the risk is worth evaluating. Running parallel is a very powerful and proven way to test your next version, and Istio allows you to use "secret services"—an unseen version of your microservice—without interfering with production. The cool-sounding term for this is "Dark Launch" (which is enabled by another cool-sounding idea, "traffic mirroring"). Feeling mysterious yet?
Notice I've used the term "deploy" instead of "release." You should be able to deploy, and use, your microservice as often as you wish. It should be able to accept and process traffic, produce results, and contribute to logging and monitoring. Yet, it does not necessarily need to be released into production. Deploying and releasing software are not always the same. Deploy as wanted; release when it's ready.
But Learning This Is Exciting
Consider the following Istio route rule that directs all HTTP requests to version 1 of the "recommendation" microservice (note: all examples are from our Istio Tutorial GitHub repo) while mirroring the requests to version 2:
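The exact rule from the tutorial repo isn't reproduced here, but a sketch in Istio's VirtualService format conveys the idea. It assumes a DestinationRule already defines subsets named `version-v1` and `version-v2` for the recommendation service:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    # All live traffic goes to v1
    - destination:
        host: recommendation
        subset: version-v1
    # A copy of each request is also sent, fire-and-forget, to v2
    mirror:
      host: recommendation
      subset: version-v2
```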
Note the mirror: tag near the bottom. This defines the request mirroring. Yes, it's really that simple. Now, while your production system (v1) is processing the requests, mirrored requests (exact duplicates of the originals) are asynchronously sent to v2. This allows you to see v2 in action, with real-world data and real-world volume, without disrupting production: an exciting way to get a, hopefully, boring result.
A Little Drama
Note that any requests that affect a data store need to be considered in your v2 code. While the request mirroring is transparent and easy to implement, how you handle it is still of concern. I guess there's a bit of drama after all.
Short and Sweet
This is the shortest blog post in this ten-part series because, well, it's so easy to implement. Notice, once again, that we can implement this feature (Dark Launch/Request Mirroring) without any changes to our source code.
What if instead of mirroring your requests, you could intelligently route just some (perhaps one percent or a certain group of users) of them to v2? You could see if it works before, maybe, expanding the percentage of requests it handles. That'd be great; if it failed, you could quickly bail out and return to v1. If it succeeds, you could continue to shift more and more workload to v2 until it reaches 100 percent of the requests. Kind of like, oh, I dunno ... a canary in a coal mine?
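As a small teaser of what that shift might look like in Istio, a VirtualService can split traffic by weight. This is a sketch, not the tutorial's actual rule, and it assumes the same hypothetical `version-v1` and `version-v2` subsets as before:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    # 90 percent of requests stay on the proven v1
    - destination:
        host: recommendation
        subset: version-v1
      weight: 90
    # 10 percent take a chance on v2; dial this up as confidence grows
    - destination:
        host: recommendation
        subset: version-v2
      weight: 10
```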
That's a mystery until next week.
All articles in the "Introduction to Istio" series:
- Part 1: Introduction to Istio Service Mesh
- Part 2: Istio Route Rules: Telling Service Requests Where to Go
- Part 3: Istio Circuit Breaker: How to Handle (Pool) Ejection
- Part 4: Istio Circuit Breaker: When Failure Is an Option
- Part 5: Istio Tracing & Monitoring: Where Are You and How Fast Are You Going?
- Part 6: Istio Chaos Engineering: I Meant to Do That
- Part 7: Istio Dark Launch: Secret Services
- Part 8: Istio Smart Canary Launch: Easing into Production
- Part 9: Istio Egress: Exit Through the Gift Shop
- Part 10: Istio Service Mesh Blog Series Recap