Danger is great for spies, but when it comes to deploying software, boring is better. You can make deployment really boring by using Istio with OpenShift and Kubernetes to ease your microservices into production, and that's a good thing.
This is part seven of my series about Istio, Service Mesh, Red Hat OpenShift, and Kubernetes.
Check out the articles in this series:
- Part 1: Introduction to Istio Service Mesh
- Part 2: Istio Route Rules: Telling Service Requests Where to Go
- Part 3: Istio Circuit Breaker: How to Handle (Pool) Ejection
- Part 4: Istio Circuit Breaker: When Failure Is an Option
- Part 5: Istio Tracing & Monitoring: Where Are You and How Fast Are You Going?
- Part 6: Istio Chaos Engineering: I Meant to Do That
- Part 7: How Istio makes microservices production and deployment easier
- Part 8: Istio Smart Canary Launch: Easing into Production
- Part 9: Istio Egress: Exit Through the Gift Shop
- Part 10: Istio Service Mesh Blog Series Recap
Istio dark launch with secret services
When it comes to deploying software, anything you can do to minimize risk is worth evaluating. Running in parallel is a powerful and proven way to test your next version, and Istio lets you run a secret service (an unseen version of your microservice) without interfering with production. The cool-sounding term for this is dark launch, which is enabled by another cool-sounding idea: traffic mirroring.
I used the term deploy instead of release because you should be able to deploy your microservice as often as you wish. A deployed microservice can accept and process traffic, produce results, and contribute to logging and monitoring without necessarily being released into production. Deploying and releasing software are not always the same process: you can deploy when you want to and release when it's ready.
Consider the following Istio route rule that directs all HTTP requests to version 1 of the recommendation microservice while mirroring the requests to version 2.
Note: All examples in this series, including Figure 1, are from our Istio Tutorial GitHub repo.
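A rule along these lines might look like the following sketch, which uses the Istio VirtualService API; the subset names (version-v1, version-v2) are assumptions and may differ from the exact file in the tutorial repo:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    # All live traffic goes to v1.
    - destination:
        host: recommendation
        subset: version-v1
      weight: 100
    # Duplicate each request to v2; its responses are discarded.
    mirror:
      host: recommendation
      subset: version-v2
```

The subsets referenced here would be defined in a corresponding DestinationRule that maps them to the v1 and v2 pod labels.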
Notice the mirror: tag near the bottom; it defines the request mirroring. Yes, it's really that simple. While your production system (v1) processes the requests, exact duplicates of those requests are asynchronously sent to v2. This lets you see v2 in action, with real-world data and real-world volume, without disrupting production.
Note: Consider how your v2 code handles any requests that affect a data store. Request mirroring is transparent and easy to implement, but what your mirrored service does with the duplicated requests is still your concern.
What's Next?
This is the shortest blog post in this ten-part series because implementation is so easy. Once again, we can implement this dark launch/request mirroring feature without any changes to our source code.
What if instead of mirroring your requests, you could intelligently route some of them to v2 (perhaps one percent or a certain group of users)? You could see if it works before expanding the percentage of requests it handles. That would be great. If it fails, you could quickly bail out and return to v1. If it succeeds, you could continue to shift more workload to v2 until it reaches 100 percent of the requests.
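As a preview, that kind of weighted split can be expressed in the same VirtualService format; this is a sketch with assumed subset names, sending one percent of requests to v2:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    # 99 percent of requests stay on the proven version.
    - destination:
        host: recommendation
        subset: version-v1
      weight: 99
    # 1 percent of requests try out the new version.
    - destination:
        host: recommendation
        subset: version-v2
      weight: 1
```

Shifting more workload to v2 is then just a matter of adjusting the weights until v2 carries 100 percent.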
Let's explore this more in the next article, Istio Smart Canary Launch: Easing into Production.
Last updated: October 26, 2022