Istio Service Mesh Blog Series Recap
The past nine weeks of blog posts have introduced, explained, and demonstrated some of the many features of the Istio service mesh when it is combined with Red Hat OpenShift and Kubernetes. This, the final post in this series, is a recap.
Week one was an introduction to the concept of a service mesh. The concept of a Kubernetes sidecar container was explained and diagrammed, and it was the beginning of a constant theme throughout the blog posts: You don’t have to change your source code.
Week two presented the most basic, core aspect of Istio: the route rules. Route rules open the door to the rest of Istio’s features, because you are able to intelligently direct traffic to your microservices based on YAML files that are external to your code. Also in this post, the Canary Deployment pattern was hinted at.
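As a taste of what route rules look like, here is a minimal sketch using Istio's VirtualService and DestinationRule objects (the service name `recommendation` and its subsets are placeholders, and this uses the current networking API rather than the original RouteRule syntax the series may have shown):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recommendation
spec:
  host: recommendation
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    # Send all traffic to v1, regardless of how many versions are deployed
    - destination:
        host: recommendation
        subset: v1
```

Applying a file like this with `oc apply -f` (or `kubectl apply -f`) changes routing without touching application code.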
Week three featured Istio’s ability to implement Pool Ejection, used in concert with the Circuit Breaker pattern. Being able to remove a pod from load balancing based on poor performance (or non-performance) is a powerful feature of Istio, and this blog post demonstrated that point.
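Pool ejection is configured in Istio as "outlier detection" on a DestinationRule. A hedged sketch (service name and thresholds are illustrative, not from the original post):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recommendation
spec:
  host: recommendation
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 5   # eject a pod after 5 consecutive 5xx responses
      interval: 15s             # how often hosts are scanned
      baseEjectionTime: 30s     # how long an ejected pod stays out of the pool
      maxEjectionPercent: 100   # allow ejecting every unhealthy pod if needed
```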
Week four brought the Circuit Breaker to light. Having hinted at it the previous week, this post provided a more detailed explanation of the Circuit Breaker and Istio’s implementation of the pattern. Again, without changing source code, we saw how to direct traffic and handle network faults by YAML configuration files and some terminal commands.
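Istio's circuit breaking is expressed as connection-pool limits in a DestinationRule's traffic policy. A minimal sketch, with deliberately tight limits for demonstration (names and values are assumptions):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recommendation
spec:
  host: recommendation
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1            # at most one TCP connection per host
      http:
        http1MaxPendingRequests: 1   # queue at most one pending request
        maxRequestsPerConnection: 1  # disable keep-alive reuse
```

Requests beyond these limits are rejected immediately instead of piling up, which is the circuit-breaker behavior the post demonstrated.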
Week five highlighted the tracing and monitoring that is built into, or easily added to, Istio. Tools such as Prometheus, Jaeger, and Grafana were combined with OpenShift’s scaling to show how you can manage your microservices architecture with ease.
Week six switched from monitoring and handling errors to creating errors: fault injection. Being able to inject faults into your system without changing source code is an important part of testing. Testing undisturbed code means you can be assured that you didn’t add any “testing code” that may have, itself, caused a problem. Important stuff.
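Fault injection lives in a VirtualService. A sketch that delays half of all requests and aborts a tenth of them with an HTTP 503 (percentages and service name are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - fault:
      delay:
        percentage:
          value: 50.0     # delay half of the requests
        fixedDelay: 5s
      abort:
        percentage:
          value: 10.0     # fail one request in ten
        httpStatus: 503
    route:
    - destination:
        host: recommendation
```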
Week seven took a dark turn. Well…a turn to the Dark Launch, a deployment pattern where you can deploy code and test it with production data while not disrupting your system. Using Istio to split traffic is a valuable tool you may use often. Being able to test with live, production data without affecting your system is the most telling test.
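One way Istio supports a Dark Launch is traffic mirroring: live requests are served by the stable version while copies are sent to the new version, whose responses are discarded. A sketch, assuming the placeholder subsets from earlier:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: v1      # users still get v1 responses
    mirror:
      host: recommendation
      subset: v2        # v2 sees the same production traffic, fire-and-forget
```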
Week eight built on the Dark Launch and showed how to use the Canary Deployment model to ease new code into production while reducing your risk. Canary Deployment (or “Canary Release”) isn’t new, but being able to implement it by some simple YAML files is, thanks to Istio.
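Those "simple YAML files" for a Canary Deployment are just weighted routes in a VirtualService. A sketch sending 10 percent of traffic to the canary (weights and names are assumptions):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: v1
      weight: 90        # 90% of requests stay on the stable version
    - destination:
        host: recommendation
        subset: v2
      weight: 10        # 10% go to the canary; raise this as confidence grows
```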
Week nine, finally, demonstrated how to use Istio to allow access to services outside of your clusters with Istio Egress. This expands the power of Istio to include the whole web.
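Egress access is granted by registering the external host with a ServiceEntry. A minimal sketch allowing mesh workloads to reach httpbin.org over plain HTTP (the host is just an example):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-ext
spec:
  hosts:
  - httpbin.org         # the external host being whitelisted
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
```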
Try It Yourself
The past nine weeks haven’t been deep dives, nor were they intended to be. The idea was to introduce concepts, generate interest, and encourage you to give Istio a try for yourself. With the zero-cost Red Hat Developer OpenShift Container Platform, our Istio tutorial, and the other assets available on our Service Mesh micro site, you have all the tools you need to start exploring OpenShift, Kubernetes, Linux containers, and Istio with zero risk. Don’t wait: grab the tools and start today.
All articles in the “Introduction to Istio” series:
- Part 1: Introduction to Istio Service Mesh
- Part 2: Istio Route Rules: Telling Service Requests Where to Go
- Part 3: Istio Circuit Breaker: How to Handle (Pool) Ejection
- Part 4: Istio Circuit Breaker: When Failure Is an Option
- Part 5: Istio Tracing & Monitoring: Where Are You and How Fast Are You Going?
- Part 6: Istio Chaos Engineering: I Meant to Do That
- Part 7: Istio Dark Launch: Secret Services
- Part 8: Istio Smart Canary Launch: Easing into Production
- Part 9: Istio Egress: Exit Through the Gift Shop
- Part 10: Istio Service Mesh Blog Series Recap
Learn more about Istio and how a Service Mesh can improve microservices on developers.redhat.com.