Istio

Looking up a hash table library for caching in the 3scale Istio adapter

You have probably already heard about the service mesh concept and one of its leading implementations, Istio. In the 3scale engineering team at Red Hat, we are working on a component to extend the functionality of Istio (and Red Hat’s distribution, Maistra) by integrating some API Management features via the 3scale platform. In this article, I’ll describe this work and some of the decisions made along the way.
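
The full article walks through the evaluation of candidate hash table libraries, so as a rough, hypothetical sketch of the kind of cache such an adapter needs (the type names, key format, and caching policy here are illustrative only, not the adapter's actual code), a concurrency-safe authorization cache in Go might look like this:

```go
// Hypothetical sketch: a minimal concurrency-safe cache of the kind an Istio
// adapter might use to avoid calling the 3scale APIs on every request.
// Illustrative only; the real adapter's types and caching strategy differ.
package main

import (
	"fmt"
	"sync"
	"time"
)

type entry struct {
	authorized bool
	expiresAt  time.Time
}

// AuthCache maps a request key (for example, service ID plus credentials)
// to a cached authorization decision with a TTL.
type AuthCache struct {
	m   sync.Map
	ttl time.Duration
}

func NewAuthCache(ttl time.Duration) *AuthCache {
	return &AuthCache{ttl: ttl}
}

// Get returns the cached decision and whether the lookup was a hit.
func (c *AuthCache) Get(key string) (authorized, hit bool) {
	v, ok := c.m.Load(key)
	if !ok {
		return false, false
	}
	e := v.(entry)
	if time.Now().After(e.expiresAt) {
		c.m.Delete(key) // expired: evict and report a miss
		return false, false
	}
	return e.authorized, true
}

// Set stores a decision with the cache's TTL.
func (c *AuthCache) Set(key string, authorized bool) {
	c.m.Store(key, entry{authorized: authorized, expiresAt: time.Now().Add(c.ttl)})
}

func main() {
	cache := NewAuthCache(30 * time.Second)
	cache.Set("service-123:api-key-abc", true)
	if authorized, hit := cache.Get("service-123:api-key-abc"); hit {
		fmt.Println("cache hit, authorized:", authorized)
	}
}
```

A real implementation would also need to bound memory use and refresh entries in the background, which is exactly where a dedicated hash table or caching library earns its keep.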

Continue reading “Looking up a hash table library for caching in the 3scale Istio adapter”

Manage your APIs deployed with Istio service mesh

With the rise of microservices architectures, companies are looking for a way to connect, secure, control, and observe their microservices. Currently, a service mesh such as Istio is the best option to reach this goal.

  • Connect: Istio can intelligently control the flow of traffic between services, conduct a range of tests, and upgrade gradually with blue/green deployments.
  • Secure: Automatically secure your services through managed authentication, authorization, and encryption of communication between services.
  • Control: Apply policies and ensure that they are enforced and that resources are fairly distributed among consumers.
  • Observe: See what’s happening with rich automatic tracing, monitoring, and logging of all your services.

And, as explained in “Distributed microservices architecture: Istio, managed API gateways, and enterprise integration”, a service mesh does not remove the need for an API management solution. A service mesh manages services and the connections between them, whereas an API management solution manages APIs and their consumers. In this article, I’ll describe how to manage APIs using the Red Hat Integration adapter for Istio.

Continue reading “Manage your APIs deployed with Istio service mesh”

Guru Night at Red Hat Summit: Hands-on experience with serverless computing

Millions of developers worldwide want to learn more about serverless computing. If you’re one of the lucky thousands attending Red Hat Summit in Boston May 7-9, you can gain hands-on experience with the help of Burr Sutter and the Red Hat Developer team.

Guru Night is a BYOL (bring your own laptop) event taking place Wednesday, May 8 from 5:00 p.m. to 8:00 p.m. at the Boston Convention and Event Center in ML2 East-258AB. (Doubtless there will be a map to show you where or what ML2 East etc. is; we have no idea.) Head to the signup page and fill out your details now.

TL;DR: Beer and pizza will be served.

We felt compelled to point that out. But read on.

Continue reading “Guru Night at Red Hat Summit: Hands-on experience with serverless computing”

Distributed microservices architecture: Istio, managed API gateways, and enterprise integration

The rise of microservices architectures drastically changed the software development landscape. In the past few years, we have seen a shift from centralized monoliths to distributed computing that benefits from cloud infrastructure. With distributed deployments, the adoption of microservices, and system scaling to cloud levels, new problems emerged, as well as new components that tried to solve the problems.

By now, you most likely have heard that the service mesh or Istio is here to save the day. However, you might be wondering how it fits with your current enterprise integration investments and API management initiatives. That is what I discuss in this article.

Continue reading “Distributed microservices architecture: Istio, managed API gateways, and enterprise integration”

Solving the challenges of debugging microservices on a container platform

Microservices have become mainstream in the enterprise. This proliferation of microservices applications generates new problems and requires a new approach to managing them. A microservice is a small, independently deployable, and independently scalable software service that is designed to encapsulate a specific semantic function in the larger application. This article explores several approaches to deploying tools to debug microservices applications on a Kubernetes platform like Red Hat OpenShift, including OpenTracing, Squash, Telepresence, and creating a Squash Operator in Red Hat Ansible Automation.
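
As a taste of the OpenTracing part of that toolbox, here is a minimal, hypothetical Go sketch (the service and operation names are assumptions, not the article's examples). With no tracer registered it falls back to the library's no-op tracer; plugging in a real tracer, such as a Jaeger client, via opentracing.SetGlobalTracer makes the spans visible in your tracing backend:

```go
package main

import (
	"fmt"
	"net/http"

	opentracing "github.com/opentracing/opentracing-go"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Start a span for this request. With the default no-op tracer this costs
	// almost nothing; with a real tracer configured the span is reported.
	span := opentracing.GlobalTracer().StartSpan("handle-request")
	defer span.Finish()

	// Tags make the span searchable in the tracing UI.
	span.SetTag("http.method", r.Method)
	span.SetTag("http.path", r.URL.Path)

	fmt.Fprintln(w, "hello from an instrumented microservice")
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```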

Continue reading “Solving the challenges of debugging microservices on a container platform”

Observe what your Istio microservices mesh is doing with Kiali

Istio is a powerful tool for building a service mesh. If you don’t know about Istio yet, have a look at the Introduction to Istio series of articles or download the ebook Introducing Istio Service Mesh for Microservices.

The power of Istio comes at the cost of some complexity at configuration time and at runtime. To help with this, the Kiali project provides observability of the mesh and the services in it. Kiali visualizes the mesh with its services and workloads, indicates the health of the mesh, and shows hints about applied configuration options. You can then drill into individual services or settings to view details.

This post describes how to use Kiali to observe what the microservices in your Istio service mesh are doing, validate the Istio configuration, and see any issues.

Continue reading “Observe what your Istio microservices mesh is doing with Kiali”

Why Kubernetes is The New Application Server

Have you ever wondered why you are deploying your multi-platform applications using containers? Is it just a matter of “following the hype”? In this article, I’m going to ask some provocative questions to make my case for Why Kubernetes is the new application server.

You might have noticed that many popular languages are interpreted and use “runtimes” to execute your source code. In theory, most Node.js, Python, and Ruby code can easily be moved from one platform (Windows, Mac, Linux) to another. Java applications go even further: the compiler turns Java classes into bytecode, which can run anywhere there is a JVM (Java Virtual Machine).

The Java ecosystem provides standard formats for distributing all of the Java classes that make up an application. You can package these classes as a JAR (Java Archive), WAR (Web Archive), or EAR (Enterprise Archive) that contains the front end, back end, and embedded libraries. So I ask you: Why do you use containers to distribute your Java application? Isn’t it already supposed to be easily portable between environments?

Continue reading “Why Kubernetes is The New Application Server”

Building Container-Native Node.js Applications with Red Hat OpenShift Application Runtimes and Istio

For developers working in a Kubernetes-based application environment such as Red Hat OpenShift, there are a number of things that need to be considered to take full advantage of the significant benefits provided by these technologies, including:

  • How do I communicate with the orchestration layer to indicate the application is operating correctly and is available to receive traffic?
  • What happens if the application detects a system fault, and how does the application relay this to the orchestration layer?
  • How can I accurately trace traffic flow between my applications in order to identify potential bottlenecks?
  • What tools can I use to easily deploy my updated application as part of my standard toolchain?
  • What happens if I introduce a network fault between my services, and how do I test this scenario?

These questions are central to building container-native solutions. At Red Hat, we define container-native as applications that conform to the following key tenets:

  • DevOps automation
  • Single concern principle
  • Service discovery
  • High observability
  • Lifecycle conformance
  • Runtime confinement
  • Process disposability
  • Image immutability

This may seem like a lot of overhead on top of the core application logic. Red Hat OpenShift Application Runtimes (RHOAR) and Istio provide developers with tools to adhere to these principles with minimal overhead in terms of coding and implementation.

In this blog post, we’re specifically focusing on how RHOAR and Istio combine to provide tools for DevOps automation, lifecycle conformance, high observability, and runtime confinement.
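
For instance, the first question in the list above is typically answered with liveness and readiness probes. The following is a minimal sketch in Go (the article itself targets Node.js, and the endpoint paths here are only an assumed convention) of a service exposing the two probe endpoints an orchestrator like OpenShift can poll:

```go
package main

import (
	"net/http"
	"sync/atomic"
)

// ready flips to 1 once the service has finished its startup work
// (connections established, caches warmed, and so on).
var ready int32

func main() {
	// Liveness: the process is up and able to serve. The orchestrator restarts
	// the container if this check starts failing.
	http.HandleFunc("/health/live", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Readiness: the service is willing to accept traffic. The orchestrator
	// only routes requests to pods whose readiness probe succeeds.
	http.HandleFunc("/health/ready", func(w http.ResponseWriter, r *http.Request) {
		if atomic.LoadInt32(&ready) == 1 {
			w.WriteHeader(http.StatusOK)
			return
		}
		w.WriteHeader(http.StatusServiceUnavailable)
	})

	// Pretend startup work finishes immediately for this sketch.
	atomic.StoreInt32(&ready, 1)

	http.ListenAndServe(":8080", nil)
}
```

Kubernetes or OpenShift would then be pointed at these paths through the deployment's livenessProbe and readinessProbe settings, restarting containers whose liveness check fails and withholding traffic from pods that are not yet ready.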

Continue reading “Building Container-Native Node.js Applications with Red Hat OpenShift Application Runtimes and Istio”

Red Hat Summit: Containers, Microservices, and Serverless Computing

You’re in an IT department. How does the rest of the organization see you? As a valuable asset whose code and APIs make a difference in the marketplace, or as a necessary evil that should be trimmed as much as possible? Containers, microservices, and serverless computing can make you more responsive, flexible, and competitive, which in turn makes your organization more effective. And that puts you solidly in the asset column.

After sprinting through the streets of San Francisco from the stage of the opening keynote at Red Hat Summit 2018 (replay available here), Burr Sutter hosted a packed house in Moscone South to talk about these technologies. Containers are widely accepted (see the announcement from Red Hat and Microsoft for an example), microservices are increasingly popular as an approach to modernizing monolithic applications, and serverless computing is emerging as an important new programming model.

Continue reading “Red Hat Summit: Containers, Microservices, and Serverless Computing”

Getting Started with Istio and Jaeger on Your Laptop

[Cross posted from the OpenShift blog]

About a year ago, Red Hat announced its participation as a launch partner of the Istio project, a service mesh technology that creates an application-focused network that transparently protects applications from abnormalities in their environments. The main goals of Istio are to enhance overall application security and availability through capabilities such as intelligent routing, circuit breaking, mutual TLS, and rate limiting, among others. Ultimately, Istio is about helping organizations develop and deploy resilient, secure applications and services using advanced design and deployment patterns that are baked into the platform.

As part of our investment in making the technology easily consumable for Kubernetes and OpenShift users, Red Hat has created a ton of content:

  • learn.openshift.com: A web-based OpenShift and Kubernetes learning environment where users get to interact through the web browser with a real running instance of OpenShift and Istio service mesh with zero install time and no sign-up required.
  • Istio tutorial: Want to try the web-based scenario yourself from scratch? This Git repo contains instructions on how to set up an environment for yourself.
  • Introducing Istio Service Mesh for Microservices book by Christian Posta and Burr Sutter
  • Blog posts on the OpenShift and Red Hat Developer blogs

Continue reading “Getting Started with Istio and Jaeger on Your Laptop”
