Visualizing Istio service mesh with Kiali

Observe what your Istio mesh is doing with Kiali

Istio is a powerful tool for building a service mesh. If you don’t know about Istio yet, have a look at the Introduction to Istio series of articles or download the ebook Introducing Istio Service Mesh for Microservices.

The power of Istio comes at the cost of some complexity at configuration time and at runtime. To help with this, the Kiali project provides observability into the mesh and the services running in it. Kiali visualizes the mesh with its services and workloads, indicates the health of the mesh, and shows hints about the configuration options that have been applied. You can then drill down into individual services or settings to view details.

This post describes how to use Kiali to observe what the microservices in your Istio service mesh are doing, validate the Istio configuration, and see any issues.

Continue reading “Observe what your Istio mesh is doing with Kiali”

The rise of non-microservices architectures

This post is a short summary of my recent experiences with customers who, in the current post-microservices world, are implementing architectures that resemble microservices but have different characteristics.

The microservices architectural style has been around for close to five years now, and much has been said and written about it. Today, I see teams deciding not to follow certain principles of the “pure” microservices architecture strictly and to break some of the “rules.” Teams are now better informed about the pros and cons of microservices; they make context-driven decisions that respect team experience and organizational boundaries, and they accept that not every company is Netflix. Below are some examples I have seen in my recent microservices gigs.

Continue reading “The rise of non-microservices architectures”

Kubernetes is the new application operating environment (Part 1)

This is the first in a series of articles that consider the role of Kubernetes and application servers. Do application servers need to exist? Where does the current situation leave developers trying to choose the right path forward for their applications?

Why Kubernetes is the new application server

By now you’ve likely read “Why Kubernetes is The New Application Server,” and you might be wondering what that means for you. How does it impact Java EE or Jakarta EE and Eclipse MicroProfile? What about application servers and fat JARs? Is this the end of the application server as we’ve known it for nearly two decades?

In reality, it doesn’t change the worldview for most developers. It’s in line with what the majority of vendors have been doing around Docker and Kubernetes deployments over the last few years. In addition, there’s growing interest in service mesh infrastructures, such as Istio, and in how they can further assist with managing Kubernetes deployments.

Continue reading “Kubernetes is the new application operating environment (Part 1)”

Asynchronous communication between microservices using AMQP and Vert.x

Microservices are the go-to architecture in most new, modern software solutions. Each service is (mostly) designed to do one thing, and services must talk to each other to accomplish a business use case. All communication between microservices happens over network calls; this avoids tight coupling and provides better separation between services.

There are basically two styles of communication: synchronous and asynchronous. Applied properly, these two styles are the foundation for the request-reply and event-driven patterns. In the request-reply pattern, a client initiates a request and typically waits synchronously for the reply. However, the client can also decide not to wait and instead register a callback with the other party, which turns request-reply into an asynchronous interaction.

In this article, I showcase asynchronous request-reply by having two services communicate with each other over the Advanced Message Queuing Protocol (AMQP). AMQP is an open standard for passing business messages between applications or organizations. Although this article focuses on the request-reply pattern, the same code can be used for additional scenarios such as event sourcing. Communicating with an asynchronous model can also be very beneficial for implementing the aggregator pattern.

I will be using the Apache Qpid Dispatch Router (productized as Red Hat AMQ Interconnect) as the message router and the Vert.x AMQP bridge for communication between the two services.
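
To make that wiring concrete, here is a minimal sketch of one service sending a message and another consuming it through the Vert.x AMQP bridge. It assumes the vertx-amqp-bridge module is on the classpath and a router listening on localhost:5672; the orders.requests address and the payload are illustrative, not details from this article.

    import io.vertx.amqpbridge.AmqpBridge;
    import io.vertx.core.Vertx;
    import io.vertx.core.eventbus.MessageConsumer;
    import io.vertx.core.eventbus.MessageProducer;
    import io.vertx.core.json.JsonObject;

    public class AmqpBridgeSketch {
        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();
            AmqpBridge bridge = AmqpBridge.create(vertx);

            // Connect to the AMQP router; host and port are assumptions for a local setup.
            bridge.start("localhost", 5672, startResult -> {
                if (startResult.failed()) {
                    startResult.cause().printStackTrace();
                    return;
                }

                // Consuming service: handle requests arriving on the orders.requests address.
                MessageConsumer<JsonObject> consumer = bridge.createConsumer("orders.requests");
                consumer.handler(msg -> System.out.println("Received: " + msg.body().getValue("body")));

                // Requesting service: publish a message without blocking for an answer.
                MessageProducer<JsonObject> producer = bridge.createProducer("orders.requests");
                producer.send(new JsonObject().put("body", "order-created"));
            });
        }
    }

A full request-reply flow would additionally carry a reply address so the consuming service can send its answer back asynchronously, which is the style described above.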

Continue reading “Asynchronous communication between microservices using AMQP and Vert.x”

Eclipse MicroProfile and Red Hat Update: Thorntail and SmallRye

During the last three months, there have been some changes regarding Eclipse MicroProfile at Red Hat. If you haven’t been following the details, this post recaps what’s changed and introduces Thorntail and SmallRye.

Bye-bye WildFly Swarm! Hello Thorntail!

You may have missed this important news. Our MicroProfile implementation changed its name two months ago.

After a lot of feedback from the community, we decided to rename “WildFly Swarm” to Thorntail. While the former name was nice, we found that the term “Swarm” was overloaded in the IT industry and could be confusing. The same goes for the “WildFly” part: sharing a name with our Java EE application server led some users to think the project was a subproject of WildFly.

Continue reading “Eclipse MicroProfile and Red Hat Update: Thorntail and SmallRye”

Sabre chooses Red Hat OpenShift for cloud-native DevOps platform

As part of its strategy to re-imagine the business of travel, Sabre Corporation today announced that it will leverage Red Hat OpenShift Container Platform as the foundation for its Next Generation Platform initiative. OpenShift will be the basis of a modern architecture that includes microservices, development and operations (DevOps), and a multi-faceted cloud strategy to lead an industry evolution in the future of retailing, distribution, and fulfillment through innovative technology. OpenShift, built on containers and Kubernetes, is the industry’s leading enterprise Kubernetes platform for running existing and cloud-native applications in any cloud.

“The Next Generation Platform is the cornerstone of Sabre’s long-term technology strategy,” said Vish Saoji, Sabre CTO. “Red Hat has delivered the enterprise-hardened software environment we need to help drive our technology transformation, and this collaboration allows us to build upon that architecture and execute our plan.”

Continue reading “Sabre chooses Red Hat OpenShift for cloud-native DevOps platform”

Announcing Red Hat Developer Studio 12.0.0.GA and JBoss Tools 4.6.0.Final for Eclipse Photon

Attention desktop IDE users: Red Hat Developer Studio 12.0 and the community edition, JBoss Tools 4.6.0 for Eclipse Photon, are now available. You can download the bundled Developer Studio installer, which installs Eclipse 4.8 with all of the JBoss Tools already configured. Or, if you have an existing Eclipse 4.8 (Photon) installation, you can download the JBoss Tools package. This article highlights some of the new features in both JBoss Tools and Eclipse Photon, covering WildFly, Spring Boot, Camel, and Maven, along with many Java-related improvements, including full Java 10 support.

Developer Studio/JBoss Tools provides a desktop IDE with a broad set of tooling covering multiple programming models and frameworks. If you are doing container or cloud development, there is integrated functionality for working with Red Hat OpenShift, Kubernetes, Red Hat Container Development Kit, and Red Hat OpenShift Application Runtimes. For integration projects, there is tooling covering Camel and Red Hat Fuse that can be used in both local and cloud deployments.

Continue reading “Announcing Red Hat Developer Studio 12.0.0.GA and JBoss Tools 4.6.0.Final for Eclipse Photon”

Smart-Meter Data Processing Using Apache Kafka on OpenShift

There is a major push in the United Kingdom to replace aging mechanical electricity meters with connected smart meters. New meters allow consumers to more closely monitor their energy usage and associated cost, and they enable the suppliers to automate the billing process because the meters automatically report fine-grained energy use.

This post describes an architecture for processing a stream of meter readings using Strimzi, which offers support for running Apache Kafka in a container environment (Red Hat OpenShift). The data comes from a UK research project that collected readings from energy producers, distributors, and consumers from 2011 to 2014. The TC1a dataset used here contains data from 8,000 domestic customers at half-hour intervals in the following form:

Continue reading “Smart-Meter Data Processing Using Apache Kafka on OpenShift”
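
To give a feel for the producer side of such a pipeline, here is a minimal sketch that publishes a single reading with the standard Kafka Java client. The bootstrap address, topic name, and record fields are assumptions for illustration, not details from the article or the TC1a dataset.

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class MeterReadingProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Hypothetical bootstrap address; a Strimzi-managed cluster exposes its own bootstrap service.
            props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9092");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Key by customer ID so all readings from one meter land on the same partition.
                String key = "customer-0001";
                String value = "{\"customerId\":\"customer-0001\",\"timestamp\":\"2013-06-01T12:30:00Z\",\"kWh\":0.42}";
                producer.send(new ProducerRecord<>("meter-readings", key, value));
            }
        }
    }

Keying records by customer ID preserves per-meter ordering, which matters when downstream consumers aggregate the half-hourly readings.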

Contract-First API Design with Apicurio and Red Hat Fuse/Camel

This is part one of a two-article series that demonstrates how to implement contract-first API design using Apicurio and Red Hat Fuse. It covers how to create an OpenAPI-standard document as the contract between API providers and consumers using Apicurio Studio. It also shows how to quickly create mock tests using Red Hat Fuse, which is based on Camel.
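
As a rough illustration of such a mock, here is a minimal sketch using the Camel REST DSL. The /orders resource, the port, and the canned JSON response are illustrative assumptions, not the contract designed in the article.

    import org.apache.camel.builder.RouteBuilder;

    public class MockOrderApiRoute extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // Expose a plain HTTP endpoint; any Camel REST-capable component on the classpath can back it.
            restConfiguration()
                .component("undertow")
                .host("0.0.0.0")
                .port(8080);

            // Stand in for the GET operation defined in the OpenAPI contract with a canned response.
            rest("/orders")
                .get("/{id}")
                .to("direct:mockOrder");

            from("direct:mockOrder")
                .setHeader("Content-Type", constant("application/json"))
                .setBody(constant("{\"id\": 1, \"status\": \"CREATED\"}"));
        }
    }

A mock like this lets API consumers start developing against the agreed contract while the real implementation is still being built.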

There are two common approaches when it comes to creating APIs:

  • Code first (bottom-up)
  • Contract first (top-down)

Continue reading “Contract-First API Design with Apicurio and Red Hat Fuse/Camel”

Why Kubernetes is The New Application Server

Have you ever wondered why you are deploying your multi-platform applications using containers? Is it just a matter of “following the hype”? In this article, I’m going to ask some provocative questions to make my case for Why Kubernetes is the new application server.

You might have noticed that many popular languages are interpreted and rely on “runtimes” to execute your source code. In theory, most Node.js, Python, and Ruby code can easily be moved from one platform (Windows, Mac, Linux) to another. Java applications go even further: Java classes are compiled into bytecode, which can run anywhere a JVM (Java Virtual Machine) is available.

The Java ecosystem also provides standard formats for distributing all the Java classes that make up an application: you can package them as a JAR (Java Archive), a WAR (Web Archive), or an EAR (Enterprise Archive) that bundles the front end, back end, and libraries. So I ask you: why do you use containers to distribute your Java application? Isn’t it already supposed to be easily portable between environments?

Continue reading “Why Kubernetes is The New Application Server”
