
About When Not to Do Microservices

A quick interlude to my last blog post. As part of that post, on low-risk monolith-to-microservice architecture, I made this statement about microservices and not doing them:

“Microservices architecture is not appropriate all the time”.

I’ve had some interesting reactions, some of them along the lines of "how dare you". I also poked at that a bit on Twitter a month or so ago:

Don't do microservices

Continue reading “About When Not to Do Microservices”


Container Images for OpenShift – Part 4: Cloud readiness

This is a transcript of a session I gave at EMEA Red Hat Tech Exchange 2017, a gathering of all Red Hat solution architects and consultants across EMEA. It covers considerations and good practices for creating images that will run on OpenShift. This fourth and final part focuses on the specific aspects of cloud-ready applications and what they imply for the design of the container images.

Continue reading “Container Images for OpenShift – Part 4: Cloud readiness”


Using New Relic in Red Hat Mobile Node.js Applications

Introduction

New Relic is an application-monitoring platform that provides in-depth analytics and analysis for applications regardless of the type of environment in which they are deployed, or as New Relic themselves put it:

“Gain end-to-end visibility across your customer experience, application performance, and dynamic infrastructure with the New Relic Digital Intelligence Platform.” – New Relic

You might ask why there’s a use for New Relic’s monitoring capabilities when Red Hat Mobile Application Platform (RHMAP) and OpenShift Container Platform both offer insights into the CPU, disk, memory, and general resource utilization of your server-side applications. While these generic resource reports are valuable, they might not offer the detail required to debug a specific issue. Since New Relic is built as an analytics platform from the ground up, it is capable of providing unique insights into the specific runtime of your applications. For example, the JavaScript code deployed in Node.js applications is run using the V8 JavaScript engine, which has a life-cycle that can have a significant impact on the performance of your application depending on how you’ve written it. Utilizing New Relic’s Node.js module provides a real-time view of V8 engine performance and how it might be affecting your production application. By using this data, you can refine your application code to reduce memory usage, which in turn can free CPU resources due to less frequent garbage collections. Neat!

Continue reading “Using New Relic in Red Hat Mobile Node.js Applications”


Camel Clustered File Ingestion with JDBC and Spring

Reading a file is a common use case for Apache Camel. From using a file to kick off a larger route to simply needing to store the file content, the ability to read a file only once is important. This is easy when you have a single server with your route deployed, but what about when you deploy your route to multiple servers? Thankfully, Camel has the concept of an Idempotent Consumer.
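
To make that concrete, here is a minimal sketch (not the code from the full post) of a JDBC-backed Idempotent Consumer in a Spring-managed Camel route. It assumes the camel-sql component is on the classpath and that a DataSource pointing at a database shared by every node is available; the directory, processor name, and endpoint names are illustrative placeholders.

import javax.sql.DataSource;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.idempotent.jdbc.JdbcMessageIdRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

// Minimal sketch of a clustered file-ingestion route. Paths and names are
// illustrative placeholders, not the configuration from the full post.
@Component
public class FileIngestRoute extends RouteBuilder {

    @Autowired
    private DataSource dataSource; // database shared by every node in the cluster

    @Override
    public void configure() throws Exception {
        // Records processed file names in a shared table (CAMEL_MESSAGEPROCESSED
        // by default), so a file consumed on one node is skipped by the others.
        JdbcMessageIdRepository repository =
                new JdbcMessageIdRepository(dataSource, "fileIngest");

        from("file:/data/inbox?noop=true")
            // The file name is the idempotency key; only the first node to
            // claim it processes the exchange, the rest filter it out.
            .idempotentConsumer(header("CamelFileName"), repository)
            .log("Ingesting ${header.CamelFileName}")
            .to("direct:storeFileContent");

        // Placeholder for whatever the route actually does with the content.
        from("direct:storeFileContent")
            .log("Stored content of ${header.CamelFileName}");
    }
}

Because the processed file names are recorded in a table shared by all nodes, a file picked up on one server is not ingested again on another.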

Continue reading “Camel Clustered File Ingestion with JDBC and Spring”


On link modeling, network emulation and its impacts on applications

In this blog post, I’ll guide you through the most important characteristics that define a ‘link’ in packet-switched networks, explain how they can impact your application, give some examples of real-world parameters, and show how to use NetEm to emulate them.

In every packet-switched network, you will notice characteristics that are intrinsic to it and that vary depending on the communication channels being used. These characteristics include bandwidth, delay (including jitter), packet loss, packet corruption, and reordering.
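
As a rough preview of what that emulation looks like, here is a minimal sketch with made-up example values. It assumes a Linux host with the iproute2 tc utility, root privileges, and an interface named eth0; the netem command itself is normally typed straight into a shell, and the small Java wrapper below is only there to keep the example self-contained.

import java.util.List;

// Rough sketch with illustrative values only: 100ms delay with 20ms jitter,
// 1% loss, 0.1% corruption and 25% reordering applied to eth0. Needs root
// privileges and the iproute2 "tc" utility on a Linux host; the command is
// normally run straight from a shell rather than from Java.
public class NetemExample {

    public static void main(String[] args) throws Exception {
        List<String> tc = List.of(
                "tc", "qdisc", "add", "dev", "eth0", "root", "netem",
                "delay", "100ms", "20ms",   // mean delay plus jitter
                "loss", "1%",               // random packet loss
                "corrupt", "0.1%",          // packet corruption
                "reorder", "25%");          // reordering (requires a delay)

        int exitCode = new ProcessBuilder(tc).inheritIO().start().waitFor();
        System.exit(exitCode);
    }
}

Each netem parameter maps to one of the characteristics listed above; to remove the emulation again, delete the queueing discipline with tc qdisc del dev eth0 root.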

Continue reading “On link modeling, network emulation and its impacts on applications”


Organizing Microservices – Modern Integration

Microservices is probably one of the most popular buzzwords among my fellow developer friends, and I do like the concept of being flexible, agile, and simply having more choices. But as a person who has worked in the software integration space for years, I started to see some resemblance to the old ESB days.

Looking at the problem from ten thousand feet up: a decade ago, we had to come up with a better way of organizing the spaghetti connections between systems and stop duplicating effort on the same pieces of business logic. That is when service-oriented architecture (SOA) became popular: modularizing services, sharing them among other systems, and organizing how they communicate and route data. The ESB is one implementation of that, though not necessarily how it should be done.

Continue reading “Organizing Microservices – Modern Integration”


Getting started with Kompose

We have written about Kompose earlier here, when it was as young as 0.1.0. This blog post will showcase where Kompose stands now.

Kompose is a tool that converts a Docker Compose file into Kubernetes or OpenShift artifacts. It was originally started by Skippbox (now part of Bitnami) as an onboarding tool for Kubernetes users and went on to receive early contributions from both Google and Red Hat.

Kompose has graduated from the Kubernetes Incubator, we have reached the epic milestone of 1.0.0, and Kompose is now officially part of the Kubernetes Community Project.

Continue reading “Getting started with Kompose”
