Red Hat Developer Program

Istio Route Rules: Telling Service Requests Where To Go

OpenShift and Kubernetes do a great job of making sure that calls to your microservices are routed to the correct pods. After all, routing and load balancing are among Kubernetes’ raisons d’être. What if, however, you want to customize the routing? What if you want to run two versions of a service at the same time? How do Istio Route Rules handle this?

Continue reading “Istio Route Rules: Telling Service Requests Where To Go”


Structured application logs in OpenShift

Logs are like gold dust. Taken alone they may not be worth much, but put together and worked by a skillful goldsmith they may become very valuable. OpenShift comes with the EFK stack: Elasticsearch, Fluentd, and Kibana. Applications running on OpenShift have their logs automatically aggregated, providing valuable information about their state and health during testing and in production.

The only requirement is that the application sends its logs to the standard output. OpenShift does the rest. Simple enough!
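For illustration, here is a minimal Python sketch that writes one JSON object per log line to standard output, where Fluentd can collect it. The field names are my own choice, not prescribed by OpenShift:

    import json
    import logging
    import sys

    class JsonFormatter(logging.Formatter):
        """Render each log record as a single JSON line."""
        def format(self, record):
            return json.dumps({
                "time": self.formatTime(record),
                "level": record.levelname,
                "logger": record.name,
                "message": record.getMessage(),
            })

    # OpenShift collects whatever the container writes to stdout.
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(JsonFormatter())
    logging.basicConfig(level=logging.INFO, handlers=[handler])

    logging.getLogger("app").info("order created")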

In this blog post, I cover a few points that may help you turn your logs from raw material into a more valuable product.

Continue reading “Structured application logs in OpenShift”


Beta Testing in the Ever-Changing World of Automation

Beta testing is fundamentally the testing of a product by real users in a real environment. We use many names for testing with similar characteristics, such as User Acceptance Testing (UAT), Customer Acceptance Testing (CAT), Customer Validation, and Field Testing (more popular in Europe). Whichever name we use, the basic components are more or less the same: to discover and fix potential issues, it involves testing by real users of the front-end user interface (UI) as well as the user experience (UX). It always happens at the stage of the software development lifecycle (SDLC) where the idea has been transformed into a design, the development phases are complete, and unit and integration testing have already been done.

Continue reading “Beta Testing in the Ever-Changing World of Automation”


Create a scalable REST API with Falcon and RHSCL

APIs are critical to automation, integration, and cloud-native application development, and it’s vital that they can scale to meet the demands of your user base. In this article, we’ll create a database-backed REST API based on the Python Falcon framework using Red Hat Software Collections (RHSCL), test how it performs, and scale it out in response to a growing user base.
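As a flavor of what the article builds, here is a minimal Falcon resource sketch; the route, resource name, and response data are illustrative rather than taken from the article, and the database query is stubbed out:

    import falcon

    class NoteResource:
        def on_get(self, req, resp):
            # In the article's setup, this handler would query the
            # backing database; here the data is hard-coded.
            resp.media = {"notes": ["first note", "second note"]}

    # falcon.App() on Falcon 3.x; older releases use falcon.API().
    app = falcon.App()
    app.add_route("/notes", NoteResource)

    # Serve with any WSGI server, e.g.: gunicorn myapi:app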

Continue reading “Create a scalable REST API with Falcon and RHSCL”


Using the Camel-Undertow Component to Support HTTP/2 Connections

This article shows how to configure HTTP/2 protocol support for the camel-undertow component.

  • Camel’s Undertow component uses the embedded Undertow web container, version undertow-core:jar:1.4.21, which also supports HTTP/2 connections.
  • I used Camel version 2.21.0-SNAPSHOT from upstream.
  • The curl version used to test the application is 7.53.1, which supports the --http2 flag for sending HTTP/2 requests (a Python-based alternative is sketched after this list).
  • I also used nghttp to test the application from a Linux terminal. However, this article is not about HTTP/2 internals.
  • For HTTP/2 details, I found articles [1] and [2] helpful.
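As a Python-based alternative to curl (not used in the article), the httpx library can issue an HTTP/2 request and report the negotiated protocol. The endpoint below is a hypothetical local route; adjust host, port, and path to your setup:

    # Requires: pip install "httpx[http2]"
    import httpx

    # httpx negotiates HTTP/2 via ALPN, so the server must speak TLS;
    # verify=False is only for a local self-signed certificate.
    with httpx.Client(http2=True, verify=False) as client:
        response = client.get("https://localhost:8443/hello")
        # Prints "HTTP/2" when the server negotiates HTTP/2.
        print(response.http_version, response.status_code)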

Continue reading “Using the Camel-Undertow Component to Support HTTP/2 Connections”


Building Declarative Pipelines with OpenShift DSL Plugin

Jenkinsfiles became part of Jenkins only with version 2, but they have quickly become the de facto standard for building continuous delivery pipelines with Jenkins. A Jenkinsfile defines a pipeline as code using a Groovy DSL syntax and is checked into source version control, which allows you to track, review, audit, and manage the lifecycle of changes to your continuous delivery pipelines the same way you manage the source code of your application.

The Groovy DSL syntax, called the “scripted syntax,” is the more well-known and established syntax for building Jenkins pipelines, and it was the default when Jenkins 2 was released. Support for a newer declarative syntax was added in Jenkins 2.5 to offer a simplified way of controlling all aspects of the pipeline. Although the scripted and declarative syntaxes provide two different ways to define your pipeline, they both translate to the same execution blocks in Jenkins and achieve the same result.
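As an illustration of the declarative syntax, here is a minimal Jenkinsfile sketch; the stage name and shell command are placeholders, not taken from the article:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    // Placeholder build step; substitute your own command.
                    sh 'mvn -B package'
                }
            }
        }
    }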

Continue reading “Building Declarative Pipelines with OpenShift DSL Plugin”