Flexible Images or Using S2I for Image Configuration

Container images usually ship with pre-defined tools or services that offer only limited possibilities for further configuration. This led us to think about how to provide images that contain reasonable default settings but are, at the same time, easy to extend. Even better, this should be achievable both on a single Linux host and in an orchestrated OpenShift environment.

Source-to-Image (S2I) was introduced three years ago to let developers build containerized applications by simply providing source code as input. So why couldn't we use configuration files as the input instead? Of course we can!
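
To make the idea concrete, here is a minimal, hypothetical sketch of S2I with configuration files as the input; the builder image name, config directory, and assemble behavior are illustrative assumptions, not the exact images from the post:

```bash
# Build a configured image from a directory containing only config files.
# "my-httpd-s2i" is a hypothetical builder whose assemble script copies the
# provided files over the image's default configuration.
s2i build ./config-dir/ my-httpd-s2i my-httpd-configured

# The same flow on OpenShift, as a binary build fed from the local directory:
oc new-build my-httpd-s2i --name=my-httpd-configured --binary
oc start-build my-httpd-configured --from-dir=./config-dir/
```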

Continue reading “Flexible Images or Using S2I for Image Configuration”

Example of using Ansible to update Container Native Storage

Container Native Storage (CNS) is implemented in OpenShift as pods, created from a template that is built into OpenShift. After an automated install with the Advanced Installer, we want to make sure we have the latest template and the latest containers. While this is typically a multi-step manual process, an Ansible script makes it a lot simpler.
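
The post walks through the details; as a rough sketch of the shape such automation can take (the template path, project names, and host group below are illustrative assumptions, not taken from the post):

```yaml
# Hypothetical sketch: refresh the built-in CNS template and re-apply it so
# the glusterfs pods pull the latest images.
- hosts: masters[0]
  gather_facts: no
  tasks:
    - name: Replace the built-in CNS template with the latest version
      command: oc replace -n openshift -f /tmp/glusterfs-template.yml

    - name: Process and re-apply the template (shell is needed for the pipe)
      shell: oc process glusterfs -n openshift | oc apply -n glusterfs -f -
```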

Continue reading “Example of using Ansible to update Container Native Storage”

Accelerating the development of Node.js using OpenShift

In this blog entry, I want to introduce a "different" way to work with OpenShift. In the typical way to deploy a pod to OpenShift, we have a set of very useful objects available, such as build and image configurations. These take the pain away from us by hiding the details of image construction, but sometimes we just want to see some code running in the cloud. Or we want to see if our service/application is able to interact with nearby services, or we have some code that we don't want to push to a git repo just yet. To solve that problem, I will show the concept of InitContainers, and how, by being a little bit creative, we can achieve some cool stuff, like deploying our code inside a running container.
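
As a hypothetical sketch of the pattern (the image, URL, and file names are illustrative assumptions): an init container drops application code into a shared volume, and the main Node.js container simply runs whatever it finds there.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodejs-inline
spec:
  volumes:
    - name: app-source          # shared scratch space for the code
      emptyDir: {}
  initContainers:
    - name: fetch-code
      image: busybox
      # Pull the code from somewhere reachable before the app starts
      command: ["sh", "-c", "wget -O /app/server.js http://example.com/server.js"]
      volumeMounts:
        - name: app-source
          mountPath: /app
  containers:
    - name: nodejs
      image: node:8
      command: ["node", "/app/server.js"]
      volumeMounts:
        - name: app-source
          mountPath: /app
```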

Continue reading “Accelerating the development of Node.js using OpenShift”

Dynamically Creating Java Keystores in OpenShift

Introduction

With a simple annotation to a service, you can dynamically create certificates in OpenShift.

Certificates created this way are in PEM format (base64-encoded certificates) and cannot be directly consumed by Java applications, which need certificates to be stored in Java KeyStores.

In this post, we are going to show a simple approach to enable Java applications to benefit from certificates dynamically created by OpenShift.
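
As a minimal sketch of the two halves involved: the annotation below is OpenShift's standard service-serving-certificate mechanism (the secret it creates holds a PEM `tls.crt`/`tls.key` pair), while the service name, aliases, and passwords in the conversion step are illustrative assumptions:

```bash
# Ask OpenShift to generate a serving certificate for the service; the PEM
# cert and key are written into the named secret.
oc annotate service my-service \
    service.alpha.openshift.io/serving-cert-secret-name=my-service-certs

# Convert the PEM pair into a Java keystore the application can load:
openssl pkcs12 -export -in tls.crt -inkey tls.key \
    -out keystore.p12 -name server -passout pass:changeit
keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 \
    -destkeystore keystore.jks -srcstorepass changeit -deststorepass changeit
```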

Continue reading “Dynamically Creating Java Keystores in OpenShift”

Developing .NET Core 2.0 Web Applications on OpenShift

Today we’re going to create a .NET Core 2.0 Web Application using the JBoss Developer Studio and the aCute plugin (C# application development). We’ll deploy our application onto an OpenShift instance and continue to modify it while viewing the changes almost instantly. Although the initial setup will be quite involved, it will only need to be done once.

Continue reading “Developing .NET Core 2.0 Web Applications on OpenShift”

Configuring mKahaDB persistence storage for ActiveMQ

In this post, I want to address how to configure mKahaDB persistence storage on ActiveMQ for better management and reduced disk usage.

The default KahaDB persistence adapter works well when all the destinations (queues/topics) managed by the broker have similar performance characteristics. However, that is rarely the case in an enterprise solution where several third parties are involved.

There are multiple queues or topics, with different consumers or listeners attached to them, and some consumers may be slower than others. Slow consumers make the message store's disk usage grow rapidly, and because a single KahaDB instance stores all destinations, every destination can end up performing slowly.
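
The core of the fix is to split destinations across multiple KahaDB instances. A minimal broker-config sketch (the queue names and journal size are illustrative assumptions):

```xml
<persistenceAdapter>
  <mKahaDB directory="${activemq.data}/mkahadb">
    <filteredPersistenceAdapters>
      <!-- Dedicated journal for the slow third-party queues, so their
           backlog does not pin disk space for everyone else -->
      <filteredKahaDB queue="thirdparty.slow.>">
        <persistenceAdapter>
          <kahaDB journalMaxFileLength="32mb"/>
        </persistenceAdapter>
      </filteredKahaDB>
      <!-- Catch-all store for every other destination -->
      <filteredKahaDB>
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
```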

Continue reading “Configuring mKahaDB persistence storage for ActiveMQ”

HOW-TO setup 3scale OpenID Connect (OIDC) Integration with RH SSO

This step-by-step guide is a follow-up to the Red Hat 3scale API Management 2.1 release announcement. As many of you will know, the new version simplifies the integration between the APIcast gateway and Red Hat Single Sign-On through OpenID Connect (OIDC) for API authentication. As a result, you can now select OpenID Connect as your authentication mechanism alongside API Key, App Key pair, and OAuth. The on-premises version also adds a new component that synchronizes client creation on the Red Hat Single Sign-On domain.

Continue reading “HOW-TO setup 3scale OpenID Connect (OIDC) Integration with RH SSO”

Building Declarative Pipelines with OpenShift DSL Plugin

Jenkinsfiles have been part of Jenkins only since version 2, but they have quickly become the de facto standard for building continuous delivery pipelines with Jenkins. A Jenkinsfile lets you define pipelines as code using a Groovy DSL syntax and check them into source control, which allows you to track, review, audit, and manage the lifecycle of changes to your continuous delivery pipelines the same way you manage the source code of your application.

The Groovy DSL syntax, known as the "scripted syntax," is the more well-known and established of the two and was the default when Jenkins 2 was released. Support for a newer declarative syntax was added in Jenkins 2.5 to offer a simplified way of controlling all aspects of the pipeline. Although the scripted and declarative syntaxes provide two different ways to define your pipeline, they both translate to the same execution blocks in Jenkins and achieve the same result.
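
To show how the two fit together, here is a minimal declarative Jenkinsfile sketch that drops into a script block for the OpenShift client DSL; it assumes the OpenShift Client plugin is installed, and the "myapp" build config is an illustrative name:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    // The OpenShift DSL is scripted, so it lives in a
                    // script block inside the declarative pipeline.
                    openshift.withCluster() {
                        openshift.withProject() {
                            // Start the build and follow its logs
                            def build = openshift.startBuild('myapp')
                            build.logs('-f')
                        }
                    }
                }
            }
        }
    }
}
```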

Continue reading “Building Declarative Pipelines with OpenShift DSL Plugin”

Monitoring RHGS

OK, so you watched:

https://www.redhat.com/en/about/videos/architecting-and-performance-tuning-efficient-gluster-storage-pools

You put in the time and architected an efficient and performant GlusterFS deployment. Your users are reading and writing, applications are humming along, and Gluster is keeping your data safe.

Now what?

Well, congratulations, you just completed the sprint! Now it's time for the marathon.

The often-forgotten component of performance tuning is monitoring. You put in all that work up front to get your cluster performing and your users happy; now how do you ensure that this continues, and perhaps even improves? You do it through continued monitoring and profiling of your storage cluster and clients, and a deeper look at your workload. In this blog we will look at the different metrics you can monitor in your storage cluster, identify which of them are important, how often to monitor them, and different ways to accomplish this.
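
As a concrete starting point, GlusterFS ships with built-in profiling and status commands; a minimal sketch (the volume name is an assumption):

```bash
gluster volume profile myvol start     # begin collecting per-brick I/O stats
gluster volume profile myvol info      # dump latency and FOP statistics
gluster volume top myvol read          # show the most-read files
gluster volume status myvol detail     # brick health, disk usage, inodes
```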

Continue reading “Monitoring RHGS”
