Dynamically Creating Java Keystores in OpenShift

Introduction

With a simple annotation on a service, you can dynamically create certificates in OpenShift.

Certificates created this way are in PEM format (base64-encoded certificates) and cannot be consumed directly by Java applications, which expect certificates to be stored in Java KeyStores.

In this post, we are going to show a simple approach to enable Java applications to benefit from certificates dynamically created by OpenShift.
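
To give a flavor of the approach (the service, secret, and mount paths below are illustrative, and the annotation shown is the one OpenShift used at the time of writing): the serving-certificate annotation asks OpenShift to generate a key pair into a secret, and standard openssl/keytool commands can then repackage that PEM material as a Java KeyStore.

    # Ask OpenShift to generate a serving certificate and key for the service;
    # the PEM pair (tls.crt/tls.key) lands in the named secret.
    oc annotate service my-service \
      service.alpha.openshift.io/serving-cert-secret-name=my-service-tls

    # In the pod (for example, from an init container with the secret mounted
    # at /etc/tls), repackage the PEM pair as a PKCS12 bundle and import it
    # into a Java KeyStore.
    openssl pkcs12 -export -in /etc/tls/tls.crt -inkey /etc/tls/tls.key \
      -out /tmp/keystore.p12 -name server -passout pass:changeit
    keytool -importkeystore -srckeystore /tmp/keystore.p12 -srcstoretype PKCS12 \
      -srcstorepass changeit -destkeystore /tmp/keystore.jks -deststorepass changeit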

Continue reading “Dynamically Creating Java Keystores in OpenShift”


Developing .NET Core 2.0 Web Applications on OpenShift

Today we’re going to create a .NET Core 2.0 web application using JBoss Developer Studio and the aCute plugin (C# application development). We’ll deploy our application onto an OpenShift instance and continue to modify it while viewing the changes almost instantly. Although the initial setup is quite involved, it only needs to be done once.
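
As a taste of the deployment step, here is a sketch using the .NET Core s2i builder image; the sample repository, branch, and names are placeholders for your own project.

    # Create the application from source with the .NET Core 2.0 s2i builder image.
    oc new-app dotnet:2.0~https://github.com/redhat-developer/s2i-dotnetcore-ex#dotnetcore-2.0 \
      --context-dir=app --name=my-web-app

    # Expose the service so the application is reachable from a browser.
    oc expose svc/my-web-app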

Continue reading “Developing .NET Core 2.0 Web Applications on OpenShift”


Configuring mKahaDB persistence storage for ActiveMQ

In this post, I wanted to address how to configure mKahaDB persistence storage on ActiveMQ for better management and reduced disk usage.

The default KahaDB persistence adapter works well when all the destinations (queues/topics) managed by the broker have similar performance characteristics. However, that is rarely the case in an enterprise solution where several third parties are involved.

There are multiple queues or topics, with different consumers or listeners on each. Some consumers may be slower than others, which makes the message store’s disk usage grow rapidly. And because a single KahaDB instance holds every destination’s messages, all destinations can end up performing slowly.
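
A minimal sketch of what such a configuration might look like in activemq.xml (the destination names are illustrative): slow destinations get their own journal, and a catch-all instance handles everything else.

    <persistenceAdapter>
      <mKahaDB directory="${activemq.data}/kahadb">
        <filteredPersistenceAdapters>
          <!-- Slow destinations get a dedicated KahaDB journal so their
               backlog does not pin the journal files of everyone else. -->
          <filteredKahaDB queue="slow.consumers.>">
            <persistenceAdapter>
              <kahaDB journalMaxFileLength="32mb"/>
            </persistenceAdapter>
          </filteredKahaDB>
          <!-- Catch-all instance for every other destination. -->
          <filteredKahaDB>
            <persistenceAdapter>
              <kahaDB/>
            </persistenceAdapter>
          </filteredKahaDB>
        </filteredPersistenceAdapters>
      </mKahaDB>
    </persistenceAdapter>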

Continue reading “Configuring mKahaDB persistence storage for ActiveMQ”


HOW-TO setup 3scale OpenID Connect (OIDC) Integration with RH SSO

This step-by-step guide is a follow-up to the announcement of the new Red Hat 3scale API Management 2.1 version. As many of you will know, this new version simplifies the integration between the APIcast gateway and Red Hat Single Sign-On through OpenID Connect (OIDC) for API authentication. As a result, you can now select OpenID Connect as your authentication mechanism alongside API Key, App ID and App Key pair, and OAuth. Also, the on-premise version adds a new component that synchronizes client creation on the Red Hat Single Sign-On domain.
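
As a rough sketch of the resulting runtime flow (hostnames, realm, and client credentials below are placeholders): the client obtains a token from Red Hat Single Sign-On and presents it to the APIcast gateway as a Bearer credential.

    # Obtain an access token from the RH SSO (Keycloak) token endpoint.
    TOKEN=$(curl -s \
      -d 'grant_type=client_credentials' \
      -d 'client_id=my-client' \
      -d 'client_secret=my-secret' \
      https://sso.example.com/auth/realms/my-realm/protocol/openid-connect/token \
      | jq -r '.access_token')   # assumes jq is installed

    # Call the API through the APIcast gateway with the token.
    curl -H "Authorization: Bearer $TOKEN" https://api.example.com/resource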

Continue reading “HOW-TO setup 3scale OpenID Connect (OIDC) Integration with RH SSO”


Building Declarative Pipelines with OpenShift DSL Plugin

Jenkinsfiles have only been part of Jenkins since version 2, but they have quickly become the de facto standard for building continuous delivery pipelines with Jenkins. A Jenkinsfile lets you define pipelines as code using a Groovy DSL syntax and check them into source control, which allows you to track, review, audit, and manage the lifecycle of changes to your continuous delivery pipelines the same way you manage the source code of your application.

The Groovy DSL syntax, known as the “scripted syntax,” is the more established syntax for building Jenkins pipelines and was the default when Jenkins 2 was released. Since Jenkins 2.5, however, a newer declarative syntax is also supported, offering a simplified way to control all aspects of the pipeline. Although the scripted and declarative syntaxes provide two different ways to define your pipeline, they both translate to the same execution blocks in Jenkins and achieve the same result.
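
For a flavor of what this looks like in practice, here is a minimal declarative Jenkinsfile sketch using the plugin’s openshift global variable; the project and BuildConfig names are made up.

    pipeline {
      agent any
      stages {
        stage('Build') {
          steps {
            script {
              // Scripted DSL calls are wrapped in a script block
              // inside the declarative pipeline.
              openshift.withCluster() {
                openshift.withProject('my-project') {
                  // Trigger the (hypothetical) BuildConfig and wait for completion.
                  openshift.selector('bc', 'my-app').startBuild('--wait')
                }
              }
            }
          }
        }
      }
    }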

Continue reading “Building Declarative Pipelines with OpenShift DSL Plugin”


Monitoring RHGS

OK, so you watched:

https://www.redhat.com/en/about/videos/architecting-and-performance-tuning-efficient-gluster-storage-pools

You put in the time and architected an efficient and performant GlusterFS deployment. Your users are reading and writing, applications are humming along, and Gluster is keeping your data safe.

Now what?

Well, congratulations, you just completed the sprint! Now it’s time for the marathon.

The often-forgotten component of performance tuning is monitoring. You put in all that work up front to get your cluster performing and your users happy; now how do you ensure that this continues, and possibly improves? The answer is continued monitoring and profiling of your storage cluster and clients, plus a deeper look at your workload. In this blog we will look at the different metrics you can monitor in your storage cluster, identify which of these are important to monitor, decide how often to monitor them, and explore different ways to accomplish this.
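
As a preview, GlusterFS ships with built-in profiling that can be enabled per volume; the volume name below is a placeholder.

    # Enable per-brick I/O profiling on the volume (modest overhead).
    gluster volume profile myvol start

    # Dump cumulative and interval latency/throughput statistics per brick and FOP.
    gluster volume profile myvol info

    # List the ten files receiving the most read operations.
    gluster volume top myvol read list-cnt 10

    # Check brick health, disk usage, and inode usage.
    gluster volume status myvol detail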

Continue reading “Monitoring RHGS”
