Big Data

Securely connect Quarkus and Red Hat Data Grid on Red Hat OpenShift

The release of Red Hat Data Grid 8.1 offers new features for securing applications deployed on Red Hat OpenShift. Naturally, I wanted to check them out for Quarkus. Using the Quarkus Data Grid extension made that easy to do.

Data Grid is an in-memory, distributed, NoSQL datastore solution based on Infinispan. Because it manages your data, Data Grid should be as secure as possible. For this reason, it uses a default property realm that requires HTTPS and automatically enforces user authentication on remote endpoints. As an additional layer of security on OpenShift, Data Grid presents certificates signed by the OpenShift service signer. In practice, this means that Data Grid is secure out of the box: It requires encrypted connections and authentication from the first request. Data Grid generates a default set of credentials (which, of course, you can override), and unauthenticated access is denied.
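
To give a sense of what this looks like from the client side, here is a rough sketch of the application.properties a Quarkus application might use to connect to such a cluster. The service name, credentials, and trust store path are illustrative placeholders, not values from the article:

    # Hypothetical settings for the Quarkus Infinispan client
    # (server address, credentials, and trust store are placeholders)
    quarkus.infinispan-client.server-list=example-infinispan.datagrid.svc.cluster.local:11222
    quarkus.infinispan-client.auth-username=developer
    quarkus.infinispan-client.auth-password=CHANGE_ME
    # Trust the certificate signed by the OpenShift service signer
    quarkus.infinispan-client.trust-store=/mnt/secrets/truststore.p12
    quarkus.infinispan-client.trust-store-password=CHANGE_ME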

In this article, I show you how to configure a Quarkus application with Data Grid and deploy it on OpenShift.

Continue reading “Securely connect Quarkus and Red Hat Data Grid on Red Hat OpenShift”

AI software stack inspection with Thoth and TensorFlow

Project Thoth develops open source tools that enhance the day-to-day life of developers and data scientists. Thoth uses machine-generated knowledge and artificial intelligence (AI), specifically reinforcement learning (RL), to boost the performance, security, and quality of your applications. This machine-learning approach is implemented in Thoth adviser, which Thoth integrations use to recommend a software stack based on user inputs.
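
As a rough illustration (not taken from the article), interaction with Thoth typically goes through Thamos, its command-line client. Assuming a Python project with a requirements file, a session might look like this:

    pip install thamos     # install the Thoth command-line client
    cd my-python-project   # hypothetical project directory
    thamos config          # generate a .thoth.yaml describing the runtime environment
    thamos advise          # ask Thoth's adviser for a recommended, pinned software stack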

Continue reading “AI software stack inspection with Thoth and TensorFlow”

Kubeflow 1.0 monitoring and enhanced JupyterHub builds in Open Data Hub 0.8

The new Open Data Hub (ODH) 0.8 release includes many new features, continuous integration (CI) additions, and documentation updates. For this release, we focused on enhancing JupyterHub image builds, enabling more mixing of Open Data Hub and Kubeflow components, and designing our comprehensive end-to-end continuous integration and continuous delivery (CI/CD) process. In this article, we introduce the highlights of this newest release.

Note: Open Data Hub is an open source project and a community Operator for building an AI-as-a-Service (AIaaS) platform on Red Hat OpenShift.

Continue reading “Kubeflow 1.0 monitoring and enhanced JupyterHub builds in Open Data Hub 0.8”

From notebooks to pipelines: Using Open Data Hub and Kubeflow on OpenShift

Data scientists often use notebooks to explore data and to create and experiment with models. At the end of this exploratory phase comes the product-delivery phase: getting the final model into production. Serving a model in production is not a one-time step, however. It is a continuous cycle of training, development, and data monitoring that is best captured or automated using pipelines. This brings us to a dilemma: How do you move code from notebooks to containers orchestrated in a pipeline, and schedule the pipeline to run in response to specific triggers like time of day, new batch data, and monitoring metrics?
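
To make the dilemma concrete, here is a minimal, hypothetical sketch of notebook code repackaged as a Kubeflow pipeline with the kfp SDK (v1). The container images and scripts are placeholders, not artifacts from the article:

    import kfp
    from kfp import dsl

    @dsl.pipeline(name="train-model", description="Notebook code, containerized and orchestrated")
    def train_pipeline():
        # Each step runs containerized code that previously lived in notebook cells.
        preprocess = dsl.ContainerOp(
            name="preprocess",
            image="quay.io/example/preprocess:latest",  # placeholder image
            command=["python", "preprocess.py"],
        )
        train = dsl.ContainerOp(
            name="train",
            image="quay.io/example/train:latest",       # placeholder image
            command=["python", "train.py"],
        )
        train.after(preprocess)  # enforce step ordering

    if __name__ == "__main__":
        # Compile to an archive that Kubeflow Pipelines can run on a schedule
        # or in response to external triggers such as new batch data.
        kfp.compiler.Compiler().compile(train_pipeline, "train_pipeline.yaml")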

Continue reading “From notebooks to pipelines: Using Open Data Hub and Kubeflow on OpenShift”

Open Data Hub and Kubeflow installation customization

The main goal of Kubernetes is to reach the desired state: to deploy our pods, set up the network, and provide storage. This paradigm extends to Operators, which use custom resources to define the state. When an Operator picks up its custom resource, it continually works to reach the state the resource defines. That means that if we modify a resource managed by the Operator, the Operator quickly reverts the change to match the desired state.
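
As a quick, hypothetical demonstration of this reconciliation loop (the resource name and namespace are illustrative), you can delete an Operator-managed resource and watch it come back:

    oc delete deployment jupyterhub -n odh       # remove a resource the Operator manages
    oc get deployment jupyterhub -n odh --watch  # it reappears once the Operator reconciles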

Continue reading “Open Data Hub and Kubeflow installation customization”

Develop and test a Quarkus client on Red Hat CodeReady Containers with Red Hat Data Grid 8.0

This article is about my experience installing Red Hat Data Grid (RHDG) on Red Hat CodeReady Containers (CRC) so that I could set up a local environment to develop and test a Quarkus Infinispan client. I started by installing CodeReady Containers and then installed Red Hat Data Grid. I am also on a learning path for Quarkus, so my last step was to integrate the Quarkus Infinispan client into my new development environment.

Initially, I tried connecting the Quarkus client to my locally running instance of Data Grid. Later, I decided I wanted an environment where I could test and debug Data Grid on Red Hat OpenShift 4. I tried installing Data Grid on OpenShift 4 in a shared environment, but maintaining that environment was challenging. Through trial and error, I found that it was better to install Red Hat Data Grid on CodeReady Containers and use that as my local development and testing environment.

In this quick tutorial, I guide you through setting up a local environment to develop and test a Quarkus client—in this case, Quarkus Infinispan. The process consists of three steps (the commands for getting started are sketched after the list):

  1. Install and run CodeReady Containers.
  2. Install Data Grid on CodeReady Containers.
  3. Integrate the Quarkus Infinispan client into the new development environment.
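
For orientation, the first step (and the login needed for the second) might look like this from a terminal; the Data Grid Operator itself is typically installed through OperatorHub in the OpenShift console, so these commands are an illustrative sketch rather than a complete recipe:

    crc setup            # prepare the host to run CodeReady Containers
    crc start            # start the local OpenShift 4 cluster
    eval $(crc oc-env)   # put the bundled oc client on the PATH
    oc login -u kubeadmin https://api.crc.testing:6443   # password is printed by 'crc start'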

Continue reading “Develop and test a Quarkus client on Red Hat CodeReady Containers with Red Hat Data Grid 8.0”

Open Data Hub 0.6.1: Bug fix release to smooth out redesign regressions

It has been just a few short weeks since we released Open Data Hub (ODH) 0.6.0, which brought many changes to the underlying architecture and some new features. We have since found a few issues in the new version with the Kubeflow Operator, along with some regressions that came in with the new JupyterHub updates. To make sure your experience with ODH 0.6 does not suffer because we wanted to release early, we offer a new (mostly) bugfix release: Open Data Hub 0.6.1.

Continue reading “Open Data Hub 0.6.1: Bug fix release to smooth out redesign regressions”

Open Data Hub 0.6 brings component updates and Kubeflow architecture

Open Data Hub (ODH) is a blueprint for building an AI-as-a-service platform on Red Hat’s Kubernetes-based OpenShift 4.x. Version 0.6 of Open Data Hub comes with significant changes to the overall architecture as well as component updates and additions. In this article, we explore these changes.

From Ansible Operator to Kustomize

If you follow the Open Data Hub project closely, you might be aware that we have been working on a major design change for a few weeks now. Since we started working more closely with the Kubeflow community to get Kubeflow running on OpenShift, we decided to leverage Kubeflow as the Open Data Hub upstream and adopt its deployment tools—namely, KfDef manifests and Kustomize—for deployment manifest customization.
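
For a flavor of what this looks like in practice, here is a trimmed, illustrative KfDef manifest of the kind the Operator consumes; the component name and repo URI are examples, not a definitive configuration:

    apiVersion: kfdef.apps.kubeflow.org/v1
    kind: KfDef
    metadata:
      name: opendatahub
      namespace: odh
    spec:
      applications:
        - name: odh-common            # each entry selects a component to deploy
          kustomizeConfig:
            repoRef:
              name: manifests
              path: odh-common
      repos:
        - name: manifests
          uri: https://github.com/opendatahub-io/odh-manifests/tarball/master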

Continue reading “Open Data Hub 0.6 brings component updates and Kubeflow architecture”
