Machine Learning

Knowledge meets machine learning for smarter decisions, Part 1

Drools is a popular open source project known for its powerful rules engine. Few users realize that it can also be a gateway to the amazing possibilities of artificial intelligence. This two-part article introduces you to using Red Hat Decision Manager and its Drools-based rules engine to combine machine learning predictions with deterministic reasoning. In Part 1, we’ll prepare our machine learning logic. In Part 2, you’ll learn how to use the machine learning model from a knowledge service.
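
As a rough, hypothetical sketch of the idea in Python (the article itself works with Decision Manager and Drools rather than this code; the features and thresholds below are made up), a trained model supplies a prediction and a deterministic rule layer makes the final decision:

```python
# Conceptual sketch only: an ML model produces a prediction, and deterministic
# business rules decide what to do with it. Features, thresholds, and the loan
# scenario are hypothetical, not the article's actual Drools/DMN example.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [income_k, existing_debt_k] -> repaid (1) or defaulted (0)
X = np.array([[60, 5], [30, 20], [80, 2], [25, 30], [50, 10], [40, 25]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

def decide(applicant, requested_amount_k):
    """Combine the ML prediction with deterministic rules, as a rules engine would."""
    p_repay = model.predict_proba([applicant])[0][1]

    if requested_amount_k > 100:        # hard business limit, no ML involved
        return "refer to manual review"
    if p_repay >= 0.7:
        return "approve"
    if p_repay >= 0.4:
        return "refer to manual review"
    return "reject"

print(decide([55, 8], requested_amount_k=20))
```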

Continue reading Knowledge meets machine learning for smarter decisions, Part 1

Use Kebechet machine learning to perform source code operations

One of the first tools we developed to help us with Project Thoth was Kebechet, which we named for the goddess of freshness and purification. As we separated our software into more and more repositories (each of our Python modules is in its own repository on GitHub), we needed help with releasing new versions and keeping all dependent modules up-to-date. In a team of two and with more than 35 repositories, our process was a major time-burner.

Continue reading Use Kebechet machine learning to perform source code operations

AI software stack inspection with Thoth and TensorFlow

Project Thoth develops open source tools that enhance the day-to-day life of developers and data scientists. Thoth uses machine-generated knowledge and artificial intelligence (AI), specifically reinforcement learning (RL), to boost the performance, security, and quality of your applications. This machine-learning approach is implemented in Thoth adviser and is used by Thoth integrations to recommend a software stack based on user inputs.
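
As a rough illustration of the reinforcement-learning idea (this is not Thoth adviser's actual code; the candidate stacks and scoring function below are invented), an epsilon-greedy loop that "inspects" candidate stacks and learns which one scores best could look like this:

```python
# Toy epsilon-greedy sketch: try candidate software stacks, observe a score
# (standing in for a real inspection run), and learn which candidate to prefer.
# Candidates and the scoring function are hypothetical.
import random

candidates = ["tensorflow==2.3.0", "tensorflow==2.2.1", "intel-tensorflow==2.2.0"]
value = {c: 0.0 for c in candidates}   # running average score per candidate
count = {c: 0 for c in candidates}

def inspect(candidate):
    """Stand-in for a real inspection (build and runtime checks); returns a noisy score."""
    base = {"tensorflow==2.3.0": 0.8, "tensorflow==2.2.1": 0.6,
            "intel-tensorflow==2.2.0": 0.9}[candidate]
    return base + random.uniform(-0.1, 0.1)

epsilon = 0.2
for step in range(200):
    if random.random() < epsilon:                          # explore
        choice = random.choice(candidates)
    else:                                                  # exploit best estimate so far
        choice = max(candidates, key=lambda c: value[c])
    score = inspect(choice)
    count[choice] += 1
    value[choice] += (score - value[choice]) / count[choice]   # incremental mean

print("recommended:", max(candidates, key=lambda c: value[c]))
```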

Continue reading AI software stack inspection with Thoth and TensorFlow

Kubeflow 1.0 monitoring and enhanced JupyterHub builds in Open Data Hub 0.8

The new Open Data Hub (ODH) 0.8 release includes many new features, continuous integration (CI) additions, and documentation updates. For this release, we focused on enhancing JupyterHub image builds, enabling more mixing of Open Data Hub and Kubeflow components, and designing our comprehensive end-to-end continuous integration and continuous delivery (CI/CD) process. In this article, we introduce the highlights of this newest release.

Note: Open Data Hub is an open source project and a community Operator for building an AI-as-a-Service (AIaaS) platform on Red Hat OpenShift.

Continue reading Kubeflow 1.0 monitoring and enhanced JupyterHub builds in Open Data Hub 0.8

From notebooks to pipelines: Using Open Data Hub and Kubeflow on OpenShift

Data scientists often use notebooks to explore data and create and experiment with models. At the end of this exploratory phase is the product-delivery phase, which is basically getting the final model to production. Serving a model in production is not a one-step final process, however. It is a continuous phase of training, development, and data monitoring that is best captured or automated using pipelines. This brings us to a dilemma: How do you move code from notebooks to containers orchestrated in a pipeline, and schedule the pipeline to run after specific triggers like time of day, new batch data, and monitoring metrics?
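
One way to approach this is with the Kubeflow Pipelines SDK. The sketch below uses a kfp 1.x-style API to wrap notebook code in a containerized component, compile it into a pipeline, and schedule it on a cron trigger; the endpoint, experiment name, and schedule are placeholders, and exact client calls can differ between kfp versions.

```python
# Minimal sketch with the Kubeflow Pipelines SDK (kfp 1.x-style API).
# Notebook logic is wrapped in a function, turned into a component, compiled
# into a pipeline, and scheduled to re-run nightly. Endpoint, experiment name,
# and cron schedule are placeholders.
import kfp
from kfp import dsl
from kfp.components import create_component_from_func

def train_model(data_path: str) -> str:
    """Code that previously lived in a notebook cell."""
    # ... load data, train, and save the model here ...
    return "model trained from " + data_path

train_op = create_component_from_func(train_model, base_image="python:3.8")

@dsl.pipeline(name="notebook-to-pipeline", description="Toy training pipeline")
def training_pipeline(data_path: str = "/data/latest"):
    train_op(data_path)

if __name__ == "__main__":
    kfp.compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")

    client = kfp.Client(host="http://ml-pipeline-ui.example.com")  # placeholder endpoint
    experiment = client.create_experiment("notebook-to-pipeline")
    # Time-based trigger: re-run the pipeline every night at 02:00.
    client.create_recurring_run(
        experiment_id=experiment.id,
        job_name="nightly-training",
        cron_expression="0 0 2 * * *",
        pipeline_package_path="training_pipeline.yaml",
    )
```

A cron schedule covers the time-of-day case; triggers based on new batch data or monitoring metrics would typically need an external watcher that starts a run through the same client when its condition fires.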

Continue reading From notebooks to pipelines: Using Open Data Hub and Kubeflow on OpenShift

Developing at the edge: Best practices for edge computing

Edge computing continues to gain momentum as ever more companies increase their investments in edge, even if they’re only dipping their toes in with small-scale pilot deployments. Emerging use cases like the Internet of Things (IoT), augmented and virtual reality (AR/VR), robotics, and telecommunications network functions are often cited as key drivers for moving computing to the edge. Traditional enterprises are also looking at edge computing to better support their remote offices, retail locations, manufacturing plants, and more. At the network edge, service providers can deploy an entirely new class of services that take advantage of their proximity to customers.

Continue reading Developing at the edge: Best practices for edge computing

A development roadmap for Open Data Hub

Open Data Hub (ODH) is a blueprint for building an AI-as-a-Service (AIaaS) platform on Red Hat’s Kubernetes-based OpenShift 4.x. The Open Data Hub team recently released Open Data Hub 0.6.0, followed by a smaller update, Open Data Hub 0.6.1.

We recently got together and discussed our plans and timeline for the next two releases. Our plans are based on the roadmap slide deck that we put together and presented during the Open Data Hub community meeting on April 6.

In this article, we present our roadmap for the next several Open Data Hub releases. We would like to emphasize that the target dates are optimistic, describing what we would like to achieve. With the current state of the world and vacation time coming up, these dates might change.

Continue reading A development roadmap for Open Data Hub

Open Data Hub 0.6.1: Bug fix release to smooth out redesign regressions

It has been just a few short weeks since we released Open Data Hub (ODH) 0.6.0, which brought many changes to the underlying architecture and some new features. We have since found a few issues in the new version with the Kubeflow Operator, as well as a few regressions introduced by the new JupyterHub updates. To make sure your experience with ODH 0.6 does not suffer because we chose to release early, we are offering a new, mostly bug-fix release: Open Data Hub 0.6.1.

Continue reading Open Data Hub 0.6.1: Bug fix release to smooth out redesign regressions
