Data scientists often use notebooks to explore data and to create and experiment with models. At the end of this exploratory phase comes the product-delivery phase, which is essentially getting the final model into production. Serving a model in production is not a one-step final process, however: It is a continuous cycle of training, development, and data monitoring that is best captured or automated using pipelines. This brings us to a dilemma: How do you move code from notebooks to containers orchestrated in a pipeline, and schedule the pipeline to run after specific triggers like time of day, the arrival of a new batch of data, or a change in monitoring metrics?
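To make those triggers concrete, here is a minimal sketch of trigger logic in plain Python. The function name, thresholds, and trigger conditions are all illustrative assumptions, not part of any Kubeflow or Open Data Hub API; in practice a scheduler or pipeline controller would evaluate conditions like these.

```python
from datetime import datetime, time

def should_trigger(now: datetime,
                   new_batch_rows: int,
                   model_accuracy: float,
                   run_at: time = time(hour=2),
                   min_batch_rows: int = 10_000,
                   accuracy_floor: float = 0.90) -> bool:
    """Fire the pipeline on any of the triggers mentioned above:
    a fixed time of day, a large enough batch of new data, or a
    monitoring metric dropping below its floor."""
    time_trigger = now.hour == run_at.hour and now.minute == run_at.minute
    data_trigger = new_batch_rows >= min_batch_rows
    metric_trigger = model_accuracy < accuracy_floor
    return time_trigger or data_trigger or metric_trigger

# Example: accuracy drift alone is enough to kick off retraining.
print(should_trigger(datetime(2020, 6, 1, 14, 30),
                     new_batch_rows=500,
                     model_accuracy=0.85))  # True: accuracy below floor
```

The same checks could just as easily drive a cron schedule or an event-driven controller; the point is that each trigger is an explicit, testable condition rather than a manual notebook rerun.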
Continue reading “From notebooks to pipelines: Using Open Data Hub and Kubeflow on OpenShift”
Edge computing continues to gain momentum as ever more companies increase their investments in edge, even if they’re only dipping their toes in with small-scale pilot deployments. Emerging use cases like the Internet of Things (IoT), augmented and virtual reality (AR/VR), robotics, and telecommunications-network functions are often cited as key drivers for companies moving computing to the edge. Traditional enterprises are also looking at edge computing to better support their remote offices, retail locations, manufacturing plants, and more. At the network edge, service providers can deploy an entirely new class of services that take advantage of their proximity to customers.
Continue reading “Developing at the edge: Best practices for edge computing”
Open Data Hub (ODH) is a blueprint for building an AI-as-a-Service (AIaaS) platform on Red Hat’s Kubernetes-based OpenShift 4.x. The Open Data Hub team recently released Open Data Hub 0.6.0, followed by a smaller update, Open Data Hub 0.6.1.
We recently got together and discussed our plans and timeline for the next two releases. Our plans are based on the roadmap slide deck that we put together and presented during the Open Data Hub community meeting on April 6.
In this article, we present our roadmap for the next several Open Data Hub releases. We would like to emphasize that the target dates are optimistic, describing what we would like to achieve. With the current state of the world and vacation time coming up, these dates might change.
Continue reading “A development roadmap for Open Data Hub”
It has been just a few short weeks since we released Open Data Hub (ODH) 0.6.0, which brought many changes to the underlying architecture along with some new features. We found a few issues in this new version with the Kubeflow Operator, as well as a few regressions that came in with the new JupyterHub updates. To make sure your experience with ODH 0.6 does not suffer because we wanted to release early, we offer a new (mostly) bugfix release: Open Data Hub 0.6.1.
Continue reading “Open Data Hub 0.6.1: Bug fix release to smooth out redesign regressions”
Open Data Hub (ODH) is a blueprint for building an AI-as-a-service platform on Red Hat’s Kubernetes-based OpenShift 4.x. Version 0.6 of Open Data Hub comes with significant changes to the overall architecture as well as component updates and additions. In this article, we explore these changes.
From Ansible Operator to Kustomize
If you follow the Open Data Hub project closely, you might be aware that we have been working on a major design change for a few weeks now. Since we started working more closely with the Kubeflow community to get Kubeflow running on OpenShift, we decided to leverage Kubeflow as the Open Data Hub upstream and adopt its deployment tools—namely KfDef manifests and Kustomize—for deployment manifest customization.
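For readers unfamiliar with the format, here is a minimal sketch of what a KfDef manifest looks like. The component name, namespace, and repository URI below are illustrative assumptions for this sketch; consult the Open Data Hub manifests repository for the actual supported applications.

```yaml
apiVersion: kfdef.apps.kubeflow.org/v1
kind: KfDef
metadata:
  name: opendatahub
  namespace: opendatahub
spec:
  applications:
    # Each application points at a Kustomize directory in a repo below.
    - name: odh-common
      kustomizeConfig:
        repoRef:
          name: manifests
          path: odh-common
  repos:
    - name: manifests
      uri: https://github.com/opendatahub-io/odh-manifests/tarball/master
```

The operator reads this manifest, fetches the listed repos, and applies each application’s Kustomize overlay, so customizing a deployment becomes a matter of editing declarative YAML rather than Ansible roles.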
Continue reading “Open Data Hub 0.6 brings component updates and Kubeflow architecture”
Workflows and pipelines are an integral part of optimizing a production-level artificial intelligence/machine learning (AI/ML) process. Pipelines are used to create workflows that are repeatable, automated, customizable, and intelligent.
An example AI/ML pipeline is presented in Figure 1, where functionalities such as data extract, transform, and load (ETL), model training, model evaluation, and model serving are automated as part of the pipeline.
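The stages in Figure 1 can be sketched as plain Python functions chained together. These are toy stand-ins for illustration only (the “model” is just a mean); in a real deployment each stage would run as its own containerized step in a Kubeflow pipeline.

```python
def etl(raw):
    """Extract, transform, and load: keep well-formed records only."""
    return [float(x) for x in raw if x is not None]

def train(data):
    """'Train' a trivial model: here, simply the mean of the data."""
    return sum(data) / len(data)

def evaluate(model, data):
    """Score the model: mean absolute error against the data."""
    return sum(abs(x - model) for x in data) / len(data)

def serve(model):
    """Wrap the model behind a callable 'endpoint'."""
    return lambda x: abs(x - model)

# Chain the stages exactly as the pipeline would.
raw = [1.0, None, 2.0, 3.0]
data = etl(raw)              # drops the malformed record
model = train(data)          # 2.0
error = evaluate(model, data)
predict = serve(model)
```

What a pipeline framework adds on top of this chain is the automation: each stage runs in its own container, outputs are passed between steps, and the whole graph can be re-run on a trigger.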
Continue reading “AI/ML pipelines using Open Data Hub and Kubeflow on Red Hat OpenShift”
Project Thoth is an artificial intelligence (AI) research and development project at Red Hat, part of the Office of the CTO and the AI Center of Excellence (CoE). This project aims to build a knowledge graph and, based on the collected knowledge, a recommendation system for application stacks, such as machine learning (ML) applications that rely on popular open source ML frameworks and libraries (TensorFlow, PyTorch, MXNet, etc.). In this article, we examine the potential of project Thoth’s infrastructure running in Red Hat OpenShift and explore how it can collect performance observations.
Several types of observations are gathered from various domains, such as build time, runtime and performance, and application binary interfaces (ABIs). These observations are collected through the Thoth system and enrich the knowledge graph automatically; the knowledge graph is then used to learn from the observations. Project Thoth’s architecture requires multi-namespace deployment in an OpenShift environment, which runs on PnT DevOps Shared Infrastructure (PSI), a shared multi-tenant OpenShift cluster.
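As a toy illustration of the idea (not Thoth’s actual data model or API), observations from different domains can be thought of as records attached to nodes in a graph keyed by package and version; all names and values below are made up for the sketch.

```python
from collections import defaultdict

# Hypothetical knowledge-graph fragment: (package, version) -> observations.
graph = defaultdict(list)

def record_observation(package, version, domain, value):
    """Attach an observation (e.g., a microbenchmark result) to a node."""
    graph[(package, version)].append({"domain": domain, "value": value})

record_observation("tensorflow", "2.0.0", "performance", {"matmul_gflops": 95.2})
record_observation("tensorflow", "2.0.0", "build_time", {"seconds": 5400})
print(len(graph[("tensorflow", "2.0.0")]))  # 2
```

A recommender can then aggregate these observations per stack, which is why collecting them automatically from benchmark runs matters.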
Continue reading “Microbenchmarks for AI applications using Red Hat OpenShift on PSI in project Thoth”
Python has become a popular programming language in the AI/ML world. Projects like TensorFlow and PyTorch have Python bindings as the primary interface used by data scientists to write machine learning code. However, distributing AI/ML-related Python packages and ensuring application binary interface (ABI) compatibility between various Python packages and system libraries presents a unique set of challenges.
The manylinux standard (e.g., manylinux2014) for Python wheels provides a practical solution to these challenges, but it also introduces new challenges that the Python community and developers need to consider. Before we delve into these additional challenges, we’ll briefly look at the Python ecosystem for packaging and distribution.
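The platform a wheel targets is encoded in its filename, following the naming convention from PEP 427 ({distribution}-{version}-{python tag}-{abi tag}-{platform tag}.whl). Here is a minimal parser as a sketch; the filenames used are illustrative examples, not claims about what any project actually ships.

```python
def wheel_tags(filename: str):
    """Return (python_tag, abi_tag, platform_tag) from a wheel filename."""
    stem = filename[:-len(".whl")]
    # The last three dash-separated fields are the compatibility tags.
    *_, python_tag, abi_tag, platform_tag = stem.split("-")
    return python_tag, abi_tag, platform_tag

def is_manylinux(filename: str) -> bool:
    """True if the wheel declares a manylinux platform tag."""
    return wheel_tags(filename)[2].startswith("manylinux")

print(wheel_tags("tensorflow-2.1.0-cp36-cp36m-manylinux2010_x86_64.whl"))
# ('cp36', 'cp36m', 'manylinux2010_x86_64')
print(is_manylinux("numpy-1.18.0-cp36-cp36m-win_amd64.whl"))  # False
```

The manylinux platform tag is a promise that the wheel’s binaries only link against a restricted set of system libraries at conservative symbol versions, which is exactly where the ABI-compatibility challenges discussed above come in.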
Continue reading “Python wheels, AI/ML, and ABI compatibility”
Red Hat Summit 2019 is rocking Boston, MA, May 7-9 in the Boston Convention and Exhibition Center. Everything you need to know about the current state of open source enterprise-ready software can be found at this event. You’ll find customers talking about their experiences leveraging open source in their solutions, creators of open source technologies you’re using, and hands-on lab experiences relating to these technologies.
This hands-on appeal is what this series of articles is about. In previous articles, we looked at labs focusing on Red Hat Enterprise Linux, Integration and APIs, and cloud-native app development. In this article, we’ll look at labs in the “Emerging Technology” track.
Continue reading “Red Hat Summit 2019 Labs: Emerging technology roadmap”