
Data Science


Solution Pattern: Edge to Core Data Pipelines for AI/ML
End-to-end AI-enabled applications and data pipelines across the hybrid cloud

How to install KServe using Open Data Hub
Learn a simplified method for installing KServe, a highly scalable, standards-based model inference platform on Kubernetes.

Solution Pattern: Machine Learning and Data Science Pipelines
A practical example to deploy a machine learning model using data science...

Getting started with InstructLab for generative AI model tuning
Learn how to fine-tune large language models with specific skills and knowledge

How to integrate and use RStudio Server on OpenShift AI
This guide will walk you through the process of setting up RStudio Server on Red Hat OpenShift AI and getting started with its extensive features.

Red Hat Developer Sandbox: Your Free OpenShift AI Playground
Are you curious about the power of artificial intelligence (AI) but not sure...

Implement AI-driven edge to core data pipelines
The Edge to Core Pipeline Pattern automates a continuous cycle for releasing and deploying new AI/ML models using Red Hat build of Apache Camel and more.

AI Lab Recipes
The AI Lab Recipes repository offers recipes for building and running containerized AI and LLM applications to help developers move quickly from prototype to production.

Image mode for Red Hat Enterprise Linux quick start: AI inference
Learn how to build a containerized bootable operating system to run AI models using image mode for Red Hat Enterprise Linux, then deploy a custom image.

Integrated Hybrid Cloud MLOps & Application Platform
A common platform for machine learning and app development on the hybrid cloud.

AI/ML Workloads
Applications based on machine learning and deep learning, using structured and unstructured data as the fuel to drive these applications.

Intel GPUs and OVMS: A winning combination for deep learning efficiency
Learn how Intel Graphics Processing Units (GPUs) can enhance the performance of machine learning tasks and pave the way for efficient model serving.


Multilingual semantic-similarity search with Elasticsearch
Discover how to use machine learning techniques to analyze context, semantics, and relationships between words and phrases indexed in Elasticsearch.

Build event-driven data pipelines for business intelligence
Discover how event-driven architecture can transform data into valuable business intelligence with intelligent applications using AI/ML.

Fine-tune large language models using OpenShift Data Science
Walk through the basics of fine-tuning a large language model using Red Hat OpenShift Data Science and HuggingFace Transformers.

Why GPUs are essential for AI and high-performance computing
Learn why graphics processing units (GPUs) have become the foundation of artificial intelligence and how they are being used.

Perform inference using Intel OpenVINO Model Server on OpenShift
In this article, you will learn how to perform inference on JPEG images using the gRPC API of OpenVINO Model Server on OpenShift. Model servers play an important role in smoothly bringing models from development to production: models are served via network endpoints that expose APIs to run predictions.
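Before a JPEG image can be sent to a model-serving endpoint like the one described above, it typically has to be decoded and reshaped into the tensor layout the model expects. A minimal preprocessing sketch, assuming a ResNet-style model that takes a float32 NCHW batch (the 224x224 input size and layout are assumptions, not details from the article):

```python
import numpy as np

def to_nchw_batch(image_hwc: np.ndarray) -> np.ndarray:
    """Convert one HWC uint8 image into a float32 NCHW batch of size 1,
    the layout many image-classification models expect at inference time."""
    x = image_hwc.astype(np.float32)   # uint8 pixels -> float32
    x = np.transpose(x, (2, 0, 1))     # HWC -> CHW
    return x[np.newaxis, ...]          # add batch dimension -> NCHW

# Example: a dummy 224x224 RGB image becomes a (1, 3, 224, 224) batch.
batch = to_nchw_batch(np.zeros((224, 224, 3), dtype=np.uint8))
```

The resulting array would then be serialized into the gRPC request body per the model server's API; consult the OpenVINO Model Server documentation for the exact request format of your deployment.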

Boost OpenShift Data Science with the Intel AI Analytics Toolkit
Intel AI tools save cloud costs and data scientists' time when developing models. Learn how the AI Kit can help you.

Use OpenVINO to convert speech to text
OpenVINO helps you tackle speech-to-text conversion, a common AI use case. Learn more.

Learn how to build, train, and run a PyTorch model
Once you have data, how do you start building a PyTorch model? This learning path shows you how to create a PyTorch model with OpenShift Data Science.

OpenShift AI learning
OpenShift AI gives data scientists and developers a powerful AI/ML platform for building AI-enabled applications. Data scientists and developers can collaborate to move from experiment to production in a consistent environment quickly.

Stream processing: Continuous data management in real time
Stream processing lets developers view, analyze, and combine data from a wide...

Manage Python security with Thoth's cloud-based dependency resolver
Get a video introduction to Project Thoth's cloud-based Python dependency resolver, then learn how to manage Python dependencies on the Thoth command line.