Data Science
How InstructLab enables accessible model fine-tuning for gen AI
Discover how InstructLab simplifies LLM tuning for users.
Red Hat OpenShift Data Foundation for developers and data scientists
Learn how to deploy and use the Multi-Cloud Object Gateway (MCG) from Red Hat OpenShift Data Foundation to support development and testing of applications and artificial intelligence (AI) models that require S3 object storage.
How to train a BERT machine learning model with OpenShift AI
BERT stands for Bidirectional Encoder Representations from Transformers.
Try OpenShift AI and integrate with Apache Camel
This article explains how to use Red Hat OpenShift AI in the Developer Sandbox for Red Hat OpenShift to create and deploy models.
Download Red Hat Data Grid
An intelligent, distributed caching solution that boosts application performance, provides greater deployment flexibility, and minimizes the overhead of standing up new applications.
Solution Pattern: Edge to Core Data Pipelines for AI/ML
End-to-end AI-enabled applications and data pipelines across the hybrid cloud
How to install KServe using Open Data Hub
Learn a simplified method for installing KServe, a highly scalable, standards-based model inference platform on Kubernetes.
Solution Pattern: Machine Learning and Data Science Pipelines
A practical example of deploying a machine learning model using data science...
Getting started with InstructLab for generative AI model tuning
Learn how to fine-tune large language models with specific skills and knowledge
How to integrate and use RStudio Server on OpenShift AI
This guide will walk you through the process of setting up RStudio Server on Red Hat OpenShift AI and getting started with its extensive features.
Red Hat Developer Sandbox: Your Free OpenShift AI Playground
Are you curious about the power of artificial intelligence (AI) but not sure where to start?
Implement AI-driven edge to core data pipelines
The Edge to Core Pipeline Pattern automates a continuous cycle for releasing and deploying new AI/ML models using Red Hat build of Apache Camel and more.
AI Lab Recipes
The AI Lab Recipes repository offers recipes for building and running containerized AI and LLM applications to help developers move quickly from prototype to production.
Image mode for Red Hat Enterprise Linux quick start: AI inference
Learn how to build a containerized bootable operating system to run AI models using image mode for Red Hat Enterprise Linux, then deploy a custom image.
Integrated Hybrid Cloud MLOps & Application Platform
A common platform for machine learning and app development on the hybrid cloud.
AI/ML Workloads
Applications based on machine learning and deep learning, fueled by structured and unstructured data.
Intel GPUs and OVMS: A winning combination for deep learning efficiency
Learn how Intel Graphics Processing Units (GPUs) can enhance the performance of machine learning tasks and pave the way for efficient model serving.
Multilingual semantic-similarity search with Elasticsearch
Discover how to use machine learning techniques to analyze context, semantics, and relationships between words and phrases indexed in Elasticsearch.
Build event-driven data pipelines for business intelligence
Discover how event-driven architecture can transform data into valuable business intelligence with intelligent applications using AI/ML.
Fine-tune large language models using OpenShift Data Science
Walk through the basics of fine-tuning a large language model using Red Hat OpenShift Data Science and Hugging Face Transformers.
Why GPUs are essential for AI and high-performance computing
Learn why graphics processing units (GPUs) have become the foundation of artificial intelligence and how they are being used.
Perform inference using Intel OpenVINO Model Server on OpenShift
In this article, you will learn how to perform inference on JPEG images using the gRPC API of OpenVINO Model Server on OpenShift. Model servers play an important role in smoothly bringing models from development to production: models are served via network endpoints that expose APIs for running predictions.
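As a rough, hypothetical sketch of what "serving predictions via a network API" means in practice: OpenVINO Model Server also exposes a TensorFlow-Serving-compatible REST endpoint whose request body has the same shape as the gRPC predict call the article uses. The model name (`resnet`), port, and the tiny dummy tensor below are illustrative assumptions, not details from the article.

```python
import json

def build_predict_request(instances):
    """Serialize tensor data (e.g., decoded JPEG pixels) into a
    TensorFlow-Serving-style predict request body."""
    return json.dumps({"instances": instances})

# A dummy 2x2 single-channel "image" stands in for real decoded JPEG data.
body = build_predict_request([[[0.0, 0.5], [0.5, 1.0]]])

# A client would POST this body to the model server's REST endpoint, e.g.:
#   http://<server>:8500/v1/models/resnet:predict
# and receive a JSON response containing the model's predictions.
print(body)
```

The same request, expressed as protobuf messages instead of JSON, is what travels over the gRPC API described in the article.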
Boost OpenShift Data Science with the Intel AI Analytics Toolkit
Intel AI tools save cloud costs, data scientists' time, and model development effort. Learn how the AI Kit can help you.