Deliver generative AI at scale with NVIDIA NIM on OpenShift AI
Discover how to integrate NVIDIA NIM with Red Hat OpenShift AI to create and deliver AI-enabled applications at scale.
A practical example to deploy a machine learning model using data science...
Explore AMD Instinct MI300X accelerators and learn how to run AI/ML workloads using ROCm, AMD’s open source software stack for GPU programming, on OpenShift AI.
Learn how to apply supervised fine-tuning to Llama 3.1 models using Ray on OpenShift AI in this step-by-step guide.
Learn how to generate word embeddings and perform RAG tasks using a Sentence Transformer model deployed on Caikit Standalone Serving Runtime using OpenShift AI.
In today's fast-paced IT landscape, the need for efficient and effective...
Add knowledge to large language models with InstructLab and streamline MLOps using KitOps for efficient model improvement and deployment.
Get an overview of Explainable and Responsible AI and discover how the open source TrustyAI tool helps power fair, transparent machine learning.
In this blog we look at how we use OpenShift AI with Ray Tune to perform...
Discover how InstructLab simplifies LLM tuning for users.
Learn how to deploy and use the Multi-Cloud Object Gateway (MCG) from Red Hat OpenShift Data Foundation to support development and testing of applications and artificial intelligence (AI) models that require S3 object storage.
BERT, which stands for Bidirectional Encoder Representations from Transformers...
This article explains how to use Red Hat OpenShift AI in the Developer Sandbox for Red Hat OpenShift to create and deploy models.
End-to-end AI-enabled applications and data pipelines across the hybrid cloud
Learn a simplified method for installing KServe, a highly scalable, standards-based model inference platform on Kubernetes for AI.
Learn how to fine-tune large language models with specific skills and knowledge
This guide walks you through setting up RStudio Server on Red Hat OpenShift AI and getting started with its extensive features.
Are you curious about the power of artificial intelligence (AI) but not sure...
The Edge to Core Pipeline Pattern automates a continuous cycle for releasing and deploying new AI/ML models using Red Hat build of Apache Camel and more.
The AI Lab Recipes repository offers recipes for building and running containerized AI and LLM applications to help developers move quickly from prototype to production.
Learn how to build a containerized bootable operating system to run AI models using image mode for Red Hat Enterprise Linux, then deploy a custom image.
A common platform for machine learning and app development on the hybrid cloud.