
From Podman AI Lab to OpenShift AI
Learn how to rapidly prototype AI applications from your local environment with Podman AI Lab.
Learn how to generate word embeddings and perform RAG tasks using a Sentence Transformer model deployed on Caikit Standalone Serving Runtime using OpenShift AI.
In today's fast-paced IT landscape, the need for efficient and effective
Add knowledge to large language models with InstructLab and streamline MLOps using KitOps for efficient model improvement and deployment.
Learn how a platform engineering team streamlined the deployment of edge kiosks by leveraging key automation components of Red Hat Ansible Automation Platform.
With GPU acceleration for Podman AI Lab, developers can run inference on models faster and build AI-enabled applications with quicker response times.
This blog post summarizes an experiment to extract structured data from
Learn how Red Hat Enterprise Linux AI provides a security-focused, low-cost
Use the Stable Diffusion model to create images with Red Hat OpenShift AI running on a Red Hat OpenShift Service on AWS cluster with NVIDIA GPU enabled.
Get an overview of Explainable and Responsible AI and discover how the open source TrustyAI tool helps power fair, transparent machine learning.
This short guide explains how to choose a GPU framework and library (e.g., CUDA vs. OpenCL), as well as how to design accurate benchmarks.
Learn how to write a GPU-accelerated quicksort procedure using the algorithm for prefix sum/scan and explore other GPU algorithms, such as Reduce and Game of Life.
This article explores the installation, usage, and benefits of Red Hat OpenShift Lightspeed on Red Hat OpenShift Local.
Red Hat OpenShift AI is an artificial intelligence platform that runs on top of Red Hat OpenShift and provides tools across the AI/ML lifecycle.
An in-depth look at a foundational GPU programming algorithm: the prefix sum. The goal is to expose the reader to the tools and language of GPU programming, rather than to present it only as a way to optimize certain existing subroutines.
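For readers unfamiliar with the algorithm the article covers: an inclusive prefix sum (scan) maps each element to the sum of all elements up to and including it. A minimal sketch, in plain Python rather than actual GPU code, of the Hillis-Steele scan pattern the parallel version is typically built on — each of O(log n) steps adds in the element a growing offset to the left, which on a GPU would run across all positions at once:

```python
def hillis_steele_scan(xs):
    """Inclusive prefix sum via the Hillis-Steele pattern.

    Sequential simulation: each pass would be one parallel step
    on a GPU, with every position updated simultaneously.
    """
    out = list(xs)
    offset = 1
    while offset < len(out):
        # Each element adds the value `offset` positions to its left.
        out = [out[i] + (out[i - offset] if i >= offset else 0)
               for i in range(len(out))]
        offset *= 2
    return out

print(hillis_steele_scan([3, 1, 4, 1, 5]))  # → [3, 4, 8, 9, 14]
```

This does O(n log n) total additions versus O(n) for a sequential loop, but finishes in O(log n) parallel steps — the trade-off the article's GPU treatment explores in depth.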
Discover LLM Compressor, a unified library for creating accurate compressed models for cheaper and faster inference with vLLM.
Learn how to set up a cloud development environment (CDE) using Ollama, Continue, Llama3, and Starcoder2 LLMs with OpenShift Dev Spaces for faster, more efficient coding.
The first of a four-part series on introductory GPU programming, this article provides a basic overview of the GPU programming model.
This learning exercise explains the requirements for Red Hat OpenShift
Red Hat OpenShift Lightspeed, your new OpenShift virtual assistant powered by
Red Hat OpenShift AI provides tools across the full lifecycle of AI/ML experiments and models for data scientists and developers of intelligent applications.
Discover how InstructLab simplifies LLM tuning for users.
Boost your coding productivity with private and free AI code assistance using Ollama or InstructLab to run large language models locally.