AI-driven data extraction using Apache Camel and LangChain4J
This blog post summarizes an experiment to extract structured data from unstructured text using Apache Camel and LangChain4J.
Learn how Red Hat Enterprise Linux AI provides a security-focused, low-cost platform for developing and running large language models (LLMs).
We will use LangChain.js to simplify interacting with the model.
This learning exercise will deploy an existing Node.js application.
Use the Stable Diffusion model to create images with Red Hat OpenShift AI running on a Red Hat OpenShift Service on AWS cluster with NVIDIA GPU enabled.
Get an overview of Explainable and Responsible AI and discover how the open source TrustyAI tool helps power fair, transparent machine learning.
This short guide explains how to choose a GPU framework and library (e.g., CUDA vs. OpenCL), as well as how to design accurate benchmarks.
In this learning exercise, we'll focus on training and deploying your trained model.
Learn how to write a GPU-accelerated quicksort procedure using the algorithm for prefix sum/scan and explore other GPU algorithms, such as Reduce and Game of Life.
This article explores the installation, usage, and benefits of Red Hat OpenShift Lightspeed on Red Hat OpenShift Local.
An in-depth look at a foundational GPU programming algorithm: the prefix sum. The goal is to expose the reader to the tools and language of GPU programming, rather than see it only as a way to optimize certain existing subroutines.
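For readers new to the prefix sum (scan) that the article above covers, the operation is easy to state sequentially; the GPU versions parallelize this same recurrence. A minimal sequential sketch in Python (the function name is illustrative, not from the article):

```python
def inclusive_scan(xs):
    """Inclusive prefix sum: out[i] = xs[0] + xs[1] + ... + xs[i]."""
    out = []
    running = 0
    for x in xs:
        running += x        # carry the running total forward
        out.append(running)
    return out

# Example: inclusive_scan([1, 2, 3, 4]) yields [1, 3, 6, 10]
```

GPU implementations compute the same result in O(log n) parallel steps by combining partial sums in a tree, rather than walking the list left to right.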
In this learning path, we dig deeper into using large language models (LLMs).
Learn how to set up a cloud development environment (CDE) using Ollama, Continue, Llama3, and Starcoder2 LLMs with OpenShift Dev Spaces for faster, more efficient coding.
The first of a four-part series on introductory GPU programming, this article provides a basic overview of the GPU programming model.
This learning exercise explains the requirements for Red Hat OpenShift
Red Hat OpenShift Lightspeed is your new OpenShift virtual assistant powered by generative AI.
Red Hat OpenShift AI provides tools across the full lifecycle of AI/ML experiments and models for data scientists and developers of intelligent applications.
Discover how InstructLab simplifies LLM tuning for users.
Boost your coding productivity with private and free AI code assistance using Ollama or InstructLab to run large language models locally.
Learn how to prevent large language models (LLMs) from generating toxic content during training using TrustyAI Detoxify and Hugging Face SFTTrainer.
Learn how to deploy and use the Multi-Cloud Object Gateway (MCG) from Red Hat OpenShift Data Foundation to support development and testing of applications and artificial intelligence (AI) models that require S3 object storage.
Train and deploy an AI model using OpenShift AI, then integrate it into an application running on OpenShift.
BERT, which stands for Bidirectional Encoder Representations from Transformers, is a transformer-based language model for natural language processing tasks.
This article explains how to use Red Hat OpenShift AI in the Developer Sandbox for Red Hat OpenShift to create and deploy models.