Open source AI for developers
Explore the benefits of open source AI models and tools and learn how Red Hat OpenShift AI helps you build innovative AI-based applications in this e-book.
This tutorial demonstrates how to use Jupyter Notebooks within Red Hat OpenShift...
This year's top articles on AI include an introduction to GPU programming, a guide to integrating AI code assistants, and the KServe open source project.
Find Kubernetes and OpenShift articles on performance and scale testing, single-node OpenShift, OpenShift Virtualization for VMware vSphere admins, and more.
Join us as we get ready for the holidays with a few AI holiday treats! We will demo AI from laptop to production using Quarkus and LangChain4j with ChatGPT, DALL·E, and Podman Desktop AI. We'll also show how to get started with Quarkus + LangChain4j, use memory, agents, and tools, experiment with RAG features, and generate some images for the holiday party.
Learn how a developer can use RAG with an LLM to chat with and query their own data.
Download this 15-page e-book to explore 5 key ways OpenShift benefits developers, including integrated tools and workflows and simplified AI app development.
Explore the evolution and future of Quarkus, Red Hat’s next-generation Java framework designed to optimize applications for cloud-native environments.
Download a free preview of Applied AI for Enterprise Java Development (O’Reilly), a practical guide for Java developers who want to build AI applications.
Learn how to use Red Hat OpenShift AI to quickly develop, train, and deploy...
Discover how to integrate NVIDIA NIM with Red Hat OpenShift AI to create and deliver AI-enabled applications at scale.
A practical example of deploying a machine learning model using data science...
Repo, Red Hat Developer's new mascot, is curious, helpful, and eager to guide...
Artificial intelligence (AI) and large language models (LLMs) are becoming...
The rapid advancement of generative artificial intelligence (gen AI) has unlocked incredible opportunities. However, customizing and iterating on large language models (LLMs) remains a complex and resource-intensive process. Training and enhancing models often involves creating multiple forks, which can lead to fragmentation and hinder collaboration.
OCI images are now available on the Docker Hub and Quay.io registries, making it even easier to use the Granite 7B large language model (LLM) and InstructLab.
Get started with AMD GPUs for model serving in OpenShift AI. This tutorial guides you through the steps to configure the AMD Instinct MI300X GPU with KServe.
Learn how developers can use prompt engineering for a large language model (LLM) to increase their productivity.
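Prompt engineering at its simplest means structuring the instruction, worked examples, and query you send to an LLM. A minimal sketch of a few-shot prompt builder (the task, examples, and function name here are hypothetical; the model call itself is omitted):

```python
def build_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: instruction, worked examples, then the query."""
    lines = [f"Task: {task}", ""]
    for inp, out in examples:
        # Each worked example teaches the model the expected input/output shape.
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    # The final, unanswered "Output:" cues the model to complete the pattern.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_prompt(
    "Convert a function description to a Python function name",
    [("reads a CSV file", "read_csv"), ("sorts users by age", "sort_users_by_age")],
    "deletes expired sessions",
)
print(prompt)
```

The resulting string would be sent as the prompt to whichever LLM endpoint you use; the few-shot pattern typically yields more consistent outputs than a bare instruction.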
Learn how to deploy a coding copilot model using OpenShift AI. You'll also discover how tools like KServe and Caikit simplify machine learning model management.
Explore AMD Instinct MI300X accelerators and learn how to run AI/ML workloads using ROCm, AMD’s open source software stack for GPU programming, on OpenShift AI.
Learn how to apply supervised fine-tuning to Llama 3.1 models using Ray on OpenShift AI in this step-by-step guide.
Understand how retrieval-augmented generation (RAG) works and how users can...
Learn how to rapidly prototype AI applications from your local environment with...
Learn how to generate word embeddings and perform RAG tasks using a Sentence Transformer model deployed on Caikit Standalone Serving Runtime using OpenShift AI.
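The retrieval step behind such RAG tasks can be illustrated with plain Python: given embedding vectors (here, hand-written stand-ins for what an embedding model such as a Sentence Transformer would produce), rank documents by cosine similarity to the query embedding. The document names and vector values are illustrative only:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product normalized by vector magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy document embeddings (a real system would get these from a model).
docs = {
    "gpu setup guide": [0.9, 0.1, 0.0],
    "holiday recipes": [0.0, 0.2, 0.9],
    "model serving faq": [0.7, 0.6, 0.1],
}
query_embedding = [0.8, 0.3, 0.0]  # embedding of e.g. "how do I configure a GPU?"

# Rank documents by similarity; the top hits are passed to the LLM as context.
ranked = sorted(docs, key=lambda d: cosine(query_embedding, docs[d]), reverse=True)
print(ranked[0])
```

In a production pipeline, the same ranking is done by a vector database over model-generated embeddings, but the core operation is this similarity search.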
In today's fast-paced IT landscape, the need for efficient and effective...