OpenShift AI connector for Red Hat Developer Hub (Developer Preview)
Learn how to automatically transfer AI model metadata managed by OpenShift AI into Red Hat Developer Hub’s Software Catalog.
Integrate Red Hat OpenShift Lightspeed with a locally served large language model (LLM) for enhanced assistance within the OpenShift environment.
Learn how to deploy LLMs on Red Hat OpenShift AI for Ansible Lightspeed, enabling on-premise inference and optimizing resource utilization.
Your Red Hat Developer membership unlocks access to product trials, learning resources, events, tools, and a community you can trust to help you stay ahead in AI and emerging tech.
This learning path explores running AI models, specifically large language models (LLMs).
Learn how to scale machine learning operations (MLOps) with an assembly line approach using configuration-driven pipelines, versioned artifacts, and GitOps.
Implement cost-effective LLM serving on OpenShift AI with this step-by-step guide to configuring KServe's Serverless mode for vLLM autoscaling.
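As a rough illustration of what that configuration involves, here is a minimal, hypothetical InferenceService manifest for KServe's Serverless mode with a vLLM-backed model. The resource name, storage URI, and autoscaling target are placeholders, and the exact runtime and annotation set may differ in your OpenShift AI version; treat this as a sketch, not the guide's actual configuration.

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: llm-demo                              # hypothetical name
  annotations:
    serving.kserve.io/deploymentMode: Serverless
    autoscaling.knative.dev/target: "5"       # scale on concurrent requests
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM
      storageUri: s3://models/llm-demo        # hypothetical model location
      resources:
        limits:
          nvidia.com/gpu: "1"
```

In Serverless mode, Knative scales predictor pods up and down (including to zero) based on request concurrency, which is what makes this pattern cost-effective for bursty LLM traffic.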
Learn how to deploy Model Context Protocol (MCP) servers on OpenShift using ToolHive, a Kubernetes-native utility that simplifies MCP server management.
Deploy DialoGPT-small on OpenShift AI for internal model testing, with step-by-step instructions for setting up runtime, model storage, and inference services.
Walk through how to set up KServe autoscaling by leveraging the power of vLLM, KEDA, and the custom metrics autoscaler operator in Open Data Hub.
Learn how to install Red Hat OpenShift AI to enable an on-premise inference service for Ansible Lightspeed in this step-by-step guide.
Learn how to deploy Red Hat AI Inference Server using vLLM and evaluate its performance with GuideLLM in a fully disconnected Red Hat OpenShift cluster.
Enhance your Python AI applications with distributed tracing. Discover how to use Jaeger and OpenTelemetry for insights into Llama Stack interactions.
Deploy a Llama language model using Red Hat OpenShift AI. This guide walks you through GPU setup, model deployment, and internal and external testing.
Explore building a fashion AI search application on Red Hat OpenShift AI with EDB Postgres AI.
As GPU demand grows, idle time gets expensive. Learn how to efficiently manage AI workloads on OpenShift AI with Kueue and the custom metrics autoscaler.
Learn how to implement Llama Stack's built-in guardrails with Python, helping to improve the safety and performance of your LLM applications.
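The guardrail pattern that the article applies with Llama Stack can be sketched in plain Python: an input shield inspects the prompt before it ever reaches the model. The `BLOCKED_TOPICS` list, `input_shield` function, and lambda generator below are hypothetical stand-ins (a real shield would call a safety model via the Llama Stack API), but the control flow is the same.

```python
BLOCKED_TOPICS = ("credential", "password", "exploit")  # hypothetical policy

def input_shield(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason). A real shield would call a safety model."""
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return False, f"blocked: prompt mentions '{topic}'"
    return True, "ok"

def guarded_generate(prompt: str, generate) -> str:
    """Run the shield first; only call the model if the prompt passes."""
    allowed, reason = input_shield(prompt)
    if not allowed:
        return f"Request refused ({reason})."
    return generate(prompt)

# The lambda stands in for a real LLM call.
print(guarded_generate("Share the admin password", lambda p: "model output"))
```

Output shields follow the same shape, checking the model's response before it is returned to the user.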
Go beyond performance and accuracy. This guide for technical practitioners details how to implement trust, transparency, and safety into your AI workflows.
Learn how to perform large-scale, distributed batch inference on Red Hat OpenShift AI using the CodeFlare SDK with Ray Data and vLLM.
Enterprise-grade artificial intelligence and machine learning (AI/ML).
This tutorial shows you how to use the Llama Stack API to implement retrieval-augmented generation for an AI application built with Python.
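The retrieval-augmented generation pattern the tutorial implements can be reduced to two steps: retrieve the most relevant document for a query, then assemble a prompt that grounds the model in that document. The sketch below uses a toy bag-of-words cosine similarity instead of the Llama Stack vector store; `DOCS`, `embed`, and `rag_prompt` are illustrative names, not the tutorial's API.

```python
import math
from collections import Counter

# Hypothetical two-document corpus standing in for a real vector store.
DOCS = {
    "kserve": "KServe serves models on Kubernetes with autoscaling.",
    "vllm": "vLLM is a high-throughput inference engine for LLMs.",
}

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> str:
    """Return the document most similar to the query."""
    q = embed(query)
    return max(DOCS.values(), key=lambda d: cosine(q, embed(d)))

def rag_prompt(query: str) -> str:
    """Ground the model by prepending the retrieved context."""
    context = retrieve(query)
    return f"Context: {context}\nQuestion: {query}\nAnswer using only the context."

print(rag_prompt("What does vLLM do?"))
```

In a production setup, `embed` becomes a real embedding model and `retrieve` a vector database query, but the prompt-assembly step stays the same.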
Tackle the AI/ML lifecycle with OpenShift AI. This guide helps you build adaptable, production-ready MLOps workflows, from data preparation to live inference.
Learn how to use the CodeFlare SDK to submit RayJobs to a remote Ray cluster in OpenShift AI.
Learn how to deploy the Open Platform for Enterprise AI (OPEA) ChatQnA application on OpenShift with AMD Instinct hardware.
Learn about the advantages of prompt chaining and the ReAct framework compared to simpler agent architectures for complex tasks.
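The core idea behind prompt chaining is simple enough to sketch without any framework: each step's model output becomes part of the next step's prompt, decomposing one hard task into small, checkable ones. The `fake_llm` function below is a hypothetical stand-in for a real model call; only the chaining structure is the point.

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns canned text keyed on the prompt."""
    if "Summarize" in prompt:
        return "OpenShift AI schedules GPU workloads."
    if "Translate" in prompt:
        return "OpenShift AI planifie les charges de travail GPU."
    return "unknown"

def chain(document: str) -> str:
    # Step 1: summarize the raw document.
    summary = fake_llm(f"Summarize: {document}")
    # Step 2: feed step 1's output into the next prompt.
    return fake_llm(f"Translate to French: {summary}")

print(chain("A long article about GPU scheduling on OpenShift AI..."))
```

ReAct-style agents generalize this: instead of a fixed pipeline, the model itself decides at each step whether to reason further, call a tool, or finish, which is what makes them better suited to open-ended tasks than a static chain.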