Artificial intelligence

Red Hat AI
Product Page

Accelerate the development and deployment of enterprise AI solutions across the hybrid cloud.

Configure a Jupyter notebook to use GPUs for AI/ML modeling
Article

The benefits of dynamic GPU slicing in OpenShift

Gaurav Singh +2

Learn how the Dynamic Accelerator Slicer Operator improves GPU resource management in OpenShift by dynamically adjusting allocation based on workload needs.

Video

Bobbycar, a Red Hat Connected Vehicle Architecture Solution Pattern - Part 1: Automotive Use Cases

Ortwin Schneider

This Red Hat solution pattern implements key aspects of a modern IoT/edge architecture as a reference example. It uses Red Hat OpenShift Container Platform and various middleware components optimized for cloud-native use. This enterprise architecture can serve as a foundation for an IoT/edge hybrid cloud environment supporting use cases such as over-the-air (OTA) deployments, driver monitoring, and AI/ML. Bobbycar showcases an end-to-end workflow: connecting in-vehicle components to a cloud back end, processing telemetry data in batch or as a stream, training AI/ML models, and deploying containers to the edge through a DevSecOps pipeline and GitOps.

Video

Processing IoT data and serving AI/ML models with OpenShift Serverless

Ortwin Schneider

Explore Knative Serving, Eventing, and Functions through an example use case. You’ll see how to collect telemetry data from simulated vehicles, process the data with OpenShift Serverless, and use the data to train a machine learning model with Red Hat OpenShift AI, Red Hat's MLOps platform. The model will then be deployed as a Knative Service, providing the inference endpoint for our business application.
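As a sketch of the pattern this video describes, a trained model exposed as an inference endpoint could be declared as a Knative Service. The names, namespace, and container image below are hypothetical placeholders, not taken from the Bobbycar project:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: telemetry-model        # hypothetical service name
  namespace: bobbycar          # hypothetical namespace
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/telemetry-inference:latest  # hypothetical image serving the model
          ports:
            - containerPort: 8080   # HTTP inference endpoint
```

Knative Serving then scales this workload on demand, including scale-to-zero when no inference requests arrive.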

Article

How to build AI-ready applications with Quarkus

Ramy El Essawy +1

Develop AI-integrated Java applications more efficiently using Quarkus. This article covers implementing chatbots, real-time interaction, and retrieval-augmented generation (RAG) functionality.

Article

Async-GRPO: Open, fast, and performant

Aldo Pareja +1

Discover Async-GRPO, a new library for reinforcement learning tasks that efficiently handles large models, eliminates bottlenecks, and accelerates experiments.

Article

How to navigate LLM model names

Trevor Royer

Learning the naming conventions of large language models (LLMs) helps users select the right model for their needs.