Artificial intelligence


Download Red Hat Enterprise Linux AI

Develop, deploy, and run large language models (LLMs) in individual server environments. The solution includes Red Hat AI Inference Server, delivering fast, cost-effective hybrid cloud inference by maximizing throughput, minimizing latency, and reducing compute costs.

Article

Protecting your models made easy with Authorino

JooHo Lee

This article demonstrates how to register the scikit-learn runtime as a custom ServingRuntime, deploy the iris model on KServe with Open Data Hub, and apply authentication with Authorino to protect the model endpoints.

Article

How to install KServe using Open Data Hub

JooHo Lee

Learn a simplified method for installing KServe, a highly scalable, standards-based model inference platform on Kubernetes.

Article

Deploy Llama 3 8B with vLLM

Mark Kurtz

Llama 3's advancements, particularly at 8 billion parameters, make AI more accessible and efficient.

Article

Implement AI-driven edge to core data pipelines

Bruno Meseguer

The Edge to Core Pipeline Pattern automates a continuous cycle for releasing and deploying new AI/ML models using the Red Hat build of Apache Camel and other tools.

Article

The road to AI: The fundamentals

Maarten Vandeperre

Explore the fundamental concepts of artificial intelligence (AI), including machine learning and deep learning, and learn how to integrate AI into your platforms and applications.