Red Hat AI

Video

Deploying open source AI agents on OpenShift using OpenClaw

Grace Ableidinger +1

Learn how to run OpenClaw on Red Hat OpenShift with production-grade security and observability. We cover default-deny network policies for blast radius containment, container-level sandboxing with OpenShift, Kubernetes Secrets for credential management, and end-to-end OpenTelemetry tracing with MLflow, so every decision your AI agent makes is isolated, auditable, and safe by default. Whether you're a developer exploring AI agents for the first time or a platform engineer thinking about running agentic workloads at scale, this is the infrastructure story that makes it production-ready.
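
As a taste of the default-deny approach described above, here is a minimal sketch that creates a deny-all NetworkPolicy with the official Kubernetes Python client; the namespace name is an assumption for illustration, not one used in the video.

    # Minimal sketch: create a default-deny-all NetworkPolicy in the agent's
    # namespace using the official Kubernetes Python client.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running inside a pod

    namespace = "openclaw-agents"  # placeholder namespace
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="default-deny-all", namespace=namespace),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),   # empty selector matches every pod
            policy_types=["Ingress", "Egress"],      # deny both directions by default
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(namespace, policy)

With this in place, each allowed flow (for example, egress to the model endpoint) is added back explicitly as its own NetworkPolicy.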

Red Hat AI
Article

Build resilient guardrails for OpenClaw AI agents on Kubernetes

Cedric Clyburn +2

Learn how to build security hygiene into OpenClaw by using containers for isolation, role-based access control (RBAC) for user access permissions, and secrets for sensitive information. This article explores how to use infrastructure powered by open source technology to help protect these workflows.
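
For the secrets piece, a minimal sketch of how an agent process might read a credential from a Kubernetes Secret mounted into its pod rather than hardcoding it; the mount path, key name, and environment variable are assumptions for illustration.

    # Minimal sketch: read a credential from a Secret volume, with an env-var
    # fallback injected via secretKeyRef for local development.
    import os
    from pathlib import Path

    def load_api_token(secret_dir: str = "/etc/openclaw/secrets") -> str:
        """Return the agent's API token from the mounted Secret, never from source code."""
        token_file = Path(secret_dir) / "api-token"
        if token_file.exists():
            return token_file.read_text().strip()
        return os.environ["OPENCLAW_API_TOKEN"]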

ai-ml
Article

Manage AI context with the Lola package manager

Daniele Martinoli +2

Learn how to use Lola, a unified package manager for AI context. Treat your AI context as versioned, auditable code with Lola modules and marketplaces. Improve your AI assistant workflow with this open source tool.

Featured image for agentic AI
Article

Distributed tracing for agentic workflows with OpenTelemetry

Fabio Massimo Ercoli

Learn how to set up distributed tracing for an agentic workflow based on lessons learned while developing the it-self-service-agent AI quickstart. This post covers configuring OpenTelemetry to track requests end-to-end across application workloads, MCP servers, and Llama Stack.
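
As a rough illustration of that setup, here is a minimal sketch of configuring the OpenTelemetry Python SDK to export spans over OTLP; the collector endpoint and service name are placeholders, not values from the quickstart.

    # Minimal sketch: wire up a tracer provider that exports spans to an
    # OTLP-compatible collector so downstream calls share one trace.
    from opentelemetry import trace
    from opentelemetry.sdk.resources import Resource
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

    provider = TracerProvider(resource=Resource.create({"service.name": "self-service-agent"}))
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317", insecure=True))
    )
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("handle-user-request"):
        # Calls made here to MCP servers or Llama Stack inherit this trace context.
        pass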

Featured image for vLLM inference article.
Article

Run Gemma 4 with Red Hat AI on Day 0: A step-by-step guide

Saša Zelenović +4

Learn how to deploy and experiment with Gemma 4, the latest open model family from Google DeepMind. This guide covers text, image, and video input, the Mixture-of-Experts architecture, and more. Get started with Red Hat AI Inference Server today.
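
Once the model is served, querying it looks like any OpenAI-compatible endpoint; the sketch below is a guess at the shape of that call, with a placeholder URL and model identifier rather than the exact names from the guide.

    # Minimal sketch: send a chat request to an OpenAI-compatible inference
    # endpoint such as Red Hat AI Inference Server.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

    response = client.chat.completions.create(
        model="google/gemma-4",  # hypothetical identifier; use the name your server reports
        messages=[{"role": "user", "content": "Summarize what a Mixture-of-Experts model is."}],
    )
    print(response.choices[0].message.content)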

Red Hat AI
Article

Unsloth and Training Hub: Lightning-fast LoRA and QLoRA fine-tuning

Aditi Saluja +2

Learn how to fine-tune large language models in enterprise environments with Training Hub, an open source library for LLM post-training. Discover the benefits of LoRA and QLoRA using Unsloth, including reduced VRAM requirements and faster training times.
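
To give a sense of what QLoRA with Unsloth looks like, here is a minimal sketch that loads a 4-bit base model and attaches LoRA adapters; the checkpoint, rank, and target modules are illustrative defaults rather than the Training Hub recipe from the article.

    # Minimal sketch: load a 4-bit base model (QLoRA) and add LoRA adapters
    # with Unsloth, trading a small accuracy cost for much lower VRAM use.
    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/llama-3-8b-bnb-4bit",  # placeholder checkpoint
        max_seq_length=2048,
        load_in_4bit=True,                          # 4-bit base weights cut VRAM requirements
    )
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,                                       # LoRA rank
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )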

ai-ml
Article

Vibes, specs, skills, and agents: The four pillars of AI coding

Rich Naszcyniec

Explore the four pillars of AI coding: vibes, specs, skills, and agents, and learn how they can improve coding quality and reduce the encoding/decoding gap. Discover the benefits of a spec-driven approach and the importance of modular specs and skills in making the four pillars work in harmony.

Featured image for vLLM inference article.
Article

Integrate Claude Code with Red Hat AI Inference Server on OpenShift

Alexander Barbosa Ayala

Learn how to integrate Anthropic's Claude Code, an agentic coding tool, using Red Hat AI Inference Server on OpenShift. Keep the inference process private on your own infrastructure while retaining the full Claude Code workflow.

Featured image for Red Hat OpenShift AI.
Article

Run Model-as-a-Service for multiple LLMs on OpenShift

Vladimir Belousov

Learn how to deploy multiple large language models (LLMs) behind a single OpenAI-compatible endpoint on OpenShift using a Model-as-a-Service (MaaS) approach. This guide demonstrates how to build an intelligent routing infrastructure that dynamically inspects the request payload and directs traffic based on the specified model field, reducing GPU waste and simplifying application logic.
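
The core routing idea can be sketched as a tiny OpenAI-compatible proxy that reads the model field and forwards the request to the matching backend; the service names, model names, and use of FastAPI here are assumptions, and the article's OpenShift-native implementation will differ.

    # Minimal sketch: route OpenAI-compatible requests to different model
    # backends based on the "model" field in the payload.
    import httpx
    from fastapi import FastAPI, Request
    from fastapi.responses import JSONResponse

    app = FastAPI()

    BACKENDS = {  # placeholder model names and in-cluster service URLs
        "granite-3-8b": "http://granite-predictor:8080/v1/chat/completions",
        "mistral-7b": "http://mistral-predictor:8080/v1/chat/completions",
    }

    @app.post("/v1/chat/completions")
    async def route(request: Request):
        payload = await request.json()
        backend = BACKENDS.get(payload.get("model"))
        if backend is None:
            return JSONResponse({"error": "unknown model"}, status_code=400)
        async with httpx.AsyncClient(timeout=120) as client:
            upstream = await client.post(backend, json=payload)
        return JSONResponse(upstream.json(), status_code=upstream.status_code)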

Featured image for Red Hat OpenShift AI.
Article

Hybrid loan-decisioning with OpenShift AI and Vertex AI

Harshil Sabhnani

Discover a practical solution pattern for building a modern financial application that makes loan decisions using multiple machine learning systems deployed across hybrid environments.

Event

Red Hat at Devoxx UK 2026

Headed to Devoxx UK 2026? Visit the Red Hat Developer booth on-site to speak to our expert technologists.

Event

Red Hat at Devoxx France 2026

Headed to Devoxx France 2026? Visit the Red Hat Developer booth on-site to speak to our expert technologists.

LLM Compressor v0.10.0 is here
Article

LLM Compressor v0.10: Faster compression with distributed GPTQ

Kyle Sayers +2

LLM Compressor v0.10 introduces Distributed Data Parallel (DDP) support for faster compression, along with improved memory management and advanced quantization formats. These updates make model compression workflows more efficient for large language models.
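
For context on what a compression run looks like, here is a minimal sketch of one-shot GPTQ quantization with llmcompressor; the model, calibration dataset, and scheme are placeholders, and import paths can shift between releases, so check the v0.10 release notes.

    # Minimal sketch: quantize Linear layers to 4-bit weights with GPTQ,
    # keeping the output head in full precision.
    from transformers import AutoModelForCausalLM
    from llmcompressor import oneshot
    from llmcompressor.modifiers.quantization import GPTQModifier

    MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder model
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

    recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

    oneshot(
        model=model,
        dataset="open_platypus",          # placeholder calibration dataset
        recipe=recipe,
        max_seq_length=2048,
        num_calibration_samples=512,
    )
    model.save_pretrained("Llama-3.1-8B-W4A16", save_compressed=True)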

Red Hat AI
Article

Configure NVIDIA Blackwell GPUs for Red Hat AI workloads

Erwan Gallen +4

Learn how to enable the NVIDIA RTX PRO 4500 Blackwell Server Edition on Red Hat AI for compact, power-efficient AI deployments. This hardware delivers solid inference performance without adding unnecessary operational complexity for Red Hat AI users.