Red Hat AI

Article

Beyond the next token: Why diffusion LLMs are changing the game

Alon Kellner +1

This article discusses the benefits of diffusion LLMs, a new approach to language models that offers a dynamic tradeoff between accuracy and performance. The article covers the architecture, evolution, and real-world performance of this technology, including examples of open source models like LLaDA 2.X and Mercury 2.

Article

Combining KServe and llm-d for optimized generative AI inference

Ran Pollak +1

Learn how to combine KServe and llm-d to optimize generative AI inference, improve performance, and reduce infrastructure costs. This article demonstrates the integration architecture and provides practical guidance for AI platform teams.

Article

3 lessons for building reliable ServiceNow AI integrations

Tomer Golan

Learn about critical lessons from building an MCP-powered AI agent for ServiceNow, including how to structure testing environments, best practices for implementing safeguards, and a phased approach to deploying enterprise AI integrations.

Article

Deploying agents with Red Hat AI: The curious case of OpenClaw

Nati Fridman +2

Explore how Red Hat AI simplifies agent deployment with OpenClaw, showcasing model inference, safety guardrails, agent identity, and persistent state. Learn about vLLM, Llama Stack, and Models-as-a-Service (MaaS) options, and discover the benefits of agent identity and zero trust with Kagenti and AuthBridge.

Article

Build more secure, optimized AI supply chains with Fromager

Lalatendu Mohanty

Learn how Fromager, an open source project, helps protect Python dependencies by rebuilding entire dependency trees from source, providing network-isolated builds, and managing dependencies as a verifiable map. Discover how Fromager supports supply chain verifiability, ABI compatibility, and customization.

Video

Deploying open source AI agents on OpenShift using OpenClaw

Grace Ableidinger +1

Learn how to run OpenClaw on Red Hat OpenShift with production-grade security and observability. We cover default-deny network policies for blast radius containment, container-level sandboxing with OpenShift, Kubernetes Secrets for credential management, and end-to-end OpenTelemetry tracing with MLflow, so every decision your AI agent makes is isolated, auditable, and safe by default. Whether you're a developer exploring AI agents for the first time or a platform engineer thinking about running agentic workloads at scale, this is the infrastructure story that makes it production-ready.

Article

Build resilient guardrails for OpenClaw AI agents on Kubernetes

Cedric Clyburn +2

Learn how to build security hygiene into OpenClaw by using containers for isolation, role-based access control (RBAC) for user access permissions, and secrets for sensitive information. This article explores how to use infrastructure powered by open source technology to help protect these workflows.

Article

Manage AI context with the Lola package manager

Daniele Martinoli +2

Learn how to use Lola, a unified package manager for AI context. Treat your AI context as versioned, auditable code with Lola modules and marketplaces. Improve your AI assistant workflow with this open source tool.

Article

Distributed tracing for agentic workflows with OpenTelemetry

Fabio Massimo Ercoli

Learn how to set up distributed tracing for an agentic workflow based on lessons learned while developing the it-self-service-agent AI quickstart. This post covers configuring OpenTelemetry to track requests end-to-end across application workloads, MCP servers, and Llama Stack.

Article

Run Gemma 4 with Red Hat AI on Day 0: A step-by-step guide

Saša Zelenović +4

Learn how to deploy and experiment with Gemma 4, the latest open model family from Google DeepMind. This guide covers text, image, and video input, Mixture-of-Experts architecture, and more. Get started with Red Hat AI Inference Server today.

Article

Unsloth and Training Hub: Lightning-fast LoRA and QLoRA fine-tuning

Aditi Saluja +2

Learn how to fine-tune large language models in enterprise environments with Training Hub, an open source library for LLM post-training. Discover the benefits of LoRA and QLoRA using Unsloth, including reduced VRAM requirements and faster training times.

Article

Vibes, specs, skills, and agents: The four pillars of AI coding

Rich Naszcyniec

Explore the four pillars of AI coding: vibes, specs, skills, and agents, and learn how they can improve coding quality and reduce the encoding/decoding gap. Discover the benefits of a spec-driven approach and the importance of modular specs and skills in achieving harmony.

Article

Integrate Claude Code with Red Hat AI Inference Server on OpenShift

Alexander Barbosa Ayala

Learn how to integrate Anthropic's Claude Code, an agentic coding tool, with Red Hat AI Inference Server on OpenShift. Keep the inference process private on your own infrastructure while retaining the full Claude Code workflow.

Article

Run Model-as-a-Service for multiple LLMs on OpenShift

Vladimir Belousov

Learn how to deploy multiple large language models (LLMs) behind a single OpenAI-compatible endpoint on OpenShift using a Model-as-a-Service (MaaS) approach. This guide demonstrates how to build an intelligent routing infrastructure that dynamically inspects the request payload and directs traffic based on the specified model field, reducing GPU waste and simplifying application logic.