LLM Compressor 0.9.0: Attention quantization, MXFP4 support, and more
Explore the latest release of LLM Compressor, featuring attention quantization, MXFP4 support, AutoRound quantization modifier, and more.
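To ground the headline feature: weight quantization maps floating-point model weights onto a small integer grid plus a scale factor. The sketch below shows the arithmetic for a symmetric, per-tensor int4 scheme in plain Python. It is illustrative only, with hypothetical function names, and is not LLM Compressor's implementation (which also covers attention quantization and MXFP4 formats).

```python
# Minimal sketch of symmetric per-tensor int4 quantization.
# Illustrative only -- not LLM Compressor's actual code path.

def quantize_int4(weights):
    """Map float weights onto the signed 4-bit grid [-8, 7]."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 7 if max_abs else 1.0  # one scale shared by the tensor
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int4 codes."""
    return [v * scale for v in q]

weights = [0.31, -0.12, 0.87, -0.54]
q, scale = quantize_int4(weights)
restored = dequantize(q, scale)
# Each restored weight is within scale/2 of the original.
```

The round-trip error is bounded by half the scale step, which is why finer-grained schemes (per-channel or per-group scales, as used in practice) recover more accuracy than a single per-tensor scale.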
Discover how Red Hat Lightspeed MCP integrates AI to help simplify vulnerability management, prioritize security risks, and automate remediation planning.
This article compares the performance of llm-d, Red Hat's distributed LLM inference solution, with a traditional deployment of vLLM using naive load balancing.
Discover the advantages of using Java for AI development in regulated industries. Learn about architectural stability, performance, runtime guarantees, and more.
Whether you're just getting started with artificial intelligence or looking to deepen your knowledge, our hands-on tutorials will help you unlock the potential of AI while leveraging Red Hat's enterprise-grade solutions.
Learn how Model Context Protocol (MCP) enhances agentic AI in OpenShift AI, enabling models to call tools, services, and more from an AI application.
Take a look back at Red Hat Developer's most popular articles of 2025, covering AI coding practices, agentic systems, advanced Linux networking, and more.
Use Red Hat Lightspeed to simplify inventory management and convert natural language into inventory API queries for auditing and multi-agent automation.
Discover 2025's leading open models, including Kimi K2 and DeepSeek. Learn how these models are transforming AI applications and how you can start using them.
Learn how to deploy and test the inference capabilities of vLLM on OpenShift using GuideLLM, a specialized performance benchmarking tool.
Learn how to fine-tune a RAG model using Feast and Kubeflow Trainer. This guide covers preprocessing and scaling training on Red Hat OpenShift AI.
This guide covers the essentials of Kubernetes monitoring, including key metrics, top tools, and the role of AI in managing complex systems.
Learn how to implement retrieval-augmented generation (RAG) with Feast on Red Hat OpenShift AI to create highly efficient and intelligent retrieval systems.
Deploy Red Hat OpenShift Data Foundation (ODF), a unified data storage solution.
Learn how to implement identity-based tool filtering, OAuth2 Token Exchange, and HashiCorp Vault integration for the MCP Gateway.
Get a step-by-step guide to integrating a custom AI service with Red Hat Ansible Lightspeed.
Automate Amazon AI workflows with Ansible: Deploy Bedrock agents, generate personalized content, and monitor resources with DevOps Guru for auditability.
Learn how to migrate from Llama Stack’s deprecated Agent APIs to the modern, OpenAI-compatible Responses API without rebuilding from scratch.
Most log lines are noise. Learn how semantic anomaly detection filters out repetitive patterns—even repetitive errors—to surface the genuinely unusual events.
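The article above uses semantic (embedding-based) anomaly detection; as a simplified stand-in for the idea, the sketch below masks variable tokens (numbers, hex values) so that repeated log shapes collapse into one template, then flags lines whose template is rare. Function names, the masking rules, and the threshold are illustrative assumptions, not the article's method.

```python
import re
from collections import Counter

def template(line):
    """Collapse numbers and hex-like tokens so repeated log shapes match."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

def unusual_lines(lines, max_count=1):
    """Return lines whose template occurs at most `max_count` times."""
    counts = Counter(template(l) for l in lines)
    return [l for l in lines if counts[template(l)] <= max_count]

logs = [
    "GET /health 200 in 3ms",
    "GET /health 200 in 5ms",
    "GET /health 200 in 4ms",
    "ERROR: connection refused to 10.0.0.7:5432",
    "ERROR: connection refused to 10.0.0.8:5432",
    "panic: nil pointer dereference in handler",
]
print(unusual_lines(logs))
# -> ['panic: nil pointer dereference in handler']
```

Note that the repeated connection-refused errors are filtered out along with the health checks: frequency, not severity, decides what surfaces, which mirrors the article's point that even repetitive errors are noise.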
Integrating AutoRound into LLM Compressor delivers higher accuracy for low bit-width quantization, lightweight tuning, and compressed-tensor compatibility.
Optimize AI scheduling. Discover 3 workflows to automate RayCluster lifecycles using KubeRay and Kueue on Red Hat OpenShift AI 3.
Run the latest Mistral Large 3 and Ministral 3 models on vLLM with Red Hat AI, providing day-0 access for immediate experimentation and deployment.
Learn how to optimize AI inference costs with AWS Inferentia and Trainium chips on Red Hat OpenShift using the AWS Neuron Operator.