Red Hat Developer Blog
Here is our latest blog content. Explore our featured monthly resource and our most recently published posts, and take the chance to learn more about our contributors.
Advancing AI efficiency is more critical than ever, and sparsity has proven...
Quantized LLMs achieve near-full accuracy with minimal trade-offs after 500K+...
Machete, Neural Magic's optimized kernel for NVIDIA Hopper GPUs, achieves...
Discover LLM Compressor, a unified library for creating accurate compressed...
Explore the integration of FP8 in vLLM. Learn how to receive up to a 2x...
Llama 3's advancements, particularly at 8 billion parameters, make AI more...
Learn about Marlin, a mixed-precision matrix multiplication kernel that...
4-bit and 8-bit quantized LLMs excel in long-context tasks, retaining over...
Sparse fine-tuning in combination with sparsity-aware inference software,...