Optimize LLMs with LLM Compressor in Red Hat OpenShift AI

May 20, 2025
Brian Dellabetta, Dipika Sikka
Related topics: Artificial intelligence, Summit 2025
Related products: Red Hat AI, Red Hat OpenShift AI

    A compressed summary

    • As computational costs for large language models continue to rise, research has demonstrated that LLMs can be compressed and optimized to run with far fewer compute and energy resources, without degradation in model quality.
    • LLM Compressor enables several state-of-the-art compression techniques, ranging from simple quantization and pruning to calibrated compression.
    • Red Hat OpenShift AI lets ML engineers and data scientists compare and evaluate several model compression techniques in a single, shareable experiment customized to their use case, model, and dataset.

    Large language models (LLMs) continue to make breakthroughs in language modeling tasks. However, as models grow in size and complexity, the computational and memory costs of deploying them have become a barrier to accessibility, even for organizations with access to high-end GPUs. Recent examples include Meta’s Llama 4 Scout and Maverick models, which surpass 100 billion and 400 billion parameters, respectively.

    To further optimize model inference and reduce the cost of model deployment, research efforts have focused on model compression: reducing model size without sacrificing accuracy. As AI applications mature and new compression algorithms are published, there is a need for unified tooling that can apply a variety of compression methods, tailored to a user’s inference needs and optimized to run performantly on their accelerated hardware.
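To see why compression matters at these parameter counts, here is a back-of-the-envelope sketch in Python. It counts weight storage only, ignoring activations, KV cache, and runtime overhead, and takes 1 GB as 10^9 bytes:

```python
def weight_memory_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate memory required just to store the model weights."""
    return num_params * bits_per_weight / 8 / 1e9

# A 400-billion-parameter model (roughly Llama 4 Maverick scale) at
# different weight precisions:
params = 400e9
print(weight_memory_gb(params, 16))  # 16-bit (BF16/FP16) weights: 800.0 GB
print(weight_memory_gb(params, 8))   # 8-bit (W8A8) weights:       400.0 GB
print(weight_memory_gb(params, 4))   # 4-bit (W4A16) weights:      200.0 GB
```

Even before any runtime considerations, 4-bit weights cut the raw storage for such a model by a factor of four, which directly reduces the number of accelerators needed just to hold the model.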

    Introduction to LLM Compressor

    LLM Compressor, part of the vLLM project for efficient serving of LLMs, integrates the latest model compression research into a single open-source library, enabling the generation of efficient, compressed models with minimal effort.

    The framework allows users to apply some of the most recent research on model compression techniques to improve generative AI (gen AI) models' efficiency, scalability, and performance while maintaining accuracy. With native support for Hugging Face and vLLM, compressed models can be integrated directly into deployment pipelines, delivering faster and more cost-effective inference at scale.

    LLM Compressor supports a wide variety of compression techniques:

    • Weight-only quantization (W4A16) compresses model weights to 4-bit precision, which is valuable for AI applications with limited hardware resources or strict latency requirements.
    • Weight and activation quantization (W8A8) compresses both weights and activations to 8-bit precision, targeting general server scenarios for integer and floating point formats. Most recently, Meta’s Llama 4 Maverick FP8 model was quantized using LLM Compressor.
    • Weight pruning, also known as sparsification, removes certain weights from the model entirely. While this requires fine-tuning, it can be used in conjunction with quantization for further inference acceleration.
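LLM Compressor expresses these techniques through declarative recipes. As a rough illustration only, the sketch below shows what a W4A16 weight-only recipe can look like in the library's YAML recipe format; the modifier and field names follow published llm-compressor examples but may vary between releases, so check them against the documentation for the version you use:

```yaml
# Illustrative W4A16 weight-only quantization recipe (field names may
# differ across llm-compressor releases; verify against the docs).
quant_stage:
  quant_modifiers:
    GPTQModifier:                # calibrated weight quantization
      ignore: ["lm_head"]        # keep the output head unquantized
      config_groups:
        group_0:
          targets: ["Linear"]    # apply to all Linear layers
          weights:
            num_bits: 4          # "W4": 4-bit integer weights
            type: "int"
            symmetric: true
            strategy: "group"    # one scale per group of weights
            group_size: 128
```

Note that activations stay in 16-bit precision (the "A16" in W4A16), so the recipe quantizes weights only.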

    While each method has varying data and algorithmic requirements, all can be applied directly using the Red Hat OpenShift AI platform, either through an interactive workbench or within the data science pipeline feature. 

    Integration with Red Hat OpenShift AI

    OpenShift AI empowers ML engineers and data scientists to experiment with model training, fine-tuning, and now compression. The OpenShift AI integration of LLM Compressor, available as a developer preview feature beginning with v2.20, provides two introductory examples:

    • A workbench image and notebook that demonstrate the compression of a tiny model, runnable on CPU, highlighting how calibrated compression can improve over data-free approaches.
    • A data science pipeline that extends the same flow to a larger Llama 3.2 model, highlighting how users can build automated, GPU-accelerated experiments that can be shared with other stakeholders in a single web UI.

    The following video recording demonstrates the data science pipeline in the OpenShift AI dashboard:

    As AI adoption increases, so too does the need to deploy LLMs efficiently. We hope this has given you a feel for how you can run these experiments yourself with LLM Compressor and vLLM within OpenShift AI. We invite you to experiment with the developer preview in OpenShift AI v2.20.

    Want to learn more about LLM Compressor and vLLM? Check out our GitHub repo for documentation on our compression algorithms, or join the vLLM Slack and connect with us directly in the #llm-compressor Slack channel. We’d love to hear from you.

