Brian Dellabetta
Brian Dellabetta's contributions
Article
LLM Compressor 0.7.0 release recap
Dipika Sikka and 3 others
LLM Compressor 0.7.0 brings Hadamard transforms for better accuracy, mixed-precision FP4/FP8, and calibration-free block quantization for efficient compression.
Article
Optimize LLMs with LLM Compressor in Red Hat OpenShift AI
Brian Dellabetta and 1 other
Optimize model inference and reduce costs using model compression techniques such as quantization and pruning with LLM Compressor on Red Hat OpenShift AI.