Mark Kurtz's contributions

GuideLLM: Evaluate LLM deployments for real-world inference
By Jenny Yi and 2 others
Learn how to evaluate the performance of your LLM deployments with the open source GuideLLM toolkit to optimize cost, reliability, and user experience.

Axolotl meets LLM Compressor: Fast, sparse, open
By Rahul Tuli and 3 others
Discover how to deploy compressed, fine-tuned models for efficient inference with the new Axolotl and LLM Compressor integration.

Enable 3.5 times faster vision-language models with quantization
By Shubhra Pandit and 4 others
Learn how quantized vision-language models enable faster inference, lower costs, and scalable AI deployment without compromising capability.

Deployment-ready reasoning with quantized DeepSeek-R1 models
By Eldar Kurtić and 3 others
Explore new open source quantized reasoning models based on the DeepSeek-R1-Distill suite that deliver near-perfect accuracy and inference speed improvements.

2:4 Sparse Llama: Smaller models for efficient GPU inference
By Eldar Kurtić and 4 others
Discover Sparse Llama: A 50% pruned, GPU-optimized Llama 3.1 model with 2:4 sparsity, enabling faster, cost-effective inference without sacrificing accuracy.

Multimodal model quantization support through LLM Compressor
By Kyle Sayers and 3 others
Explore multimodal model quantization in LLM Compressor, a unified library for optimizing models for deployment with vLLM.

Compressed Granite 3.1: Powerful performance in a small package
By Shubhra Pandit and 2 others
Compressed Granite 3.1 models are open-sourced on Hugging Face, deployment-ready with vLLM, and extensible using LLM Compressor.

2:4 Sparse Llama FP8: SOTA performance for NVIDIA Hopper GPUs
By Alexandre Marques and 5 others
Advancing AI efficiency is more critical than ever, and sparsity has proven to be a cornerstone in this pursuit.
