Alon Kellner's contributions
Article
Beyond the next token: Why diffusion LLMs are changing the game
Alon Kellner
This article discusses the benefits of diffusion LLMs, a revolutionary approach to language models that offers a dynamic tradeoff between accuracy and performance. The article covers the architecture, evolution, and real-world statistics of this technology, including examples of open source models like LLaDA 2.X and Mercury 2.
Article
Building an oversaturation detector with iterative error analysis
Alon Kellner
Learn how we built a simple, rules-based algorithm to detect oversaturation in LLM performance benchmarks, reducing costs by more than a factor of 2.
Article
Defining success: Evaluation metrics and data augmentation for oversaturation detection
Alon Kellner
Learn how we built an algorithm to detect oversaturation in large language model (LLM) benchmarking, saving GPU minutes and reducing costs.
Article
Reduce LLM benchmarking costs with oversaturation detection
Alon Kellner
Oversaturation in LLM benchmarking can lead to wasted machine time and skewed performance metrics. Find out how one Red Hat team tackled the challenge.