Defining success: Evaluation metrics and data augmentation for oversaturation detection

November 20, 2025
Alon Kellner
Related topics: Artificial intelligence
Related products: Red Hat AI

    Oversaturation is a sneaky problem that wastes time, money, and costly GPU cycles when benchmarking large language models (LLMs). In Reduce LLM benchmarking costs with oversaturation detection, we established what oversaturation is and explained why oversaturation detection (OSD) is crucial for controlling our LLM benchmarking budgets.

    Now, we're moving from the problem to the solution. But how do you teach a machine to spot a condition that is difficult to even define? Here's how we built the algorithm.

    Our goal: Don't waste money

    Our goal is simple, but it has two conflicting parts:

    • Catch a "bad" (oversaturated) run. This is a true alert. Every minute we let a "bad" run continue, we are literally burning money on a premium GPU for useless results.
    • Never stop a "good" (undersaturated) run. Stopping one is a false alert, and in many ways it's even worse: we invalidate a perfectly good, expensive test, leave a permanent "hole" in our benchmark, and have to run the entire test all over again.

    This creates a high-stakes balancing act. Our algorithm needs to be aggressive enough to catch bad runs, but not so aggressive that it kills good ones.

    The problem of "when": Survival analysis

    The core challenge—"How long until an event happens?"—belongs to a specific field of data science called survival analysis. It started in medicine but is used everywhere from engineering to business. Our specific questions were:

    • "How long will this good run survive before our algorithm mistakenly stops it?"
    • "How long will this bad run survive before our algorithm correctly catches it?"

    A standard metric from this field is the concordance index (C-Index), which measures if our algorithm "makes sense" by checking if "bad" runs were stopped before "good" runs.
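
    To make the pairwise idea concrete, here is a minimal sketch of a hard C-Index computed over stop times, assuming one stop time per run; the function and variable names are illustrative, not our production code.

        from itertools import product

        def c_index(bad_stop_times, good_stop_times):
            """Hard concordance: the fraction of (bad, good) run pairs in which
            the bad run was stopped before the good run."""
            pairs = list(product(bad_stop_times, good_stop_times))
            concordant = sum(1 for t_bad, t_good in pairs if t_bad < t_good)
            return concordant / len(pairs)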

    The "1 second versus 1 hour" flaw

    Consider two scenarios:

    • Scenario A: Our algorithm raises a true alert (catches saturation) after 1 minute, but raises a false alert (stops a good run) after 2 minutes.
    • Scenario B: Our algorithm raises a true alert after 1 second, but raises a false alert after 1 hour.

    The standard C-Index treats both scenarios as equally perfect because it only cares about the order. However, for our business goal, Scenario B is definitely better. It saves us an entire hour of wasted GPU time. We needed a metric that rewarded the magnitude of the time gap, not just the correct order.
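
    Running the sketch above on both scenarios (times in seconds) makes the flaw visible: the hard C-Index simply cannot tell them apart.

        # Scenario A: true alert at 60 s, false alert at 120 s
        print(c_index(bad_stop_times=[60], good_stop_times=[120]))   # 1.0
        # Scenario B: true alert at 1 s, false alert at 3600 s
        print(c_index(bad_stop_times=[1], good_stop_times=[3600]))   # also 1.0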

    Our solution: The Soft-C-Index

    Because no standard metric fit our needs, we built our own: the Soft-C-Index. The Soft-C-Index measures the percentage of "effective" time, not just the correct order. It adds a "soft" component that heavily rewards algorithms for creating a wider gap between catching bad runs early and stopping good ones late.

    The standard C-Index uses a "hard" sign function, while our Soft-C-Index adds a "softness" factor that measures the magnitude of the time difference (Figure 1).

    Figure 1: The standard C-Index (top) only cares about the order (sign). Our Soft-C-Index (bottom) adds a "soft" component to reward algorithms that save more time (magnitude).
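
    The exact softness term is the one in Figure 1; as a hedged illustration only, the sketch below swaps the hard sign for a saturating function of the time gap (a tanh with an assumed 600-second scale), which is enough to show why the two scenarios above now score differently.

        import math
        from itertools import product

        def soft_c_index(bad_stop_times, good_stop_times, scale=600.0):
            """Soft concordance sketch: each (bad, good) pair is scored by the
            magnitude of its time gap rather than a hard 0/1 sign.  The tanh
            and the 600-second scale are illustrative assumptions, not the
            published definition."""
            pairs = list(product(bad_stop_times, good_stop_times))
            score = sum(0.5 * (1.0 + math.tanh((t_good - t_bad) / scale))
                        for t_bad, t_good in pairs)
            return score / len(pairs)

        print(soft_c_index([60], [120]))    # Scenario A: ~0.55, barely better than chance
        print(soft_c_index([1], [3600]))    # Scenario B: ~1.00, the hour-long gap is rewarded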

    The real "Aha!" moment comes from these next two graphs (Figures 2 and 3). We ran a simulation where our algorithm got progressively "smarter," creating a wider time gap between true alerts and false alerts.

    Figure 2: The standard C-Index. Notice how it flatlines at 1.0 (perfect) almost immediately. It's blind to all the improvements we're making after it gets the basic order right.
    Figure 3: Our Soft-C-Index. The curve keeps climbing! It correctly sees that as the time gap widens, our algorithm is getting better and better, saving us more GPU-minutes.

    This is the key. The Soft-C-Index curve keeps climbing, correctly reflecting that we are saving more money. It's aligned with our business goal: saving GPU minutes.

    Our evaluation method

    Okay, we've invented our metric. Now we need a test environment. Our evaluation method has three key steps:

    1. Get labeled data: Manually label 500+ of our reports as "good" or "bad."
    2. Handle "unraised" cases: Figure out what to do when an algorithm never fires an alert.
    3. Fix biased data: Solve a nasty "cheating" problem we found in our dataset.

    Step 1: Labeling the data

    We started with more than 500 of our benchmark reports, which our experts reviewed one by one, manually labeling them as either undersaturated (good) or oversaturated (bad). How could they tell? By looking at the charts. See Figures 4 and 5.

    Figure 4: A healthy, "good" report. All metrics are stable and "boring," which is exactly what you want to see. The concurrent (blue) requests plot is flat, and the Time to First Token (ttft, in yellow) wait time is stable. This server is handling the load perfectly.
    Figure 5: A "bad" report showing server panic. This is what oversaturation looks like in data form. The concurrent (blue) plot is a ramp going straight up, and the ttft (yellow) plot explodes from milliseconds to minutes. This is our "time to first word," going from "instant" to "go make coffee."

    Step 2: Handling "unraised" cases

    We had to figure out what to do with "unraised" cases, where our algorithm runs to the end of a benchmark and never raises an alert. We settled on a "best-case"/"worst-case" inference system:

    • If the run was good and our algorithm never alerted: This is a perfect outcome, a best-case scenario.
    • If the run was bad and our algorithm never alerted: This is a total failure, a worst-case scenario.

    This logic allows us to calculate soft_c_index_best, soft_c_index_worst, and soft_c_index_avg, which is the main metric we use.
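
    As a hedged sketch only: one way to realize this inference is to bound the missing alert time, imputing it optimistically for a best-case score and pessimistically for a worst-case score, then averaging the two. The imputation rule, the 10x padding, and the function names below are assumptions for illustration, not our exact definition.

        def impute_alert_time(alert_time, run_is_bad, horizon, optimistic):
            """Fill in a stop time for a run whose alert never fired.
            `horizon` is the run length; the 10x padding stands in for
            "far beyond the end of the run" and is an assumption."""
            if alert_time is not None:
                return alert_time
            if run_is_bad:
                # A missed bad run: the best case pretends it was caught right
                # at the end, the worst case pushes the catch far past it.
                return horizon if optimistic else 10 * horizon
            # An untouched good run: the best case treats the non-alert as very
            # late, the worst case assumes it would have fired right at the end.
            return 10 * horizon if optimistic else horizon

        def soft_c_index_avg(bad_alerts, good_alerts, soft_c_index, horizon=3600.0):
            """Average of the best-case and worst-case scores, reusing a
            soft_c_index function such as the sketch above."""
            def score(optimistic):
                return soft_c_index(
                    [impute_alert_time(t, True, horizon, optimistic) for t in bad_alerts],
                    [impute_alert_time(t, False, horizon, optimistic) for t in good_alerts])
            return (score(True) + score(False)) / 2.0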

    Step 3: Fixing the biased data

    Our dataset was heavily biased: most high-load (high requests-per-second, or RPS) runs were oversaturated, and most low-load runs were undersaturated. That bias let an algorithm score well simply by reading the "load" setting instead of the server's behavior, which is cheating.

    To compensate, we augmented the data by duplicating it so that the load looks higher, running three scenarios with a data multiplier of 1, 2, and 8 (sketched after the table). To demonstrate the effect, we evaluated a "bad" algorithm (a simple threshold) and a "good" algorithm on the augmented dataset. The following table shows their soft_c_index_avg scores.

    Algorithm            Multiplier=1    Multiplier=2    Multiplier=8
    my-bad-algorithm     0.706           0.805           0.548
    my-good-algorithm    0.821           0.830           0.812

    The "bad" algorithm's performance rapidly deteriorates as the average load shifts. The "good" algorithm, in contrast, is better in every aspect. Its score is high, and it's almost completely multiplier invariant, meaning it's stable and reliable. Now we have a complete evaluation setup.

    Next steps

    In this blog, we went through the technical details of choosing an evaluation metric (Soft-C-Index), labeling our dataset, and fixing bias with data augmentation. In the third and final part of this series, we'll walk through the algorithm exploration itself, show our iterative error analysis, and reveal the discovery process of our best algorithm so far.

    Related Posts

    • Reduce LLM benchmarking costs with oversaturation detection

    • How to run performance tests using benchmark-runner

    • GuideLLM: Evaluate LLM deployments for real-world inference

    • GPU benchmarking and how to choose a GPU framework

    • Benchmarking with GuideLLM in air-gapped OpenShift clusters

    • Ollama vs. vLLM: A deep dive into performance benchmarking
