Calibrated Predictive Lower Bounds on Time-to-Unsafe-Sampling in LLMs
Published 16 Jun 2025 in cs.LG, stat.AP, and stat.ML | arXiv:2506.13593v2
Abstract: We develop a framework to quantify the time-to-unsafe-sampling - the number of LLM generations required to trigger an unsafe (e.g., toxic) response. Estimating this quantity is challenging, since unsafe responses are exceedingly rare in well-aligned LLMs, potentially occurring only once in thousands of generations. As a result, directly estimating time-to-unsafe-sampling would require collecting training data with a prohibitively large number of generations per prompt. However, with realistic sampling budgets, we often cannot generate enough responses to observe an unsafe outcome for every prompt, leaving the time-to-unsafe-sampling unobserved in many cases, making the estimation and evaluation tasks particularly challenging. To address this, we frame this estimation problem as one of survival analysis and develop a provably calibrated lower predictive bound (LPB) on the time-to-unsafe-sampling of a given prompt, leveraging recent advances in conformal prediction. Our key innovation is designing an adaptive, per-prompt sampling strategy, formulated as a convex optimization problem. The objective function guiding this optimized sampling allocation is designed to reduce the variance of the estimators used to construct the LPB, leading to improved statistical efficiency over naive methods that use a fixed sampling budget per prompt. Experiments on both synthetic and real data support our theoretical results and demonstrate the practical utility of our method for safety risk assessment in generative AI models.
The paper introduces a survival analysis framework to compute a calibrated lower predictive bound on time-to-unsafe-sampling in LLMs.
It employs a two-stage quantile regression approach with adaptive, per-prompt censoring to meet rigorous PAC coverage guarantees.
Experiments on synthetic and real data demonstrate that the optimized calibration method produces informative, low-variance safety estimates.
This paper introduces a framework to quantify the "time-to-unsafe-sampling" for LLMs, defined as the number of LLM generations needed to produce an unsafe response (e.g., toxic, private information) for a given prompt. Estimating this is difficult because unsafe responses from well-aligned LLMs are rare, potentially requiring an impractically large number of samples per prompt to observe. Budget constraints on generation and auditing further mean that for many prompts, an unsafe outcome might not be observed, making direct estimation and evaluation challenging.
The core idea is to reframe this estimation problem as a survival analysis task. The survival time T is the time-to-unsafe-sampling, and the censoring time C is the maximum number of generation-and-audit cycles allocated to a specific prompt X. The goal is to produce a provably calibrated Lower Predictive Bound (LPB), denoted L^(X), on T. This LPB aims to satisfy P(T_test ≥ L^(X_test)) ≥ 1 − α with high probability (a PAC guarantee), meaning one can expect at least L^(X) safe responses before an unsafe one. Importantly, L^(X) can be computed at inference time with a single call to a calibrated regression model, without further LLM sampling or auditing.
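Concretely, the censored-observation model can be sketched as follows. This is a minimal simulation, not the paper's code; the unsafe probability `p_unsafe` is assumed known only for the purpose of the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def observe_censored(p_unsafe, censor_time):
    """Simulate one prompt: T ~ Geom(p_unsafe) is the index of the first
    unsafe generation, but with a budget of C generate-and-audit cycles
    we only observe min(T, C) and whether the unsafe event occurred."""
    t = int(rng.geometric(p_unsafe))    # true time-to-unsafe-sampling
    observed = min(t, censor_time)      # what the budget reveals
    event = t <= censor_time            # True iff an unsafe response was seen
    return observed, bool(event)
```

When `p_unsafe` is tiny (a well-aligned model), the event is rarely observed within the budget, which is exactly the censoring problem the paper addresses.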
The method builds on conformalized survival analysis. It involves two stages:
1. Training a regression model on a subset of prompts to predict the time-to-unsafe-sampling (specifically, its conditional quantiles q^τ(X)).
2. Using a holdout calibration set to calibrate the regression model's predictions to obtain the LPB.
A key contribution is the design of an adaptive, per-prompt sampling strategy for allocating the censoring times Ci during calibration, subject to a global sampling budget B. This is contrasted with naive approaches that might split the budget equally or assign Ci independently of the prompt.
Methods for Calibration and Censoring Time Allocation:
Naive Baseline:
Defines each per-prompt censoring time C_i as a random variable independent of the prompt X_i, specifically C_i ∼ Geom(|I_2|/B), where |I_2| is the size of the calibration set.
Uses true inverse-censoring weights w_τ(X_i) = 1/P(q^τ(X_i) ≤ C_i | X_i) to estimate the miscoverage α^(τ).
The LPB is L^(X) = q^τ^(X), where τ^ is the largest τ such that α^(τ) ≤ α.
This method can suffer from high variability and a loose PAC coverage bound when the maximum weight γ_τ = sup_x w_τ(x) is large.
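A sketch of this baseline calibration, with illustrative names; it assumes the geometric censoring law above, under which the inverse-censoring weights have a closed form:

```python
import numpy as np

def naive_calibrate(q_hat, X_cal, T_obs, C, budget, alpha, taus):
    """Naive baseline sketch: C_i ~ Geom(n/B), independent of the prompt.
    Inverse-censoring weights 1/P(q_hat <= C_i) reweight the observed
    miscoverage events; tau_hat is the largest tau with alpha_hat <= alpha."""
    n = len(X_cal)
    p_c = n / budget                          # success prob of Geom(n/B)
    tau_hat = 0.0
    for tau in sorted(taus):
        q = np.array([q_hat(tau, x) for x in X_cal])
        w = 1.0 / (1.0 - p_c) ** (q - 1)      # 1 / P(C_i >= q | X_i)
        miss = (q <= C) & (T_obs < q)         # observed miscoverage event
        if np.mean(w * miss) <= alpha:
            tau_hat = tau
    return tau_hat
```

Note how the weight blows up when the predicted quantile q is large relative to the typical censoring time, which is the source of the high variability mentioned above.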
Prompt-Adaptive Budget Calibration (Our main contribution): This method has three modes:
Basic Mode:
Sets censoring times adaptively: C_i = Ber(π_i) · q^τ_prior(X_i), where τ_prior is a chosen prior quantile level (e.g., 0.2 when targeting 90% coverage with a reasonably accurate quantile estimator) and π_i = min(B/(|I_2| · q^τ_prior(X_i)), 1) is the per-prompt evaluation probability.
This aims to maximize P(q^τ(X_i) ≤ C_i) for τ ≤ τ_prior, improving budget utilization.
The search for τ^ is restricted to [0, τ_prior].
* Trimmed Mode:
* Improves upon the Basic mode by capping the estimated quantiles: f^τ(X) = min(q^τ(X), M), where M is a fixed threshold.
* Censoring times become C_i = Ber(π_i) · f^τ_prior(X_i), with π_i = min(B/(|I_2| · f^τ_prior(X_i)), 1).
* This bounds the maximum weight by γ = max(|I_2| · M/B, 1), tightening the PAC coverage guarantee. The choice of M and B involves a trade-off: a larger M (more informative LPBs) requires a larger B to keep γ small.
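In code, the trimmed allocation might look like this (a sketch with illustrative names, not the paper's implementation):

```python
import numpy as np

def trimmed_censoring(q_prior, M, budget, rng):
    """Trimmed-mode sketch: cap the prior-quantile estimates at M, then
    allocate each prompt a censoring time of Ber(pi_i) * f_i, so the
    expected total cost sum(pi_i * f_i) stays at most `budget`."""
    n = len(q_prior)
    f = np.minimum(np.asarray(q_prior, dtype=float), M)  # f^tau_prior(X_i)
    pi = np.minimum(budget / (n * f), 1.0)               # eval probabilities
    C = np.where(rng.random(n) < pi, f, 0.0)             # Ber(pi_i) * f_i
    return C, pi
```

Because 1/π_i ≤ n·M/budget whenever π_i < 1, no calibration weight can exceed γ = max(n·M/B, 1).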
* Optimized Mode (Flagship):
* Further refines the Trimmed mode by optimizing the allocation of the π_i to minimize the average weight w̄ = (1/|I_2|) Σ_i 1/π_i, which reduces the variance of the miscoverage estimator α^(τ).
* This is done by solving a convex optimization problem; consistent with the quantities above, it takes (roughly) the form: minimize Σ_{i∈I_2} 1/π_i subject to Σ_{i∈I_2} π_i · f^τ_prior(X_i) ≤ B and min(B/(|I_2| · M), 1) ≤ π_i ≤ 1, i.e., stay within the expected sampling budget while keeping every weight bounded.
* The resulting weights w({X_j}, i) = 1/π_i* are also bounded by γ = max(|I_2| · M/B, 1).
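The paper states only that the allocation problem is convex; one simple way to solve a program of this shape (minimize Σ 1/π_i under an expected-budget constraint and box constraints) is bisection on the KKT multiplier, sketched below with illustrative names:

```python
import numpy as np

def optimized_pi(f, budget, M, iters=200):
    """Sketch of the Optimized mode: minimize sum(1/pi_i) subject to
    sum(pi_i * f_i) <= budget and min(budget/(n*M), 1) <= pi_i <= 1,
    which keeps every weight 1/pi_i below gamma = max(n*M/budget, 1).
    Solved by bisection on the KKT multiplier lam, whose unconstrained
    stationary point is pi_i = 1/sqrt(lam * f_i)."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    lo = min(budget / (n * M), 1.0)         # lower box bound on pi_i

    def pi_of(lam):
        return np.clip(1.0 / np.sqrt(lam * f), lo, 1.0)

    if f.sum() <= budget:                   # budget slack: evaluate everything
        return np.ones(n)
    lam_lo, lam_hi = 1e-12, 1e12            # bracket the multiplier
    for _ in range(iters):
        lam = np.sqrt(lam_lo * lam_hi)      # geometric bisection
        if np.sum(pi_of(lam) * f) > budget:
            lam_lo = lam                    # spending too much: raise lam
        else:
            lam_hi = lam
    return pi_of(lam_hi)
```

The stationary point π_i ∝ 1/√f_i means prompts with large (trimmed) quantile estimates are sampled less often but receive larger weights, balancing budget use against weight variance.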
Theoretical Guarantees:
The paper provides PAC-type coverage validity for both the Naive and Prompt-Adaptive methods. For instance, its validity theorem states that for the prompt-adaptive methods, with probability at least 1 − δ:
P(T_test ≥ L^(X_test) | D) ≥ 1 − α − √((2γ² + 5) · log(1/δ) / |I_2|).
This guarantee holds for any LLM, prompt distribution, audit function, or budget, and its tightness depends on the maximum weight γ. The variance of the miscoverage estimator α^(τ) is shown to grow linearly with the mean calibration weight w̄_τ (a variance proposition in the paper), motivating the optimization in the Optimized mode.
Implementation Details for Quantile Regression:
Since T_i | X_i is geometrically distributed with some unsafe probability p(X_i), the conditional quantile function q_τ(X_i) can be derived in closed form from p(X_i). The paper estimates p(X_i) using a model (e.g., a neural network) trained with a binary cross-entropy (BCE) loss on aggregated unsafe proportions from the training set.
For each prompt X_i in the training set, N responses are generated and the empirical unsafe rate Ȳ_i = (1/N) Σ_{j=1}^{N} Y_ij (where Y_ij = 1 if the j-th response is unsafe) is computed. The loss is BCE(Ȳ_i, p^(X_i)).
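Since T | X ∼ Geom(p(X)) has CDF 1 − (1 − p)^k, inverting the CDF gives the closed form q_τ(x) = ⌈log(1 − τ)/log(1 − p(x))⌉; a minimal sketch:

```python
import math

def geom_quantile(tau, p):
    """tau-quantile of T ~ Geom(p) on support 1, 2, ...: the smallest k
    with P(T <= k) = 1 - (1 - p)**k >= tau, via inverting the CDF."""
    if p >= 1.0:
        return 1
    return max(1, math.ceil(math.log1p(-tau) / math.log1p(-p)))
```

For example, with p(x) = 0.001 the 0.2-quantile is 224: only after a couple of hundred generations does the chance of having seen an unsafe response reach 20%.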
Experimental Validation:
Synthetic Data:
A dataset was generated where the X_i are covariates and p_i is the true unsafe probability, so that T_i ∼ Geom(p_i).
The Optimized method consistently achieved the target coverage (e.g., 90%) with the lowest variance compared to the Uncalibrated, Naive, Basic, and Trimmed methods across various budget constraints (B/|I_2|).
The Naive method showed poor coverage at low budgets and overcoverage at high budgets. The Basic method had high variance. The Trimmed method was more stable than Basic.
The Optimized and Trimmed methods produced more informative (higher) LPBs as the budget increased.
Real Data:
Dataset: RealToxicityPrompts, with Llama 3.2 1B as the LLM.
Audit Function: Detoxify-original model with a 0.5 toxicity threshold.
Comparison: Uncalibrated, Naive, and Optimized methods. For the Optimized method, M was set such that γ=2.
Results: The Uncalibrated baseline overcovered (too conservative). The Naive method yielded invalid LPBs with high variance at low budgets. The Optimized method achieved near-nominal coverage with small variance across all tested budgets. LPBs from the Optimized method became more informative (higher) with increased budget, eventually surpassing the Uncalibrated baseline.
Empirical miscoverage was estimated by drawing min(L^(X_i), 2400) samples for each test prompt.
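This evaluation loop can be sketched as follows, where `is_unsafe` stands in for one generate-and-audit call (a hypothetical interface, not the paper's code):

```python
def empirical_miscoverage(prompts, lpb, is_unsafe, cap=2400):
    """Estimate P(T < L_hat(X)) on test prompts: draw up to min(L_hat, cap)
    responses and count a miss if an unsafe one appears strictly before
    the bound (T >= L_hat means the LPB covered this prompt)."""
    misses = 0
    for x in prompts:
        bound = lpb(x)
        t = None                                 # first unsafe index, if any
        for j in range(1, int(min(bound, cap)) + 1):
            if is_unsafe(x):
                t = j
                break
        misses += (t is not None and t < bound)
    return misses / len(prompts)
```

Capping at 2400 draws keeps the evaluation affordable; a prompt whose bound exceeds the cap and shows no unsafe response within it is treated as covered.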
Practical Implications:
This research offers a proactive way to assess the risk level of a prompt before extensive generation.
If L^(X) is low, it signals a higher risk, suggesting the need for more resource-intensive safety checks for that prompt.
It allows for comparing the reliability of different LLMs on a set of prompts.
The calibration procedures are computationally feasible. For instance, efficient prompt-parallel sampling was implemented using vLLM for real-data experiments.
Limitations and Future Work:
The coverage guarantee is marginal (average over test prompts), not conditional on specific prompt subgroups. Future work aims for selection-conditional coverage.
The i.i.d. assumption for samples might not hold in adaptive or adversarial settings (e.g., jailbreak attacks) or under prompt distribution shifts. Future work includes continual or adaptive recalibration mechanisms.