
Calibrated Predictive Lower Bounds on Time-to-Unsafe-Sampling in LLMs

Published 16 Jun 2025 in cs.LG, stat.AP, and stat.ML | arXiv:2506.13593v2

Abstract: We develop a framework to quantify the time-to-unsafe-sampling - the number of LLM generations required to trigger an unsafe (e.g., toxic) response. Estimating this quantity is challenging, since unsafe responses are exceedingly rare in well-aligned LLMs, potentially occurring only once in thousands of generations. As a result, directly estimating time-to-unsafe-sampling would require collecting training data with a prohibitively large number of generations per prompt. However, with realistic sampling budgets, we often cannot generate enough responses to observe an unsafe outcome for every prompt, leaving the time-to-unsafe-sampling unobserved in many cases, making the estimation and evaluation tasks particularly challenging. To address this, we frame this estimation problem as one of survival analysis and develop a provably calibrated lower predictive bound (LPB) on the time-to-unsafe-sampling of a given prompt, leveraging recent advances in conformal prediction. Our key innovation is designing an adaptive, per-prompt sampling strategy, formulated as a convex optimization problem. The objective function guiding this optimized sampling allocation is designed to reduce the variance of the estimators used to construct the LPB, leading to improved statistical efficiency over naive methods that use a fixed sampling budget per prompt. Experiments on both synthetic and real data support our theoretical results and demonstrate the practical utility of our method for safety risk assessment in generative AI models.

Summary

  • The paper introduces a survival analysis framework to compute a calibrated lower predictive bound on time-to-unsafe-sampling in LLMs.
  • It employs a two-stage quantile regression approach with adaptive, per-prompt censoring to meet rigorous PAC coverage guarantees.
  • Experiments on synthetic and real data demonstrate that the optimized calibration method produces informative, low-variance safety estimates.

This paper introduces a framework to quantify the "time-to-unsafe-sampling" for LLMs, defined as the number of LLM generations needed to produce an unsafe response (e.g., toxic content or private-information leakage) for a given prompt. Estimating this is difficult because unsafe responses from well-aligned LLMs are rare, potentially requiring an impractically large number of samples per prompt to observe. Budget constraints on generation and auditing further mean that, for many prompts, an unsafe outcome is never observed, making direct estimation and evaluation challenging.

The core idea is to reframe this estimation problem as a survival analysis task. The "survival time" T is the time-to-unsafe-sampling, and the "censoring time" C is the maximum number of generation-and-audit cycles allocated for a specific prompt X. The goal is to produce a provably calibrated Lower Predictive Bound (LPB), denoted \hat{L}(X), on T. This LPB aims to satisfy P(T_{\text{test}} \geq \hat{L}(X_{\text{test}})) \geq 1-\alpha with high probability (a PAC guarantee), meaning one can expect at least \hat{L}(X) safe responses before an unsafe one. Importantly, \hat{L}(X) can be computed at inference time with a single call to a calibrated regression model, without further LLM sampling or auditing.

The method builds on conformalized survival analysis. It involves two stages:

  1. Training a regression model on a subset of prompts to predict the time-to-unsafe-sampling (specifically, its quantiles \hat{q}_\tau(X)).
  2. Using a holdout calibration set to calibrate the regression model's predictions to obtain the LPB.

A key contribution is the design of an adaptive, per-prompt sampling strategy for allocating the censoring times C_i during calibration, subject to a global sampling budget B. This is contrasted with naive approaches that might split the budget equally or assign C_i independently of the prompt.

Methods for Calibration and Censoring Time Allocation:

  1. Naive Baseline:
    • Defines each per-prompt censoring time C_i as a random variable independent of the prompt X_i, specifically C_i \sim \text{Geom}(|\mathcal{I}_2|/B), where |\mathcal{I}_2| is the size of the calibration set.
    • Uses true inverse-censoring weights w_\tau(X_i) = 1 / P(\hat{q}_\tau(X_i) \le C_i \mid X_i) to estimate the miscoverage \hat{\alpha}(\tau).
    • The LPB is \hat{L}(X) = \hat{q}_{\hat{\tau}}(X), where \hat{\tau} is the largest \tau such that \hat{\alpha}(\tau) \le \alpha.
    • This method can suffer from high variability and loose PAC coverage bounds if the maximum weight \gamma_\tau = \sup_x w_\tau(x) is large.
  2. Prompt-Adaptive Budget Calibration (the paper's main contribution): This method has three modes:
    • Basic Mode:
      • Sets censoring times adaptively: C_i = \text{Ber}(\pi_i) \cdot \hat{q}_{\tau_{\text{prior}}}(X_i), where \tau_{\text{prior}} is a chosen prior quantile level (e.g., 0.2 if targeting 90% coverage and the quantile estimator is decent) and \pi_i = \min(B/(|\mathcal{I}_2| \cdot \hat{q}_{\tau_{\text{prior}}}(X_i)), 1) is a per-prompt evaluation probability.
      • This aims to maximize P(\hat{q}_\tau(X_i) \le C_i) for \tau \le \tau_{\text{prior}}, improving budget utilization.
      • The search for \hat{\tau} is restricted to [0, \tau_{\text{prior}}].

    • Trimmed Mode:
      • Improves upon Basic mode by capping estimated quantiles: \hat{f}_\tau(X) = \min(\hat{q}_\tau(X), M), where M is a fixed threshold.
      • Censoring times become C_i = \text{Ber}(\pi_i) \cdot \hat{f}_{\tau_{\text{prior}}}(X_i), with \pi_i = \min(B/(|\mathcal{I}_2| \cdot \hat{f}_{\tau_{\text{prior}}}(X_i)), 1).
      • This bounds the maximum weight by \gamma = \max(|\mathcal{I}_2| \cdot M/B, 1), tightening the PAC coverage guarantee. The choice of M and B involves a trade-off: a larger M (more informative LPBs) requires a larger B to keep \gamma low.

    • Optimized Mode (Flagship):
      • Further refines Trimmed mode by optimizing the allocation of the \pi_i to minimize the average weight \bar{w} = \frac{1}{|\mathcal{I}_2|}\sum_i \frac{1}{\pi_i}, which reduces the variance of the miscoverage estimator \hat{\alpha}(\tau).
      • This is done by solving a convex optimization problem:

\pi^* = \operatorname{argmin}_{\pi \in [0,1]^{|\mathcal{I}_2|}} \frac{1}{|\mathcal{I}_2|}\sum_{i \in \mathcal{I}_2} \frac{1}{\pi_i} \quad \text{s.t.} \quad \sum_{i \in \mathcal{I}_2} \hat{f}_{\tau_{\text{prior}}}(X_i)\,\pi_i \le B.

      • The resulting weights w(\{X_j\}, i) = 1/\pi_i^* are also bounded by \gamma = \max(|\mathcal{I}_2| \cdot M/B, 1).
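This budget-constrained allocation has a simple closed-form structure: by the KKT conditions, the unconstrained optimum is \pi_i \propto 1/\sqrt{\hat{f}_{\tau_{\text{prior}}}(X_i)}, capped at 1, with the scale set so the budget constraint is tight. The following is a minimal NumPy sketch of that water-filling solution (function name and bisection tolerance are my own choices, not from the paper):

```python
import numpy as np

def optimize_sampling_probs(f, B, tol=1e-10):
    """Sketch of the Optimized-mode allocation:
    minimize (1/n) * sum_i 1/pi_i  subject to  sum_i f_i * pi_i <= B,  0 < pi_i <= 1.

    f : positive array of trimmed quantile estimates f_hat_{tau_prior}(X_i).
    KKT conditions give pi_i = min(c / sqrt(f_i), 1) for a scalar c chosen
    so the budget is exhausted (or pi_i = 1 everywhere if B >= sum(f)).
    """
    f = np.asarray(f, dtype=float)
    if f.sum() <= B:                     # budget covers full evaluation of every prompt
        return np.ones_like(f)
    lo, hi = 0.0, np.sqrt(f.max())       # at c = sqrt(max f), all pi_i = 1 (over budget)
    while hi - lo > tol:                 # bisect on c: spent budget is increasing in c
        c = 0.5 * (lo + hi)
        pi = np.minimum(c / np.sqrt(f), 1.0)
        if (f * pi).sum() > B:
            hi = c
        else:
            lo = c
    return np.minimum(lo / np.sqrt(f), 1.0)
```

For example, with f = [1, 4, 9, 16] and B = 10 the solution spends the budget unevenly (pi ≈ [1, 0.5, 0.33, 0.25]) and attains a lower average weight than the constant allocation pi_i = B / sum(f).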

Theoretical Guarantees:

The paper provides PAC-type coverage validity for both the Naive and Prompt-Adaptive methods. For instance, the main validity theorem states that for the prompt-adaptive methods, with probability at least 1-\delta:

P(T_{\text{test}} \geq \hat{L}(X_{\text{test}}) \mid \mathcal{D}) \geq 1 - \alpha - \sqrt{ \frac{ 2\gamma^2 + 5 }{|\mathcal{I}_2|} \cdot \log\left(\frac{1}{\delta}\right) }.
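To get a feel for the bound, the slack term is easy to evaluate numerically. With \gamma = 2 (the value used in the real-data experiments), \delta = 0.05, and an illustrative calibration set of 10,000 prompts (my own choice for the example), the coverage loss beyond \alpha is about 0.06:

```python
import math

def coverage_slack(gamma, n_cal, delta):
    """Finite-sample term of the PAC bound:
    sqrt(((2*gamma^2 + 5) / n_cal) * log(1/delta))."""
    return math.sqrt((2 * gamma**2 + 5) / n_cal * math.log(1 / delta))
```

The slack shrinks as 1/\sqrt{|\mathcal{I}_2|} and grows with \gamma, which is why the Trimmed and Optimized modes work to keep the maximum weight small.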

This guarantee holds for any LLM, prompt distribution, audit function, or budget, and its tightness depends on the maximum weight \gamma. The variance of the miscoverage estimator \hat{\alpha}(\tau) is shown to scale linearly with the mean calibration weight \bar{w}_\tau, motivating the optimization in the Optimized mode.
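The calibration step itself amounts to a weighted scan over candidate quantile levels: estimate the miscoverage at each \tau and keep the largest level that stays below \alpha. The sketch below is hypothetical (function names and the grid search are mine, and the paper's exact estimator includes finite-sample corrections omitted here); it only illustrates the inverse-weighted miscoverage logic:

```python
import numpy as np

def calibrate_tau(q_hat, T_obs, C, w, alpha, tau_grid):
    """Illustrative sketch of the calibration scan.

    q_hat    : callable tau -> array of per-prompt quantile estimates q_hat_tau(X_i)
    T_obs    : observed times-to-unsafe-sampling (np.inf if never observed unsafe)
    C        : per-prompt censoring times; w : inverse-censoring weights 1/pi_i
    Returns the largest tau on the grid whose weighted miscoverage estimate <= alpha.
    """
    n = len(C)
    best = 0.0
    for tau in sorted(tau_grid):
        q = q_hat(tau)
        # a prompt counts as miscovered if T < q_hat_tau(X) AND the event was
        # observable under the censoring budget (q_hat_tau(X) <= C)
        miscovered = (T_obs < q) & (q <= C)
        alpha_hat = np.sum(w * miscovered) / n
        if alpha_hat <= alpha:
            best = tau
    return best
```

At test time the LPB is then simply \hat{L}(X) = \hat{q}_{\hat{\tau}}(X), one regression call per prompt.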

Implementation Details for Quantile Regression:

Since T_i \mid X_i is geometrically distributed with some per-sample unsafe probability p(X_i), the conditional quantile function q_\tau(X_i) can be derived in closed form from p(X_i). The paper estimates p(X_i) using a model (e.g., a neural network) trained with a binary cross-entropy (BCE) loss on aggregated unsafe proportions from the training set. For each prompt X_i in the training set, N responses are generated and the empirical unsafe rate \bar{Y}_i = \frac{1}{N} \sum_{j=1}^N Y_i^j is computed, where Y_i^j = 1 if the j-th response is unsafe. The loss is \text{BCE}(\bar{Y}_i, \hat{p}(X_i)).
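Under this geometric model the quantile estimator needs only \hat{p}: the \tau-quantile of \text{Geom}(p) is the smallest integer t with 1-(1-p)^t \geq \tau, i.e. \lceil \log(1-\tau)/\log(1-p) \rceil. A small sketch (function name is mine):

```python
import math

def geometric_quantile(p, tau):
    """tau-quantile of T ~ Geom(p), the number of generations until the first
    unsafe one: smallest integer t with P(T <= t) = 1 - (1-p)^t >= tau."""
    if p <= 0.0:
        return float("inf")   # unsafe response never occurs
    if p >= 1.0:
        return 1              # first generation is always unsafe
    # log1p keeps precision when p or tau is tiny
    return math.ceil(math.log1p(-tau) / math.log1p(-p))
```

For instance, a prompt with estimated unsafe probability 0.01 has 0.2-quantile 23: one can expect at least 23 safe generations with probability 0.8.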

Experimental Validation:

  • Synthetic Data:
    • A dataset was generated where X_i are covariates and p_i is the true unsafe probability, so that T_i \sim \text{Geom}(p_i).
    • The Optimized method consistently achieved the target coverage (e.g., 90%) with the lowest variance compared to Uncalibrated, Naive, Basic, and Trimmed methods across various budget constraints (B/|\mathcal{I}_2|).
    • The Naive method showed poor coverage at low budgets and overcoverage at high budgets. The Basic method had high variance. The Trimmed method was more stable than Basic.
    • The Optimized and Trimmed methods produced more informative (higher) LPBs as the budget increased.
  • Real Data:
    • Dataset: RealToxicityPrompts, with Llama 3.2 1B as the LLM.
    • Audit Function: Detoxify-original model with a 0.5 toxicity threshold.
    • Comparison: Uncalibrated, Naive, and Optimized methods. For the Optimized method, M was set such that \gamma = 2.
    • Results: The Uncalibrated baseline overcovered (too conservative). The Naive method yielded invalid LPBs with high variance at low budgets. The Optimized method achieved near-nominal coverage with small variance across all tested budgets. LPBs from the Optimized method became more informative (higher) with increased budget, eventually surpassing the Uncalibrated baseline.
    • Empirical miscoverage was estimated by drawing \min(\hat{L}(X_i), 2400) samples for each test prompt.

Practical Implications:

This research offers a proactive way to assess the risk level of a prompt before extensive generation.

  • If \hat{L}(X) is low, it signals a higher risk, suggesting the need for more resource-intensive safety checks for that prompt.
  • It allows for comparing the reliability of different LLMs on a set of prompts.
  • The calibration procedures are computationally feasible. For instance, efficient prompt-parallel sampling was implemented using vLLM for real-data experiments.

Limitations and Future Work:

  • The coverage guarantee is marginal (average over test prompts), not conditional on specific prompt subgroups. Future work aims for selection-conditional coverage.
  • The i.i.d. assumption for samples might not hold in adaptive or adversarial settings (e.g., jailbreak attacks) or under prompt distribution shifts. Future work includes continual or adaptive recalibration mechanisms.

The code is available at https://github.com/giladfrid009/LLM-survival/.
