Decision-theoretic integration of clarification value and cost for long-horizon agents

Develop a decision-theoretic framework that weighs the value of information gained from clarifying questions against the costs of user interaction (interruption, latency, and erosion of trust), so that a long-horizon workflow agent can determine when to ask for clarification and when to proceed autonomously.

Background

The LHAW paper studies agents executing long-horizon workflows where human clarification carries nontrivial costs. It analyzes separately the value of information gained through clarification and the costs of asking questions, which include interruption, latency, and trust impacts.

While LHAW provides a framework to generate and evaluate underspecified tasks and measures performance gains from clarification, it does not yet unify these gains with explicit costs in a formal decision policy. The authors explicitly defer creating such a unified decision-theoretic framework.
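One natural instantiation of such a policy is the classic value-of-information test: ask a clarifying question exactly when the expected gain from resolving uncertainty about the user's intent exceeds the summed interaction costs. The sketch below is illustrative only; the posterior, utility table, and cost values are assumptions, not from the paper.

```python
def expected_value_of_information(posterior, utilities):
    """VoI = E_intent[max_action U] - max_action E_intent[U].

    posterior: dict mapping intent -> probability of that being
               the user's true intent.
    utilities: dict mapping (intent, action) -> utility of taking
               `action` when the true intent is `intent`.
    """
    actions = {a for (_, a) in utilities}
    # Best the agent can do without asking: commit to one action
    # under uncertainty over intents.
    best_without = max(
        sum(p * utilities[(i, a)] for i, p in posterior.items())
        for a in actions
    )
    # If a (perfect) clarifying question resolves the intent, the
    # agent picks the best action per intent; average over posterior.
    best_with = sum(
        p * max(utilities[(i, a)] for a in actions)
        for i, p in posterior.items()
    )
    return best_with - best_without


def should_ask(posterior, utilities,
               interruption=0.05, latency=0.02, trust=0.03):
    """Ask iff VoI exceeds the interaction cost (assumed additive).

    The cost components and their additivity are modeling
    assumptions for this sketch.
    """
    cost = interruption + latency + trust
    return expected_value_of_information(posterior, utilities) > cost


# Hypothetical example: two plausible intents, two candidate actions,
# and a wrong guess yields zero utility.
posterior = {"intent_A": 0.6, "intent_B": 0.4}
utilities = {
    ("intent_A", "act_A"): 1.0, ("intent_A", "act_B"): 0.0,
    ("intent_B", "act_A"): 0.0, ("intent_B", "act_B"): 1.0,
}
print(should_ask(posterior, utilities))  # VoI = 1.0 - 0.6 = 0.4 > 0.10
```

This toy rule assumes the question fully resolves the intent; a fuller treatment would model partial information, noisy answers, and sequential asking over the workflow horizon.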

References

"Toward this goal, we study the value of information and the cost of corresponding questions separately, with their integration into a decision-theoretic framework left for future work."

LHAW: Controllable Underspecification for Long-Horizon Tasks (2602.10525 - Pu et al., 11 Feb 2026), Section 2.1 (Long-Horizon Workflows)