
Selective Querying & Uncertainty Aggregation

Updated 18 February 2026
  • Selective Querying and Uncertainty-Guided Aggregation is a family of methods that use uncertainty signals to trigger targeted queries and aggregate data in complex information environments.
  • They employ signals such as entropy, predictive variance (e.g., estimated via MC-dropout), and interval bounds to assess model confidence and guide expert interventions across various domains.
  • These techniques improve system calibration, reduce decision regret, and offer scalable solutions for data integration, reinforcement learning, and probabilistic databases.

Selective querying and uncertainty-guided aggregation comprise a family of techniques for robust information extraction, data integration, and decision-making in the presence of incomplete data, ambiguous supervision, complex relational structures, or unreliable model outputs. These methods provide a principled mechanism to signal and manage epistemic uncertainty by leveraging uncertainty estimates—derived from stochastic modeling, entropy calculations, or optimization intervals—as control or filtering signals. They are realized across domains, notably in reinforcement-learning-based agents, integration of heterogeneous datasets, robust combinatorial optimization, imitation learning, and both probabilistic and attribute-annotated databases (Stoisser et al., 2 Sep 2025, Turkcapar et al., 2023, Chen et al., 5 Jan 2025, Cui et al., 2019, Bergami, 2019, Feng et al., 2021).

1. Key Principles and Definitions

Selective querying refers to procedures that interleave uncertainty estimation with agent or system actions, so that new data, simulation rollouts, or expert interventions are triggered only under high model uncertainty (or, conversely, abstention occurs under uncertainty). Uncertainty-guided aggregation encompasses the computation of aggregates—SUM, COUNT, AVG, and more generally statistical or semantic summaries—accompanied by measures (explicit intervals, lower/upper bounds, posterior variances, or confidence scores) that qualify the reliability of the resulting output. Such mechanisms can employ entropy over retrieval actions (Stoisser et al., 2 Sep 2025), maximal and minimal aggregation intervals (Turkcapar et al., 2023), variance estimates (Cui et al., 2019), probabilistic confidence scores (Bergami, 2019), or explicit attribute-level bounds (Feng et al., 2021).

2. Retrieval and Summary Uncertainty in Agent Systems

Structured LLM agents operating over multi-table data commonly exhibit overconfident, but potentially spurious, reasoning chains. Uncertainty-aware agent frameworks define two principal uncertainty signals (Stoisser et al., 2 Sep 2025):

  • Retrieval Uncertainty: considers entropy over table-selection rollouts. Given a query $q$, the agent runs $K$ independent retrieval episodes and measures

$$\hat{p}_t = \frac{1}{K}\sum_{k=1}^K \mathbf{1}[t \in R^{(k)}],$$

where $R^{(k)}$ is the set of tables touched in episode $k$. The per-table (normalized binary) entropy

$$H(t) = -\frac{\hat{p}_t \log \hat{p}_t + (1-\hat{p}_t)\log(1-\hat{p}_t)}{\log 2}$$

is aggregated as $u_\mathrm{ret}(q) = \frac{1}{|C|} \sum_{t\in C} H(t)$ over all tables $C$ encountered. High $u_\mathrm{ret}$ signals inconsistent retrieval, guiding the agent to query further or abstain.

  • Summary Uncertainty: combines token-level perplexity (a confidence proxy) with self-consistency (semantic alignment across samples). For a generated summary $s_{1:T}$:
    • Perplexity: $u_\mathrm{Perp}(s) = \exp\left(-\frac{1}{T}\sum_{t=1}^T \log p_\theta(s_t \mid s_{<t}, \text{context})\right)$
    • Self-consistency: $u_\mathrm{cons}(s^*, \{s^{(k)}\}) = 1 - \frac{1}{K-1}\sum_{k \neq *} \mathrm{sim}(s^{(k)}, s^*)$
    • Combined "CoCoA" score: $u_\mathrm{CoCoA} = u_\mathrm{Perp}(s^*) \times u_\mathrm{cons}(s^*, \{s^{(k)}\})$
    • Aggregation and filtering select summaries with low $u_\mathrm{CoCoA}$, abstaining otherwise.
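As a concrete sketch, the two uncertainty signals above can be computed as follows (table names, similarity scores, and the perplexity value are illustrative, not from the paper):

```python
import math

def retrieval_uncertainty(rollouts, all_tables=None):
    """Mean per-table binary entropy over K independent retrieval rollouts.

    rollouts: list of sets, each the tables touched in one episode.
    Returns u_ret in [0, 1]; 0 means every rollout agrees on every table.
    """
    K = len(rollouts)
    tables = all_tables or set().union(*rollouts)

    def h(p):  # binary entropy, normalized by log 2
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log(p) + (1 - p) * math.log(1 - p)) / math.log(2)

    return sum(h(sum(t in r for r in rollouts) / K) for t in tables) / len(tables)

def cocoa_score(perplexity, similarities):
    """CoCoA = perplexity x (1 - mean similarity of s* to the other samples)."""
    u_cons = 1 - sum(similarities) / len(similarities)
    return perplexity * u_cons

# A table seen in every rollout contributes zero entropy; one seen in
# three of four rollouts contributes H(0.75) ~ 0.81 bits.
rollouts = [{"patients", "labs"}, {"patients", "labs"}, {"patients"}, {"patients", "labs"}]
u_ret = retrieval_uncertainty(rollouts)
```

Low `u_ret` and low CoCoA together indicate a retrieval-and-summary pair safe to keep; otherwise the agent queries further or abstains.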

This dual-signal strategy not only improves factuality and calibration (e.g., correct claims per cancer summary rising from 3.6 to 9.9), but also facilitates synthetic corpus construction for downstream training via uncertainty filtering (Stoisser et al., 2 Sep 2025).

3. Uncertainty-Guided Aggregation in Data Integration

In entity resolution and data integration, aggregate queries (e.g., SUM, COUNT) become uncertain due to ambiguous matches between records. Uncertainty is characterized by explicitly calculating the minimal and maximal possible values of an aggregate across all valid matching assignments (Turkcapar et al., 2023). The central technique is:

  • Min-Max Aggregation via Graph Matching: For base relations $R$ and $S$, and a candidate match set $\Psi$, aggregates are extremized:

$$l = \min_{M \in \mathcal{M}} q(M), \qquad u = \max_{M \in \mathcal{M}} q(M)$$

where $q(M)$ is the aggregate computed on the integrated matching $M$. This is realized by a reduction to weighted bipartite graph matching (e.g., using maximum-weight matchings), exploiting polynomial-time algorithms of complexity $O((|R|\cdot N + |S|)^3)$.

  • Relative Uncertainty: For an analyst-selected matching $\tilde{M}$, the relative interval width

$$\mathrm{rel}(q) = \frac{u - l}{q(\tilde{M})}$$

assesses sensitivity and triggers human review, parameter refinement, or further selective matching when $\mathrm{rel}(q)$ is high.

Empirical results show that this approach yields intervals 3–5× tighter than naive bounds and scales efficiently to millions of matching candidates (Turkcapar et al., 2023).
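A minimal brute-force sketch of the min-max aggregation idea for a SUM query (toy data; production systems use polynomial-time maximum-weight bipartite matching rather than enumeration):

```python
from itertools import permutations

def aggregate_bounds(values, candidates):
    """Min and max of a SUM aggregate over all valid one-to-one matchings.

    values[j]     : value contributed if S-record j is matched.
    candidates[i] : set of S-record indices that R-record i may match.
    Enumerates assignments (fine for toy sizes only); returns (l, u).
    """
    n = len(candidates)
    best_lo, best_hi = float("inf"), float("-inf")
    for perm in permutations(range(len(values)), n):
        if all(perm[i] in candidates[i] for i in range(n)):
            s = sum(values[j] for j in perm)
            best_lo, best_hi = min(best_lo, s), max(best_hi, s)
    return best_lo, best_hi

# Two R-records with ambiguous matches into three S-records valued 10, 40, 25.
lo, hi = aggregate_bounds([10, 40, 25], [{0, 1}, {1, 2}])
# Relative uncertainty w.r.t. an analyst-chosen matching with q(M~) = 50
# (here the matching R0->S0, R1->S1, which sums to 10 + 40 = 50).
rel = (hi - lo) / 50
```

A high `rel` would flag this aggregate for human review of the candidate matches before the result is trusted.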

4. Decision-Dependent Robust Optimization and Information Discovery

The decision-dependent information discovery (DDID) framework operationalizes selective querying in combinatorial optimization under budgeted uncertainty (Chen et al., 5 Jan 2025). The composite min-max-min-max problem models an initial query stage (on a subset $I$ of items), adversarial revelation of the queried uncertainties, downstream selection decisions, and adversarial assignment to unqueried elements:

$$\min_{I \in \mathcal{I}}\ \max_{\mathbf{y} \in \{0,1\}^I}\ \min_{x \in \mathcal{X}}\ \max_{\substack{\boldsymbol{\delta} \in \{0,1\}^n \\ \delta_j = y_j\ \forall j \in I,\ \sum_j \delta_j \le \Gamma}} F(x, \boldsymbol{\delta})$$

  • Objective-uncertainty and Constraint-uncertainty Variants: Specializations target single-item selection or combinatorial feasibility under item-failure constraints.
  • Algorithmic Results: General instances are NP-hard, but explicit closed forms and $O(n)$–$O(n \log n)$ algorithms arise for constrained query-family scenarios.
  • MILP Reformulation: General robust selection with information discovery reduces to tractable mixed-integer linear programs for moderate $n$.

This framework demonstrates theoretically and empirically that selective querying (the initial choice of $I$) can reduce the worst-case decision regret, and that aggregated objective bounds align with uncertainty management (Chen et al., 5 Jan 2025).
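The nested min-max-min-max structure can be illustrated by brute force on a toy single-item selection instance with objective uncertainty (costs, deviations, and the budget below are invented for illustration, not taken from the paper):

```python
from itertools import combinations, product

def ddid_value(c, d, q, Gamma):
    """Brute-force min-max-min-max for single-item selection (tiny n only).

    c[j], d[j] : nominal cost and deviation of item j; delta_j = 1 adds d[j].
    q          : number of items we may query; Gamma: adversary's budget.
    Returns the optimal worst-case cost over all query sets I of size q.
    """
    n = len(c)
    best = float("inf")
    for I in combinations(range(n), q):
        worst_reveal = float("-inf")
        for y in product([0, 1], repeat=q):        # adversarial revelation on I
            if sum(y) > Gamma:
                continue                           # infeasible partial assignment
            inner = float("inf")
            for x in range(n):                     # our selection after seeing y
                if x in I:                         # revealed: cost is fixed
                    cost = c[x] + d[x] * y[I.index(x)]
                else:                              # adversary spends leftover budget
                    cost = c[x] + (d[x] if Gamma - sum(y) >= 1 else 0)
                inner = min(inner, cost)
            worst_reveal = max(worst_reveal, inner)
        best = min(best, worst_reveal)
    return best
```

On `c = [1, 2]`, `d = [5, 1]`, `Gamma = 1`, querying no items gives worst-case cost 3, while querying one item lowers it to 2, showing how the initial choice of $I$ reduces worst-case regret.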

5. Uncertainty-Aware Imitation Learning and Anticipatory Data Aggregation

In deep imitation learning, uncertainty-aware data aggregation leverages predictive epistemic variance to selectively query an expert during training. The UAIL algorithm (Cui et al., 2019) employs Monte Carlo dropout to estimate the variance $\sigma(x)$ of control outputs at each environmental state $x$, switching to the expert action when $\sigma(x) > \tau$. The aggregation protocol:

  1. Accumulates new supervision only at points of high model uncertainty (early warning).
  2. Prevents cascading errors (unlike DAgger's uncertainty-agnostic expert querying).
  3. Leads to more robust, sample-efficient learning as demonstrated in driving benchmarks (e.g., infraction rates reduced by ≈30% versus random querying).
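A minimal sketch of the uncertainty-gated querying loop, using a toy stochastic policy in place of a real MC-dropout network (all names, noise levels, and the threshold are illustrative):

```python
import random
import statistics

def mc_dropout_control(forward, state, T=20, tau=0.05, expert=None):
    """Uncertainty-gated control in the spirit of UAIL (a sketch).

    forward(state) -> action, evaluated with dropout *active* so repeated
    calls are stochastic. If the sample variance of T passes exceeds tau,
    defer to the expert; the flagged (state, expert action) pair would be
    added to the aggregated training set.
    """
    samples = [forward(state) for _ in range(T)]
    sigma2 = statistics.variance(samples)
    if sigma2 > tau and expert is not None:
        return expert(state), True      # expert takes over; record for dataset
    return statistics.mean(samples), False

# Toy stochastic policy: confident near state 0, uncertain far away.
random.seed(0)
policy = lambda s: s + random.gauss(0, 0.01 + abs(s))
action, queried = mc_dropout_control(policy, 2.0, expert=lambda s: -s)
```

Here the far-from-data state triggers an expert query, while states near 0 fall below the variance threshold and use the model's mean action.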

6. Probabilistic and Attribute-Annotated Databases: Querying and Aggregation

Probabilistic data management and logical inference frameworks address selective querying and uncertainty-aware aggregation from a different formalism.

  • Probabilistic Databases: Confidence-annotated tuples and MLN-style factor graphs underpin selection by posterior-score ($\mathbb{P}(\gamma)$) thresholds. Selective queries (e.g., SELECT ... WHERE conf ≥ τ) combine consistency filtering, probabilistic inference, and aggregation as expectations (SUM, COUNT, AVG) over possible worlds, or as tight intervals $[L, U]$ (Bergami, 2019).
  • Attribute-Annotated Uncertainty Databases (AU-DBs): Each tuple is enriched with $(\kappa_\ell, \kappa_{\mathrm{sg}}, \kappa_u)$: an under-approximation (certain), a selected-guess, and an over-approximation (possible) multiplicity, further refined at the attribute level (Feng et al., 2021). Query processing propagates bounds through all relational operators, supporting efficient uncertainty-guided aggregation. Early aggregation over the selected-guess reduces cost; remaining uncertainty is compressed via bucketing, creating a trade-off between bound tightness and runtime.
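A simplified sketch of bound propagation for a SUM aggregate under AU-DB-style lower/selected-guess/upper annotations (assuming non-negative attribute values; the actual AU-DB semantics covers the full relational algebra and is considerably richer):

```python
def bounded_sum(tuples):
    """SUM over tuples annotated with (lb, sg, ub) multiplicities and
    (lb, sg, ub) attribute values. Returns (lower bound, selected-guess
    value, upper bound), assuming non-negative attribute values.
    """
    lo = sg = hi = 0
    for (m_lo, m_sg, m_hi), (v_lo, v_sg, v_hi) in tuples:
        lo += m_lo * v_lo      # certain multiplicity x smallest possible value
        sg += m_sg * v_sg      # best single-world guess
        hi += m_hi * v_hi      # possible multiplicity x largest possible value
    return lo, sg, hi

# One certain tuple and one merely possible one (certain multiplicity 0).
rows = [((1, 1, 1), (10, 10, 10)),   # definitely present, value known exactly
        ((0, 1, 1), (4, 5, 6))]      # maybe present, value somewhere in [4, 6]
```

The certain tuple contributes to all three components, while the possibly-absent tuple widens only the guess and the upper bound, which is exactly how under- and over-approximations diverge during aggregation.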

Table: Aggregation Models and Key Features

| Model/Paper | Uncertainty Signal | Aggregation Type |
| --- | --- | --- |
| Uncertainty-Aware Agent (Stoisser et al., 2 Sep 2025) | Retrieval entropy, summary CoCoA | Selection/filtering, abstention, confidence output |
| Data Integration (Turkcapar et al., 2023) | Min/max intervals over matchings | Extremal aggregate intervals |
| Robust Selection (Chen et al., 5 Jan 2025) | Min-max objective/constraint costs | Worst-case over uncertainty set |
| UAIL (Cui et al., 2019) | MC-dropout variance | Selective data aggregation |
| Probabilistic DB (Bergami, 2019) | Confidence, factor-graph posterior | Expectation, interval aggregation |
| AU-DB (Feng et al., 2021) | Attribute-level $(\ell, \mathrm{sg}, u)$ bounds | PTIME aggregation under bounds |

7. Empirical Impact and Limitations

Across application domains, uncertainty-guided techniques consistently improve calibration, factual correctness of summaries, robustness under adversarial uncertainty, label efficiency in learning, and precision/recall trade-offs of query answers. For instance, uncertainty-aware reinforcement learning with retrieval/summary abstention nearly triples correct and useful claims per summary while raising the survival-prediction C-index from 0.32 to 0.63 (Stoisser et al., 2 Sep 2025); uncertainty-based selective aggregation in data integration yields intervals capturing the true aggregate 95–100% of the time, while remaining 3–5× tighter than naive baselines (Turkcapar et al., 2023).

Practical challenges include computational cost for exact inference (especially in probabilistic models), the tuning of uncertainty thresholds, and robustness to high-dimensional or implicit uncertainty sources. Efficient compaction (such as bucket compression in AU-DBs) and principled abstention strategies remain active areas for scalable deployment (Feng et al., 2021).

References:

(Stoisser et al., 2 Sep 2025, Turkcapar et al., 2023, Chen et al., 5 Jan 2025, Cui et al., 2019, Bergami, 2019, Feng et al., 2021)
