
Disagreement-Based Sampling in ML

Updated 17 December 2025
  • Disagreement-based sampling is a technique that exploits predictive variability among models to select data points that maximize information gain.
  • It underpins active learning, semi-supervised learning, and domain adaptation by focusing on disagreement regions to reduce labeling complexity.
  • Practical implementations like CAL, DPR, and D-BAT demonstrate improved robustness and debiasing by targeting points where model predictions diverge.

Disagreement-based sampling is a family of methodologies that exploit the predictive variability—i.e., disagreement—among a collection of models or hypotheses to drive data selection, uncertainty estimation, debiasing, and exploration in diverse areas of machine learning. This technique is foundational in active learning, semi-supervised learning (SSL), domain adaptation, debiased training, robust optimization, and intrinsic-motivation reinforcement learning. Central to these methods is the hypothesis that points on which multiple plausible models disagree carry more information about the target function or the true underlying distribution. Disagreement-based strategies thus selectively prioritize these points for labeling, upweighting, further training, or exploration, resulting in improved sample or computational efficiency and robustness to domain shift, noise, or spurious correlations.

1. Theoretical Foundations: Disagreement Regions and Coefficient

The canonical paradigm for disagreement-based active learning is built around the concept of the version space and the induced disagreement region. For a hypothesis class H and a labeled sample set S, the version space V_{H,S} is the set of hypotheses in H consistent with S. The disagreement region DIS(V) is the set of inputs x for which there is at least one pair h, g ∈ V such that h(x) ≠ g(x). Disagreement-based algorithms such as CAL (Cohn–Atlas–Ladner) sample unlabeled points uniformly from DIS(V) to query, exploiting the fact that such points reduce the volume of the version space most efficiently (Wiener et al., 2014, Katz-Samuels et al., 2021).

A key complexity measure is the disagreement coefficient θ(ε), defined as

\theta(\epsilon) := \sup_{r \geq \epsilon} \frac{P_X(\mathrm{DIS}(B(h^*, r)))}{r}

where B(h^*, r) is the Hamming ball of radius r around a reference or optimal hypothesis h^*. The label complexity of disagreement-based active learning (the number of queried labels required to attain error ε) is controlled by θ(ε), with upper bounds of the form

O\left(\theta(\epsilon) \cdot \left(\frac{\nu^2}{\epsilon^2} + \ln(1/\epsilon)\right) \cdot d \cdot \ln(1/\delta)\right)

where ν is the optimal error, d = VCdim(H), and δ is the desired confidence (Katz-Samuels et al., 2021). Improvements exploiting the version space compression set size \hat{n}_m further sharpen these rates (Wiener et al., 2014).
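As a sanity check on the definition, the coefficient admits a closed form for 1D threshold classifiers h_t(x) = 1[x ≥ t] under X ~ Uniform[0, 1]: the ball B(h*, r) contains thresholds within distance r of h*, its disagreement region is an interval of mass at most 2r, and so θ(ε) = 2. A minimal Python sketch (names and setup are illustrative, not from the cited papers):

```python
import numpy as np

def dis_mass_thresholds(h_star, r):
    """P_X(DIS(B(h*, r))) for threshold classifiers h_t(x) = 1[x >= t]
    under X ~ Uniform[0, 1]. The ball B(h*, r) contains thresholds t with
    P_X(h_t != h*) = |t - h_star| <= r; its disagreement region is the
    interval (h_star - r, h_star + r) clipped to [0, 1]."""
    lo = max(h_star - r, 0.0)
    hi = min(h_star + r, 1.0)
    return hi - lo

def disagreement_coefficient(h_star, eps):
    """theta(eps) = sup_{r >= eps} P_X(DIS(B(h*, r))) / r, over a grid."""
    radii = np.linspace(eps, 1.0, 1000)
    return max(dis_mass_thresholds(h_star, r) / r for r in radii)

theta = disagreement_coefficient(h_star=0.5, eps=0.01)
print(theta)  # ~2: the DIS mass is 2r, so the ratio is 2 for small r
```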

2. Disagreement-Based Active Learning: Algorithms and Label Complexity

Disagreement-based active learning proceeds in rounds: given a current version space V, the algorithm identifies DIS(V) and queries labels for points sampled (often uniformly) from the disagreement region. After each query, V is updated to the subset of hypotheses consistent with the expanded labeled set. CAL and its agnostic extensions, such as A², operate under this paradigm. The method provably achieves exponential speedup (in ε) for many hypothesis classes and data distributions compared to passive learning.
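The round structure above can be sketched for the 1D threshold class, where the version space is an interval and membership in DIS(V) is a simple range check. This is an illustrative toy in the realizable setting, not the agnostic A² algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pool of unlabeled points; true labels come from a hidden threshold t* = 0.6.
pool = rng.uniform(0, 1, 500)
t_star = 0.6
oracle = lambda x: int(x >= t_star)          # label oracle, queried sparingly

# The version space for 1D thresholds is an interval (lo, hi]: every
# consistent threshold lies above the largest 0-labeled point and at or
# below the smallest 1-labeled point.
lo, hi = 0.0, 1.0
queries = 0
for x in rng.permutation(pool):
    if lo < x <= hi:                         # x in DIS(V): hypotheses disagree
        queries += 1
        if oracle(x):
            hi = x                           # threshold must be <= x
        else:
            lo = x                           # threshold must be > x
    # points outside (lo, hi] are labeled unanimously by V -> no query needed

print(queries, (lo, hi))  # far fewer queries than 500; interval brackets t*
```

Each query that lands in DIS(V) shrinks the version-space interval, so the number of label queries grows roughly logarithmically in the pool size rather than linearly.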

Advancements address sampling distributions beyond uniform over DIS(V). Experimental-design-based samplers select a distribution λ over the unlabeled pool to minimize the worst-case variance of risk estimates. Algorithms such as ACED replace uniform selection with distributions that solve convex programs parameterized by higher-order experimental design objectives, providing instance-specific improvements over θ-based methods (Katz-Samuels et al., 2021).

3. Disagreement in Co-Training, Semi-Supervised, and Domain Adaptation Algorithms

Disagreement-based sampling extends beyond active learning to scenarios with multiple learners. Co-training initializes two classifiers (potentially on different views or features) and iteratively exploits the inputs on which their predictions disagree. Inputs with high disagreement are candidates for pseudo-labeling or prioritization, driving the refinement of both classifiers and harvesting unlabeled data more effectively. Theoretical analysis reveals upper bounds on error reductions, convergence properties, and graph-connectivity criteria governing the process (Wang et al., 2017).
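A minimal sketch of the disagreement-selection step, using nearest-centroid learners on two synthetic views as stand-ins for the co-trained classifiers (all names and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def nearest_centroid(X_lab, y_lab):
    """Fit per-class centroids; return a predict function (a stand-in
    for the per-view learners in co-training)."""
    c0 = X_lab[y_lab == 0].mean(axis=0)
    c1 = X_lab[y_lab == 1].mean(axis=0)
    return lambda X: (np.linalg.norm(X - c1, axis=1)
                      < np.linalg.norm(X - c0, axis=1)).astype(int)

# Two noisy "views" of the same 200 inputs, plus a small labeled seed set.
n = 200
y_true = np.tile([0, 1], n // 2)
view1 = y_true[:, None] + 0.8 * rng.normal(size=(n, 2))
view2 = y_true[:, None] + 0.8 * rng.normal(size=(n, 2))

seed = np.arange(20)                          # indices of labeled seeds
h1 = nearest_centroid(view1[seed], y_true[seed])
h2 = nearest_centroid(view2[seed], y_true[seed])

# Inputs where the two views' classifiers disagree are the candidates
# for pseudo-labeling or label queries in the next co-training round.
p1, p2 = h1(view1), h2(view2)
disagree = np.flatnonzero(p1 != p2)
print(len(disagree), "of", n, "points fall in the disagreement set")
```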

In semi-supervised and domain adaptation regimes, disagreement between independently trained "teacher" and "student" models is used to select target-domain pseudo-labels. Self-training with classifier disagreement (SCD) identifies inputs for which the teacher and student predictions diverge; retraining the student with high-disagreement pseudo-labels robustly aligns source and target class-conditional distributions, yielding performance gains and sharper feature representations (Sun et al., 2023). This exploits the domain adaptation bound of Ben-David et al., leveraging empirical inter-classifier divergence.
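One way the selection step might look, using total variation between teacher and student predictive distributions as the disagreement score (an illustrative choice; the exact SCD criterion may differ):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical soft predictions of a teacher and a student on 1000
# unlabeled target-domain inputs (3 classes) -- stand-ins for real models.
n, k = 1000, 3
teacher = rng.dirichlet(np.ones(k), size=n)
student = rng.dirichlet(np.ones(k), size=n)

# Disagreement score: total variation distance between the two
# predictive distributions for each input.
tv = 0.5 * np.abs(teacher - student).sum(axis=1)

# Keep the highest-disagreement inputs and pseudo-label them with the
# teacher; the student is then retrained on these pseudo-labels.
top = np.argsort(tv)[-100:]
pseudo_labels = teacher[top].argmax(axis=1)
print(top.shape, pseudo_labels[:5])
```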

4. Disagreement-Based Debiasing and Distributional Robustness

Recent work leverages disagreement probabilities as proxies for group membership in robust training under spurious correlations. In the absence of explicit bias labels, a small biased model is trained to overfit spurious associations; the per-input probability that the biased model disagrees with the true label, p_dis(x), is used to upweight samples likely to be bias-conflicting. This mechanism, formalized in Disagreement Probability based Resampling (DPR), operationalizes a min-max robust objective by deriving sampling weights from p_dis(x):

\hat{r}(x) = \frac{p_\mathrm{dis}(x)}{\sum_{i=1}^n p_\mathrm{dis}(x_i)}

The debiased model is trained on batches sampled according to \hat{r}(x). Theoretical results bound both group-loss gaps and average loss, demonstrating that DPR shrinks worst-group loss disparities without requiring bias labels, and yields state-of-the-art performance among label-free debiasing strategies (Han et al., 2024).
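The resampling weights are straightforward to compute; the sketch below uses made-up p_dis values to show how bias-conflicting samples dominate the resulting sampling distribution:

```python
import numpy as np

def dpr_weights(p_dis):
    """Sampling weights r_hat(x) = p_dis(x) / sum_i p_dis(x_i)."""
    p_dis = np.asarray(p_dis, dtype=float)
    return p_dis / p_dis.sum()

# Toy values (assumed): the biased model rarely disagrees with the true
# label on bias-aligned samples, often on bias-conflicting ones.
p_dis = np.array([0.05, 0.05, 0.05, 0.9, 0.8])   # last two: bias-conflicting
w = dpr_weights(p_dis)
print(w)

# Batches for the debiased model are drawn according to w, so the
# bias-conflicting samples are heavily upweighted.
rng = np.random.default_rng(0)
batch = rng.choice(len(p_dis), size=32, p=w)
```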

5. Diversity and Transfer via Disagreement: Ensemble Methods and OOD Robustness

Disagreement is central to ensemble-based diversity and out-of-distribution (OOD) detection. D-BAT (Diversity-By-disAgreement Training) enforces agreement between ensemble members on source data and forced disagreement on OOD (unlabeled) data. By minimizing the average loss on source and maximizing the expected disagreement loss (e.g., via negative log-agreement) on OOD data, D-BAT induces the ensemble to span alternative (potentially orthogonal) predictive features, countering simplicity bias and improving generalization under domain shift (Pagliardini et al., 2022). The formalization as a generalized discrepancy measure connects this procedure to domain-adaptation bounds.
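For binary classifiers, the disagreement term can be written as the negative log-probability that two ensemble members predict different labels. A numpy sketch of just that term (the full D-BAT objective also includes the standard task loss on source data, and the paper's exact parameterization may differ):

```python
import numpy as np

def dbat_disagreement_loss(p1, p2, eps=1e-12):
    """Negative log-probability that two binary members disagree on a
    batch of OOD inputs; p1, p2 are their P(y=1|x) outputs. Minimizing
    this pushes the second member to disagree with the first on OOD data."""
    prob_disagree = p1 * (1 - p2) + (1 - p1) * p2
    return -np.log(prob_disagree + eps).mean()

# On OOD points the members should disagree ...
p1 = np.array([0.9, 0.8, 0.95])   # member 1 leans toward class 1
p2 = np.array([0.1, 0.2, 0.05])   # member 2 leans toward class 0 -> low loss
low = dbat_disagreement_loss(p1, p2)

# ... whereas agreement on the same points yields a high loss.
high = dbat_disagreement_loss(p1, p1)
print(low, high)
```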

Table: Summary of Select Disagreement-Based Sampling Methods

| Algorithm/Framework | Disagreement Quantification | Core Application |
|---|---|---|
| CAL, A² (Wiener et al., 2014; Katz-Samuels et al., 2021) | Disagreement region DIS(V) | Active learning |
| SCD (Sun et al., 2023) | Inter-expert disagreement | Cross-domain SSL, OTE |
| DPR (Han et al., 2024) | p_dis(x) = 1 - p_φ(y\|x) | Debiasing, robust optimization |
| D-BAT (Pagliardini et al., 2022) | Expected loss/negative agreement | OOD generalization, ensembles |
| Self-supervised exploration (Pathak et al., 2019) | Ensemble variance | RL exploration, curiosity |

6. Disagreement in Exploration, Correlated Sampling, and Uncertainty Quantification

In reinforcement learning, ensemble disagreement is used to define an intrinsic reward for exploration. The variance of next-state predictions across an ensemble of forward dynamics models quantifies epistemic uncertainty; policies are incentivized to seek trajectories where this variance is highest. Differentiable formulation allows direct policy optimization through the disagreement reward, achieving high sample efficiency and stability even in stochastic or real-world environments (Pathak et al., 2019).
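A minimal sketch of the intrinsic reward, with hypothetical ensemble predictions standing in for trained forward models:

```python
import numpy as np

def disagreement_reward(next_state_preds):
    """Intrinsic reward = variance of next-state predictions across an
    ensemble of forward models, averaged over state dimensions.
    next_state_preds: array of shape (ensemble_size, state_dim)."""
    return np.var(next_state_preds, axis=0).mean()

# Hypothetical 5-model ensemble predicting a 3-dim next state.
well_explored = np.tile([0.2, -0.1, 0.5], (5, 1))   # models agree
novel = np.array([[ 0.2, -0.1,  0.5],
                  [ 1.1,  0.4, -0.3],
                  [-0.7,  0.9,  0.1],
                  [ 0.5, -1.2,  0.8],
                  [ 0.0,  0.3, -0.9]])              # models disagree

print(disagreement_reward(well_explored))  # 0.0: no epistemic uncertainty
print(disagreement_reward(novel))          # positive: rewards exploration
```

Because the reward is a differentiable function of the model outputs, a policy can be optimized directly against it, as noted above.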

In theoretical computer science, the disagreement-based (or "correlated") sampling problem analyzes the probability that two players, each sampling from distinct distributions P, Q over a space Ω, output the same value given access only to shared randomness. The canonical Kleinberg–Tardos–Holenstein strategy achieves disagreement probability 2δ/(1+δ), where δ is the total variation distance d_TV(P, Q); this is shown to be optimal via reduction to constrained-agreement lower bounds (Bavarian et al., 2016). This framework underpins randomized sketching, parallel repetition, and cryptographic sampling.
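A simulation of the shared-randomness strategy (rejection sampling against a common stream of proposals; the distributions and stream length below are illustrative):

```python
import random

def correlated_sample(dist, shared):
    """Scan the shared stream of (element, height) proposals and return
    the first element whose height falls under this player's probability."""
    for x, u in shared:
        if u <= dist[x]:
            return x
    return None  # vanishingly unlikely for a long enough stream

def shared_stream(omega, rng, length=32):
    return [(rng.choice(omega), rng.random()) for _ in range(length)]

# Two players hold different distributions over Omega = {0, 1} but see the
# same random stream. The guarantee: they disagree with probability at most
# 2*delta/(1+delta) with delta = d_TV(P, Q). Here delta = 0.5, so the bound
# is 2/3; for this particular (P, Q) the true rate is delta = 0.5 itself.
P = {0: 0.5, 1: 0.5}
Q = {0: 1.0, 1: 0.0}

rng = random.Random(0)
trials, disagreements = 10_000, 0
for _ in range(trials):
    shared = shared_stream([0, 1], rng)
    if correlated_sample(P, shared) != correlated_sample(Q, shared):
        disagreements += 1
rate = disagreements / trials
print(rate)  # ~0.5, comfortably within the 2/3 bound
```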

7. Practical Considerations, Implementation, and Open Challenges

Disagreement-based sampling methods demand careful selection of ensemble members or hypothesis spaces to maximize initial disagreement, which directly relates to the informativeness of the selected points or the diversity induced. The computational overhead—particularly in ensemble- or ERM-oracle-based techniques—can be managed by sample reuse ("water-filling"), bootstrapping, or batch-mode optimization strategies (Wiener et al., 2014, Katz-Samuels et al., 2021). Stability in ensemble construction is supported by sequential rather than simultaneous training under diversity objectives (Pagliardini et al., 2022).

Interpretation of disagreement is context-dependent: in active learning, disagreement indicates maximal expected model change per label; in debiasing, disagreement with a known-biased learner reflects outlier status regarding spurious patterns; in exploration, ensemble variance encodes epistemic (not aleatoric) uncertainty. A crucial practical dimension is the decomposition of disagreement into useful diversity rather than noise: failures may occur when disagreement is dominated by label noise or when OOD sets are too easily separable.

Open research problems include tight characterization of label complexity for specific hypothesis/data structure hybrids (e.g., beyond VC bounds), optimal design of OOD distributions for training diversity, analysis of negative correlations in correlated sampling, and robust identification of when and how disagreement quantifies actionable uncertainty rather than irreducible error.


Disagreement-based sampling represents a unifying concept across domains, providing principled mechanisms for sample-efficient learning, robust generalization, effective debiasing, and reliable uncertainty quantification, with theoretical guarantees and practical implementations spanning foundational to modern machine learning.
