
Residual-as-Teacher: Mitigating Bias Propagation in Student--Teacher Estimation

Published 26 Mar 2026 in stat.ML, cs.LG, and math.ST | (2603.25466v1)

Abstract: We study statistical estimation in a student--teacher setting, where predictions from a pre-trained teacher are used to guide a student model. A standard approach is to train the student to directly match the teacher's outputs, which we refer to as student soft matching (SM). This approach directly propagates any systematic bias or mis-specification present in the teacher, thereby degrading the student's predictions. We propose and analyze an alternative scheme, known as residual-as-teacher (RaT), in which the teacher is used to estimate residuals in the student's predictions. Our analysis shows how the student can thereby emulate a proximal gradient scheme for solving an oracle optimization problem, and this provably reduces the effect of teacher bias. For general student--teacher pairs, we establish non-asymptotic excess risk bounds for any RaT fixed point, along with convergence guarantees for the student-teacher iterative scheme. For kernel-based student--teacher pairs, we prove a sharp separation: the RaT method achieves the minimax-optimal rate, while the SM method incurs constant prediction error for any sample size. Experiments on both synthetic data and ImageNette classification under covariate shift corroborate our theoretical findings.

Summary

  • The paper presents a novel Residual-as-Teacher (RaT) methodology that uses teacher-estimated residuals to correct bias inherited from mis-specified teachers.
  • It reformulates knowledge distillation as a proximal fixed-point problem solved efficiently by Picard iteration, addressing confirmation bias in standard soft matching.
  • Empirical and theoretical analyses demonstrate that RaT achieves minimax-optimal rates under covariate shift, outperforming traditional student–teacher approaches.

Residual-as-Teacher (RaT): Mitigating Bias Propagation in Student–Teacher Estimation

Introduction and Problem Context

Student–teacher frameworks, including knowledge distillation and model compression, have become standard practice for transferring predictive signal from high-capacity, potentially biased or black-box models (“teachers”) to lower-complexity models (“students”). While commonly used approaches—such as student soft matching (SM), where the student directly regresses onto the teacher’s predictions—achieve empirical success, they often suffer from confirmation bias: any systematic bias or model mis-specification in the teacher is inherited by the student, even in the presence of large datasets. This effect is especially pernicious under covariate shift or when the teacher model exhibits strong structural bias, e.g., due to limited capacity or strong inductive priors.

The paper introduces the Residual-as-Teacher (RaT) methodology, which addresses the pathology of bias propagation in the student–teacher setup. Rather than matching the teacher’s predictions directly, RaT leverages the teacher to approximate the residuals of the student, recasting estimation as a proximal fixed-point equation that can be solved via a computationally efficient Picard iteration.

Methodological Framework

Setup and Oracle Estimand

The general scenario involves regression or classification under potential covariate shift: the student receives unlabeled samples from the target distribution and labeled data from a source distribution (the teacher’s domain). The goal is to find a student function $f$ (from a potentially restricted function class $\mathcal{F}$) minimizing the regularized empirical risk on the target:

$$f^\dagger = \arg\min_{f \in \mathcal{F}} \left\{ \frac{1}{m} \sum_{j=1}^m \mathbb{E}_{Y}\big[\ell(f(\tilde{x}_j), Y) \mid X = \tilde{x}_j\big] + \mathrm{Pen}(f) \right\}$$

RaT Algorithm

RaT decouples bias inheritance by making the teacher responsible for estimating residuals of the student, not the raw label. The core iterative update is as follows:

  1. Teacher phase: Given a current student $f \in \mathcal{F}$, regress the teacher (from class $\mathcal{G}$) on the empirical residuals $\{ \partial\ell(f(x_i), y_i) / \partial z \}$ using source data.
  2. Student phase: Perform a proximal update on the target covariates,

$$f^{k+1} = \operatorname{prox}_{\mathrm{Pen}}\left(f^k(\tilde{x}_{1}^{m}) - \hat{g}^{k}(\tilde{x}_{1}^{m})\right),$$

where $\hat{g}^k$ is the teacher’s prediction of the residuals at the target points.

A RaT fixed point is any $f$ satisfying

$$f = \operatorname{prox}_{\mathrm{Pen}}\left(f(\tilde{x}_{1}^{m}) - \hat{g}(f)\right)$$

and can be efficiently approached by Picard iteration.
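For squared loss, the residual gradient $\partial\ell(f(x), y)/\partial z$ is simply $f(x) - y$, and the Picard iteration above can be sketched in a few lines. All modeling choices below (a polynomial ridge student, an RBF kernel-ridge teacher, and a ridge refit standing in for $\operatorname{prox}_{\mathrm{Pen}}$) are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Illustrative source/target data with a mild covariate shift.
n, m = 200, 200
x_src = rng.uniform(-1.0, 1.0, (n, 1))
x_tgt = rng.uniform(-0.2, 0.8, (m, 1))          # target support inside source
f_true = lambda x: np.sin(3 * x).ravel()
y_src = f_true(x_src) + 0.1 * rng.standard_normal(n)

def featurize(x, d=5):
    """Polynomial features: a stand-in for the student class F."""
    return np.hstack([x ** k for k in range(1, d + 1)])

student = Ridge(alpha=1e-3).fit(featurize(x_src), y_src)   # warm start

eta = 0.5  # step size (assumed; a full prox step corresponds to eta = 1)
for k in range(20):
    # Teacher phase: regress the residual gradients f(x_i) - y_i on source data.
    resid = student.predict(featurize(x_src)) - y_src
    teacher = KernelRidge(kernel="rbf", alpha=1e-2).fit(x_src, resid)
    # Student phase: subtract the predicted residuals at the target points,
    # then refit within F (the ridge refit approximates prox_Pen).
    g_hat = teacher.predict(x_tgt)
    targets = student.predict(featurize(x_tgt)) - eta * g_hat
    student = Ridge(alpha=1e-3).fit(featurize(x_tgt), targets)

mse = np.mean((student.predict(featurize(x_tgt)) - f_true(x_tgt)) ** 2)
print(f"target MSE after RaT iterations: {mse:.4f}")
```

Note that the teacher never sees raw labels at the student-update step: it only supplies a correction, which is what limits how much of its bias can leak into the fixed point.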

Comparison to Soft Matching (SM)

In contrast, SM fits the student to the teacher’s raw predictions on target covariates (pseudo-labeling), incurring direct inheritance of the teacher’s bias because any mis-specification in the teacher is treated as ground truth by the student.
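The inheritance mechanism is easy to make concrete. In the self-contained sketch below (polynomial ridge student, RBF kernel-ridge teacher, squared loss — all hypothetical choices, not the paper's setup), the teacher is deliberately over-regularized so its predictions are shrunk toward zero; the SM student then reproduces that bias almost exactly:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
n, m = 200, 200
x_src = rng.uniform(-1.0, 1.0, (n, 1))
x_tgt = rng.uniform(-1.0, 1.0, (m, 1))
f_true = lambda x: np.sin(3 * x).ravel()
y_src = f_true(x_src) + 0.1 * rng.standard_normal(n)

def featurize(x, d=5):
    return np.hstack([x ** k for k in range(1, d + 1)])

# A deliberately over-regularized teacher: predictions shrunk toward zero,
# i.e. systematically biased.
teacher = KernelRidge(kernel="rbf", alpha=50.0).fit(x_src, y_src)
pseudo = teacher.predict(x_tgt)                  # pseudo-labels on the target

# SM: the student regresses directly onto the pseudo-labels.
student_sm = Ridge(alpha=1e-3).fit(featurize(x_tgt), pseudo)

teacher_mse = np.mean((pseudo - f_true(x_tgt)) ** 2)
student_mse = np.mean((student_sm.predict(featurize(x_tgt)) - f_true(x_tgt)) ** 2)
print(f"teacher MSE: {teacher_mse:.4f}, SM student MSE: {student_mse:.4f}")
```

Because the pseudo-labels are treated as ground truth, the student's error tracks the teacher's regardless of how much target data is available.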

Theoretical Results

Statistical Risk and Bias Decomposition

Non-asymptotic risk bounds for RaT fixed points show risk control is governed by the difference between the true gradient and the teacher’s residual-based approximation evaluated at the fixed point, not over the entire function class (Theorem 1). For least squares, the RaT risk depends on the bias of the teacher only in directions not reachable by the student (i.e., model mis-specification), whereas the SM estimator’s risk is dominated by the entire bias inherited from the teacher.

Kernel Ridge Regression (KRR) Regime: Minimax Separation

The most significant result is a sharp minimax separation in the KRR setting with covariate shift:

  • RaT achieves minimax-optimal rates: For a student tuned with appropriate regularization, RaT achieves the best possible estimation rate given the spectrum of the target kernel and the source kernel (Theorem 2).
  • SM is fundamentally inconsistent: Regardless of student tuning, if the teacher is biased (e.g., due to regularization), SM suffers from a non-vanishing bias term and does not achieve consistency, i.e., there is a constant lower bound on its MSE relative to the student oracle.

These statements are supported by explicit MSE decompositions using operator-theoretic and RKHS machinery.

Benign and Malign Covariate Shift

An intriguing implication is the dichotomy between benign and malign covariate shift. If the target kernel spectrum is richer than the source (benign shift), RaT can outperform even target-only learners. Conversely, if the target is less rich, covariate shift can be harmful.

Computational Guarantees

The Picard iteration for RaT is shown to converge under approximate co-coercivity and monotonicity of the teacher-based gradient estimator, with rates matching those observed in classical proximal algorithms. Under reasonable structural assumptions on the teacher’s estimator, the fixed point is efficiently attainable.

Empirical Evidence

Experiments on synthetic regression (with Hermite and Laplace kernels) and real-world classification (ImageNette under various corruptions) validate the theoretical findings:

  • RaT consistently outperforms SM in the presence of teacher bias or covariate shift, exhibiting decay of MSE with increasing sample size, while SM saturates at a bias-limited error floor.
  • The RaT advantage increases with both the degree of teacher bias and severity of distribution shift.
  • The qualitative phenomenon holds beyond kernel methods, extending to neural network student–teacher pairs.
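The saturation effect can be reproduced in a small self-contained simulation. The sketch below (polynomial ridge student, over-regularized RBF kernel-ridge teacher, squared loss — all choices are illustrative assumptions, not the paper's experimental setup) gives SM and RaT the same biased teacher class. SM inherits the teacher's shrinkage bias; RaT does not, because a shrunken residual estimate only slows the iteration rather than moving its fixed point:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(2)
n, m = 300, 300
x_src = rng.uniform(-1.0, 1.0, (n, 1))
x_tgt = rng.uniform(-0.2, 0.8, (m, 1))
f_true = lambda x: np.sin(3 * x).ravel()
y_src = f_true(x_src) + 0.1 * rng.standard_normal(n)

def featurize(x, d=5):
    return np.hstack([x ** k for k in range(1, d + 1)])

# Biased teacher class: heavy regularization shrinks every fit toward zero.
make_teacher = lambda: KernelRidge(kernel="rbf", alpha=50.0)

# SM: student fits the biased teacher's pseudo-labels on the target.
pseudo = make_teacher().fit(x_src, y_src).predict(x_tgt)
sm = Ridge(alpha=1e-3).fit(featurize(x_tgt), pseudo)
mse_sm = np.mean((sm.predict(featurize(x_tgt)) - f_true(x_tgt)) ** 2)

# RaT: the same biased teacher class only estimates the student's residuals.
rat = Ridge(alpha=1e-3).fit(featurize(x_src), y_src)
for k in range(50):
    resid = rat.predict(featurize(x_src)) - y_src
    g_hat = make_teacher().fit(x_src, resid).predict(x_tgt)
    targets = rat.predict(featurize(x_tgt)) - g_hat
    rat = Ridge(alpha=1e-3).fit(featurize(x_tgt), targets)
mse_rat = np.mean((rat.predict(featurize(x_tgt)) - f_true(x_tgt)) ** 2)

print(f"SM MSE: {mse_sm:.4f}, RaT MSE: {mse_rat:.4f}")
```

In this toy setting the RaT iterate drives the (teacher-visible part of the) residual toward zero geometrically, so its error is limited by noise and student capacity rather than by the teacher's bias.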

Implications and Future Directions

Practical Implications:

  • RaT provides a principled mechanism for knowledge distillation or domain adaptation that is robust to teacher mis-specification and systematic bias, and enables reliable deployment of simpler models without sacrificing statistical optimality under mild conditions.
  • The methodology is relevant for scenarios with covariate shift (domain adaptation, distributional robustness), model compression, or any setting where teacher mis-specification is unavoidable.

Theoretical Extensions:

  • The foundational machinery could extend to nonconvex student classes, alternative regularization regimes, and more complex label spaces (e.g., structured prediction, sequence models).
  • The characterization of benign/malign covariate shift in terms of eigenspectrum interplay suggests new structural generalization results for adaptive learning scenarios.

Open Questions:

  • Determining optimality and fixed-point structure of RaT in general (deep/nonlinear) settings.
  • Improved analysis of residual regression within black-box function classes.

Conclusion

The Residual-as-Teacher framework fundamentally alters the dynamics of student–teacher estimation, enabling bias correction and minimax-optimal learning in the presence of teacher mis-specification or under covariate shift. By shifting the role of the teacher to residual estimation, RaT provably mitigates confirmation bias and overcomes the statistical limitations of standard soft matching. These insights open new possibilities for robust and statistically principled model transfer and domain adaptation.
