Simultaneous O(1/(nt)) error rate across all rounds in contaminated PAC learning
Determine whether, in the iterative PAC learning model, a learning algorithm can achieve generalization error O(1/(nt)) at every round t simultaneously, for hypothesis classes of finite VC dimension in the realizable setting. In this model, each round t collects n fresh examples from the distribution D; each example is labeled by the previous-round classifier f_{t-1} with probability α and by the true concept f* with probability 1−α.
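The contaminated data-generation process can be sketched as a small simulation. The uniform marginal distribution, the 1-D threshold hypothesis class, the ERM learner, and all parameter values below are illustrative assumptions, not choices made in the source.

```python
import random

def sample_round(n, alpha, f_prev, f_star, rng):
    """One round: each of n examples x ~ U[0,1] is labeled by the
    previous-round classifier with probability alpha, else by the
    true concept f* (illustrative instantiation of the model)."""
    data = []
    for _ in range(n):
        x = rng.random()
        labeler = f_prev if rng.random() < alpha else f_star
        data.append((x, labeler(x)))
    return data

def erm_threshold(data):
    """ERM over the threshold class h_c(x) = 1[x >= c] (VC dimension 1):
    pick the candidate threshold with the fewest empirical errors."""
    candidates = [0.0, 1.0] + [x for x, _ in data]
    return min(candidates,
               key=lambda c: sum((x >= c) != y for x, y in data))

f_star = lambda x: x >= 0.5   # assumed true concept, for illustration
f_prev = lambda x: True       # arbitrary round-0 classifier
rng = random.Random(0)
for t in range(1, 6):
    sample = sample_round(n=200, alpha=0.3, f_prev=f_prev, f_star=f_star, rng=rng)
    c = erm_threshold(sample)
    f_prev = lambda x, c=c: x >= c   # this round's ERM labels part of the next round
    # under U[0,1], the generalization error of h_c against f* is |c - 0.5|
    print(f"round {t}: threshold {c:.3f}, generalization error {abs(c - 0.5):.3f}")
```

The open question concerns whether the per-round generalization error of such an iterative learner can be driven down at rate O(1/(nt)) uniformly over all rounds, rather than at a single round.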
References
We leave open the question of whether it is possible to achieve the $O(1/nt)$ error rate for all rounds simultaneously.
— Learning from Synthetic Data: Limitations of ERM
(2601.15468 - Amin et al., 21 Jan 2026) in the Discussion paragraph of subsection 'Learning Disagreements from Positive Examples' within the section 'PAC Learning'