Empirical evaluation of uncertainty quantification methods in supervised learning
Develop empirical evaluation methodologies for assessing methods that quantify aleatoric, epistemic, and total predictive uncertainty in supervised learning, despite the absence of ground-truth uncertainty labels in typical datasets.
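One widely used workaround for the missing ground-truth labels is indirect evaluation via selective prediction: if an uncertainty estimate is informative, accuracy should rise as the most uncertain predictions are rejected. The sketch below (function name, inputs, and rejection fractions are illustrative, not from the source) computes such an accuracy-rejection curve.

```python
import numpy as np

def accuracy_rejection_curve(y_true, y_pred, uncertainty, fractions):
    """Accuracy on the retained subset after rejecting the most
    uncertain fraction of predictions at each rejection level."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # Stable ascending sort: most confident predictions come first.
    order = np.argsort(uncertainty, kind="stable")
    n = len(y_true)
    accuracies = []
    for f in fractions:
        # Keep the (1 - f) most confident predictions (at least one).
        keep = order[: max(1, int(round((1 - f) * n)))]
        accuracies.append(float(np.mean(y_true[keep] == y_pred[keep])))
    return accuracies
```

A steeply rising curve suggests the uncertainty scores rank errors well; a flat curve suggests they carry little information about correctness, all without requiring ground-truth uncertainty labels.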
References
In addition to theoretical problems of this kind, there are also many open practical questions. This includes, for example, the question of how to perform an empirical evaluation of methods for quantifying uncertainty, whether aleatoric, epistemic, or total.
— Aleatoric and Epistemic Uncertainty in Machine Learning: An Introduction to Concepts and Methods
(arXiv:1910.09457, Hüllermeier et al., 2019), Discussion and conclusion (Section 5)