Quantifying Deep Learning Model Uncertainty in Conformal Prediction

Published 1 Jun 2023 in cs.LG, cs.AI, and cs.CV (arXiv:2306.00876v2)

Abstract: Precise estimation of predictive uncertainty in deep neural networks is a critical requirement for reliable decision-making in machine learning and statistical modeling, particularly in the context of medical AI. Conformal Prediction (CP) has emerged as a promising framework for representing model uncertainty by providing well-calibrated confidence levels for individual predictions. However, quantifying model uncertainty within conformal prediction remains an active research area that has yet to be fully addressed. In this paper, we explore state-of-the-art CP methodologies and their theoretical foundations. We propose a probabilistic approach to quantifying the model uncertainty derived from the prediction sets produced by conformal prediction, and we provide certified bounds for the computed uncertainty. This allows model uncertainty measured by CP to be compared with other uncertainty quantification methods, such as Bayesian approaches (e.g., MC-Dropout and Deep Ensembles) and Evidential approaches.
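For readers unfamiliar with how CP turns softmax outputs into prediction sets, the following is a minimal sketch of standard split conformal prediction for classification. It is a common baseline procedure, not necessarily the exact method proposed in this paper; the function name and the choice of nonconformity score (one minus the true-class probability) are illustrative assumptions.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification (a standard baseline
    sketch, not the paper's specific method).

    cal_probs:  (n, K) softmax probabilities on a held-out calibration set
    cal_labels: (n,)   true labels for the calibration set
    test_probs: (m, K) softmax probabilities on test inputs
    alpha:      target miscoverage rate (1 - alpha is the coverage level)
    Returns a list of prediction sets, each an array of class indices.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile; guarantees >= 1 - alpha marginal
    # coverage under exchangeability of calibration and test points.
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    # Include every class whose nonconformity score is within the threshold.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```

The size of the returned set is the quantity this paper's probabilistic approach builds on: larger sets signal higher model uncertainty for that input, while singleton sets indicate confident predictions.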
