Two-stage Risk Control with Application to Ranked Retrieval
Abstract: Practical machine learning systems often operate in multiple sequential stages; ranking and recommendation systems, for example, typically comprise a retrieval phase followed by a ranking phase. Assessing prediction uncertainty and controlling risk in such systems is challenging because of their inherent complexity. To address these challenges, we develop two-stage risk control methods based on the recently proposed learn-then-test (LTT) and conformal risk control (CRC) frameworks. Unlike prior work that addresses multiple risks jointly, our approach exploits the sequential nature of the problem, reducing the computational burden. We provide theoretical guarantees for our proposed methods and design novel loss functions tailored to ranked retrieval tasks. We validate the effectiveness of our approach through experiments on two large-scale, widely used datasets: MSLR-Web and Yahoo LTRC.
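As background for the CRC framework the abstract builds on: conformal risk control calibrates a single threshold so that a bounded, monotone loss stays at or below a target level α in expectation. The sketch below is not the paper's two-stage method; it is a minimal single-stage illustration under assumed names (`crc_threshold`, a grid search over score thresholds) in which the controlled loss is the per-query false-negative rate of a retrieval set.

```python
def crc_threshold(cal_scores, cal_labels, alpha, grid):
    """Pick the largest retrieval threshold lam whose adjusted
    calibration risk (per-query false-negative rate, loss bound
    B = 1) stays at or below the target level alpha."""
    n = len(cal_scores)
    best = grid[0]
    for lam in grid:  # grid must be sorted ascending
        losses = []
        for scores, labels in zip(cal_scores, cal_labels):
            relevant = sum(labels)
            # relevant items whose score falls below the threshold
            missed = sum(l for s, l in zip(scores, labels) if s < lam)
            losses.append(missed / relevant if relevant else 0.0)
        # CRC condition: (sum of losses + B) / (n + 1) <= alpha
        if (sum(losses) + 1.0) / (n + 1) <= alpha:
            best = lam
    return best

# Toy example: relevant items score 0.9, irrelevant 0.1, 20 queries
cal_scores = [[0.9, 0.9, 0.1, 0.1]] * 20
cal_labels = [[1, 1, 0, 0]] * 20
lam_hat = crc_threshold(cal_scores, cal_labels, alpha=0.1,
                        grid=[i / 10 for i in range(11)])  # lam_hat == 0.9
```

Because the false-negative rate is non-decreasing in the threshold, the largest feasible grid point gives the smallest retrieval sets that still meet the risk target; the paper's contribution is extending this kind of guarantee across two sequential stages.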
- Recommendation systems with distribution-free reliability guarantees. In Symposium on Conformal and Probabilistic Prediction with Applications (COPA), 2023.
- A gentle introduction to conformal prediction and distribution-free uncertainty quantification. arXiv preprint arXiv:2107.07511, 2021.
- Learn then test: Calibrating predictive algorithms to achieve risk control. arXiv preprint arXiv:2110.01052, 2021.
- Conformal risk control. In International Conference on Learning Representations (ICLR), 2024.
- R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. ACM Press / Addison-Wesley, 1999.
- MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268, 2016.
- Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning (ICML), 2005.
- Learning to rank with nonsmooth cost functions. In Advances in Neural Information Processing Systems (NIPS), 2006.
- Learning to rank: From pairwise approach to listwise approach. Microsoft Research Technical Report MSR-TR-2007-40, 2007.
- Olivier Chapelle and Yi Chang. Yahoo! learning to rank challenge overview. In Proceedings of the Learning to Rank Challenge, volume 14 of Proceedings of Machine Learning Research. PMLR, 2011.
- W. Chu and Z. Ghahramani. Preference learning with Gaussian processes. In Proceedings of the 22nd International Conference on Machine Learning (ICML), 2005.
- Pranking with ranking. In Advances in Neural Information Processing Systems (NIPS), 2001.
- An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 2003.
- A deep relevance matching model for ad-hoc retrieval. In Proceedings of the 39th International ACM SIGIR Conference, 2016.
- IR evaluation methods for retrieving highly relevant documents. In Proceedings of the 23rd International ACM SIGIR Conference, 2000.
- Finding the best of both worlds: Faster and more robust top-k document retrieval. Proceedings of the 43rd International ACM SIGIR Conference, 2020.
- A conformal prediction approach to explore functional data. Annals of Mathematics and Artificial Intelligence, 2015.
- Tie-Yan Liu. Learning to rank for information retrieval. Proceedings of the 33rd International ACM SIGIR Conference, 2009.
- Inductive confidence machines for regression. In ECML, 2002.
- Introducing LETOR 4.0 datasets. CoRR, abs/1306.2597, 2013. URL http://arxiv.org/abs/1306.2597.
- Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th International ACM SIGIR Conference, 2015.
- Stephen Robertson and K. Sparck Jones. Relevance weighting of search terms. Journal of the American Society for Information Science, 27(3):129-146, 1976. doi: 10.1002/asi.4630270302.
- Machine-learning applications of algorithmic randomness. In Proceedings of the Sixteenth International Conference on Machine Learning (ICML), 1999.
- Algorithmic learning in a random world, volume 29. Springer, 2005.
- Text embeddings by weakly-supervised contrastive pre-training. arXiv preprint arXiv:2212.03533, 2022.
- Dawei Yin et al. Ranking relevance in Yahoo search. In Proceedings of the ACM SIGKDD Conference, 2016.
- Hai-Tao Yu. PT-Ranking: A benchmarking platform for neural learning-to-rank, 2020.