
Decoy Effect in Search Interaction: Understanding User Behavior and Measuring System Vulnerability

Published 27 Mar 2024 in cs.IR (arXiv:2403.18462v2)

Abstract: This study examines the decoy effect's underexplored influence on user search interactions and methods for measuring information retrieval (IR) systems' vulnerability to this effect. It explores how decoy results alter users' interactions on search engine result pages, focusing on metrics like click-through likelihood, browsing time, and perceived document usefulness. By analyzing user interaction logs from multiple datasets, the study demonstrates that decoy results significantly affect users' behavior and perceptions. Furthermore, it investigates how different levels of task difficulty and user knowledge modify the decoy effect's impact, finding that easier tasks and lower knowledge levels lead to higher engagement with target documents. In terms of IR system evaluation, the study introduces the DEJA-VU metric to assess systems' susceptibility to the decoy effect, testing it on specific retrieval tasks. The results show differences in systems' effectiveness and vulnerability, contributing to our understanding of cognitive biases in search behavior and suggesting pathways for creating more balanced and bias-aware IR evaluations.
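The abstract describes measuring a system's susceptibility to the decoy effect via shifts in user behavior, such as click-through likelihood on target documents. The paper's actual DEJA-VU metric is not reproduced here; as a minimal illustrative sketch (all names and numbers below are assumptions, not the authors' method), one might quantify decoy influence as the relative shift in a target document's click-through rate when a decoy result is present:

```python
# Hypothetical sketch: relative shift in engagement with a target document
# when a decoy result appears on the SERP. This is NOT the paper's DEJA-VU
# metric, only an illustration of the kind of quantity being measured.

def engagement_shift(ctr_with_decoy: float, ctr_without_decoy: float) -> float:
    """Relative change in the target's click-through rate under a decoy."""
    if ctr_without_decoy <= 0:
        raise ValueError("baseline CTR must be positive")
    return (ctr_with_decoy - ctr_without_decoy) / ctr_without_decoy

# Example: the target's CTR rises from 0.20 to 0.26 once an inferior
# "decoy" result is shown alongside it; a larger shift suggests users
# (and thus the system's result presentation) are more decoy-susceptible.
shift = engagement_shift(0.26, 0.20)
print(f"relative CTR shift: {shift:.2f}")
```

The same relative-shift idea extends to the other logged signals the abstract mentions, such as browsing time and perceived usefulness ratings.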
