Multi-Hop Question Answering: When Can Humans Help, and Where Do They Struggle?
Abstract: Multi-hop question answering is a challenging task for both LLMs and humans, as it requires recognizing when multi-hop reasoning is needed, followed by reading comprehension, logical reasoning, and knowledge integration. To better understand how humans might collaborate effectively with AI, we evaluate the performance of crowd workers on these individual reasoning subtasks. We find that while humans excel at knowledge integration (97\% accuracy), they often fail to recognize when a question requires multi-hop reasoning (67\% accuracy). Participants perform reasonably well on both single-hop and multi-hop QA (84\% and 80\% accuracy, respectively), but frequently make semantic mistakes, such as answering "when" an event happened although the question asked "where." These findings highlight the importance of designing AI systems that complement human strengths while compensating for common weaknesses.