
QuALITY: Question Answering with Long Input Texts, Yes!

Published 16 Dec 2021 in cs.CL | (2112.08608v2)

Abstract: To enable building and testing models on long-document comprehension, we introduce QuALITY, a multiple-choice QA dataset with context passages in English that have an average length of about 5,000 tokens, much longer than typical current models can process. Unlike in prior work with passages, our questions are written and validated by contributors who have read the entire passage, rather than relying on summaries or excerpts. In addition, only half of the questions are answerable by annotators working under tight time constraints, indicating that skimming and simple search are not enough to consistently perform well. Our baseline models perform poorly on this task (55.4%) and significantly lag behind human performance (93.5%).

Citations (115)

Summary

  • The paper introduces QuALITY, a dataset that evaluates question answering on long passages, requiring comprehensive reading rather than skimming.
  • It details a rigorous crowdsourcing and dual-validation process that yields challenging questions, with nearly half labeled as QUALITY-HARD.
  • Baseline models such as Longformer and RoBERTa achieve only 55.4% accuracy, compared to 93.5% human accuracy, highlighting the need for improved long-context strategies.


The paper "QuALITY: Question Answering with Long Input Texts, Yes!" introduces QuALITY, a dataset designed to test models on the comprehension of long-document inputs. This dataset primarily focuses on multiple-choice QA, utilizing English passages averaging around 5,000 tokens, far exceeding the input length current models typically handle. QuALITY aims to overcome the limitations of prior datasets that rely heavily on short contexts and skimming techniques for answering questions.

Data Collection Methodology

The construction of QuALITY follows a meticulous crowdsourcing process that ensures questions require comprehensive passage understanding. The pipeline mandates that writers read each passage in full before crafting questions, which are then validated in both timed (speed validation) and untimed settings.

Figure 1: The crowdsourcing pipeline, with an example of validating question difficulty based on annotators' performance.

This dual validation, which costs approximately $9.10 per question, identifies challenging questions that cannot be answered through keyword search or skimming; these form the QUALITY-HARD subset.
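The hard-subset labeling can be sketched as a simple rule over the two validation rounds. This is a hypothetical illustration, not the paper's implementation: the function name, vote format, and exact thresholds are assumptions.

```python
def is_quality_hard(speed_answers, gold, untimed_answers):
    """Label a question QuALITY-HARD if annotators working under tight
    time constraints mostly fail it, while untimed annotators (who read
    the full passage) confirm it is answerable."""
    speed_acc = sum(a == gold for a in speed_answers) / len(speed_answers)
    untimed_acc = sum(a == gold for a in untimed_answers) / len(untimed_answers)
    # Assumed thresholds: speed annotators do no better than chance on a
    # 4-option question, while untimed annotators agree on the answer.
    return speed_acc <= 0.25 and untimed_acc >= 0.5

# Example: one of four speed annotators is right; all untimed agree.
print(is_quality_hard(["B", "C", "D", "A"], "A", ["A", "A", "A"]))  # True
```

The key design point is that difficulty is defined behaviorally, by annotator outcomes under time pressure, rather than by any property of the question text itself.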

Dataset Characteristics

The QuALITY dataset comprises 6,737 questions, 49.9% of which are classified as QUALITY-HARD. Source passages include CC-BY-licensed long texts from Project Gutenberg, Slate articles, and other nonfiction pieces, ensuring diversity in passage topic and complexity.

Figure 2: Article and question lengths highlight the extensive context provided for each question, necessitating deeper comprehension.

The average passage is significantly longer than in any existing QA dataset, which renders skimming ineffective. This is evidenced both by model performance metrics and by a lexical analysis confirming that simple lexical-overlap strategies are inadequate.
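A naive lexical-overlap baseline of the kind this analysis rules out could be sketched as follows. This is a minimal illustration; the tokenization and scoring choices are assumptions, not the paper's analysis.

```python
import re

def overlap_score(option: str, article_tokens: set) -> float:
    """Fraction of an answer option's tokens that appear in the article."""
    tokens = re.findall(r"[a-z']+", option.lower())
    if not tokens:
        return 0.0
    return sum(t in article_tokens for t in tokens) / len(tokens)

def lexical_baseline(article: str, options: list) -> int:
    """Predict the answer option with the highest lexical overlap
    with the article -- a pure term-matching strategy."""
    article_tokens = set(re.findall(r"[a-z']+", article.lower()))
    scores = [overlap_score(o, article_tokens) for o in options]
    return scores.index(max(scores))

article = "The captain steered the ship through the storm toward the harbor."
options = ["The captain reached the harbor.", "The plane landed safely."]
print(lexical_baseline(article, options))  # 0
```

On QuALITY, distractor options are written to share vocabulary with the passage, so a predictor of this form performs poorly, which is precisely what makes the dataset resistant to skimming.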

Baselines and Model Performance

Baseline evaluations cover models such as Longformer, RoBERTa, and DeBERTaV3, with an emphasis on adapting encoding strategies to the extended input lengths. The methodology includes both full-context and extractive approaches, the latter circumventing memory constraints through segment retrieval.

Figure 3: Lexical overlap between answer options and the article, indicating the insufficiency of term-based prediction methods.
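The extractive approach can be sketched as chunking the long passage and keeping only the segments most relevant to the question. This is a simplified sketch: term overlap stands in for a learned retriever such as DPR, and the window size and top-k are assumed values, not the paper's settings.

```python
def retrieve_segments(passage: str, question: str,
                      window: int = 100, top_k: int = 3) -> str:
    """Split a long passage into fixed-size word windows, score each by
    term overlap with the question, and concatenate the top-k windows
    (kept in document order) into a context short enough for the model."""
    words = passage.split()
    segments = [" ".join(words[i:i + window])
                for i in range(0, len(words), window)]
    q_terms = set(question.lower().split())
    # Rank segment indices by overlap with the question, best first.
    ranked = sorted(
        range(len(segments)),
        key=lambda i: -len(q_terms & set(segments[i].lower().split())),
    )
    keep = sorted(ranked[:top_k])  # restore document order
    return " ".join(segments[i] for i in keep)
```

In the paper's extractive baselines, a pretrained retriever such as DPR plays the role of the scoring function here; the retrieved context is then fed to a standard-length encoder.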

Results reveal a stark gap between model and human performance: the best model reaches 55.4% accuracy, against 93.5% for humans. Extractive methods, particularly those employing DPR for context selection, show a marginal edge, underscoring the difficulty of long-document comprehension.

Implications and Future Work

The introduction of QuALITY sets a new benchmark for long-document QA, with potential applications in domains requiring extensive comprehension, such as legal document review and educational assessments. The dataset paves the way for the development of more proficient models capable of handling voluminous inputs.

Future work may focus on enhancing model architectures to expand context window capabilities, alongside an exploration of alternative retrieval-augmentation techniques to improve extractive performance. Moreover, the dataset can be instrumental in fostering advancements in multi-step reasoning tasks and holistic document understanding.

Conclusion

QuALITY presents a pivotal resource in advancing NLP's capability in long-document question answering, addressing both foundational and immediate challenges in model scalability and comprehension depth. While current models lag considerably behind human performance, QuALITY offers a rigorous testbed for continuous improvements in natural language understanding systems.
