Memorization vs. Generalization: Quantifying Data Leakage in NLP Performance Evaluation

Published 3 Feb 2021 in cs.CL and cs.LG (arXiv:2102.01818v1)

Abstract: Public datasets are often used to evaluate the efficacy and generalizability of state-of-the-art methods for many tasks in NLP. However, the presence of overlap between the train and test datasets can lead to inflated results, inadvertently evaluating the model's ability to memorize and interpreting it as the ability to generalize. In addition, such datasets may not provide an effective indicator of the performance of these methods in real-world scenarios. We identify leakage of training data into test data on several publicly available datasets used to evaluate NLP tasks, including named entity recognition and relation extraction, and study them to assess the impact of that leakage on the model's ability to memorize versus generalize.
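The core measurement the abstract describes — how much of a test set also appears in the training set — can be sketched in a few lines. The following is a minimal illustration, not the paper's actual procedure: it counts exact (whitespace- and case-normalized) duplicates, whereas a real leakage audit would also consider near-duplicates and entity-level overlap. The function name and the toy sentences are invented for the example.

```python
def leakage_rate(train_examples, test_examples):
    """Fraction of test examples that appear verbatim in the training data.

    Exact-match check after light normalization; a score above zero means the
    evaluation partly rewards memorization rather than generalization.
    """
    # Normalize case and collapse whitespace so trivial formatting
    # differences do not hide duplicates.
    def normalize(s):
        return " ".join(s.lower().split())

    train_set = {normalize(s) for s in train_examples}
    leaked = sum(1 for s in test_examples if normalize(s) in train_set)
    return leaked / len(test_examples) if test_examples else 0.0


train = ["Barack Obama was born in Hawaii .", "Paris is in France ."]
test = ["paris is in  france .", "Berlin is in Germany ."]
print(leakage_rate(train, test))  # 0.5: one of the two test sentences leaks
```

For sequence-labeling tasks such as NER, the same idea extends beyond full sentences: one can measure how many test entities (or entity mentions) were already seen during training, which separates performance on memorized entities from performance on genuinely unseen ones.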

Citations (80)
