Consistency is Key: Disentangling Label Variation in Natural Language Processing with Intra-Annotator Agreement

Published 25 Jan 2023 in cs.CL (arXiv:2301.10684v1)

Abstract: We commonly use agreement measures to assess the utility of judgements made by human annotators in NLP tasks. While inter-annotator agreement is frequently used as an indication of label reliability by measuring consistency between annotators, we argue for the additional use of intra-annotator agreement to measure label stability over time. However, in a systematic review, we find that the latter is rarely reported in this field. Calculating these measures can act as important quality control and provide insights into why annotators disagree. We propose exploratory annotation experiments to investigate the relationships between these measures and perceptions of subjectivity and ambiguity in text items.
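
The paper itself does not prescribe a specific coefficient, but Cohen's kappa is a standard agreement measure for two label sequences, and it illustrates the distinction the abstract draws: inter-annotator agreement compares two annotators on the same items, while intra-annotator agreement compares one annotator's labels at two time points. The sketch below is illustrative only; the function, variable names, and toy "subjective"/"objective" labels are hypothetical, not from the paper.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two label sequences over the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each labeler's marginal distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    categories = set(labels_a) | set(labels_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    if p_e == 1.0:
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels for 8 text items ("S" = subjective, "O" = objective).
annotator_1_round_1 = ["S", "S", "O", "S", "O", "O", "S", "O"]
annotator_2_round_1 = ["S", "O", "O", "S", "O", "S", "S", "O"]
annotator_1_round_2 = ["S", "S", "O", "O", "O", "O", "S", "O"]  # same annotator, later re-annotation

# Inter-annotator agreement: consistency between annotators.
print("inter:", cohens_kappa(annotator_1_round_1, annotator_2_round_1))
# Intra-annotator agreement: one annotator's label stability over time.
print("intra:", cohens_kappa(annotator_1_round_1, annotator_1_round_2))
```

Under this reading, a high inter- but low intra-annotator kappa would suggest unstable individual judgements, the kind of quality-control signal the paper argues is rarely reported.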

Citations (12)
