Directions in Abusive Language Training Data: Garbage In, Garbage Out

Published 3 Apr 2020 in cs.CL (arXiv:2004.01670v3)

Abstract: Data-driven analysis and detection of abusive online content covers many different tasks, phenomena, contexts, and methodologies. This paper systematically reviews abusive language dataset creation and content in conjunction with an open website for cataloguing abusive language data. This collection of knowledge leads to a synthesis providing evidence-based recommendations for practitioners working with this complex and highly diverse data.

Citations (231)

Summary

  • The paper systematically reviews 63 datasets, revealing that dataset quality critically influences machine learning outcomes in abuse detection.
  • It highlights challenges in annotation practices and dataset biases stemming from limited linguistic diversity and platform reliance.
  • The study proposes best practices, including comprehensive documentation and diverse data sourcing, to bolster effective online abuse detection systems.

Systematic Review of Abusive Language Training Data: An Examination of Best Practices and Challenges

The paper "Directions in Abusive Language Training Data, a Systematic Review: Garbage In, Garbage Out" by Vidgen and Derczynski provides a comprehensive examination of the landscape of training datasets used in the detection of abusive online content. Through an analysis of 63 publicly available datasets, the authors detail the state of this research area, highlighting both the challenges faced in dataset creation and the crucial role these datasets play in developing robust machine learning systems for detecting online abuse.

The paper elucidates several key areas integral to the understanding of these datasets. Firstly, the motivation behind dataset creation is dissected into distinct social objectives, such as reducing harm, eliminating illegal content, improving online conversational health, and reducing the burden on human moderators. Each of these goals shapes the dataset's taxonomy and annotation guidelines, reflecting the specific nature of abuse being addressed.

The paper then categorizes detection tasks, distinguishing between the nature of abuse (such as person-directed versus group-directed abuse) and the level of taxonomic granularity. A pertinent highlight is the multi-faceted nature of abusive content detection: tasks range from simple binary classification (e.g., hate/not hate) to more nuanced multi-class classifications that account for the various targets, strengths, and thematic elements of abuse.
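To make that contrast concrete, the following is a minimal Python sketch of how a binary scheme and a richer multi-class scheme (separating target and strength) might be encoded. The class and field names are illustrative assumptions, not a schema taken from the paper.

```python
from dataclasses import dataclass
from enum import Enum


class BinaryLabel(Enum):
    """Simplest task framing: is the post abusive or not?"""
    NOT_ABUSIVE = 0
    ABUSIVE = 1


class Target(Enum):
    """Who or what the abuse is directed at."""
    NONE = "none"
    PERSON = "person-directed"
    GROUP = "group-directed"


class Strength(Enum):
    """Coarse intensity of the abuse."""
    NONE = "none"
    IMPLICIT = "implicit"
    EXPLICIT = "explicit"


@dataclass
class MultiClassAnnotation:
    """A single post annotated under the more granular scheme."""
    text: str
    target: Target
    strength: Strength


# The same (hypothetical) post under the two framings.
post = "example of an abusive message"
binary_label = BinaryLabel.ABUSIVE
multi_label = MultiClassAnnotation(post, Target.GROUP, Strength.EXPLICIT)
```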

Furthermore, the review offers a detailed depiction of the datasets' content and methodological diversity. It examines variable factors such as linguistic focus, source platforms, dataset size, class distribution, and annotator identity. Of particular note is the prevalent reliance on English-language datasets and data sourced from Twitter, leading to potential biases and a lack of representativeness. The authors advocate for more diverse dataset sources and emphasize the necessity of documenting contextual information about data collection and annotators to mitigate bias and enhance the validity of machine learning models.

Annotation practices are scrutinized, revealing a range of approaches from expert annotation to crowdsourcing. The review underlines the importance of clear annotation guidelines and of annotator diversity and background, both of which significantly affect dataset quality; it also reveals a concerning lack of transparency about guidelines in many datasets.
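A routine check on annotation quality in this literature is chance-corrected agreement between annotators. The sketch below is a small, self-contained implementation of Cohen's kappa; the paper surveys how agreement is reported rather than mandating this particular metric, and the toy labels are invented for illustration.

```python
from collections import Counter


def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    assert labels_a and len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if the two annotators labelled independently.
    dist_a, dist_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((dist_a[lab] / n) * (dist_b[lab] / n)
              for lab in dist_a.keys() | dist_b.keys())
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1 - p_e)


# Toy example: two annotators labelling five posts.
ann_1 = ["abuse", "abuse", "none", "none", "abuse"]
ann_2 = ["abuse", "none",  "none", "none", "abuse"]
print(cohens_kappa(ann_1, ann_2))  # ~0.62, moderate agreement
```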

From an open science perspective, the authors critically assess the challenges and opportunities of dataset sharing. They argue for increased transparency and access, recognizing both the ethical challenges and the substantial advantages for collaboration and reproducibility. Proposed sharing mechanisms such as platform-backed datasets and data trusts offer a pathway to broader access while addressing ethical concerns.

Practically, the paper proposes best practices for creating training datasets that encompass task definition, dataset sampling, annotation integrity, and comprehensive documentation. These recommendations aim to enhance quality and relevance, paving the way for the deployment of effective abusive content detectors in real-world applications. This systematic review serves as an essential reference point for researchers and practitioners committed to the field, encouraging meticulous attention to dataset development as a foundational pillar for advancing online abuse detection capabilities.
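As a rough illustration of what such documentation could look like in machine-readable form, here is a hypothetical dataset statement. The field names and example values are assumptions made for illustration, not a schema defined by the authors.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class DatasetStatement:
    """Illustrative documentation record for an abusive-language dataset."""
    name: str
    task_definition: str                  # what counts as "abusive" in this dataset
    languages: List[str]
    source_platforms: List[str]
    sampling_strategy: str                # e.g. keyword search, random, user-based
    size: int
    class_distribution: Dict[str, float]  # label -> proportion of items
    annotation_guidelines_url: str        # where the guidelines are published
    annotator_background: str             # expert vs. crowdworker, and any demographics
    inter_annotator_agreement: float      # e.g. Cohen's kappa


# Example values are placeholders, not figures from any real dataset.
statement = DatasetStatement(
    name="example-abuse-corpus",
    task_definition="group-directed hate vs. not hate",
    languages=["en"],
    source_platforms=["Twitter"],
    sampling_strategy="keyword search over a slur lexicon",
    size=25_000,
    class_distribution={"hate": 0.12, "not_hate": 0.88},
    annotation_guidelines_url="https://example.org/guidelines",
    annotator_background="crowdworkers, three per item",
    inter_annotator_agreement=0.71,
)
```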

In sum, "Directions in Abusive Language Training Data, a Systematic Review: Garbage In, Garbage Out" extends significant insights into the complexities of dataset design and utilization, calling for a concerted effort to overcome intrinsic challenges for the betterment of society's online environments.


Authors: Bertie Vidgen and Leon Derczynski
