Measuring Geographic Performance Disparities of Offensive Language Classifiers

Published 15 Sep 2022 in cs.CL | (2209.07353v1)

Abstract: Text classifiers are applied at scale in the form of one-size-fits-all solutions. Nevertheless, many studies show that classifiers are biased regarding different languages and dialects. When measuring and discovering these biases, some gaps present themselves and should be addressed. First, "Does language, dialect, and topical content vary across geographical regions?" and second, "If there are differences across the regions, do they impact model performance?". We introduce a novel dataset called GeoOLID with more than 14 thousand examples across 15 geographically and demographically diverse cities to address these questions. We perform a comprehensive analysis of geographical-related content and their impact on performance disparities of offensive language detection models. Overall, we find that current models do not generalize across locations. Likewise, we show that while offensive language models produce false positives on African American English, model performance is not correlated with each city's minority population proportions. Warning: This paper contains offensive language.

Citations (4)
