
Towards WinoQueer: Developing a Benchmark for Anti-Queer Bias in Large Language Models

Published 23 Jun 2022 in cs.CL and cs.CY | arXiv:2206.11484v2

Abstract: This paper presents exploratory work on whether and to what extent biases against queer and trans people are encoded in LLMs such as BERT. We also propose a method for reducing these biases in downstream tasks: finetuning the models on data written by and/or about queer people. To measure anti-queer bias, we introduce a new benchmark dataset, WinoQueer, modeled after other bias-detection benchmarks but addressing homophobic and transphobic biases. We found that BERT shows significant homophobic bias, but this bias can be mostly mitigated by finetuning BERT on a natural language corpus written by members of the LGBTQ+ community.
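The benchmark is modeled after pair-based bias-detection benchmarks, in which a model scores each sentence in a pair that differs only in an identity term, and bias is reported as the fraction of pairs where the model prefers the stereotyped sentence. The sketch below illustrates that metric in isolation; the function names (`bias_rate`, `score_sentence`) and the toy length-based scorer are illustrative assumptions, not the paper's implementation. In practice the scorer would be a pseudo-log-likelihood from a masked language model such as BERT.

```python
def bias_rate(pairs, score_sentence):
    """Fraction of sentence pairs where the model prefers the biased variant.

    pairs: list of (biased_sentence, counterfactual_sentence) tuples that
           differ only in an identity term.
    score_sentence: callable returning a sentence score (e.g. a
           pseudo-log-likelihood); higher means "more likely" to the model.
    An unbiased model should land near 0.5.
    """
    preferred = sum(
        1 for biased, counterfactual in pairs
        if score_sentence(biased) > score_sentence(counterfactual)
    )
    return preferred / len(pairs)


# Toy stand-in scorer (string length) purely to exercise the metric;
# a real evaluation would score each sentence with an MLM.
toy_pairs = [
    ("aa", "a"),   # toy scorer prefers the first sentence
    ("b", "bb"),   # toy scorer prefers the second sentence
]
rate = bias_rate(toy_pairs, score_sentence=len)
# rate == 0.5: the toy scorer prefers the "biased" side in one of two pairs.
```

The mitigation step the abstract describes, finetuning on a corpus written by or about LGBTQ+ people, would then be evaluated by recomputing this rate before and after finetuning and checking that it moves toward 0.5.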

Citations (8)
