Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models
Abstract: Recent research has demonstrated that large pre-trained LLMs reflect societal biases expressed in natural language. The present paper introduces a simple method for probing LLMs to conduct a multilingual study of gender bias towards politicians. We quantify the usage of adjectives and verbs generated by LLMs surrounding the names of politicians as a function of their gender. To this end, we curate a dataset of 250k politicians worldwide, including their names and genders. Our study is conducted in seven languages across six different language modeling architectures. The results demonstrate that pre-trained LLMs' stance towards politicians varies strongly across the analyzed languages. We find that while some words, such as dead and designated, are associated with both male and female politicians, a few specific words, such as beautiful and divorced, are predominantly associated with female politicians. Finally, and contrary to previous findings, our study suggests that larger LLMs do not tend to be significantly more gender-biased than smaller ones.
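The abstract describes quantifying how strongly generated words associate with male versus female politicians. A minimal sketch of one common way to do this is a smoothed log-odds score over per-gender word counts; the counts and word list below are purely illustrative placeholders, not data or the exact method from the paper.

```python
import math
from collections import Counter

# Hypothetical counts of words appearing in LM generations around
# male- and female-named politicians (illustrative numbers only).
male_counts = Counter({"dead": 40, "designated": 35, "beautiful": 5, "divorced": 4})
female_counts = Counter({"dead": 38, "designated": 33, "beautiful": 60, "divorced": 45})

def log_odds(word, male, female, alpha=0.5):
    """Smoothed log-odds of a word toward female contexts.

    Positive values mean the word is relatively more associated with
    female politicians; alpha is an additive smoothing constant.
    """
    p_f = (female[word] + alpha) / (sum(female.values()) + alpha)
    p_m = (male[word] + alpha) / (sum(male.values()) + alpha)
    return math.log(p_f / p_m)

for w in ["dead", "designated", "beautiful", "divorced"]:
    print(f"{w}: {log_odds(w, male_counts, female_counts):+.2f}")
```

With counts like these, beautiful and divorced score strongly positive (female-associated), matching the kind of asymmetry the abstract reports; the scoring function itself is an assumption standing in for the paper's actual probing method.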