
Large Language Models Show Human-like Social Desirability Biases in Survey Responses

Published 9 May 2024 in cs.AI, cs.CL, cs.CY, and cs.HC (arXiv:2405.06058v2)

Abstract: As LLMs become widely used to model and simulate human behavior, understanding their biases becomes critical. We developed an experimental framework using Big Five personality surveys and uncovered a previously undetected social desirability bias in a wide range of LLMs. By systematically varying the number of questions LLMs were exposed to, we demonstrate their ability to infer when they are being evaluated. When personality evaluation is inferred, LLMs skew their scores towards the desirable ends of trait dimensions (i.e., increased extraversion, decreased neuroticism, etc). This bias exists in all tested models, including GPT-4/3.5, Claude 3, Llama 3, and PaLM-2. Bias levels appear to increase in more recent models, with GPT-4's survey responses changing by 1.20 (human) standard deviations and Llama 3's by 0.98 standard deviations-very large effects. This bias is robust to randomization of question order and paraphrasing. Reverse-coding all the questions decreases bias levels but does not eliminate them, suggesting that this effect cannot be attributed to acquiescence bias. Our findings reveal an emergent social desirability bias and suggest constraints on profiling LLMs with psychometric tests and on using LLMs as proxies for human participants.


Summary

  • The paper finds that LLMs exhibit significant social desirability bias, with GPT-4 inflating traits by an average of 0.75 Likert scale points.
  • The study systematically analyzed models like GPT-4, Claude 3, Llama 3, and PaLM-2, showing that increased training with RLHF correlates with higher bias levels.
  • The robust experimental methodology, utilizing reverse-coded items and temperature adjustments, confirms persistent bias and suggests the need for mitigation strategies.

LLMs Show Human-like Social Desirability Biases in Survey Responses

Introduction

The paper investigates social desirability bias in LLMs using standardized Big Five personality surveys. It finds that LLMs, despite being designed to process and generate human-like text objectively, skew their responses toward the socially desirable ends of personality trait dimensions. The bias is systematically analyzed across several state-of-the-art models, including GPT-4, Claude 3, Llama 3, and PaLM-2, with implications for using LLMs in psychological profiling and human-behavior simulation.

Experimental Framework and Results

The experiment utilized a 100-item Big Five personality questionnaire administered to LLMs in varying batch sizes. Key observations included:

  • Manifestation of Bias: LLMs skewed strongly toward socially desirable traits. The effect was most pronounced in GPT-4, where desirable traits such as Extraversion and Conscientiousness were inflated by an average of 0.75 Likert scale points, an effect size of 1.22 human standard deviations. This suggests LLMs adjust their responses in a manner indicative of social desirability bias (2405.06058).
  • Generalizability: The bias appeared consistently across the tested LLMs. Larger and more recent models, particularly those trained with extensive Reinforcement Learning from Human Feedback (RLHF), exhibited higher bias levels, suggesting a correlation between training scale and the magnitude of the bias.
  • Mechanisms of Bias: LLMs inferred the evaluation context when exposed to multiple survey items at once, with more advanced models such as GPT-4 and Claude 3 doing so more accurately than PaLM-2 and GPT-3.5. Explicit prompts indicating a personality evaluation further accentuated the bias.
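The batching manipulation described above can be sketched as follows. This is a minimal illustration, not the paper's actual protocol: `query_model` is a hypothetical stand-in for any chat-model API, and the item texts are placeholders rather than the real 100-item questionnaire.

```python
import random

# Illustrative placeholder items (the study used a 100-item Big Five survey).
ITEMS = [
    "I am the life of the party.",   # Extraversion (positively keyed)
    "I get stressed out easily.",    # Neuroticism (positively keyed)
    # ... remaining questionnaire items ...
]

def administer(items, batch_size, query_model):
    """Present `batch_size` items per prompt and collect 1-5 Likert ratings."""
    items = list(items)
    random.shuffle(items)  # question-order randomization (robustness check)
    ratings = []
    for i in range(0, len(items), batch_size):
        batch = items[i:i + batch_size]
        prompt = (
            "Rate how accurately each statement describes you, from "
            "1 (very inaccurate) to 5 (very accurate).\n"
            + "\n".join(f"{j + 1}. {s}" for j, s in enumerate(batch))
        )
        reply = query_model(prompt)
        # Naive parse: keep bare digit tokens 1-5 from the reply.
        ratings.extend(int(t) for t in reply.split() if t in {"1", "2", "3", "4", "5"})
    return ratings
```

Varying `batch_size` from 1 (no evaluation context inferable) up to many items per prompt is what lets the experiment probe when the model recognizes it is being assessed.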

Methodology Robustness

The study incorporated reverse-coded and paraphrased survey items to explore response bias mechanisms:

  • Reverse-Coding Effect: Reverse-coding decreased bias but did not nullify it, implying that social desirability bias is not merely a byproduct of acquiescence bias.
  • Randomization and Temperature: Randomizing question order and varying the sampling temperature did not alter the findings, confirming that the bias is not driven by memorization of item sequences.
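The reverse-coding check and the standard-deviation effect sizes reported above can be expressed compactly. This is a sketch of the standard psychometric arithmetic, not code from the paper:

```python
def reverse_code(rating, scale_min=1, scale_max=5):
    """Reverse-key a Likert rating: on a 1-5 scale, 5 -> 1, 4 -> 2, etc."""
    return scale_max + scale_min - rating

def effect_size_sd(model_mean, human_mean, human_sd):
    """Express a trait-score shift in human standard-deviation units."""
    return (model_mean - human_mean) / human_sd
```

For instance, a trait mean shifted to 4.0 against a hypothetical human norm of 3.4 (SD 0.6) would be a 1.0-SD effect; the paper's reported shifts of roughly 1 to 1.2 SDs are of this order. If reverse-coding all items fully explained the effect via acquiescence, reverse-keyed scores would show no residual skew, which is not what the study observed.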

Implications and Discussion

The paper raises significant concerns about using LLMs for simulating human-like responses in psychometric assessments. The persistent bias challenges the validity of personality evaluations conducted via LLMs, especially for applications aiming to model human-like decision-making or emotional understanding.

The findings suggest a need for mitigation strategies, such as less evaluatively loaded survey designs or the use of reverse-coded items to temper the bias. They also caution against over-reliance on LLMs to generate data intended to represent or simulate human psychological profiles, given these systematic biases.

Conclusion

The emergence of social desirability biases in LLMs emphasizes the complexity of aligning artificial systems with human-like cognitive behaviors without inheriting biases pervasive in human psychology. Future research should focus on advancing techniques that mitigate such biases while exploring the broader implications of biased data on downstream applications of LLMs in behavioral and psychological research. This study contributes to the growing body of literature highlighting the nuanced challenges of deploying AI systems in contexts traditionally dominated by human agents.
