
Hypothesis Testing for Quantifying LLM-Human Misalignment in Multiple Choice Settings

Published 17 Jun 2025 in cs.CY, cs.CL, and cs.LG | (2506.14997v1)

Abstract: As LLMs increasingly appear in social science research (e.g., economics and marketing), it becomes crucial to assess how well these models replicate human behavior. In this work, using hypothesis testing, we present a quantitative framework to assess the misalignment between LLM-simulated and actual human behaviors in multiple-choice survey settings. This framework allows us to determine in a principled way whether a specific LLM can effectively simulate human opinions, decision-making, and general behaviors represented through multiple-choice options. We applied this framework to a popular LLM for simulating people's opinions in various public surveys and found that this model is ill-suited for simulating the tested sub-populations (e.g., across different races, ages, and incomes) for contentious questions. This raises questions about the alignment of this LLM with the tested populations, highlighting the need for new practices in using LLMs for social science studies beyond naive simulations of human subjects.
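The abstract describes a hypothesis-testing framework for comparing LLM-simulated and human answer distributions over multiple-choice options, without specifying the test here. As a minimal illustrative sketch (not the paper's actual method), one could frame the null hypothesis "LLM answers and human answers are drawn from the same distribution" and test it with a permutation test on the total variation distance between the two empirical answer distributions; all function names and the choice of statistic below are assumptions for illustration.

```python
# Hedged sketch: this is NOT the paper's actual test, only a generic example
# of hypothesis testing for LLM-human misalignment on multiple-choice answers.
import random
from collections import Counter

def tv_distance(sample_a, sample_b, options):
    """Total variation distance between two empirical answer distributions."""
    ca, cb = Counter(sample_a), Counter(sample_b)
    na, nb = len(sample_a), len(sample_b)
    return 0.5 * sum(abs(ca[o] / na - cb[o] / nb) for o in options)

def permutation_pvalue(human, llm, options, n_perm=2000, seed=0):
    """P-value for H0: human and LLM answers come from the same distribution.

    Pools both samples, repeatedly reshuffles the pooled labels, and counts
    how often a random split produces a distance at least as large as observed.
    """
    rng = random.Random(seed)
    observed = tv_distance(human, llm, options)
    pooled = list(human) + list(llm)
    n = len(human)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if tv_distance(pooled[:n], pooled[n:], options) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction for a valid p-value

# Toy example: the simulated LLM over-selects option "A" relative to humans.
human = ["A"] * 30 + ["B"] * 40 + ["C"] * 30
llm = ["A"] * 80 + ["B"] * 10 + ["C"] * 10
p = permutation_pvalue(human, llm, options=["A", "B", "C"])
# A small p-value rejects H0, i.e. flags the simulation as misaligned.
```

In a survey setting like the paper's, one such test could be run per question and per sub-population (e.g., race, age, or income group), with a multiple-comparison correction across questions.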
