LLMs replacing human participants harmfully misportray and flatten identity groups

[Submitted on 2 Feb 2024 (v1), last revised 3 Feb 2025 (this version, v3)]

Abstract: Large language models (LLMs) are increasing in capability and popularity, propelling their application in new domains, including as replacements for human participants in computational social science, user testing, annotation tasks, and more. In many settings, researchers seek to distribute their surveys to a sample of participants who are representative of the underlying human population of interest. This means that, to be a suitable replacement, LLMs need to capture the influence of positionality (i.e., the relevance of social identities like gender and race). However, we show that two inherent limitations in the way current LLMs are trained prevent this. We argue analytically for why LLMs are likely to both misportray and flatten the representations of demographic groups, then empirically show this on 4 LLMs through a series of human studies with 3200 participants across 16 demographic identities. We also discuss a third limitation: identity prompts can essentialize identities. Throughout, we connect each limitation to a pernicious history of epistemic injustice against the value of lived experiences, which explains why replacement is harmful for marginalized demographic groups. Overall, we urge caution in use cases where LLMs are intended to replace human participants whose identities are relevant to the task at hand. At the same time, in cases where the benefits of LLM replacement are determined to outweigh the harms (e.g., when the goal is to supplement rather than fully replace, or when engaging human participants may cause them harm), we provide inference-time techniques that we empirically demonstrate reduce, but do not remove, these harms.
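
As a purely illustrative sketch (not the authors' protocol), the "identity prompts" mentioned above typically amount to conditioning the model on a stated social identity before it answers a survey item, then comparing that answer against an identity-free baseline. The query_llm stub, the survey item, and the identity string below are hypothetical placeholders.

# Illustrative identity-prompting sketch; swap query_llm for a real LLM client.
def query_llm(system_prompt: str, user_prompt: str) -> str:
    # Placeholder: a real implementation would call a chat-completion API here.
    return f"[model response to {user_prompt!r} given persona {system_prompt!r}]"

SURVEY_ITEM = "On a scale of 1 to 7, how safe do you feel walking alone at night?"

def survey_response(identity: str | None = None) -> str:
    if identity is None:
        persona = "You are a survey respondent."
    else:
        # Identity prompt: ask the model to answer as a member of the stated group.
        persona = f"You are a survey respondent who identifies as {identity}."
    return query_llm(persona, SURVEY_ITEM)

baseline = survey_response()           # no identity specified
prompted = survey_response("a woman")  # identity-prompted response
print(baseline)
print(prompted)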

Submission history

From: Angelina Wang
[v1] Fri, 2 Feb 2024 21:21:06 UTC (17,771 KB)
[v2] Tue, 1 Oct 2024 00:30:06 UTC (18,168 KB)
[v3] Mon, 3 Feb 2025 16:28:59 UTC (20,725 KB)
