Violent Tendencies in LLMs: Analysis via Behavioral Vignettes


[Submitted on 25 Jun 2025]


Abstract: Large language models (LLMs) are increasingly proposed for detecting and responding to violent content online, yet their ability to reason about morally ambiguous, real-world scenarios remains underexamined. We present the first study to evaluate LLMs using a validated social science instrument designed to measure human response to everyday conflict, namely the Violent Behavior Vignette Questionnaire (VBVQ). To assess potential bias, we introduce persona-based prompting that varies race, age, and geographic identity within the United States. Six LLMs developed across different geopolitical and organizational contexts are evaluated under a unified zero-shot setting. Our study reveals two key findings: (1) LLMs' surface-level text generation often diverges from their internal preference for violent responses; (2) their violent tendencies vary across demographics, frequently contradicting established findings in criminology, social science, and psychology.
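To make the persona-based, zero-shot setup concrete, here is a minimal sketch of how such prompts could be composed. The demographic categories, the example vignette, and the prompt wording below are illustrative assumptions, not items from the actual VBVQ instrument or the paper's exact protocol.

```python
# Minimal sketch of persona-based zero-shot prompt construction.
# All persona attributes and the vignette text are placeholders.
from itertools import product

RACES = ["White", "Black", "Hispanic", "Asian"]            # assumed categories
AGES = [21, 35, 50]                                         # assumed age values
REGIONS = ["the rural Midwest", "a large East Coast city"]  # assumed regions

# Placeholder conflict scenario; the real study uses validated VBVQ items.
VIGNETTE = (
    "Someone cuts in front of you in a long line and refuses to move "
    "when you object. What would you do?"
)

def build_prompt(race: str, age: int, region: str, vignette: str) -> str:
    """Compose a zero-shot prompt that assigns the model a demographic persona."""
    persona = f"You are a {age}-year-old {race} person living in {region}."
    return f"{persona}\n\nScenario: {vignette}\n\nDescribe how you would respond."

if __name__ == "__main__":
    # Enumerate every persona combination and print the resulting prompts,
    # which would then be sent to each of the evaluated LLMs.
    for race, age, region in product(RACES, AGES, REGIONS):
        print(build_prompt(race, age, region, VIGNETTE))
        print("-" * 60)
```

Holding the vignette fixed while sweeping the persona attributes is what lets the study attribute differences in the models' responses to the demographic identity in the prompt rather than to the scenario itself.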

Submission history

From: Yanjun Gao
[v1] Wed, 25 Jun 2025 20:43:04 UTC (2,465 KB)
