Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity


[Submitted on 1 Oct 2025 (v1), last revised 10 Oct 2025 (this version, v3)]


Abstract: Post-training alignment often reduces LLM diversity, leading to a phenomenon known as mode collapse. Unlike prior work that attributes this effect to algorithmic limitations, we identify a fundamental, pervasive data-level driver: typicality bias in preference data, whereby annotators systematically favor familiar text, consistent with well-established findings in cognitive psychology. We formalize this bias theoretically, verify it empirically on preference datasets, and show that it plays a central role in mode collapse. Motivated by this analysis, we introduce Verbalized Sampling (VS), a simple, training-free prompting strategy to circumvent mode collapse. VS prompts the model to verbalize a probability distribution over a set of responses (e.g., "Generate 5 jokes about coffee and their corresponding probabilities"). Comprehensive experiments show that VS significantly improves performance across creative writing (poems, stories, jokes), dialogue simulation, open-ended QA, and synthetic data generation, without sacrificing factual accuracy or safety. For instance, in creative writing, VS increases diversity by 1.6-2.1x over direct prompting. We further observe an emergent trend: more capable models benefit more from VS. In sum, our work provides a new data-centric perspective on mode collapse and a practical inference-time remedy that helps unlock pre-trained generative diversity.
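Because VS is a pure prompting strategy, it can be tried with any chat model. Below is a minimal sketch using the OpenAI Python client; the exact prompt wording, the JSON schema, and the `verbalized_sampling` helper are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of Verbalized Sampling (VS): ask the model to verbalize a
# probability distribution over candidate responses, then sample from it.
# NOTE: prompt wording and schema are assumptions for illustration.
import json
import random

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def verbalized_sampling(topic: str, k: int = 5, model: str = "gpt-4o") -> list[dict]:
    """Prompt the model for k candidates with verbalized probabilities."""
    prompt = (
        f"Generate {k} jokes about {topic} and their corresponding probabilities. "
        "Return only a JSON array of objects with keys 'text' and 'probability', "
        "with the probabilities summing to 1."
    )
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the model returns bare JSON; real usage may need to strip
    # markdown fences or validate the output.
    return json.loads(reply.choices[0].message.content)


# Draw one response according to the model's own verbalized distribution.
candidates = verbalized_sampling("coffee")
texts = [c["text"] for c in candidates]
weights = [c["probability"] for c in candidates]
print(random.choices(texts, weights=weights, k=1)[0])
```

Sampling from the verbalized distribution (rather than always taking the most probable candidate) is what recovers diversity relative to direct prompting.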

Submission history

From: Jiayi Zhang
[v1] Wed, 1 Oct 2025 17:55:37 UTC (7,784 KB)
[v2] Mon, 6 Oct 2025 16:29:44 UTC (28,095 KB)
[v3] Fri, 10 Oct 2025 17:38:52 UTC (9,987 KB)
