Mechanistic Interpretability of Emotion Inference in Large Language Models


[Submitted on 8 Feb 2025]


Abstract: Large language models (LLMs) show promising capabilities in predicting human emotions from text. However, the mechanisms through which these models process emotional stimuli remain largely unexplored. Our study addresses this gap by investigating how autoregressive LLMs infer emotions, showing that emotion representations are functionally localized to specific regions in the model. Our evaluation includes diverse model families and sizes and is supported by robustness checks. We then show that the identified representations are psychologically plausible by drawing on cognitive appraisal theory, a well-established psychological framework positing that emotions emerge from evaluations (appraisals) of environmental stimuli. By causally intervening on construed appraisal concepts, we steer the generation and show that the outputs align with theoretical and intuitive expectations. This work highlights a novel way to causally intervene and precisely shape emotional text generation, potentially benefiting safety and alignment in sensitive affective domains.
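The causal intervention the abstract describes can be illustrated with a generic activation-steering sketch. The following is a minimal example and not the authors' code: the model choice (gpt2), layer index, steering coefficient, and the contrast prompts meant to isolate an appraisal dimension (here, perceived control) are all illustrative assumptions, not details from the paper.

# Minimal activation-steering sketch: derive a direction for one appraisal
# concept from a contrast pair, then add it to a layer's hidden states
# during generation. All specifics below are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper evaluates multiple model families
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER = 6  # hypothetical layer where the appraisal representation is localized

def hidden_at_layer(text: str) -> torch.Tensor:
    """Hidden state of the last token at the output of block LAYER."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so index LAYER + 1
    # corresponds to the output of transformer block LAYER.
    return out.hidden_states[LAYER + 1][0, -1]

# Contrast pair intended to isolate a single appraisal dimension
# (high vs. low perceived control); the sentences are invented examples.
steer_vec = hidden_at_layer("I am fully in control of this situation.") \
          - hidden_at_layer("I have no control over what happens next.")

COEFF = 4.0  # steering strength; would be tuned empirically in practice

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # returning a new tuple from the hook replaces the block's output.
    hidden = output[0] + COEFF * steer_vec.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
try:
    prompt = "When I read the message, I felt"
    ids = tok(prompt, return_tensors="pt")
    gen = model.generate(**ids, max_new_tokens=30, do_sample=False)
    print(tok.decode(gen[0], skip_special_tokens=True))
finally:
    handle.remove()  # restore the unmodified model

Flipping the sign of COEFF would steer toward the opposite pole of the appraisal dimension; comparing generations across coefficients is one way to check that the intervention shifts emotional content in the theoretically expected direction.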

Submission history

From: Ala Nekouvaght Tak
[v1] Sat, 8 Feb 2025 08:11:37 UTC (1,174 KB)
