Exploring Task Performance with Interpretable Models via Sparse Auto-Encoders


[Submitted on 8 Jul 2025]


Abstract: Large Language Models (LLMs) are traditionally viewed as black-box algorithms, which reduces trustworthiness and obscures potential approaches to improving performance on downstream tasks. In this work, we apply an effective LLM decomposition method using a dictionary-learning approach with sparse autoencoders, which extracts monosemantic features from polysemantic LLM neurons. Remarkably, this allows us to identify model-internal misunderstandings and to automatically reformulate prompts with additional annotations that improve the LLM's interpretation. Moreover, the approach yields significant performance improvements on downstream tasks such as mathematical reasoning and metaphor detection.
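To make the dictionary-learning idea concrete, below is a minimal PyTorch sketch of a sparse autoencoder of the kind the abstract describes: a dense activation vector is encoded into an overcomplete, non-negative feature code, and an L1 sparsity penalty pushes each learned feature toward a single interpretable meaning. The class name, dimensions, and loss coefficient are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Dictionary-learning sparse autoencoder: encodes a dense activation
    vector into an overcomplete sparse code, then reconstructs the input."""

    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)  # features = ReLU(W_e x + b_e)
        self.decoder = nn.Linear(d_dict, d_model)  # reconstruction = W_d f + b_d

    def forward(self, x: torch.Tensor):
        features = torch.relu(self.encoder(x))     # sparse, non-negative code
        recon = self.decoder(features)
        return recon, features

def sae_loss(x, recon, features, l1_coeff=1e-3):
    # Reconstruction error plus an L1 penalty that encourages sparsity,
    # steering individual features toward monosemantic directions.
    mse = (recon - x).pow(2).mean()
    sparsity = features.abs().mean()
    return mse + l1_coeff * sparsity

# Hypothetical usage: decompose 768-dim LLM activations into a
# 16x-overcomplete dictionary of candidate features.
sae = SparseAutoencoder(d_model=768, d_dict=768 * 16)
x = torch.randn(32, 768)                           # stand-in for LLM activations
recon, features = sae(x)
loss = sae_loss(x, recon, features)
loss.backward()
```

Once trained, highly active dictionary features can be inspected per prompt; features that fire on a misreading of the input are the kind of signal the authors use to trigger prompt reformulation.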

Submission history

From: Shun Wang
[v1] Tue, 8 Jul 2025 22:17:52 UTC (8,055 KB)
