Text or pixels? On the token efficiency of visual text inputs in multimodal LLMs


[Submitted on 21 Oct 2025 (v1), last revised 22 Oct 2025 (this version, v2)]


Abstract: Large language models (LLMs) and their multimodal variants can now process visual inputs, including images of text. This raises an intriguing question: can we compress textual inputs by feeding them as images to reduce token usage while preserving performance? In this paper, we show that visual text representations are a practical and surprisingly effective form of input compression for decoder LLMs. We exploit the idea of rendering long text inputs as a single image and providing it directly to the model. This dramatically reduces the number of decoder tokens required, offering a new form of input compression. Through experiments on two distinct benchmarks, RULER (long-context retrieval) and CNN/DailyMail (document summarization), we demonstrate that this text-as-image method yields substantial token savings (often nearly half) without degrading task performance.
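To make the text-as-image idea concrete, below is a minimal sketch (not the authors' code) of rendering a long text input as a single image before handing it to a multimodal model. It assumes Pillow for rendering; the model call at the end is a hypothetical placeholder, since the abstract does not specify which multimodal API is used.

```python
# Sketch of the text-as-image input idea: render a long document as one image
# so the multimodal model consumes image tokens instead of text tokens.
# Assumes Pillow; the multimodal model call is a hypothetical placeholder.
import textwrap
from PIL import Image, ImageDraw, ImageFont

def render_text_as_image(text: str, width_chars: int = 100, margin: int = 10) -> Image.Image:
    """Render a long text input as a single white-background image."""
    font = ImageFont.load_default()
    lines = []
    for paragraph in text.split("\n"):
        lines.extend(textwrap.wrap(paragraph, width=width_chars) or [""])
    line_height = 16  # rough line spacing for the default bitmap font
    img_height = margin * 2 + line_height * len(lines)
    img_width = margin * 2 + width_chars * 8  # rough character width estimate
    img = Image.new("RGB", (img_width, img_height), "white")
    draw = ImageDraw.Draw(img)
    y = margin
    for line in lines:
        draw.text((margin, y), line, fill="black", font=font)
        y += line_height
    return img

# Usage (hypothetical model interface): instead of passing `long_document`
# as text tokens, pass the rendered image alongside a short textual prompt.
# long_document = open("article.txt").read()
# image = render_text_as_image(long_document)
# response = multimodal_model.generate(images=[image], prompt="Summarize the document.")
```

Whether the token savings materialize depends on how the vision encoder tiles and tokenizes the image, so the rendering resolution and layout are the knobs that trade off legibility against image-token count.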

Submission history

From: Yanhong Li [view email]
[v1] Tue, 21 Oct 2025 04:07:20 UTC (2,927 KB)
[v2] Wed, 22 Oct 2025 01:54:03 UTC (2,927 KB)
