TrojanStego: Your Language Model Can Be a Steganographic Agent

1 day ago 5

[Submitted on 26 May 2025 (v1), last revised 27 May 2025 (this version, v2)]

View PDF

Abstract:As large language models (LLMs) become integrated into sensitive workflows, concerns grow over their potential to leak confidential information. We propose TrojanStego, a novel threat model in which an adversary fine-tunes an LLM to embed sensitive context information into natural-looking outputs via linguistic steganography, without requiring explicit control over inference inputs. We introduce a taxonomy outlining risk factors for compromised LLMs, and use it to evaluate the risk profile of the threat. To implement TrojanStego, we propose a practical encoding scheme based on vocabulary partitioning learnable by LLMs via fine-tuning. Experimental results show that compromised models reliably transmit 32-bit secrets with 87% accuracy on held-out prompts, reaching over 97% accuracy using majority voting across three generations. Further, they maintain high utility, can evade human detection, and preserve coherence. These results highlight a new class of LLM data exfiltration attacks that are passive, covert, practical, and dangerous.

Submission history

From: Dominik Meier [view email]
[v1] Mon, 26 May 2025 15:20:51 UTC (329 KB)
[v2] Tue, 27 May 2025 07:24:52 UTC (329 KB)

Read Entire Article