Pre-Training Large Memory Language Models with Internal and External Knowledge


[Submitted on 21 May 2025 (v1), last revised 2 Jul 2025 (this version, v2)]


Abstract: Neural language models are black boxes -- both linguistic patterns and factual knowledge are distributed across billions of opaque parameters. This entangled encoding makes it difficult to reliably inspect, verify, or update specific facts. We propose a new class of language models, Large Memory Language Models (LMLMs), with a pre-training recipe that stores factual knowledge in both internal weights and an external database. Our approach strategically masks externally retrieved factual values from the training loss, thereby teaching the model to perform targeted lookups rather than relying on memorization in model weights. Our experiments demonstrate that LMLMs achieve competitive performance compared to significantly larger, knowledge-dense LLMs on standard benchmarks, while offering the advantages of explicit, editable, and verifiable knowledge bases. This work represents a fundamental shift in how language models interact with and manage factual knowledge.
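The key training-time mechanism described above is excluding externally retrieved factual values from the language-modeling loss. The sketch below illustrates one way such loss masking could be implemented for a causal LM; it is not the authors' code, and names like `fact_value_mask` and `masked_lm_loss` are illustrative assumptions.

```python
# Minimal sketch (assumed implementation, not from the paper): mask tokens that
# belong to externally retrieved factual values out of the next-token loss, so
# the model is rewarded for issuing lookups rather than memorizing the values.
import torch
import torch.nn.functional as F

def masked_lm_loss(logits, labels, fact_value_mask):
    """Causal-LM cross-entropy that skips retrieved factual-value positions.

    logits:          (batch, seq_len, vocab) model outputs
    labels:          (batch, seq_len) input token ids
    fact_value_mask: (batch, seq_len) bool, True where the token is part of an
                     externally retrieved factual value (excluded from loss)
    """
    # Shift so position t predicts token t+1, as in standard causal LM training.
    shift_logits = logits[:, :-1, :]
    shift_labels = labels[:, 1:]
    shift_mask = fact_value_mask[:, 1:]

    # Positions to exclude get the ignore_index label and contribute no gradient.
    masked_labels = shift_labels.masked_fill(shift_mask, -100)
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        masked_labels.reshape(-1),
        ignore_index=-100,
    )
```

In this sketch the non-factual tokens (including any tokens that trigger the external lookup) are trained normally, while the retrieved values themselves carry no loss, which is one plausible way to realize the "lookup instead of memorize" behavior the abstract describes.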

Submission history

From: Linxi Zhao
[v1] Wed, 21 May 2025 19:26:03 UTC (1,708 KB)
[v2] Wed, 2 Jul 2025 04:16:51 UTC (1,709 KB)
