Synthetic Bootstrapped Pretraining


[Submitted on 17 Sep 2025 (v1), last revised 24 Sep 2025 (this version, v2)]


Abstract: We introduce Synthetic Bootstrapped Pretraining (SBP), a language model (LM) pretraining procedure that first learns a model of relations between documents in the pretraining dataset and then leverages it to synthesize a vast new corpus for joint training. While standard pretraining teaches LMs to learn causal correlations among tokens within a single document, it is not designed to efficiently model the rich, learnable inter-document correlations that can potentially lead to better performance. We validate SBP by designing a compute-matched pretraining setup and pretraining a 3B-parameter model on up to 1T tokens from scratch. We find that SBP consistently improves upon a strong repetition baseline and delivers a significant fraction of the performance improvement attainable by an oracle upper bound with access to 20x more unique data. Qualitative analysis reveals that the synthesized documents go beyond mere paraphrases -- SBP first abstracts a core concept from the seed material and then crafts a new narration on top of it. Beyond its strong empirical performance, SBP admits a natural Bayesian interpretation: the synthesizer implicitly learns to abstract the latent concepts shared between related documents.
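To make the described data flow concrete, below is a minimal sketch of the SBP pipeline as the abstract outlines it: pair related documents from the seed corpus, train a synthesizer on those pairs, sample synthetic documents, and mix them with the real corpus for joint pretraining. All names (pair_related_documents, Synthesizer, build_sbp_corpus) and the placeholder pairing/sampling logic are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of the SBP data flow described in the abstract.
# The retrieval method, synthesizer architecture, and training recipe
# below are placeholders, not the paper's implementation.

from dataclasses import dataclass
from typing import List, Tuple
import random


@dataclass
class Document:
    text: str


def pair_related_documents(corpus: List[Document]) -> List[Tuple[Document, Document]]:
    """Placeholder for learning inter-document relations.

    A real system would retrieve related documents (e.g. nearest
    neighbors in an embedding space); here we simply pair documents
    that share at least one token, as a stand-in.
    """
    pairs = []
    for i, a in enumerate(corpus):
        for b in corpus[i + 1:]:
            if set(a.text.split()) & set(b.text.split()):
                pairs.append((a, b))
    return pairs


class Synthesizer:
    """Placeholder for a conditional model p(d2 | d1) trained on related pairs."""

    def fit(self, pairs: List[Tuple[Document, Document]]) -> None:
        # Stand-in for training: just remember the target documents.
        self._targets = [b for _, b in pairs]

    def synthesize(self, seed: Document) -> Document:
        # Stand-in for sampling: a real synthesizer would abstract the
        # seed's core concept and write a new narration on top of it.
        base = random.choice(self._targets)
        return Document(f"[synthetic] {base.text}")


def build_sbp_corpus(corpus: List[Document], n_synthetic: int) -> List[Document]:
    """Learn document relations, synthesize new documents, and return
    the real + synthetic mixture used for joint pretraining."""
    synth = Synthesizer()
    synth.fit(pair_related_documents(corpus))
    synthetic = [synth.synthesize(random.choice(corpus)) for _ in range(n_synthetic)]
    return corpus + synthetic


if __name__ == "__main__":
    seed_corpus = [
        Document("transformers process tokens with attention"),
        Document("attention lets transformers relate distant tokens"),
        Document("gradient descent updates model parameters"),
    ]
    for doc in build_sbp_corpus(seed_corpus, n_synthetic=2):
        print(doc.text)
```

The key design point the sketch tries to capture is that the synthesizer is trained on pairs of related documents rather than single documents, so that sampling from it injects inter-document structure that standard single-document pretraining does not exploit.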

Submission history

From: Zitong Yang [view email]
[v1] Wed, 17 Sep 2025 22:28:27 UTC (234 KB)
[v2] Wed, 24 Sep 2025 06:04:40 UTC (235 KB)
