Universal pre-training by iterated random computation


[Submitted on 24 Jun 2025]


Abstract: We investigate the use of randomly generated data for pre-training a model. We justify this approach theoretically from the perspective of algorithmic complexity, building on recent research showing that sequence models can be trained to approximate Solomonoff induction. We derive similar, but complementary, theoretical results. We show empirically that synthetically generated data can be used to pre-train a model before any real data is seen. We replicate earlier results showing that models trained this way exhibit zero-shot in-context learning across a variety of datasets, and that this performance improves with scale. We extend these results to real-world data, and show that finetuning such a model after pre-training offers faster convergence and better generalization.
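
The abstract describes the overall recipe rather than a specific implementation: sample random computations, run them to produce synthetic sequences, and pre-train a sequence model on those sequences before any real data is involved. The sketch below is a minimal illustration of that idea, not the paper's code; the generator (a random token-level transition table), the model, and all names and hyperparameters are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of the idea described in the abstract:
# sample random "programs" (here, random token-level transition tables), iterate
# them to produce synthetic sequences, and pre-train a small autoregressive model
# on next-token prediction over those sequences.

import numpy as np
import torch
import torch.nn as nn

VOCAB, SEQ_LEN, BATCH = 64, 128, 32
rng = np.random.default_rng(0)

def random_sequence_batch(batch=BATCH, seq_len=SEQ_LEN, vocab=VOCAB):
    """Generate sequences by iterating a randomly sampled computation.

    Each batch element gets its own random transition table mapping the two
    previous tokens to the next token, so the data is structured (computable)
    even though its content is entirely random.
    """
    out = np.empty((batch, seq_len), dtype=np.int64)
    for b in range(batch):
        table = rng.integers(0, vocab, size=(vocab, vocab))  # random "program"
        seq = list(rng.integers(0, vocab, size=2))           # random seed tokens
        while len(seq) < seq_len:
            seq.append(table[seq[-2], seq[-1]])              # iterate the computation
        out[b] = seq[:seq_len]
    return torch.from_numpy(out)

class TinyLM(nn.Module):
    """A small causal sequence model (a stand-in for the paper's model)."""
    def __init__(self, vocab=VOCAB, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.head(h)

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Pre-training loop: the model never sees real data here, only the output of
# freshly sampled random computations.
for step in range(200):
    batch = random_sequence_batch()
    logits = model(batch[:, :-1])
    loss = loss_fn(logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The same model could then be finetuned on real sequences, which the abstract
# reports gives faster convergence and better generalization.
```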

Submission history

From: Peter Bloem
[v1] Tue, 24 Jun 2025 23:36:35 UTC (420 KB)
