LLMZip: Lossless Text Compression Using Large Language Models


[Submitted on 6 Jun 2023 (v1), last revised 26 Jun 2023 (this version, v2)]


Abstract: We provide new estimates of an asymptotic upper bound on the entropy of English using the large language model LLaMA-7B as a predictor for the next token given a window of past tokens. This estimate is significantly smaller than currently available estimates in Cover and King (1978) and Lutati et al. (2023). A natural byproduct is an algorithm for lossless compression of English text which combines the prediction from the large language model with a lossless compression scheme. Preliminary results from limited experiments suggest that our scheme outperforms state-of-the-art text compression schemes such as BSC, ZPAQ, and paq8h.
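The abstract describes the general recipe at a high level: run an autoregressive language model over the text, use its next-token predictions to transform the token stream into something highly redundant, and hand that to a standard lossless compressor; the same per-token log-probabilities give an empirical upper bound on the entropy of English. The sketch below illustrates one simple rank-based variant of this idea; it is not the paper's exact method. The choice of GPT-2 (as a small stand-in for LLaMA-7B), the use of zlib as the back-end compressor, and the two-byte rank encoding are all assumptions made for illustration.

```python
# Illustrative sketch of LLM-assisted lossless compression via token ranks.
# Assumptions (not taken from the paper): GPT-2 via Hugging Face `transformers`
# stands in for LLaMA-7B, and zlib is the downstream lossless compressor.
import zlib
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()


def ranks_of_tokens(text: str) -> list[int]:
    """For each token (after the first), record its rank in the model's
    predicted next-token distribution given the preceding tokens. A good
    predictor yields mostly small ranks, which a generic compressor can
    squeeze effectively."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    ranks = []
    with torch.no_grad():
        for i in range(1, len(ids)):
            logits = model(ids[:i].unsqueeze(0)).logits[0, -1]
            order = torch.argsort(logits, descending=True)
            ranks.append(int((order == ids[i]).nonzero().item()))
    return ranks


def compress(text: str) -> bytes:
    """Map the text to a rank sequence and deflate it. Decompression would
    rerun the same model and invert the ranks back into token ids."""
    ranks = ranks_of_tokens(text)
    # GPT-2's vocabulary (~50k) fits in two bytes per rank; this is a crude
    # encoding chosen for simplicity, not the paper's coding scheme.
    payload = b"".join(r.to_bytes(2, "big") for r in ranks)
    return zlib.compress(payload, level=9)


if __name__ == "__main__":
    sample = "The quick brown fox jumps over the lazy dog."
    print(len(sample.encode()), "->", len(compress(sample)),
          "bytes (first token and model not counted)")
```

Note that the compressed size excludes the model itself, which both encoder and decoder must share, and that the same per-token probabilities used here also yield the cross-entropy estimate the abstract refers to, roughly the average of the negative base-2 log-probabilities assigned to the observed tokens.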

Submission history

From: Krishna Narayanan [view email]
[v1] Tue, 6 Jun 2023 22:42:00 UTC (22 KB)
[v2] Mon, 26 Jun 2023 18:03:12 UTC (23 KB)
