Compressing for the Browser in Go


When a modern browser sends an HTTP request to a web server, it includes the following header:

Accept-Encoding: gzip, deflate, br, zstd

This tells the server that it may compress the response with any of the listed algorithms: gzip and deflate (the two oldest, and very similar), brotli (br), and zstandard (zstd).
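Before compressing anything, the server has to check what the client advertised. Here is a minimal sketch of that negotiation; `pickEncoding` is a hypothetical helper (not from the article's code), and it ignores quality values (`q=...`) that a full implementation should honor:

```go
package main

import (
	"fmt"
	"strings"
)

// pickEncoding chooses a Content-Encoding based on the client's
// Accept-Encoding header, preferring zstd, then brotli, then gzip.
// This is a simplified sketch: it ignores q-values and wildcards.
func pickEncoding(acceptEncoding string) string {
	for _, enc := range []string{"zstd", "br", "gzip"} {
		for _, part := range strings.Split(acceptEncoding, ",") {
			if strings.TrimSpace(part) == enc {
				return enc
			}
		}
	}
	return "" // no match: send the response uncompressed (identity)
}

func main() {
	fmt.Println(pickEncoding("gzip, deflate, br, zstd")) // modern browser
	fmt.Println(pickEncoding("gzip, deflate"))           // older client
}
```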

If your server is written in Go, which algorithm should you use?

I wondered about that myself, so I decided to test it. I measured compression time and the size of the compressed data.

In benchmarking it’s important to pick a representative sample.

I’m currently working on Edna - a scratchpad and note-taker for developers and power users, like myself (this very article was written in Edna).

It’s an SPA written in Svelte. After bundling and optimizing I get a ~640 kB index.js, which is a good test case: real-life, production, optimized JavaScript.

Here are the results of the compression test:

```
found index-YpZ0JZes.js of size 639861 (640 kB)
compressing with gzip
compressing with brotli: default (level 6)
compressing with brotli: best (level 11)
compressing with zstd level: better (3)
compressing with zstd level: best (4)
          gzip: 200746 (201 kB) in 12 ms
brotli default: 206298 (206 kB) in 18 ms
   brotli best: 183887 (184 kB) in 977 ms
   zstd better: 106458 (106 kB) in 3 ms
     zstd best:  93966 (94 kB) in 14 ms
```

the winner is: zstd level 3

zstd level 3 is the clear winner: it achieves a much better compression ratio than gzip and brotli, and it's also much faster.

If you want the absolute smallest files, zstd level 4 has a slight edge over level 3, but at the cost of a much higher compression time. (Note that "3" and "4" here are the klauspost encoder's level constants, SpeedBetterCompression and SpeedBestCompression, not upstream zstd's 1–22 scale.)

the code

We use the following Go libraries:

  • github.com/andybalholm/brotli for brotli
  • github.com/klauspost/compress for gzip and zstd
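Assuming a module-aware Go project, both libraries can be added the usual way:

```shell
go get github.com/andybalholm/brotli
go get github.com/klauspost/compress
```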

The code of benchmark function:

```go
func benchFileCompress(path string) {
	d, err := os.ReadFile(path)
	panicIfErr(err)
	var results []benchResult

	gzipCompress := func(d []byte) []byte {
		var buf bytes.Buffer
		w, err := gzip.NewWriterLevel(&buf, gzip.BestCompression)
		panicIfErr(err)
		_, err = w.Write(d)
		panicIfErr(err)
		err = w.Close()
		panicIfErr(err)
		return buf.Bytes()
	}

	zstdCompress := func(d []byte, level zstd.EncoderLevel) []byte {
		var buf bytes.Buffer
		w, err := zstd.NewWriter(&buf, zstd.WithEncoderLevel(level), zstd.WithEncoderConcurrency(1))
		panicIfErr(err)
		_, err = w.Write(d)
		panicIfErr(err)
		err = w.Close()
		panicIfErr(err)
		return buf.Bytes()
	}

	brCompress := func(d []byte, level int) []byte {
		var dst bytes.Buffer
		w := brotli.NewWriterLevel(&dst, level)
		_, err := w.Write(d)
		panicIfErr(err)
		err = w.Close()
		panicIfErr(err)
		return dst.Bytes()
	}

	var cd []byte

	logf("compressing with gzip\n")
	t := time.Now()
	cd = gzipCompress(d)
	push(&results, benchResult{"gzip", cd, time.Since(t)})

	logf("compressing with brotli: default (level 6)\n")
	t = time.Now()
	cd = brCompress(d, brotli.DefaultCompression)
	push(&results, benchResult{"brotli default", cd, time.Since(t)})

	logf("compressing with brotli: best (level 11)\n")
	t = time.Now()
	cd = brCompress(d, brotli.BestCompression)
	push(&results, benchResult{"brotli best", cd, time.Since(t)})

	logf("compressing with zstd level: better (3)\n")
	t = time.Now()
	cd = zstdCompress(d, zstd.SpeedBetterCompression)
	push(&results, benchResult{"zstd better", cd, time.Since(t)})

	logf("compressing with zstd level: best (4)\n")
	t = time.Now()
	cd = zstdCompress(d, zstd.SpeedBestCompression)
	push(&results, benchResult{"zstd best", cd, time.Since(t)})

	for _, r := range results {
		logf("%14s: %6d (%s) in %s\n", r.name, len(r.data), humanSize(len(r.data)), r.dur)
	}
}
```
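The benchmark calls a few small helpers (`panicIfErr`, `push`, `logf`, `humanSize`, and the `benchResult` type) that aren't shown. Here is one plausible way to define them; these are assumptions consistent with the call sites, not the author's exact code:

```go
package main

import (
	"fmt"
	"time"
)

// benchResult holds one row of the benchmark output.
type benchResult struct {
	name string
	data []byte
	dur  time.Duration
}

func panicIfErr(err error) {
	if err != nil {
		panic(err)
	}
}

// push appends to a slice through a pointer, matching the call sites above.
func push[T any](s *[]T, v T) {
	*s = append(*s, v)
}

func logf(format string, args ...any) {
	fmt.Printf(format, args...)
}

// humanSize formats a byte count using decimal kB, e.g. 639861 -> "640 kB".
func humanSize(n int) string {
	if n < 1000 {
		return fmt.Sprintf("%d B", n)
	}
	return fmt.Sprintf("%.0f kB", float64(n)/1000)
}

func main() {
	fmt.Println(humanSize(639861))
}
```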