This has results for an IO-bound Insert Benchmark with Postgres on a large server. A blog post about a CPU-bound workload on the same server is here.
tl;dr
- initial load step (l.i0)
- 18 beta1 is 4% faster than 17.4
- create index step (l.x)
- 18 beta1 with io_method=sync and io_method=worker has performance similar to 17.4, while 18 beta1 with io_method=io_uring is 7% faster than 17.4
- write-heavy steps (l.i1, l.i2)
- 18 beta1 and 17.4 have similar performance except for l.i2 with io_method=worker, where 18 beta1 is 40% faster. This is an odd result and I am repeating the benchmark.
- range query steps (qr100, qr500, qr1000)
- 18 beta1 is up to (3%, 2%, 3%) slower than 17.4 with io_method= (sync, workers, io_uring). The issue might be new CPU overhead.
- point query steps (qp100, qp500, qp1000)
- 18 beta1 is up to (3%, 5%, 2%) slower than 17.4 with io_method= (sync, workers, io_uring). The issue might be new CPU overhead.
Builds, configuration and hardware
I compiled Postgres versions 17.4 and 18 beta1 from source using -O2 -fno-omit-frame-pointer. I got the source for 18 beta1 from GitHub using the REL_18_BETA1 tag because I started this benchmark effort a few days before the official release.
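For anyone repeating this, the build can be sketched as below. This is only a sketch: the clone URL, install prefix, and options beyond CFLAGS are assumptions, not a record of what I actually ran.

```
# Sketch of building Postgres 18 beta1 from source at the REL_18_BETA1 tag.
# The install prefix and --depth are assumptions; adjust for your setup.
git clone --branch REL_18_BETA1 --depth 1 https://github.com/postgres/postgres.git
cd postgres
./configure CFLAGS="-O2 -fno-omit-frame-pointer" --prefix=$HOME/pg18beta1
make -j"$(nproc)" && make install
```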
For 18 beta1 I tested 3 configuration files, and they are here:
- conf.diff.cx10b_c32r128 (x10b) - uses io_method=sync
- conf.diff.cx10cw4_c32r128 (x10cw4) - uses io_method=worker with io_workers=4
- conf.diff.cx10d_c32r128 (x10d) - uses io_method=io_uring
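For reference, the AIO-related lines in those config files look like the following sketch. Only the settings named above are shown; everything else in the files is omitted.

```
# conf.diff.cx10b_c32r128 (x10b)
io_method = sync

# conf.diff.cx10cw4_c32r128 (x10cw4)
io_method = worker
io_workers = 4

# conf.diff.cx10d_c32r128 (x10d)
io_method = io_uring
```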
The Benchmark
The benchmark is explained here and is run with 20 clients and 20 tables (one table per client) with 10M rows per table, or 200M rows in total. The database is larger than memory. In some benchmark steps the working set is larger than memory (see the point query steps qp100, qp500, qp1000) while the working set is cached for other benchmark steps (see the range query steps qr100, qr500 and qr1000).
The benchmark steps are:
- l.i0
- insert 10 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
- l.x
- create 3 secondary indexes per table. There is one connection per client.
- l.i1
- use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
- l.i2
- like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table.
- Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
- qr100
- use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
- qp100
- like qr100 except uses point queries on the PK index
- qr500
- like qr100 but the insert and delete rates are increased from 100/s to 500/s
- qp500
- like qp100 but the insert and delete rates are increased from 100/s to 500/s
- qr1000
- like qr100 but the insert and delete rates are increased from 100/s to 1000/s
- qp1000
- like qp100 but the insert and delete rates are increased from 100/s to 1000/s
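The write-heavy steps above can be sketched in code. Below is a minimal sketch of the l.i1 pattern — paired insert and delete transactions that keep the table size roughly constant — using sqlite3 as a stand-in for Postgres; the schema, table and column names are hypothetical, and the real benchmark uses larger row counts and concurrent connections.

```python
import sqlite3

# Minimal sketch of the l.i1 write pattern: one writer inserts rows in batches
# of 50 per transaction while a matching deleter removes the oldest rows at
# the same rate. Uses sqlite3 as a stand-in; the schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, k INTEGER, data TEXT)")
conn.execute("CREATE INDEX t_k ON t (k)")  # stands in for a secondary index

ROWS_PER_TXN = 50
next_id = 0

def insert_txn(conn):
    global next_id
    with conn:  # one transaction of 50 inserts
        for _ in range(ROWS_PER_TXN):
            conn.execute("INSERT INTO t VALUES (?, ?, ?)",
                         (next_id, next_id % 100, "x"))
            next_id += 1

def delete_txn(conn):
    with conn:  # delete the oldest rows at the same rate as the inserts
        conn.execute(
            "DELETE FROM t WHERE id IN (SELECT id FROM t ORDER BY id LIMIT ?)",
            (ROWS_PER_TXN,))

# load some rows first, then run matched insert/delete transactions so the
# table size stays roughly constant
for _ in range(4):
    insert_txn(conn)
for _ in range(2):
    insert_txn(conn)
    delete_txn(conn)

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # the paired insert/delete txns leave the row count unchanged
```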
Results: overview
The performance report is here.
The summary section has 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table, which makes it easy to see where performance changed. The third shows the background insert rate for the benchmark steps that have background inserts, which makes it easy to see which DBMS+configs failed to sustain the target rates.
Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for 18 beta1 and $base is the result for 17.4.
When rQPS is > 1.0 then 18 beta1 is faster than 17.4. When it is < 1.0 then there is a regression. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures:
- insert/s for l.i0, l.i1, l.i2
- indexed rows/s for l.x
- range queries/s for qr100, qr500, qr1000
- point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.
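To make the rQPS arithmetic and the color thresholds concrete, here is a small sketch; the thresholds come from the paragraph above and the QPS numbers in the example are made up.

```python
# Sketch of the relative QPS (rQPS) metric and the color buckets used in the
# summary tables. Thresholds are from the post; the QPS inputs are made up.
def rqps(qps_18beta1, qps_174):
    # rQPS = QPS for $me / QPS for $base, with 17.4 as the base version
    return qps_18beta1 / qps_174

def color(r):
    # red marks a regression of 3% or more, green an improvement of 3% or more
    if r <= 0.97:
        return "red"
    if r >= 1.03:
        return "green"
    return "grey"

print(round(rqps(104.0, 100.0), 2), color(1.04))  # 4% faster -> green
print(round(rqps(95.0, 100.0), 2), color(0.95))   # 5% slower -> red
```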
Results: details
The performance summary is here.
See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 17.4 is the base version and that is compared with results from 18 beta1 using the three configurations explained above:
- x10b with io_method=sync
- x10cw4 with io_method=worker and io_workers=4
- x10d with io_method=io_uring
The summary of the summary is:
- initial load step (l.i0)
- 18 beta1 is 4% faster than 17.4
- From metrics, 18 beta1 has a lower context switch rate (cspq) and sustains a higher write rate to storage (wmbps).
- create index step (l.x)
- 18 beta1 with io_method=sync and io_method=worker has performance similar to 17.4, while 18 beta1 with io_method=io_uring is 7% faster than 17.4
- From metrics, 18 beta1 with io_method=io_uring sustains a higher write rate (wmbps)
- write-heavy steps (l.i1, l.i2)
- 18 beta1 and 17.4 have similar performance except for l.i2 with io_method=worker, where 18 beta1 is 40% faster. This is an odd result and I am repeating the benchmark.
- From metrics for l.i1 and l.i2, in the case where 18 beta1 is 40% faster, there is much less CPU/operation (cpupq).
- range query steps (qr100, qr500, qr1000)
- 18 beta1 is up to (3%, 2%, 3%) slower than 17.4 with io_method= (sync, workers, io_uring)
- From metrics for qr100, qr500 and qr1000 the problem might be more CPU/operation (cpupq)
- In qr1000, both 17.4 and 18 beta1 failed to sustain the target rates of 20,000 inserts/s and 20,000 deletes/s (20 clients at 1000/s each). They were close and did ~18,000/s for each. See the third table here.
- point query steps (qp100, qp500, qp1000)
- 18 beta1 is up to (3%, 5%, 2%) slower than 17.4 with io_method= (sync, workers, io_uring).
- From metrics for qp100, qp500 and qp1000 the problem might be more CPU/operation (cpupq)
- In qp1000, both 17.4 and 18 beta1 failed to sustain the target rates of 20,000 inserts/s and 20,000 deletes/s (20 clients at 1000/s each). They were close and did ~18,000/s for each. See the third table here.
The rQPS values per benchmark step are:
- initial load step (l.i0)
- rQPS for (x10b, x10cw4, x10d) was (1.04, 1.04, 1.04)
- create index step (l.x)
- rQPS for (x10b, x10cw4, x10d) was (0.99, 0.99, 1.07)
- write-heavy steps (l.i1, l.i2)
- for l.i1 the rQPS for (x10b, x10cw4, x10d) was (1.01, 0.99, 1.02)
- for l.i2 the rQPS for (x10b, x10cw4, x10d) was (1.00, 1.40, 0.99)
- range query steps (qr100, qr500, qr1000)
- for qr100 the rQPS for (x10b, x10cw4, x10d) was (0.97, 0.98, 0.97)
- for qr500 the rQPS for (x10b, x10cw4, x10d) was (0.98, 0.98, 0.97)
- for qr1000 the rQPS for (x10b, x10cw4, x10d) was (1.00, 0.99, 0.98)
- point query steps (qp100, qp500, qp1000)
- for qp100 the rQPS for (x10b, x10cw4, x10d) was (1.00, 0.99, 0.98)
- for qp500 the rQPS for (x10b, x10cw4, x10d) was (1.00, 0.95, 0.98)
- for qp1000 the rQPS for (x10b, x10cw4, x10d) was (0.97, 0.95, 0.99)