We had a requirement to display the last N transactions on the ATM screen (a “mini-statement”). The simplest solution is to keep the full list of transactions, order them by date, and take the newest N. But it gets tricky once you realize there are active customers making several transactions per day and inactive ones who use the card only occasionally, perhaps once every couple of months. That means you have to preserve transaction records for a long period, and queries get slower. For a larger customer base, even half a year of data might be challenging to query in real time.
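Just to make that concrete, the naive query would be something along these lines (the transactions table and its columns here are hypothetical, not from the original system):

```sql
-- Naive mini-statement: scan the full history, newest first.
SELECT created_at, amount, description
FROM transactions
WHERE card_id = 12345
ORDER BY created_at DESC
LIMIT 10;
```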
Now we start thinking of keeping only the last N transactions per payment card and doing an INSERT followed by a clever DELETE that discards all the extra transactions (sketched below). In a regular programming language, a better solution would be a ring buffer with a fixed capacity and a “write pointer” that marks where the newest record should be stored, overwriting the oldest one. To implement something similar in the database, we would need locking to prevent concurrent updates between retrieving the “write pointer”, storing the record, and advancing the “write pointer”.
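Here is what the INSERT-plus-DELETE idea could look like with the same hypothetical table (again, not the statement from the original system):

```sql
-- Store the new transaction...
INSERT INTO transactions (card_id, created_at, amount, description)
VALUES (12345, now(), -3.50, 'coffee');

-- ...then throw away everything beyond the newest 10 rows for that card.
DELETE FROM transactions
WHERE card_id = 12345
  AND id NOT IN (
      SELECT id
      FROM transactions
      WHERE card_id = 12345
      ORDER BY created_at DESC
      LIMIT 10
  );
```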
Years ago, I came up with a solution that I still have mixed feelings about. It is nice because it can be done with a single SQL statement, requires no explicit locking, and it works (in production, too). On the other hand, it is a bit fugly, my internal DBA demon complains about the size of the WAL records, and it’s just weird. Old colleagues called it the “shifter” because it works like the bit-shifting operators << and >>.
First, you create a model with as many message/transaction fields as you want to keep:
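The original model isn’t reproduced here; a plain-SQL equivalent with four slots per card could look like this (all names are illustrative):

```sql
-- One row per card, one column per "slot" of the ring buffer.
CREATE TABLE mini_statement (
    card_id  bigint PRIMARY KEY,
    message1 text,
    message2 text,
    message3 text,
    message4 text
);
```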
Every time you want to store a new message, you assign it to the field message1. Whatever value was in message1 is moved to message2, whatever was in message2 is moved to message3, and so on. The value of the last field, message4 in this case, is forgotten.
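This is where the single SQL statement comes in. I don’t have the original statement in front of me, but in PostgreSQL an upsert along these lines would do the whole shift in one go: inside a single UPDATE, every right-hand side refers to the values the row had before the statement ran, so the assignments don’t step on each other.

```sql
-- Insert the row on first use, otherwise shift all slots down by one.
-- mini_statement.messageN on the right-hand side is the *old* value;
-- EXCLUDED.message1 is the new message we are trying to insert.
INSERT INTO mini_statement (card_id, message1)
VALUES (12345, 'the new message')
ON CONFLICT (card_id) DO UPDATE
SET message1 = EXCLUDED.message1,
    message2 = mini_statement.message1,
    message3 = mini_statement.message2,
    message4 = mini_statement.message3;
```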
And here’s how it works. We start with an empty ring buffer:
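With the sketch above, “empty” simply means there is no row for this card yet:

```sql
SELECT message1, message2, message3, message4
FROM mini_statement
WHERE card_id = 12345;
-- (0 rows)
```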
We add the first message and verify it is stored correctly:
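Running the upsert from above with 'message 1' creates the row and fills the first slot; the expected state would be:

```sql
INSERT INTO mini_statement (card_id, message1)
VALUES (12345, 'message 1')
ON CONFLICT (card_id) DO UPDATE
SET message1 = EXCLUDED.message1,
    message2 = mini_statement.message1,
    message3 = mini_statement.message2,
    message4 = mini_statement.message3;

SELECT message1, message2, message3, message4 FROM mini_statement WHERE card_id = 12345;
--  message1  | message2 | message3 | message4
-- -----------+----------+----------+----------
--  message 1 |          |          |
```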
We add the second message and verify it is stored correctly:
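With the same sketch, the previous message should move down one slot:

```sql
-- Same upsert, now with 'message 2':
SELECT message1, message2, message3, message4 FROM mini_statement WHERE card_id = 12345;
--  message1  | message2  | message3 | message4
-- -----------+-----------+----------+----------
--  message 2 | message 1 |          |
```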
We add the third message and verify it is stored correctly:
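Again, the expected state under the sketch:

```sql
-- Same upsert, now with 'message 3':
SELECT message1, message2, message3, message4 FROM mini_statement WHERE card_id = 12345;
--  message1  | message2  | message3  | message4
-- -----------+-----------+-----------+----------
--  message 3 | message 2 | message 1 |
```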
We add the fourth message and verify it is stored correctly:
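Now the buffer is full:

```sql
-- Same upsert, now with 'message 4':
SELECT message1, message2, message3, message4 FROM mini_statement WHERE card_id = 12345;
--  message1  | message2  | message3  | message4
-- -----------+-----------+-----------+-----------
--  message 4 | message 3 | message 2 | message 1
```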
We add the fifth message and verify it is stored correctly and the first message has been discarded:
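With the sketch, the oldest value simply falls off the end, which is the whole point of the shifter:

```sql
-- Same upsert, now with 'message 5'; 'message 1' is gone.
SELECT message1, message2, message3, message4 FROM mini_statement WHERE card_id = 12345;
--  message1  | message2  | message3  | message4
-- -----------+-----------+-----------+-----------
--  message 5 | message 4 | message 3 | message 2
```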
So? Is this stupid or smart? After all these years I still have mixed feelings about it.