An S3-backed counter that scales by sharding updates across many small objects and periodically compacting them into a single base total. The design makes every operation optimistic and cheap, a good fit for high-write, low-read workloads such as analytics counters, rate limits, or metering.
Shard writes persist indefinitely until compaction folds them into the base total. Use the provided Compactor helper to run compaction in the background.
You can also trigger compaction manually via Compactor.Trigger() (for example after a burst of writes) or by calling counter.Compact yourself.
For workloads with very high write rates, use BufferedCounter to batch increments in memory before flushing to S3.
Note: Since BufferedCounter accumulates increments in-memory, any data not yet flushed to S3 will be lost if the service crashes. This is acceptable for approximate metrics (analytics, rate limits, etc.) but not for scenarios requiring guaranteed durability.
Data reaches S3 only when a flush happens. Buffered increments are held in memory until one of the following occurs:
- The periodic flush timer triggers (based on flushInterval)
- A buffer reaches maxBufferSize and auto-flushes
- Flush() is called explicitly
- Stop() is called for graceful shutdown
Redis is a popular choice for counters, but sharded-counter uses S3 for fundamentally different workload characteristics:
| | S3 (sharded-counter) | Redis |
|---|---|---|
| Durability | 11 nines (99.999999999%) | In-memory; loses data on restart unless persisted |
| Scalability | Automatic, unlimited | Requires horizontal scaling & complex sharding |
| Cost per write | $0.000005 per operation | Hourly instance cost ($0.20+/hour minimum) |
| Storage persistence | Permanent at minimal cost | Memory expensive (~$0.05/GB-month) |
| Best for | Append-heavy, high-volume, low-read workloads | Real-time access, frequently-read data |
When to use S3-backed counters:
- Analytics, metrics, and event counting
- Rate limiting and quota tracking
- Audit logs and activity tracking
- Workloads where approximate eventual consistency is acceptable
When to use Redis:
- Low-latency counters requiring sub-millisecond reads
- Session state and caching
- Real-time leaderboards or rankings
Current S3 Standard pricing (Oct 2025):
| Item | Price |
|---|---|
| Storage | $0.023/GB/month |
| PUT/POST/LIST | $0.005 per 1,000 requests |
| GET/SELECT | $0.0004 per 1,000 requests |
| Data Transfer Out | $0.09/GB (first 10TB), $0.085/GB (next 40TB) |
100 million events/month distributed across 50 shards, with daily compaction.
1 billion events/month (roughly 385 events/second on average), 64 shards, buffered with a 5s flush.
Key Insight: Even with high volume, S3 is cost-competitive with Redis while providing unlimited persistence and durability.
The package ships with an in-memory stub that demonstrates how to satisfy the Client interface. When writing your own tests, follow the same pattern:
- Honour S3 conditional headers (If-Match / If-None-Match) to exercise optimistic updates.
- Provide deterministic behaviour for listing and deleting to model compaction.
See counter_test.go for concrete examples that cover Ensure, Increment, approximations, and compaction.