A high-performance message queue and worker pool system for Go that simplifies concurrent task processing. Through Go generics, it provides type safety without sacrificing performance.
With VarMQ, you can process messages asynchronously, handle errors properly, store data persistently, and scale across systems using adapters, all through a clean, intuitive API that feels natural to Go developers.
- ⚡ High performance: Optimized for high throughput with minimal overhead, even under heavy load. See the benchmarks below.
- 🛠️ Variants of queue types:
- Standard queues for in-memory processing
- Priority queues for importance-based ordering
- Persistent queues for durability across restarts
- Distributed queues for processing across multiple systems
- 🧩 Worker abstractions:
- NewWorker - Fire-and-forget operations (most performant)
- NewErrWorker - Returns only an error (when a result isn't needed)
- NewResultWorker - Returns a result and an error
- 🚦 Concurrency control: Fine-grained control over worker pool size, dynamic tuning, and idle worker management
- 🧬 Multi Queue Binding: Bind multiple queues to a single worker
- 💾 Persistence: Support for durable storage through adapter interfaces
- 🌐 Distribution: Scale processing across multiple instances via adapter interfaces
- 🧩 Extensible: Build your own storage adapters by implementing VarMQ's internal queue interfaces
You can use a priority queue to order jobs by importance; a lower number means a higher priority.
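For illustration, here is a minimal sketch of the idea. The exact binding and wait method names (`BindPriorityQueue`, `WaitUntilFinished`) and the two-argument `Add` are assumptions to be checked against the package documentation:

```go
package main

import (
	"fmt"

	"github.com/goptics/varmq"
)

func main() {
	// Fire-and-forget worker with a concurrency of 2.
	w := varmq.NewWorker(func(task string) {
		fmt.Println("processing:", task)
	}, 2)

	// Bind a priority queue; a lower number means a higher priority.
	pq := w.BindPriorityQueue() // binding method name assumed
	pq.Add("send weekly newsletter", 10)
	pq.Add("charge customer payment", 1) // picked up before the newsletter

	w.WaitUntilFinished() // wait method name assumed
}
```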
VarMQ supports both persistent and distributed queue processing through adapter interfaces:
- Persistent Queues: Store jobs durably so they survive program restarts
- Distributed Queues: Process jobs across multiple systems
Usage is simple:
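As a rough sketch of what the wiring can look like (the `sqliteq` adapter package, its constructors, and the `WithPersistentQueue` binding method are assumptions for illustration; see the linked examples for the real code):

```go
package main

import (
	"fmt"

	"github.com/goptics/sqliteq" // adapter import path assumed
	"github.com/goptics/varmq"
)

func main() {
	// Open a SQLite-backed store and get a durable queue from it
	// (constructor names assumed for illustration).
	store := sqliteq.New("jobs.db")
	durable, _ := store.NewQueue("email_jobs")

	// Bind the durable queue to a worker; queued jobs survive restarts.
	w := varmq.NewWorker(func(email string) {
		fmt.Println("sending:", email)
	}, 4)
	q := w.WithPersistentQueue(durable) // binding method name assumed

	q.Add("user@example.com")
	w.WaitUntilFinished() // wait method name assumed
}
```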
See complete working examples in the examples directory:
- Persistent Queue Example (SQLite)
- Persistent Queue Example (Redis)
- Distributed Queue Example (Redis)
Create your own adapters by implementing the IPersistentQueue or IDistributedQueue interfaces.
Note
Before running the examples, make sure to start the Redis server using docker compose up -d.
Bind multiple queues to a single worker, enabling efficient processing of jobs from different sources with configurable strategies. The worker supports three strategies:
- RoundRobin (default - cycles through queues equally)
- MaxLen (prioritizes queues with more jobs)
- MinLen (prioritizes queues with fewer jobs)
With the default strategy, the worker processes jobs from all bound queues in round-robin order.
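As a rough sketch of the idea (whether multiple `BindQueue` calls are the exact multi-binding mechanism, and how the strategy is selected, are assumptions; check the package docs):

```go
package main

import (
	"fmt"

	"github.com/goptics/varmq"
)

func main() {
	w := varmq.NewWorker(func(task string) {
		fmt.Println("processing:", task)
	}, 2)

	// Two independent queues feeding the same worker
	// (multi-binding mechanism assumed for illustration).
	emails := w.BindQueue()
	reports := w.BindQueue()

	emails.Add("welcome email")
	reports.Add("monthly report")

	// With the default RoundRobin strategy the worker alternates
	// between the two queues when picking its next job.
	w.WaitUntilFinished() // wait method name assumed
}
```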
VarMQ provides NewResultWorker, which returns both a result and an error for each processed job. This is useful when you need to handle both success and failure cases.
NewErrWorker is similar to NewResultWorker, but it returns only an error.
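A hedged sketch of both worker kinds (the job handle returned by `Add`, its `Result()` method, and the wait method are assumptions; see the API reference for the exact shapes):

```go
package main

import (
	"errors"
	"fmt"

	"github.com/goptics/varmq"
)

func main() {
	// NewResultWorker: each job yields a value and an error.
	rw := varmq.NewResultWorker(func(n int) (int, error) {
		if n < 0 {
			return 0, errors.New("negative input")
		}
		return n * n, nil
	}, 2)
	rq := rw.BindQueue()

	job := rq.Add(7)         // enqueue; returned job handle assumed
	res, err := job.Result() // wait for the outcome; method name assumed
	fmt.Println(res, err)    // 49 <nil>

	// NewErrWorker: same idea, but jobs report only an error.
	ew := varmq.NewErrWorker(func(path string) error {
		fmt.Println("cleaning up:", path)
		return nil
	}, 2)
	eq := ew.BindQueue()
	eq.Add("/tmp/report.csv")
	ew.WaitUntilFinished() // wait method name assumed
}
```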
VarMQ provides helper functions that enable direct function submission, similar to the Submit() pattern in other pool packages like Pond or Ants:
- Func(): For basic functions with no return values - use with NewWorker
- ErrFunc(): For functions that return errors - use with NewErrWorker
- ResultFunc[R](): For functions that return a result and error - use with NewResultWorker
Important
Function helpers don't support persistence or distribution since functions cannot be serialized.
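A sketch of how the helpers might pair with the worker constructors. The idea is that the worker executes submitted closures directly; the exact call shapes of `Func()` and `ResultFunc[R]()` and the wait method are assumptions:

```go
package main

import (
	"fmt"

	"github.com/goptics/varmq"
)

func main() {
	// Fire-and-forget: submit plain closures to a NewWorker.
	w := varmq.NewWorker(varmq.Func(), 2) // helper call shape assumed
	q := w.BindQueue()
	q.Add(func() { fmt.Println("resize image #42") })

	// Result-returning: submit closures that yield a value and an error.
	rw := varmq.NewResultWorker(varmq.ResultFunc[int](), 2) // shape assumed
	rq := rw.BindQueue()
	rq.Add(func() (int, error) { return 6 * 7, nil })

	w.WaitUntilFinished() // wait method name assumed
}
```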
Command: go test -run=^$ -benchmem -bench '^(BenchmarkAdd)$' -cpu=1
Why use -cpu=1? Since the benchmarks don't run more than one concurrent worker, a single CPU gives the most accurate measurement.
| Worker | Queue | ns/op | B/op | allocs/op |
|--------------|----------------|-------|------|-----------|
| Worker | Queue | 918.6 | 128 | 3 |
| Worker | Priority | 952.7 | 144 | 4 |
| ErrWorker | ErrQueue | 1017 | 305 | 6 |
| ErrWorker | ErrPriority | 1006 | 320 | 7 |
| ResultWorker | ResultQueue | 1026 | 353 | 6 |
| ResultWorker | ResultPriority | 1039 | 368 | 7 |
Command: go test -run=^$ -benchmem -bench '^(BenchmarkAddAll)$' -cpu=1
| Worker | Queue | ns/op | B/op | allocs/op |
|--------------|----------------|---------|---------|-----------|
| Worker | Queue | 635,186 | 146,841 | 4,002 |
| Worker | Priority | 755,276 | 162,144 | 5,002 |
| ErrWorker | ErrQueue | 673,912 | 171,090 | 4,505 |
| ErrWorker | ErrPriority | 766,043 | 186,663 | 5,505 |
| ResultWorker | ResultQueue | 675,420 | 187,897 | 4,005 |
| ResultWorker | ResultPriority | 777,680 | 203,263 | 5,005 |
Note
AddAll benchmarks use a batch of 1000 items per call. The reported numbers (ns/op, B/op, allocs/op) are totals for the whole batch. For per-item values, divide each by 1000.
e.g., for the default Queue, the average time per item is approximately 635 ns.
Why is AddAll faster than individual Add calls? Here's what makes the difference:
- Batch Processing: Uses a single group job to process multiple items, reducing per-item overhead
- Shared Resources: Utilizes a single result channel for all items in the batch
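In usage terms the contrast looks roughly like this; both enqueue paths are shown side by side for comparison, and the `AddAll` call shape is an assumption based on the benchmark name:

```go
package main

import (
	"fmt"

	"github.com/goptics/varmq"
)

func main() {
	w := varmq.NewWorker(func(task string) {
		fmt.Println("processing:", task)
	}, 2)
	q := w.BindQueue()

	tasks := []string{"resize", "encode", "upload"}

	// Per-item enqueue: one job per call.
	for _, t := range tasks {
		q.Add(t)
	}

	// Batch enqueue: a single group job (and, for result workers, a single
	// shared result channel) covers the whole slice, which is where the
	// savings come from.
	q.AddAll(tasks) // call shape assumed

	w.WaitUntilFinished() // wait method name assumed
}
```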
Chart images were generated using Vizb.
We conducted comprehensive benchmarking between VarMQ and Pond v2, as both packages provide similar worker pool functionalities. While VarMQ draws inspiration from some of Pond's design patterns, it offers unique advantages in queue management and persistence capabilities.
Key Differences:
- Queue Types: VarMQ provides multiple queue variants (standard, priority, persistent, distributed) vs Pond's single pool type
- Multi-Queue Management: VarMQ supports binding multiple queues to a single worker with configurable strategies (RoundRobin, MaxLen, MinLen)
For detailed performance comparisons and benchmarking results, visit:
- 📊 Benchmark Repository - Complete benchmark suite
- 📈 Interactive Charts - Visual performance comparisons
For detailed API documentation, see the API Reference.
VarMQ's concurrency model is built around a smart event loop that keeps everything running smoothly.
The event loop continuously monitors for pending jobs in queues and available workers in the pool. When both conditions are met, jobs get distributed to workers instantly. When there's no work to distribute, the system enters a low-power wait state.
Workers operate independently - they process jobs and immediately signal back when they're ready for more work. This triggers the event loop to check for new jobs and distribute them right away.
The system handles worker lifecycle automatically. Idle workers either stay in the pool or get cleaned up based on your configuration, so you never waste resources or run short on capacity.
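The following is a simplified, conceptual sketch of that loop in plain Go channels. It is only an illustration of the job/worker pairing described above, not VarMQ's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	jobs := make(chan string, 16)     // pending jobs
	idle := make(chan chan string, 4) // idle workers announce their inbox here

	var wg sync.WaitGroup

	// Two workers; each signals the loop whenever it is ready for more work.
	for i := 1; i <= 2; i++ {
		inbox := make(chan string)
		go func(id int, inbox chan string) {
			for {
				idle <- inbox // "I'm free"
				job, ok := <-inbox
				if !ok {
					return
				}
				fmt.Printf("worker %d: %s\n", id, job)
				wg.Done()
			}
		}(i, inbox)
	}

	// The event loop: when a pending job and an idle worker both exist,
	// pair them immediately; otherwise it simply blocks (low-power wait).
	go func() {
		for job := range jobs {
			worker := <-idle
			worker <- job
		}
	}()

	for _, j := range []string{"resize", "encode", "upload"} {
		wg.Add(1)
		jobs <- j
	}
	wg.Wait()
}
```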
Contributions are welcome! Please feel free to submit a Pull Request or open an issue.
Please note that this project has a Code of Conduct. By participating in this project, you agree to abide by its terms.