Store, stream, and sync instantly — UnisonDB is a log-native, real-time database that replicates like a message bus for AI and Edge Computing.
UnisonDB is an open-source database designed specifically for Edge AI and Edge Computing.
It is a reactive, log-native and multi-model database built for real-time and edge-scale applications. UnisonDB combines a B+Tree storage engine with WAL-based (Write-Ahead Logging) streaming replication, enabling near-instant fan-out replication across hundreds of nodes — all while preserving strong consistency and durability.
- Multi-Model Storage: Key-Value, Wide-Column, and Large Objects (LOB)
- Streaming Replication: WAL-based replication with sub-second fan-out to 100+ edge replicas
- Real-Time Notifications: ZeroMQ-based change notifications with sub-millisecond latency
- Durable & Fast: B+Tree storage with Write-Ahead Logging
- Edge-First Design: Optimized for edge computing and local-first architectures
- Namespace Isolation: Multi-tenancy support with namespace-based isolation
UnisonDB is built for distributed, edge-first systems where data and computation must live close together, reducing network hops, minimizing latency, and enabling real-time responsiveness at scale.
By co-locating data with the services that use it, UnisonDB removes the traditional boundary between the database and the application layer. Applications can react to local changes instantly, while UnisonDB’s WAL-based replication ensures eventual consistency across all replicas globally.
- Getting Started with UnisonDB
- Complete Configuration Guide
- Architecture Overview
- HTTP API Reference
- Backup and Restore
- Deployment Topologies
We validated the WAL-based replication architecture using the pkg/replicator component in a local test environment. We fuzzed the write path with all supported operations, including Put, BatchPut, Delete, and row-column mutations. This exercises the core replication mechanics without network overhead.
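As a sketch of what that fuzzing can look like, here is a minimal Go fuzz harness over the write path. The `openTestEngine` helper and the engine surface (`Put`, `Get`, `Delete`) are illustrative assumptions, not the actual pkg/replicator interface:

```go
package engine_test

import (
	"bytes"
	"testing"
)

// testEngine captures only the operations this sketch exercises; the real engine surface is broader.
type testEngine interface {
	Put(key, value []byte) error
	Get(key []byte) ([]byte, error)
	Delete(key []byte) error
}

// openTestEngine stands in for whatever helper opens a throwaway engine backed by t.TempDir().
func openTestEngine(t *testing.T) testEngine {
	t.Helper()
	t.Skip("illustrative sketch only")
	return nil
}

func FuzzWritePath(f *testing.F) {
	f.Add([]byte("key"), []byte("value"), false) // seed corpus
	f.Fuzz(func(t *testing.T, key, value []byte, del bool) {
		if len(key) == 0 {
			t.Skip()
		}
		eng := openTestEngine(t)
		if del {
			if err := eng.Delete(key); err != nil {
				t.Fatalf("delete: %v", err)
			}
			return
		}
		if err := eng.Put(key, value); err != nil {
			t.Fatalf("put: %v", err)
		}
		got, err := eng.Get(key)
		if err != nil || !bytes.Equal(got, value) {
			t.Fatalf("get mismatch: %v", err)
		}
	})
}
```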
Server running on a DigitalOcean s-8vcpu-16gb-480gb-intel droplet.
- 1000 Concurrent Readers: Simulates heavy read load alongside writes
- 1000 Operations per Second: Sustained write throughput
- Mixed Workload: Combines small metadata updates (100B) with larger payloads (100KB)
- Isolation Testing: Validates transaction isolation under concurrent access patterns
Each replication stream operates as an independent WAL reader, capturing critical performance metrics:
Physical Latency Tracking: Measures p50, p90, p99, and max latencies using timestamps
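For reference, a minimal, self-contained sketch of the percentile computation those metrics imply. The sample-collection side is assumed; only the p50/p90/p99/max math is shown:

```go
package metrics

import (
	"sort"
	"time"
)

// percentiles returns p50, p90, p99, and max over a set of latency samples.
func percentiles(samples []time.Duration) (p50, p90, p99, max time.Duration) {
	if len(samples) == 0 {
		return
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	at := func(q float64) time.Duration {
		// nearest-rank style index into the sorted samples
		return samples[int(q*float64(len(samples)-1))]
	}
	return at(0.50), at(0.90), at(0.99), samples[len(samples)-1]
}
```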
Traditional databases persist. Stream systems propagate. UnisonDB does both — turning every write into a durable, queryable stream that replicates seamlessly across the edge.
Modern systems are reactive — every change needs to propagate instantly to dashboards, APIs, caches, and edge devices.
Yet, databases were built for persistence, not propagation.
You write to a database, then stream through Kafka.
You replicate via CDC.
You patch syncs between cache and storage.
This split between state and stream creates friction:
- Two systems to maintain and monitor
- Eventual consistency between write path and read path
- Network latency on every read or update
- Complex fan-out when scaling to hundreds of edges
LMDB and BoltDB excel at local speed — but stop at one node.
etcd and Consul replicate state — but are consensus-bound and small-cluster only.
Kafka and NATS stream messages — but aren’t queryable databases.
| System | Strength | Limitation |
|---|---|---|
| LMDB / BoltDB | Fast local storage | No replication |
| etcd / Consul | Cluster consistency | No local queries, low fan-out |
| Kafka / NATS | Scalable streams | No storage or query model |
UnisonDB fuses database semantics with streaming mechanics — the log is the database.
Every write is durable, ordered, and instantly available as a replication stream.
No CDC, no brokers, no external pipelines.
Just one unified engine that:
- Stores data in B+Trees for predictable reads
- Streams data via WAL replication to thousands of nodes
- Reacts instantly with sub-second fan-out
- Keeps local replicas fully queryable, even offline
UnisonDB eliminates the divide between “database” and “message bus,”
enabling reactive, distributed, and local-first systems — without the operational sprawl.
UnisonDB collapses two worlds — storage and streaming — into one unified log-native core.
The result: a single system that stores, replicates, and reacts — instantly.
UnisonDB is built on three foundational layers:
- WALFS - Write-Ahead Log File System (mmap-based, optimized for reading at scale)
- Engine - Hybrid storage combining WAL, MemTable, and B-Tree
- Replication - WAL-based streaming with offset tracking
UnisonDB stacks a multi-model engine on top of WALFS — a log-native core that unifies storage, replication, and streaming into one continuous data flow.
WALFS is a memory-mapped, segmented write-ahead log implementation designed for both writing and reading at scale. Unlike traditional WALs that optimize only for sequential writes, WALFS provides efficient random access for replication and real-time tailing.
Each WALFS segment consists of two regions:
Segment metadata header (64 bytes):

| Offset | Size (bytes) | Field | Description |
|---|---|---|---|
| 0 | 4 | Magic | Magic number (0x5557414C) |
| 4 | 4 | Version | Metadata format version |
| 8 | 8 | CreatedAt | Creation timestamp (nanoseconds) |
| 16 | 8 | LastModifiedAt | Last modification timestamp (nanoseconds) |
| 24 | 8 | WriteOffset | Offset where next chunk will be written |
| 32 | 8 | EntryCount | Total number of chunks written |
| 40 | 4 | Flags | Segment state flags (e.g. Active, Sealed) |
| 44 | 12 | Reserved | Reserved for future use |
| 56 | 4 | CRC | CRC32 checksum of first 56 bytes |
| 60 | 4 | Padding | Ensures 64-byte alignment |
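For readers who prefer code to tables, the same 64-byte header expressed as a Go struct. This is an illustrative mirror of the table above, not UnisonDB's actual on-disk type (the real implementation serializes fields explicitly rather than casting a struct):

```go
package walfs

// segmentHeader mirrors the 64-byte segment metadata layout in the table above.
type segmentHeader struct {
	Magic          uint32   // 0x5557414C
	Version        uint32   // metadata format version
	CreatedAt      int64    // creation timestamp, nanoseconds
	LastModifiedAt int64    // last modification timestamp, nanoseconds
	WriteOffset    uint64   // offset where the next chunk will be written
	EntryCount     uint64   // total number of chunks written
	Flags          uint32   // segment state flags (Active, Sealed, ...)
	Reserved       [12]byte // reserved for future use
	CRC            uint32   // CRC32 of the first 56 bytes
	Padding        [4]byte  // pads the header to 64 bytes
}
```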
Each record is written in its own aligned frame:

| Offset | Size | Field | Description |
|---|---|---|---|
| 0 | 4 bytes | CRC | CRC32 of [Length \| Data] |
| 4 | 4 bytes | Length | Size of the data payload in bytes |
| 8 | N bytes | Data | User payload (FlatBuffer-encoded LogRecord) |
| 8 + N | 8 bytes | Trailer | Canary marker (0xDEADBEEFFEEDFACE) |
| ... | ≥0 bytes | Padding | Zero padding to align to 8-byte boundary |
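A minimal sketch of how one such frame could be assembled before being copied into the memory-mapped segment. The helper name, the little-endian byte order, and the IEEE CRC-32 polynomial are assumptions; the frame layout itself follows the table above:

```go
package walfs

import (
	"encoding/binary"
	"hash/crc32"
)

// frameTrailer is the canary marker written after each record (value per the table above).
const frameTrailer uint64 = 0xDEADBEEFFEEDFACE

// encodeFrame builds CRC | Length | Data | Trailer, zero-padded to an 8-byte boundary.
func encodeFrame(data []byte) []byte {
	n := len(data)
	frameLen := 4 + 4 + n + 8     // CRC + Length + Data + Trailer
	padded := (frameLen + 7) &^ 7 // round up to the next 8-byte boundary
	buf := make([]byte, padded)   // trailing bytes stay zero (padding)

	binary.LittleEndian.PutUint32(buf[4:8], uint32(n)) // Length
	copy(buf[8:8+n], data)                             // Data (FlatBuffer-encoded LogRecord)
	crc := crc32.ChecksumIEEE(buf[4 : 8+n])            // CRC over [Length | Data]
	binary.LittleEndian.PutUint32(buf[0:4], crc)
	binary.LittleEndian.PutUint64(buf[8+n:16+n], frameTrailer) // Trailer canary
	return buf
}
```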
WALFS provides powerful reading capabilities essential for replication and recovery:
- Zero-copy reads - data is a memory-mapped slice
- Position tracking - each record returns its (SegmentID, Offset) position
- Automatic segment traversal - seamlessly reads across segment boundaries
- Efficient seek without scanning
- Follower catch-up from last synced position
- Recovery from checkpoint
- Returns ErrNoNewData when caught up (not io.EOF)
- Enables low-latency streaming
- Supports multiple parallel readers
Unlike traditional "write-once, read-on-crash" WALs, WALFS optimizes for:
- Continuous replication - Followers constantly read from primary's WAL
- Real-time tailing - Low-latency streaming of new writes
- Parallel readers - Multiple replicas read concurrently without contention
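A follower-style tailing loop over a WALFS reader might look like the sketch below. The reader interface is paraphrased from the capabilities listed above (a Next call that returns each record with its (SegmentID, Offset) position, and ErrNoNewData when caught up); it is not the exact package API:

```go
package walfollow

import (
	"errors"
	"time"
)

// ErrNoNewData is an assumed sentinel; the real one lives in the WALFS package.
var ErrNoNewData = errors.New("walfs: no new data")

type Position struct{ SegmentID, Offset uint64 }

// WALReader paraphrases the reading capabilities described above.
type WALReader interface {
	Next() (record []byte, pos Position, err error)
}

// tail applies every new record and persists its position for catch-up after a restart.
func tail(r WALReader, apply func([]byte) error, checkpoint func(Position)) error {
	for {
		rec, pos, err := r.Next()
		switch {
		case err == nil:
			if err := apply(rec); err != nil {
				return err
			}
			checkpoint(pos) // (SegmentID, Offset) resume point
		case errors.Is(err, ErrNoNewData):
			time.Sleep(time.Millisecond) // caught up: back off briefly, or block on a notification
		default:
			return err
		}
	}
}
```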
The Engine orchestrates writes, reads, and persistence using three components:
- WAL (WALFS) - Durability and replication source
- MemTable (SkipList) - In-memory write buffer
- B-Tree Store - Persistent index for efficient reads
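The write path implied by these three components can be sketched as follows. Every type and method name here is an illustrative assumption rather than UnisonDB's internal API:

```go
package engine

// engine wires together the three components described above.
type engine struct {
	wal interface {
		Append(record []byte) error // WALFS: durability and the replication source
	}
	memtable interface {
		Put(key, value []byte)
		SizeBytes() int
	}
	flushThresholdBytes int
	flushToBTree        func() error // writes buffered entries out to the B-Tree store
}

func (e *engine) put(key, value []byte) error {
	if err := e.wal.Append(encodeLogRecord(key, value)); err != nil {
		return err // a write is acknowledged only after it is durable in the WAL
	}
	e.memtable.Put(key, value) // recent writes are served from the in-memory skip list
	if e.memtable.SizeBytes() >= e.flushThresholdBytes {
		return e.flushToBTree() // persist to the B-Tree index for efficient long-term reads
	}
	return nil
}

// encodeLogRecord stands in for the FlatBuffers encoding of a LogRecord.
func encodeLogRecord(key, value []byte) []byte {
	return append(append([]byte{}, key...), value...)
}
```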
UnisonDB uses FlatBuffers for zero-copy serialization of WAL records:
- No deserialization on replicas: records are applied directly from the buffer
- Fast, efficient replication
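As a rough illustration of what zero-copy apply means on a replica: the follower wraps the raw WAL bytes with the generated accessor and reads fields straight out of the buffer, with no unmarshalling pass. The `logrecord` package, its accessors, and the `kvApplier` interface below are assumptions standing in for UnisonDB's generated FlatBuffers code:

```go
// kvApplier captures just the operations this sketch needs.
type kvApplier interface {
	Put(key, value []byte) error
	Delete(key []byte) error
}

// applyRecord reads fields directly out of the raw WAL bytes; the returned
// byte slices alias `raw`, so nothing is copied or decoded.
func applyRecord(raw []byte, store kvApplier) error {
	rec := logrecord.GetRootAsLogRecord(raw, 0) // wraps the buffer, no decode pass
	switch rec.Op() {
	case logrecord.OpPut:
		return store.Put(rec.KeyBytes(), rec.ValueBytes())
	case logrecord.OpDelete:
		return store.Delete(rec.KeyBytes())
	}
	return nil
}
```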
UnisonDB provides atomic multi-key transactions:
Transaction Properties:
- Atomicity - All writes become visible on commit, or none on abort
- Isolation - Uncommitted writes are hidden from readers
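A hypothetical usage sketch of these properties; the transaction surface (Put, Commit, Rollback) is assumed from the description above, not a documented API:

```go
package example

// Txn mirrors the transaction properties described above; names are assumptions.
type Txn interface {
	Put(key, value []byte) error
	Commit() error
	Rollback() error
}

func writeOrderAtomically(begin func() (Txn, error), orderBytes, stockBytes []byte) error {
	txn, err := begin()
	if err != nil {
		return err
	}
	defer txn.Rollback() // on abort, none of the writes become visible

	if err := txn.Put([]byte("orders:42"), orderBytes); err != nil {
		return err
	}
	if err := txn.Put([]byte("inventory:sku-9"), stockBytes); err != nil {
		return err
	}
	// Readers observe neither key until Commit succeeds, then both at once.
	return txn.Commit()
}
```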
Large values can be chunked and streamed within a transaction.
LOB Properties:
- Transactional - All chunks committed atomically
- Streaming - Can write/read chunks incrementally
- Efficient replication - Replicas get chunks as they arrive
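A hypothetical sketch of a chunked LOB write, assuming a transaction surface with an AppendChunk method; the point is that chunks are written incrementally but only become visible, locally and on replicas, once Commit succeeds:

```go
package example

import "io"

// lobTxn mirrors the LOB properties described above; names are assumptions.
type lobTxn interface {
	AppendChunk(key, chunk []byte) error
	Commit() error
	Rollback() error
}

// putLargeObject streams src into the store in 1 MiB chunks inside one transaction.
func putLargeObject(begin func() (lobTxn, error), key []byte, src io.Reader) error {
	txn, err := begin()
	if err != nil {
		return err
	}
	defer txn.Rollback() // abort on any error: no partial object is ever visible

	buf := make([]byte, 1<<20)
	for {
		n, rerr := src.Read(buf)
		if n > 0 {
			if err := txn.AppendChunk(key, buf[:n]); err != nil {
				return err
			}
		}
		if rerr == io.EOF {
			break
		}
		if rerr != nil {
			return rerr
		}
	}
	return txn.Commit() // all chunks become visible atomically
}
```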
UnisonDB supports partial updates to column families:
Benefits:
- Efficient updates - Only modified columns are written/replicated
- Flexible schema - Columns can be added dynamically
- Merge semantics - New columns merged with existing row
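A hypothetical sketch of a partial update, assuming a PutColumns-style call; only the touched columns enter the WAL and the replication stream, and they are merged into the existing row:

```go
package example

import "time"

// columnWriter mirrors the partial-update capability described above; the signature is assumed.
type columnWriter interface {
	PutColumns(family string, rowKey []byte, cols map[string][]byte) error
}

func markShipped(db columnWriter, rowKey []byte) error {
	// Only these two columns are written and replicated; the remaining
	// columns of the row are left untouched and merged on read.
	return db.PutColumns("orders", rowKey, map[string][]byte{
		"status":     []byte("shipped"),
		"updated_at": []byte(time.Now().UTC().Format(time.RFC3339)),
	})
}
```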
Replication in UnisonDB is WAL-based streaming - designed around the WALFS reader capabilities. Followers continuously stream WAL records from the primary's WALFS and apply them locally.
- Offset-based positioning - Followers independently track their replication offset (SegmentID, Offset)
- Catch-up from any offset - Replication can resume from any position
- Real-time streaming - Active tail following for low-latency replication
- Self-describing records - FlatBuffer LogRecords are self-contained
- Batched streaming - Records are sent in batches for efficiency (see the follower sketch below)
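The follower sketch referenced above, showing how these properties compose: resume from the last persisted (SegmentID, Offset), receive batched records, apply them locally, and advance the checkpoint. The stream and batch types are assumptions, not UnisonDB's actual gRPC schema; the caller is assumed to reopen the stream from the saved position on reconnect:

```go
package follower

type Position struct{ SegmentID, Offset uint64 }

// Batch is an assumed shape for one streamed batch of WAL records.
type Batch struct {
	Records [][]byte // FlatBuffer-encoded LogRecords
	Last    Position // position immediately after the final record in the batch
}

// WALStream is an assumed stand-in for the gRPC replication stream.
type WALStream interface {
	Recv() (*Batch, error)
}

// replicate applies each batch locally and persists the resume position.
func replicate(stream WALStream, apply func([]byte) error, checkpoint func(Position) error) error {
	for {
		batch, err := stream.Recv()
		if err != nil {
			return err // caller reconnects and resumes from the last checkpoint
		}
		for _, rec := range batch.Records {
			if err := apply(rec); err != nil {
				return err
			}
		}
		if err := checkpoint(batch.Last); err != nil { // durable (SegmentID, Offset) resume point
			return err
		}
	}
}
```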
- UnisonDB shows lower SET throughput than pure LSM databases — by design.
- Writes are globally ordered under a lock to ensure replication-safe WAL entries.
- This favors consistency and durability over raw speed.
- Still, UnisonDB is nearly 2x faster than BoltDB, a pure B+Tree store.
- Even with ordered writes, it outperforms BoltDB while offering stronger replication guarantees.
UnisonDB is a good fit when:
- Read-heavy workloads dominate (edge nodes, replicas)
- Predictable latency is required (no background compaction)
- Replication is critical (built-in, transactional)

An LSM-based store may fit better when:
- Pure write throughput is the #1 priority
- Read amplification is acceptable
Most traditional key-value stores were designed for simple, point-in-time key-value operations — and their replication models reflect that. While this works for basic use cases, it quickly breaks down under real-world demands like multi-key transactions, large object handling, and fine-grained updates.
Replication is often limited to raw key-value pairs. There’s no understanding of higher-level constructs like rows, columns, or chunks — making it impossible to efficiently replicate partial updates or large structured objects.
Replication happens on a per-operation basis, not as part of an atomic unit. Without multi-key transactional guarantees, systems can fall into inconsistent states across replicas, especially during batch operations, network partitions, or mid-transaction failures.
When large values are chunked and streamed to the store, traditional replication models expose chunks as they arrive. If a transfer fails mid-way, replicas may store incomplete or corrupted objects, with no rollback or recovery mechanism.
Wide-column data is treated as flat keys or opaque blobs. If only a single column is modified, traditional systems replicate the entire row, wasting bandwidth, increasing storage overhead, and making efficient synchronization impossible.
Without built-in transactional semantics, developers must implement their own logic for deduplication, rollback, consistency checks, and coordination — which adds fragility and complexity to the system.
- LSM-Trees (e.g., RocksDB) excel at fast writes but suffer from high read amplification and costly background compactions, which hurt latency and predictability.
- B+Trees (e.g., BoltDB, LMDB) offer efficient point lookups and range scans, but struggle with high-speed inserts and lack native replication support.
UnisonDB combines append-only logs for high-throughput ingest with B-Trees for fast and efficient range reads — while offering:
- Transactional, multi-key replication with commit visibility guarantees.
- Chunked LOB writes that are fully atomic.
- Column-aware replication for efficient syncing of wide-column updates.
- Isolation by default — once a network-aware transaction is started, all intermediate writes are fully isolated and not visible to readers until a successful txn.Commit().
- Built-in replication via gRPC WAL streaming + B-Tree snapshots.
- Zero-compaction overhead, high write throughput, and optimized reads.