Your SaaS application just hit a major milestone. 100,000 active users, each with their own isolated database. You're celebrating until you realise you need to deploy a schema change. What used to be a simple ALTER TABLE now means coordinating migrations across 100,000 separate databases. And if you're building the next big thing, that number could be heading toward millions.
Most developers cut their teeth on the classic migration workflow: write the script, run it once, ship it. But the database-per-tenant architecture that gives your users data isolation and compliance guarantees suddenly turns every schema change into a coordination nightmare.
- How do you push updates to millions of databases without bringing down your entire platform?
- How do you handle failures, rollbacks, and the inevitable edge cases that emerge at scale?
# Pull, Don't Push
What if, instead of trying to orchestrate migrations from your servers, you let each client handle its own database updates? This isn't about abandoning control; it's about building a smarter system where a hosted schema registry coordinates updates while individual applications do the heavy lifting.
Your registry becomes a lightweight coordination service that tracks what schema changes exist and at what versions, while client applications periodically check in to see what updates they need. When a client discovers it's running an outdated schema, it pulls down the specific migrations required to catch up and applies them locally.
The result? Instead of your servers struggling to coordinate millions of database updates, each tenant manages its own updates at its own pace.
# The Registry
At its core, a schema registry is surprisingly simple. You need to track just a few key pieces of information:
```go
type Migration struct {
	Version     string    `json:"version"`
	Description string    `json:"description"`
	SQLScript   string    `json:"sql"`
	CreatedAt   time.Time `json:"created_at"`
}
```

The registry itself stores these in a simple table structure:
```sql
CREATE TABLE migrations (
	version TEXT PRIMARY KEY,
	description TEXT NOT NULL,
	sql_script TEXT NOT NULL,
	created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```

That's it. No complex orchestration tables, no per-tenant state tracking, no distributed locks. The registry is just a versioned catalog of SQL scripts.
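Version assignment can stay equally simple: take the highest version in the catalog and bump its last numeric segment. A minimal sketch — `bumpPatch` is an illustrative helper, not part of any real registry API:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// bumpPatch increments the last dotted segment of a version string,
// so "1.2.4" becomes "1.2.5". A registry would call this with the
// highest version currently in the migrations table.
func bumpPatch(v string) string {
	parts := strings.Split(v, ".")
	n, _ := strconv.Atoi(parts[len(parts)-1])
	parts[len(parts)-1] = strconv.Itoa(n + 1)
	return strings.Join(parts, ".")
}

func main() {
	fmt.Println(bumpPatch("1.2.4")) // → 1.2.5
}
```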
# In Action
Let's walk through a realistic scenario. You're adding a new feature that requires a user_preferences table. Here's how the flow works:
# Step 1: Register the migration
```shell
curl -X POST https://your-registry.com/migrations \
  -H "Content-Type: application/json" \
  -d '{
    "description": "Add user preferences table",
    "sql": "CREATE TABLE user_preferences (id INTEGER PRIMARY KEY, user_id INTEGER, theme TEXT, notifications BOOLEAN);"
  }'
```

The registry auto-increments to version 1.2.5 and responds:
```json
{
  "status": "registered",
  "version": "1.2.5",
  "currentVersion": "1.2.5"
}
```

# Step 2: Client applications check for updates
Each client application periodically calls home with its current schema version:
```shell
curl -X POST https://your-registry.com/verify \
  -H "Content-Type: application/json" \
  -d '{"version": "1.2.4"}'
```

The registry responds with what the client needs:
```json
{
  "status": "update-required",
  "currentVersion": "1.2.5",
  "clientVersion": "1.2.4",
  "requiredMigrations": [
    {
      "version": "1.2.5",
      "description": "Add user preferences table",
      "sql": "CREATE TABLE user_preferences (id INTEGER PRIMARY KEY, user_id INTEGER, theme TEXT, notifications BOOLEAN);",
      "created_at": "2025-05-26T10:30:00Z"
    }
  ]
}
```

The timing of these check-ins is a key design decision. Some applications might verify their schema before every critical operation, others might check daily during off-peak hours, and some might only update when users explicitly trigger certain workflows. That flexibility lets you balance consistency requirements against performance.
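The verify endpoint's logic is small enough to sketch. In this Go version, `verifyHandler` and the in-memory catalog are illustrative stand-ins (the proof-of-concept keeps migrations in a SQLite table instead): the handler returns every migration newer than the client's reported version, oldest first, so scripts can be applied in order.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"sort"
	"strconv"
	"strings"
)

// Migration mirrors the registry's stored rows.
type Migration struct {
	Version     string `json:"version"`
	Description string `json:"description"`
	SQLScript   string `json:"sql"`
}

// newerThan reports whether version a is strictly newer than b,
// comparing dotted segments numerically ("1.2.10" beats "1.2.9").
func newerThan(a, b string) bool {
	as, bs := strings.Split(a, "."), strings.Split(b, ".")
	for i := 0; i < len(as) || i < len(bs); i++ {
		ai, bi := 0, 0
		if i < len(as) {
			ai, _ = strconv.Atoi(as[i])
		}
		if i < len(bs) {
			bi, _ = strconv.Atoi(bs[i])
		}
		if ai != bi {
			return ai > bi
		}
	}
	return false
}

// verifyHandler answers a client check-in: anything in the catalog newer
// than the reported version is returned, sorted oldest first.
func verifyHandler(catalog []Migration) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var req struct {
			Version string `json:"version"`
		}
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		var pending []Migration
		for _, m := range catalog {
			if newerThan(m.Version, req.Version) {
				pending = append(pending, m)
			}
		}
		sort.Slice(pending, func(i, j int) bool {
			return newerThan(pending[j].Version, pending[i].Version)
		})
		resp := map[string]any{"clientVersion": req.Version}
		if len(pending) == 0 {
			resp["status"] = "latest"
		} else {
			resp["status"] = "update-required"
			resp["currentVersion"] = pending[len(pending)-1].Version
			resp["requiredMigrations"] = pending
		}
		json.NewEncoder(w).Encode(resp)
	}
}

func main() {
	catalog := []Migration{{Version: "1.2.5", Description: "Add user preferences table"}}
	srv := httptest.NewServer(verifyHandler(catalog))
	defer srv.Close()
	resp, _ := http.Post(srv.URL, "application/json", strings.NewReader(`{"version": "1.2.4"}`))
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body)) // JSON body with "status":"update-required" and one pending migration
}
```

Because the registry holds no per-tenant state, this handler is trivially cacheable and horizontally scalable — the entire response is a function of the catalog and the client's version.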
# Step 3: Apply migration
The client application takes that SQL script, runs it against its local database, and updates its stored schema version to 1.2.5.
The next time it checks in, the registry will respond with "status": "latest".
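The client-side step above fits in a few lines of Go. This is a sketch, not the proof-of-concept's actual code: `Execer` and `applyMigrations` are illustrative names, `memDB` is a stand-in for a real SQLite handle, and a production client would run each script inside a transaction.

```go
package main

import "fmt"

// Execer abstracts the tenant's local database — in practice a *sql.DB
// opened on the tenant's SQLite file.
type Execer interface {
	Exec(query string) error
}

// Migration matches the entries in the registry's "requiredMigrations" list.
type Migration struct {
	Version   string
	SQLScript string
}

// applyMigrations runs each pending script in order and returns the version
// the client ends up on. It stops at the first failure, so the next check-in
// resumes from the last version that applied cleanly.
func applyMigrations(db Execer, current string, pending []Migration) (string, error) {
	for _, m := range pending {
		if err := db.Exec(m.SQLScript); err != nil {
			return current, fmt.Errorf("migration %s failed: %w", m.Version, err)
		}
		current = m.Version // in practice, persist this next to the tenant database
	}
	return current, nil
}

// memDB is a stand-in database that records the scripts it runs.
type memDB struct{ applied []string }

func (m *memDB) Exec(q string) error { m.applied = append(m.applied, q); return nil }

func main() {
	db := &memDB{}
	v, err := applyMigrations(db, "1.2.4", []Migration{{
		Version:   "1.2.5",
		SQLScript: "CREATE TABLE user_preferences (id INTEGER PRIMARY KEY, user_id INTEGER, theme TEXT, notifications BOOLEAN);",
	}})
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // → 1.2.5
}
```

Stopping at the first failure is the important design choice: a partially updated tenant simply reports its last good version at the next check-in and retries from there, with no central coordinator tracking its state.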
# Offline-First Applications
The pull-based approach shines for offline-first mobile apps. Consider field technicians working offline for weeks: when they reconnect, their local schema might be several versions behind.
Traditional systems require complex orchestration to detect missed migrations and handle conflicts. With the pull-based registry, the client simply asks "I'm on version 1.2.4, what did I miss?" and applies the returned migrations locally.
With Turso's upcoming offline conflict resolution, this becomes even more powerful. Your app operates completely disconnected, then on reconnect:
- Pulls any missed schema changes
- Applies migrations to the local database
- Resolves conflicts between offline data and server changes
Intermittent connectivity transforms from a deployment headache into a non-issue: clients catch up when they can.
# Try It Yourself
Want to experiment with this approach? I've built a proof-of-concept schema registry that demonstrates these concepts in action. The implementation is straightforward Go with SQLite, showing how simple the core infrastructure can be while still handling the complexity of distributed schema management.
This approach transforms the traditional migration bottleneck into a distributed, resilient system that handles failures gracefully. With a million tenant databases, you're no longer coordinating a million simultaneous operations; you're simply serving metadata about what changes are available. Each client handles its own database updates, on its own schedule.