Catch unsafe migrations in development
✓ Detects potentially dangerous operations
✓ Prevents them from running by default
✓ Provides instructions on safer ways to do what you want
Supports PostgreSQL, MySQL, and MariaDB
🍊 Battle-tested at Instacart
Add this line to your application’s Gemfile:
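```ruby
gem "strong_migrations"
```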
And run:
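```sh
bundle install
rails generate strong_migrations:install
```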
Strong Migrations sets a long statement timeout for migrations so you can set a short statement timeout for your application.
When you run a migration that’s potentially dangerous, you’ll see an error message like:
An operation is classified as dangerous if it either:
- Blocks reads or writes for more than a few seconds (after a lock is acquired)
- Has a good chance of causing application errors
Potentially dangerous operations:
- removing a column
- changing the type of a column
- renaming a column
- renaming a table
- creating a table with the force option
- adding an auto-incrementing column
- adding a stored generated column
- adding a check constraint
- executing SQL directly
- backfilling data
Postgres-specific checks:
- adding an index non-concurrently
- adding a reference
- adding a foreign key
- adding a unique constraint
- adding an exclusion constraint
- adding a json column
- setting NOT NULL on an existing column
- adding a column with a volatile default value
- renaming a schema
Config-specific checks:
- changing the default value of a column
Best practices:
- keeping non-unique indexes to three columns or less
You can also add custom checks or disable specific checks.
Active Record caches database columns at runtime, so if you drop a column, it can cause exceptions until your app reboots.
1. Tell Active Record to ignore the column from its cache
2. Deploy the code
3. Write a migration to remove the column (wrapped in a safety_assured block; see the sketch after this list)
4. Deploy and run the migration
5. Remove the line added in step 1
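A minimal sketch of steps 1 and 3, assuming a hypothetical `User` model with an unused `some_column`:

```ruby
# step 1: app/models/user.rb
class User < ApplicationRecord
  self.ignored_columns += ["some_column"]
end
```

```ruby
# step 3: the migration, wrapped in safety_assured
class RemoveSomeColumnFromUsers < ActiveRecord::Migration[8.0]
  def change
    safety_assured { remove_column :users, :some_column }
  end
end
```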
Changing the type of a column causes the entire table to be rewritten. During this time, reads and writes are blocked in Postgres, and writes are blocked in MySQL and MariaDB.
Some changes don’t require a table rewrite and are safe in Postgres:
| Type | Safe changes |
| --- | --- |
| cidr | Changing to inet |
| citext | Changing to text if not indexed, changing to string with no :limit if not indexed |
| datetime | Increasing or removing :precision, changing to timestamptz when session time zone is UTC in Postgres 12+ |
| decimal | Increasing :precision at same :scale, removing :precision and :scale |
| interval | Increasing or removing :precision |
| numeric | Increasing :precision at same :scale, removing :precision and :scale |
| string | Increasing or removing :limit, changing to text, changing to citext if not indexed |
| text | Changing to string with no :limit, changing to citext if not indexed |
| time | Increasing or removing :precision |
| timestamptz | Increasing or removing :limit, changing to datetime when session time zone is UTC in Postgres 12+ |
And some in MySQL and MariaDB:
| Type | Safe changes |
| --- | --- |
| string | Increasing :limit from under 63 up to 63, increasing :limit from over 63 to the max (the threshold can be different if using an encoding other than utf8mb4 - for instance, it’s 85 for utf8mb3 and 255 for latin1) |
A safer approach is to:
- Create a new column
- Write to both columns
- Backfill data from the old column to the new column
- Move reads from the old column to the new column
- Stop writing to the old column
- Drop the old column
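A sketch of step 1, assuming a hypothetical `orders.total` column whose type needs to change (the remaining steps happen over separate deploys, and the final drop follows the column-removal steps above):

```ruby
class AddTotalV2ToOrders < ActiveRecord::Migration[8.0]
  def change
    # add the replacement column alongside the old one
    add_column :orders, :total_v2, :bigint
  end
end
```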
Renaming a column that’s in use will cause errors in your application.
A safer approach is to:
- Create a new column
- Write to both columns
- Backfill data from the old column to the new column
- Move reads from the old column to the new column
- Stop writing to the old column
- Drop the old column
Renaming a table that’s in use will cause errors in your application.
A safer approach is to:
- Create a new table
- Write to both tables
- Backfill data from the old table to the new table
- Move reads from the old table to the new table
- Stop writing to the old table
- Drop the old table
The force option can drop an existing table.
Create tables without the force option.
If you intend to drop an existing table, run drop_table first.
Adding an auto-incrementing column (serial/bigserial in Postgres and AUTO_INCREMENT in MySQL and MariaDB) causes the entire table to be rewritten. During this time, reads and writes are blocked in Postgres, and writes are blocked in MySQL and MariaDB.
With MySQL and MariaDB, this can also generate different values on replicas if using statement-based replication.
Create a new table and migrate the data with the same steps as renaming a table.
Adding a stored generated column causes the entire table to be rewritten. During this time, reads and writes are blocked in Postgres, and writes are blocked in MySQL and MariaDB.
Add a non-generated column and use callbacks or triggers instead (or a virtual generated column with MySQL and MariaDB).
🐢 Safe by default available
Adding a check constraint blocks reads and writes in Postgres and blocks writes in MySQL and MariaDB while every row is checked.
Add the check constraint without validating existing rows:
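For example (the table, expression, and constraint name are illustrative):

```ruby
class AddPriceCheckToProducts < ActiveRecord::Migration[8.0]
  def change
    add_check_constraint :products, "price > 0", name: "products_price_check", validate: false
  end
end
```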
Then validate them in a separate migration.
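Continuing the example above:

```ruby
class ValidatePriceCheckOnProducts < ActiveRecord::Migration[8.0]
  def change
    validate_check_constraint :products, name: "products_price_check"
  end
end
```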
With MySQL and MariaDB, let us know if you have a safe way to do this (check constraints can be added with NOT ENFORCED, but enforcing them blocks writes).
Strong Migrations can’t ensure safety for raw SQL statements. Make really sure that what you’re doing is safe, then use:
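For example:

```ruby
class UpdateSomething < ActiveRecord::Migration[8.0]
  def change
    safety_assured { execute "..." } # your SQL here
  end
end
```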
Note: Strong Migrations does not detect dangerous backfills.
Active Record creates a transaction around each migration, and backfilling in the same transaction that alters a table keeps the table locked for the duration of the backfill.
Also, running a single query to update data can cause issues for large tables.
There are three keys to backfilling safely: batching, throttling, and running it outside a transaction. Use the Rails console or a separate migration with disable_ddl_transaction!.
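A sketch, assuming a hypothetical `some_column` on `users` being backfilled with a constant value:

```ruby
class BackfillSomeColumn < ActiveRecord::Migration[8.0]
  disable_ddl_transaction!

  def up
    # batch the updates and throttle between batches
    User.unscoped.in_batches do |relation|
      relation.update_all some_column: "default_value"
      sleep(0.01)
    end
  end
end
```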
Note: If backfilling with a method other than update_all, use User.reset_column_information to ensure the model has up-to-date column information.
🐢 Safe by default available
In Postgres, adding an index non-concurrently blocks writes.
Add indexes concurrently.
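For example (the table and column are illustrative):

```ruby
class AddIndexToUsersEmail < ActiveRecord::Migration[8.0]
  disable_ddl_transaction!

  def change
    add_index :users, :email, algorithm: :concurrently
  end
end
```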
If you forget disable_ddl_transaction!, the migration will fail. Also, note that indexes on new tables (those created in the same migration) don’t require this.
With gindex, you can generate an index migration instantly with:
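For example (check the gindex docs for exact usage):

```sh
rails generate index users email
```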
🐢 Safe by default available
Rails adds an index non-concurrently to references by default, which blocks writes in Postgres.
Make sure the index is added concurrently.
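For example (the table and reference are illustrative):

```ruby
class AddCityToUsers < ActiveRecord::Migration[8.0]
  disable_ddl_transaction!

  def change
    add_reference :users, :city, index: {algorithm: :concurrently}
  end
end
```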
🐢 Safe by default available
In Postgres, adding a foreign key blocks writes on both tables.
Add the foreign key without validating existing rows:
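For example (the table names are illustrative):

```ruby
class AddForeignKeyOnUsersCityId < ActiveRecord::Migration[8.0]
  def change
    add_foreign_key :users, :cities, validate: false
  end
end
```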
Then validate them in a separate migration.
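Continuing the example:

```ruby
class ValidateForeignKeyOnUsersCityId < ActiveRecord::Migration[8.0]
  def change
    validate_foreign_key :users, :cities
  end
end
```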
In Postgres, adding a unique constraint creates a unique index, which blocks reads and writes.
Create a unique index concurrently, then use it for the constraint.
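A sketch, assuming a hypothetical email column (add_unique_constraint requires Rails 7.1+):

```ruby
class AddUniqueConstraintToUsersEmail < ActiveRecord::Migration[8.0]
  disable_ddl_transaction!

  def change
    add_index :users, :email, unique: true, algorithm: :concurrently
    add_unique_constraint :users, using_index: "index_users_on_email"
  end
end
```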
In Postgres, adding an exclusion constraint blocks reads and writes while every row is checked.
Let us know if you have a safe way to do this (exclusion constraints cannot be marked NOT VALID).
In Postgres, there’s no equality operator for the json column type, which can cause errors for existing SELECT DISTINCT queries in your application.
Use jsonb instead.
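For example:

```ruby
class AddPropertiesToUsers < ActiveRecord::Migration[8.0]
  def change
    add_column :users, :properties, :jsonb
  end
end
```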
🐢 Safe by default available
In Postgres, setting NOT NULL on an existing column blocks reads and writes while every row is checked.
Instead, add a check constraint.
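For example (the table, column, and constraint name are illustrative):

```ruby
class AddNotNullCheckToUsers < ActiveRecord::Migration[8.0]
  def change
    add_check_constraint :users, "some_column IS NOT NULL", name: "users_some_column_null", validate: false
  end
end
```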
Then validate it in a separate migration. Once the check constraint is validated, you can safely set NOT NULL on the column and drop the check constraint.
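Continuing the example:

```ruby
class SetSomeColumnNotNullOnUsers < ActiveRecord::Migration[8.0]
  def change
    validate_check_constraint :users, name: "users_some_column_null"

    # with the constraint validated, Postgres 12+ can set NOT NULL without a full table scan
    change_column_null :users, :some_column, false
    remove_check_constraint :users, name: "users_some_column_null"
  end
end
```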
Adding a column with a volatile default value to an existing table causes the entire table to be rewritten. During this time, reads and writes are blocked.
Instead, add the column without a default value, then change the default.
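A sketch, assuming a hypothetical uuid column defaulting to gen_random_uuid() (function defaults via a lambda require a reasonably recent Rails and the Postgres adapter):

```ruby
class AddTokenToUsers < ActiveRecord::Migration[8.0]
  def change
    add_column :users, :token, :uuid
    change_column_default :users, :token, from: nil, to: -> { "gen_random_uuid()" }
  end
end
```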
Then backfill the data.
Renaming a schema that’s in use will cause errors in your application.
A safer approach is to:
- Create a new schema
- Write to both schemas
- Backfill data from the old schema to the new schema
- Move reads from the old schema to the new schema
- Stop writing to the old schema
- Drop the old schema
Rails < 7 enables partial writes by default, which can cause incorrect values to be inserted when changing the default value of a column.
Disable partial writes in config/application.rb. For Rails < 7, use:
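```ruby
config.active_record.partial_writes = false
```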
For Rails 7+, use:
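```ruby
config.active_record.partial_inserts = false
```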
Adding a non-unique index with more than three columns rarely improves performance.
Instead, start an index with columns that narrow down the results the most.
For Postgres, be sure to add them concurrently.
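For example (the columns are illustrative):

```ruby
add_index :orders, [:account_id, :created_at], algorithm: :concurrently
```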
To mark a step in the migration as safe, despite using a method that might otherwise be dangerous, wrap it in a safety_assured block.
Certain methods like execute and change_table cannot be inspected and are prevented from running by default. Make sure what you’re doing is really safe and use this pattern.
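For example:

```ruby
class ChangeSomethingSafely < ActiveRecord::Migration[8.0]
  def change
    safety_assured do
      # an operation you have verified is safe for your schema and traffic
      change_table :users do |t|
        t.string :nickname
      end
    end
  end
end
```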
Make certain operations safe by default. This allows you to write the operation in its usual, simpler form, and the migration will be performed using the safer approach described above.
- adding and removing an index
- adding a foreign key
- adding a check constraint
- setting NOT NULL on an existing column
Add to config/initializers/strong_migrations.rb:
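```ruby
StrongMigrations.safe_by_default = true
```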
Add your own custom checks with:
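A sketch (the condition and message are illustrative):

```ruby
StrongMigrations.add_check do |method, args|
  if method == :add_index && args[0].to_s == "users"
    stop! "No more indexes on the users table"
  end
end
```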
Use the stop! method to stop migrations.
Note: Since remove_column always requires a safety_assured block, it’s not possible to add a custom check for remove_column operations.
Postgres supports removing indexes concurrently, but removing them non-concurrently shouldn’t be an issue for most applications. You can enable this check with:
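```ruby
StrongMigrations.enable_check(:remove_index)
```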
Disable specific checks with:
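```ruby
StrongMigrations.disable_check(:add_index)
```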
Check the source code for the list of keys.
Skip checks and other functionality for specific databases with:
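```ruby
StrongMigrations.skip_database(:catalog) # a database name from config/database.yml (illustrative here)
```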
Note: This does not affect alphabetize_schema.
By default, checks are disabled when migrating down. Enable them with:
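```ruby
StrongMigrations.check_down = true
```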
To customize specific messages, create an initializer with:
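```ruby
StrongMigrations.error_messages[:add_column_default] = "Your custom instructions" # key shown is one example
```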
Check the source code for the list of keys.
It’s extremely important to set a short lock timeout for migrations. This way, if a migration can’t acquire a lock in a timely manner, other statements won’t be stuck behind it. We also recommend setting a long statement timeout so migrations can run for a while.
Create config/initializers/strong_migrations.rb with:
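```ruby
StrongMigrations.lock_timeout = 10.seconds
StrongMigrations.statement_timeout = 1.hour
```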
Or set the timeouts directly on the database user that runs migrations. For Postgres, use:
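For example (the role name is illustrative):

```sql
ALTER ROLE migrations_user SET lock_timeout = '10s';
ALTER ROLE migrations_user SET statement_timeout = '1h';
```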
Note: If you use PgBouncer in transaction mode, you must set timeouts on the database user.
We recommend adding timeouts to config/database.yml to prevent connections from hanging and individual queries from taking up too many resources in controllers, jobs, the Rails console, and other places.
For Postgres:
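A sketch (timeout values are illustrative; tune them for your app):

```yml
production:
  connect_timeout: 5
  variables:
    statement_timeout: 15s
    lock_timeout: 10s
```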
Note: If you use PgBouncer in transaction mode, you must set the statement and lock timeouts on the database user as shown above.
For MySQL:
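```yml
production:
  connect_timeout: 5
  read_timeout: 5
  write_timeout: 5
  variables:
    max_execution_time: 15000 # ms, illustrative
    lock_wait_timeout: 10 # sec, illustrative
```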
For MariaDB:
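```yml
production:
  connect_timeout: 5
  read_timeout: 5
  write_timeout: 5
  variables:
    max_statement_time: 15 # sec, illustrative
    lock_wait_timeout: 10 # sec, illustrative
```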
For HTTP connections, Redis, and other services, check out this guide.
In Postgres, adding an index non-concurrently can leave behind an invalid index if the lock timeout is reached. Running the migration again can result in an error.
To automatically remove the invalid index when the migration runs again, use:
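Something like (check the option name against your installed version):

```ruby
StrongMigrations.remove_invalid_indexes = true
```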
Note: This feature is experimental.
There’s the option to automatically retry statements for migrations when the lock timeout is reached. Here’s how it works:
- If a lock timeout happens outside a transaction, the statement is retried
- If it happens inside the DDL transaction, the entire migration is retried (only applicable to Postgres)
Add to config/initializers/strong_migrations.rb:
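```ruby
StrongMigrations.lock_timeout_retries = 3
```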
Set the delay between retries with:
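```ruby
StrongMigrations.lock_timeout_retry_delay = 10.seconds
```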
To mark migrations as safe that were created before installing this gem, create an initializer with:
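```ruby
StrongMigrations.start_after = 20170101000000
```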
Use the version from your latest migration.
If your development database version is different from production, you can specify the production version so the right checks run in development.
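```ruby
StrongMigrations.target_version = 10 # or "8.0.12", "10.3.2", etc.
```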
The major version works well for Postgres, while the major and minor version is recommended for MySQL and MariaDB.
For safety, this option only affects development and test environments. In other environments, the actual server version is always used.
If your app has multiple databases with different versions, you can use:
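```ruby
StrongMigrations.target_version = {primary: 13, catalog: 15} # keys are database names from config/database.yml (illustrative here)
```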
Analyze tables automatically (to update planner statistics) after an index is added. Create an initializer with:
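```ruby
StrongMigrations.auto_analyze = true
```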
Only dump the schema when adding a new migration. If you use Git, add to config/environments/development.rb:
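One way to do this (a sketch; assumes the git CLI is available in development):

```ruby
config.active_record.dump_schema_after_migration = `git status --porcelain db/migrate`.present?
```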
Columns can flip order in db/schema.rb when you have multiple developers. One way to prevent this is to alphabetize them. Add to config/initializers/strong_migrations.rb:
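```ruby
StrongMigrations.alphabetize_schema = true
```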
We recommend using a separate database user for migrations when possible so you don’t need to grant your app user permission to alter tables.
You probably don’t need this gem for smaller projects, as operations that are unsafe at scale can be perfectly safe on smaller, low-traffic tables.
- PostgreSQL at Scale: Database Schema Changes Without Downtime
- MySQL InnoDB Online DDL Operations
- MariaDB InnoDB Online DDL Overview
Thanks to Bob Remeika and David Waller for the original code and Sean Huber for the bad/good readme format.
View the changelog
Everyone is encouraged to help improve this project. Here are a few ways you can help:
- Report bugs
- Fix bugs and submit pull requests
- Write, clarify, or fix documentation
- Suggest or add new features
To get started with development:
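For example (the GitHub repository URL; set up the test databases per the project docs before running the tests):

```sh
git clone https://github.com/ankane/strong_migrations.git
cd strong_migrations
bundle install
bundle exec rake test
```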


