DB Spec Template
Use this schema template to reduce migration risk and make data changes explicit across the team. It is optimized for migration safety, compatibility windows, and rollback clarity.
```markdown
# DB Spec Template

## Table
- Name:
- Domain / owner:

## Columns
- id BIGINT PK
- field_name: type / nullable / default / notes

## Constraints
- PRIMARY KEY (...)
- UNIQUE (...)
- FOREIGN KEY (...)

## Indexes
- idx_xxx (col1, col2)
- expected query pattern:

## Migration Plan
- deploy steps:
- backfill plan:
- rollback plan:

## Compatibility
- read/write compatibility window:
- old clients impact:

## Test Checklist
- [ ] migration up/down
- [ ] constraint behavior
- [ ] index performance
- [ ] data integrity checks
```
Template usage path
- Copy the migration template before writing SQL or ORM migrations.
- Fill in row counts, lock expectations, the compatibility window, rollback triggers, and verification SQL (a sketch of verification queries follows this list).
- Ask the database reviewer to check query plans and migration order.
- Record actual runtime, lock behavior, and validation results after rollout.
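For the verification SQL, something like the queries below is usually enough to attach to the spec. This is only a sketch: the table and column names (payments, refund_retry_state, updated_at) are hypothetical placeholders, not part of the template.

```sql
-- Hypothetical verification queries for a column backfill; all names are placeholders.

-- 1. Backfill completeness: how many rows are still unpopulated?
SELECT count(*) AS remaining_nulls
FROM payments
WHERE refund_retry_state IS NULL;

-- 2. Value distribution sanity check: do the backfilled values look plausible?
SELECT refund_retry_state, count(*) AS n
FROM payments
GROUP BY refund_retry_state
ORDER BY n DESC;

-- 3. Spot-check recently written rows to confirm new writes populate the column.
SELECT id, refund_retry_state, updated_at
FROM payments
ORDER BY updated_at DESC
LIMIT 50;
```

Recording the outputs next to the actual runtime and lock behavior keeps the rollout evidence in one place.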
What a safe migration spec looks like
A template becomes useful only after it carries a real decision, owner, and evidence. Use the blank template above for structure, then aim for this level of specificity before implementation starts.
```
Migration: add refund_retry_state
Rows: 18M
Plan:
- add nullable column
- backfill 5k rows per batch
- monitor lock waits and write latency
- enforce NOT NULL after 24h of clean reads
```
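Translated into SQL, that plan might look like the staged PostgreSQL sketch below. The table name (payments), the backfill value ('none'), and the bigint id key are assumptions for illustration, not part of the plan above.

```sql
-- Stage 1: add the column as nullable; adding a column without a default is a
-- metadata-only change, so the ACCESS EXCLUSIVE lock is held only briefly.
ALTER TABLE payments ADD COLUMN refund_retry_state text;

-- Stage 2: backfill in batches of 5,000 keyed on the primary key so each UPDATE
-- holds row locks briefly; run from a script, pausing between batches while
-- watching lock waits and write latency. Repeat until 0 rows are updated.
UPDATE payments
SET refund_retry_state = 'none'
WHERE id IN (
    SELECT id
    FROM payments
    WHERE refund_retry_state IS NULL
    ORDER BY id
    LIMIT 5000
);

-- Stage 3: after 24h of clean reads and zero remaining NULLs, enforce NOT NULL.
-- On newer PostgreSQL versions, a validated CHECK (refund_retry_state IS NOT NULL)
-- constraint added beforehand lets this step skip its full-table scan.
ALTER TABLE payments ALTER COLUMN refund_retry_state SET NOT NULL;
```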
If the copied template still has empty owner, evidence, or rollback fields, keep it in review.
When a DB spec is mandatory
- Any schema change touching primary business tables.
- Migrations with backfill, data rewrite, or lock risk.
- Index changes that affect high-traffic query paths.
- Changes requiring compatibility with old clients or jobs.
How to avoid migration incidents
- Declare migration stages: deploy, backfill, verify, cutover, cleanup.
- Estimate affected rows and expected run time for each stage.
- Specify rollback trigger conditions and the exact rollback commands, as sketched below.
- Document read/write compatibility window for old and new code.
Small operational details in the spec often prevent large production outages.
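Written directly into the spec, the stages and rollback details can be as lightweight as the sketch below. Every object name, row count, and threshold here is hypothetical.

```sql
-- Deploy: additive schema change only, no data movement.
ALTER TABLE orders ADD COLUMN fulfillment_state text;
-- Rollback (safe until backfill starts):
--   ALTER TABLE orders DROP COLUMN fulfillment_state;

-- Backfill: batched updates; ~40M rows, estimated 3h at 5k rows per batch.
-- Rollback trigger: lock waits > 5s sustained for 2 minutes, or replica lag > 30s.
-- Rollback action: stop the backfill job; the column stays nullable and unused.

-- Verify: run the verification SQL and compare counts before cutover.

-- Cutover: deploy application code that reads the new column.
-- Rollback: redeploy the previous application version; old code ignores the column.

-- Cleanup: drop the old column or index only after cutover has been stable.
```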
High-risk mistakes
- Adding non-null columns without default/backfill strategy.
- Dropping indexes before confirming replacement query plans (the sketch below shows the safer replace-then-drop order).
- Running long blocking DDL in peak traffic windows.
- Assuming rollback is possible without validating down migration.
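For the index case specifically, a common PostgreSQL pattern is to build the replacement without blocking writes and confirm the plan before dropping anything. The index and query here are hypothetical.

```sql
-- Build the replacement index without blocking concurrent writes.
CREATE INDEX CONCURRENTLY idx_orders_customer_created
    ON orders (customer_id, created_at);

-- Confirm the planner uses the new index for the hot query before removing the old one.
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, total
FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC
LIMIT 20;

-- Only after the new plan is confirmed:
DROP INDEX CONCURRENTLY idx_orders_customer;
```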
Operational review checklist
- Migration tested on realistic data volume.
- Constraint/index behavior validated with representative queries.
- Monitoring and alerts in place during the rollout window (a lock-wait query sketch follows this checklist).
- Rollback runbook tested and approved by owner.
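One check worth running repeatedly during the rollout window on PostgreSQL 9.6+ is a query for sessions stuck waiting on locks; a minimal sketch:

```sql
-- Sessions currently waiting on a lock, longest waits first.
SELECT pid,
       now() - query_start AS waiting_for,
       wait_event_type,
       wait_event,
       left(query, 80) AS query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock'
ORDER BY waiting_for DESC;
```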
Related guides: edge-case checklist and spec review checklist.
Weak vs strong migration note
Weak migration
"Add status to subscriptions, backfill old rows, then deploy." This omits nullable phases, default behavior, deployment order, lock risk, and what rollback means after new writes begin.
Strong migration
Add nullable status, deploy readers that fall back to the billing state, backfill 5,000 rows per batch, monitor write latency and lock waits, then enforce NOT NULL after 24 hours of zero null reads.
The strong version gives reviewers a safe transition with observable checkpoints.
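The "readers fall back to the billing state" step can live at the query layer while status is still partially backfilled. This is only an illustrative sketch; the subscriptions columns and the billing_state-to-status mapping are invented.

```sql
-- Readers tolerate a NULL status during the transition by deriving a value
-- from the existing billing_state column. The mapping shown is hypothetical.
SELECT id,
       COALESCE(
           status,
           CASE
               WHEN billing_state = 'past_due' THEN 'delinquent'
               WHEN billing_state = 'canceled' THEN 'inactive'
               ELSE 'active'
           END
       ) AS effective_status
FROM subscriptions;
```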
Review roles for database changes
- Application owner: confirms old and new app versions can read and write safely during the compatibility window.
- Database reviewer: checks lock behavior, query plans, index order, constraint timing, and data volume assumptions.
- QA or data owner: verifies data integrity checks, backfill sampling, and edge cases such as duplicates and null values.
- On-call owner: confirms alerts, rollback triggers, communication path, and who can pause or abort the migration.
Using DB specs with AI coding tools
AI-generated migrations are useful for draft SQL, but they are risky when the prompt omits production scale. Include row counts, database version, lock constraints, read/write compatibility, rollback limits, and the exact data state that must remain true after the migration.
Ask the assistant to explain the migration in stages and list failure modes. If it cannot describe lock behavior, batch strategy, and rollback safety, the generated migration is not ready for review.
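A hedged example of the context worth including in such a prompt; every number, version, and constraint below is a placeholder to be replaced with real values.

```
Target: PostgreSQL 14, one primary and two read replicas
Table: orders, ~60M rows, heavy write traffic 09:00-18:00 UTC
Change: add fulfillment_state column, eventually NOT NULL
Lock constraint: no ACCESS EXCLUSIVE lock held longer than 2 seconds
Compatibility: the previous app version keeps writing for at least one release
Rollback limit: the column can be dropped any time before backfill completes
Invariant: order totals and row counts must be unchanged after the migration
```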
Evidence to attach before approval
- Query evidence: include the current query plan and the expected plan after any new index or constraint.
- Scale evidence: include estimated row counts, batch size, runtime expectation, and whether the change was tested on production-like data (a sizing query sketch follows below).
- Compatibility evidence: show how old and new application versions behave during the migration window.
- Rollback evidence: describe the last safe rollback point and what data may need manual repair after that point.
This evidence keeps the DB spec anchored in production behavior rather than schema intent alone.
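For the scale evidence, PostgreSQL's statistics views can supply the estimated row count and on-disk size to paste into the spec; the table name here is a placeholder.

```sql
-- Estimated live rows and total on-disk size (table + indexes + TOAST).
SELECT relname,
       n_live_tup AS estimated_rows,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
WHERE relname = 'orders';
```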
When to split the database spec
If a change combines schema design, data backfill, application rollout, and cleanup in one large document, split it into phases. Keep the logical schema in one section, the migration plan in another, and the verification plan in a third. That separation helps reviewers approve the design without accidentally approving an unsafe rollout sequence.
For irreversible changes, add a "stop point" before deletion or constraint tightening. The team should know which step can still be paused safely and which step requires a formal release decision.
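Written into the rollout order, a stop point can be as simple as a marked line in the phased plan: everything above it can still be paused or rolled back, everything below it needs a formal release decision. The names in this sketch are hypothetical.

```sql
-- Phase 1: additive and reversible.
ALTER TABLE invoices ADD COLUMN tax_breakdown jsonb;

-- Phase 2: backfill and verification (pausable at any batch boundary).

-- ==== STOP POINT: review verification evidence before continuing ====

-- Phase 3: irreversible tightening and cleanup.
ALTER TABLE invoices ALTER COLUMN tax_breakdown SET NOT NULL;
ALTER TABLE invoices DROP COLUMN legacy_tax_total;
```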
DB template FAQ
Do I need a DB spec for every migration?
No. Small, reversible migrations on low-risk tables can use a short checklist. Use the full template when the change touches customer data, high-traffic tables, cross-service contracts, backfills, constraints, or rollback complexity.
What if rollback is not fully possible?
Say that explicitly. Some data migrations cannot be undone perfectly after new writes begin. In that case, document the mitigation plan, pause condition, customer impact, and the safest path forward.
Real-world example: adding a JSONB preferences column
```markdown
## Table: users
## Change: Add notification_preferences column

### Schema
| Column                    | Type  | Nullable | Default |
|---------------------------|-------|----------|---------|
| notification_preferences  | JSONB | yes      | NULL    |

### Constraints
- No NOT NULL (existing rows have no prefs → null = "use defaults")
- CHECK: jsonb_typeof(notification_preferences) = 'object' OR notification_preferences IS NULL

### Indexes
- GIN index on notification_preferences for @> queries
- Estimated table size: 2.4M rows, ~180 bytes per JSONB value

### Migration Plan
- UP: ALTER TABLE users ADD COLUMN notification_preferences JSONB;
- This is a metadata-only change on Postgres 11+ (no rewrite)
- Estimated lock duration: < 1 second
- No backfill needed (null = use defaults in application code)

### Rollback
- DOWN: ALTER TABLE users DROP COLUMN notification_preferences;
- Application code must handle missing column gracefully
- Rollback window: 72 hours before dependent API ships

### Monitoring
- Alert if column add takes > 5 seconds (indicates lock contention)
- Watch for query plan changes on users table after GIN index creation
```
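The DDL implied by this spec might look like the following sketch. The constraint and index names are assumptions, and the NOT VALID / VALIDATE split is simply the usual way to avoid a long scan under an exclusive lock.

```sql
-- Metadata-only column add (nullable, no default): brief ACCESS EXCLUSIVE lock only.
ALTER TABLE users ADD COLUMN notification_preferences jsonb;

-- Shape check: either NULL ("use defaults") or a JSON object. Added NOT VALID so
-- the ALTER does not scan the table under an exclusive lock, then validated.
ALTER TABLE users ADD CONSTRAINT notification_preferences_is_object
    CHECK (notification_preferences IS NULL
           OR jsonb_typeof(notification_preferences) = 'object') NOT VALID;
ALTER TABLE users VALIDATE CONSTRAINT notification_preferences_is_object;

-- GIN index for containment (@>) lookups, built without blocking writes.
-- Note: CREATE INDEX CONCURRENTLY cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY idx_users_notification_preferences
    ON users USING gin (notification_preferences);

-- Example of a query the GIN index is expected to serve.
SELECT id
FROM users
WHERE notification_preferences @> '{"email_digest": true}';
```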
Editorial note
This template covers database specification for spec-first engineering teams. The JSONB migration example is an illustrative scenario based on PostgreSQL.
- Author: Daniel Marsh
Tip: include SQL examples, estimated row impact, and lock expectations for risky migrations. Last updated: May 6, 2026.