Spec Review Checklist Before Coding
Most spec reviews are theater. Three people skim the doc, someone writes "LGTM," and everyone moves on with the same ambiguities intact. A real spec review catches the things that'll cost you a week later. Here's the checklist I run through before letting any spec past review.
What a spec review is actually for
Spec review isn't a thumbs-up ceremony. It's the cheapest moment to find what's missing. Every ambiguity that survives into implementation costs roughly 10× more to resolve. Every one that survives into QA costs 50×. The review is where you spend five minutes now to save five hours later.
If a reviewer walks away thinking "that was a good doc," the review probably failed. A good review produces a list of things to fix, not applause.
The checklist, grouped by risk
I group the checks by where ambiguity hurts most. Scope and decision ownership first — those surface the biggest misalignments. Then acceptance and edge cases, where implementation invention happens. Then rollout and rollback, which determine whether a bad deploy can be stopped at 2am.
Scope and decisions
- Is the goal stated as a user or system outcome, not a feature list?
- Are at least two or three explicit non-goals written down?
- Is there a named owner for scope changes — a person, not a team?
- Does the spec call out dependencies on teams or systems outside this review?
Acceptance criteria
- Can QA write test cases from the AC alone, without asking the author anything?
- Are AC written as Given/When/Then (or another explicit input-trigger-result format)?
- Does each AC include both the happy path and at least one failure path?
- Are performance expectations stated as numbers (latency, throughput, budget), not adjectives like "fast"?
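To make "QA can write test cases from the AC alone" concrete, here is a sketch of a Given/When/Then criterion written directly as an executable test. The `SessionService` and its TTL behavior are invented for illustration, not taken from any real spec:

```python
# Hypothetical system under test: a session service with a 900s TTL.
class SessionService:
    def __init__(self, session_ttl_seconds=900):
        self.session_ttl_seconds = session_ttl_seconds

    def refresh(self, session_age_seconds):
        # Renew the session only if it has not already expired.
        if session_age_seconds >= self.session_ttl_seconds:
            return {"status": "expired", "renewed": False}
        return {"status": "active", "renewed": True}

def test_refresh_happy_path():
    # Given an active session (age 60s, TTL 900s)
    svc = SessionService(session_ttl_seconds=900)
    # When the client refreshes it
    result = svc.refresh(session_age_seconds=60)
    # Then the session is renewed
    assert result == {"status": "active", "renewed": True}

def test_refresh_failure_path():
    # Given an expired session
    svc = SessionService(session_ttl_seconds=900)
    # When the client refreshes it
    result = svc.refresh(session_age_seconds=1200)
    # Then the refresh is rejected, not silently renewed
    assert result == {"status": "expired", "renewed": False}
```

Note that each criterion names an actor, a trigger, a result, and a failure path; that is exactly what lets QA proceed without pinging the author.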
Edge cases and failure modes
- What happens on duplicate requests, network retries, or partial failures?
- What happens when the user has stale data or an expired session?
- What happens at input boundaries — empty, maximum size, unicode, negative numbers?
- What happens on concurrent writes from multiple clients?
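For the concurrent-writes question, a spec should name a strategy, not leave it to implementation. A minimal sketch of one common answer, optimistic locking with a version number (all class and field names here are hypothetical):

```python
class ConflictError(Exception):
    """Raised when a write is based on stale data."""

class Document:
    def __init__(self):
        self.version = 0
        self.body = ""

    def update(self, new_body, expected_version):
        # Reject a stale write instead of silently overwriting
        # the other client's change.
        if expected_version != self.version:
            raise ConflictError(
                f"stale write: expected v{expected_version}, at v{self.version}"
            )
        self.body = new_body
        self.version += 1

doc = Document()
doc.update("first draft", expected_version=0)      # client A wins
try:
    doc.update("rival edit", expected_version=0)   # client B is stale
except ConflictError as err:
    print(err)  # the spec should say what client B's user sees here
```

Whether the right answer is locking, last-write-wins, or merging is a product decision; the point of the checklist item is that someone decides it in the spec.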
Rollout and rollback
- Is there a staging plan with explicit percentages or user cohorts, not "gradually"?
- Are rollback thresholds expressed as numeric alerts, not "if it looks bad"?
- Does rollback mean a code revert, a config flag, a job pause, or a data repair? These differ by an order of magnitude in response time.
- If this is a database change, is the migration forward-compatible with old code still running?
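"Rollback thresholds expressed as numeric alerts" can be as small as a table of metric limits. A sketch, with invented metric names and limits that a real spec would pin to your actual dashboards:

```python
# Hypothetical rollback thresholds for a deploy; the spec should name
# the owner who gets paged when any of these are breached.
ROLLBACK_THRESHOLDS = {
    "checkout_error_rate": 0.02,    # roll back above 2% errors
    "p99_latency_ms": 1500,         # roll back above 1.5s p99
}

def breached_thresholds(metrics):
    """Return the breached threshold names; empty means the deploy is healthy."""
    return [
        name for name, limit in ROLLBACK_THRESHOLDS.items()
        if metrics.get(name, 0) > limit
    ]

breaches = breached_thresholds(
    {"checkout_error_rate": 0.035, "p99_latency_ms": 900}
)
# breaches == ["checkout_error_rate"]: page the owner and flip the flag
```

The value of writing it this way is that "if it looks bad" becomes a check anyone on call can run at 2am.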
How to actually run the review
Run it asynchronously. Synchronous spec reviews tend to devolve into the author explaining things, which means the document isn't self-contained. If the author has to explain it, the spec has already failed one of its own goals.
The mechanical process I use:
- Author posts the spec 48 hours before they want signoff. Anything less and reviewers feel rushed and nod it through.
- Reviewers leave comments in the document, one concern per thread. Use the checklist above — copy-paste the question if it helps.
- Author responds to each thread in writing. "Fixed" isn't enough; say what changed and in which section.
- Second pass. This is where lazy first-reviews get caught. If the second pass has zero new concerns, the first review was probably too shallow.
Red flags that mean the spec isn't ready
Some patterns I reject on sight now:
- The word "reasonable." As in "reasonable error handling" or "reasonable performance." Define it or remove it.
- Passive voice in acceptance criteria. "The system will be notified" — by what, how, within what deadline? Concrete subject and timing, always.
- Features described without data models. If the spec doesn't show the schema or at least the key fields, implementation will invent them.
- No mention of an error state. Every happy-path spec has at least three error states. If they're not written down, they'll be discovered in production.
- Cross-team dependencies mentioned but unowned. "The payments team will expose an API" — who on that team agreed, and when?
What not to do in the review itself
- Don't debate implementation choices. If the AC is clear, let engineering pick the approach. Reviewing implementation in the spec is scope creep for the review.
- Don't approve conditionally. "LGTM if you add X" means you're approving work not yet done. Return it for revision and re-review.
- Don't rubber-stamp because the author is senior. Senior authors still miss things; that's why we have reviews.
Send back vs. escalate
Send back when the spec has gaps or ambiguities the author can fix with more thought. Escalate when there's a scope or priority disagreement that needs a manager-level decision. If you can't tell which one it is, it's probably escalation dressed up as ambiguity.
How you know the review actually worked
The test isn't whether the spec looks good. It's whether QA can start writing tests without Slacking the author, and whether implementation proceeds for more than two days without someone asking the author to clarify something in the document.
The second sign shows up in retrospectives: do incidents and rework trace back to decisions that weren't in the spec? If you do reviews well, that trend line goes down.
Field note: the review comment that saved the release
A useful spec review comment is specific enough that the author can fix it without another meeting. The best one I saw on a checkout change was not "what about retries?" It was this:
Blocker: The spec defines POST /checkout but does not define idempotency. Please add:
- idempotency key source
- duplicate request behavior
- timeout retry behavior
- what support sees when payment capture is pending
That single comment changed the implementation plan and gave QA four new tests. That is the standard I want from a review checklist.
Review drill
Run the checklist as a blocking review, not as a courtesy scan. Every item should end as pass, blocker, accepted risk, or intentionally not applicable.
- Blockers: stop when acceptance criteria, failure handling, data ownership, rollout, or compatibility decisions are missing.
- Accepted risk: record who accepted the risk and what evidence would force the team to revisit it.
- Follow-up: turn preferences into backlog items so they do not masquerade as launch requirements.
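The four outcomes above are easier to audit when they are recorded explicitly rather than implied in comment threads. A sketch of one possible structure (field names are invented):

```python
from enum import Enum
from dataclasses import dataclass

class Outcome(Enum):
    PASS = "pass"
    BLOCKER = "blocker"
    ACCEPTED_RISK = "accepted risk"
    NOT_APPLICABLE = "intentionally not applicable"

@dataclass
class ChecklistItem:
    question: str
    outcome: Outcome
    owner: str = ""            # required for accepted risks
    revisit_trigger: str = ""  # evidence that would reopen the decision

items = [
    ChecklistItem("Rollback thresholds numeric?", Outcome.BLOCKER),
    ChecklistItem("Duplicate request behavior defined?", Outcome.PASS),
    ChecklistItem("Legacy import path untested", Outcome.ACCEPTED_RISK,
                  owner="jdoe", revisit_trigger="any related support ticket"),
]

blockers = [i.question for i in items if i.outcome is Outcome.BLOCKER]
# Any blocker stops implementation; accepted risks carry an owner.
```

The payoff is the last line of the drill: when someone asks later why a risk was accepted, the record names who accepted it and what would force a revisit.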
The checklist is done only when the spec names what changed because of review. A silent approval does not help the next person.
Example: if rollback is missing, the review comment should not say "consider rollback." It should say "blocker: add the flag, migration revert, owner, and expected recovery time before implementation starts."
Worked Review Example
For a subscription upgrade ticket, the checklist should catch more than missing tests. It should ask what happens to the current billing period, whether proration appears on the invoice, what event is emitted, which support note explains the charge, and whether downgrade rollback is possible. If any answer is unknown, implementation is about to encode a product decision by accident.
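The questions in that worked example translate almost directly into a data model, which is a good test of whether the spec has answered them. A hypothetical sketch; every field name here is invented to show the shape, not prescribe it:

```python
from dataclasses import dataclass

@dataclass
class SubscriptionUpgrade:
    user_id: str
    from_plan: str
    to_plan: str
    prorated_amount_cents: int        # spec must say how this is computed
    applies_this_period: bool         # or at the next billing period?
    invoice_line_label: str           # what the customer and support see
    emitted_event: str = "subscription.upgraded"
    downgrade_reversible: bool = True # is rollback to from_plan possible?

upgrade = SubscriptionUpgrade(
    user_id="u-42", from_plan="basic", to_plan="pro",
    prorated_amount_cents=1250, applies_this_period=True,
    invoice_line_label="Pro upgrade (prorated)",
)
```

If the spec cannot fill in a field like `prorated_amount_cents` or `downgrade_reversible`, that is the product decision about to be encoded by accident.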
Spec Writing Block to Copy
Use this when a ticket sounds clear but still needs acceptance language. It forces the author to name the actor, trigger, result, and evidence.
Spec writing review block: Spec Review Checklist Before Coding

Decision to make:
- Use this pre-coding review checklist to separate blockers from preferences, record accepted risks, and attach test evidence before implementation.

Owner check:
- Product owner:
- Engineering owner:
- QA or operations reviewer:

Scope boundary:
- In scope:
- Out of scope:
- Assumption that still needs approval:

Acceptance evidence:
- Test or fixture:
- Log, metric, or screenshot:
- Manual review step:

Writing boundary: avoid vague verbs; every criterion needs a visible pass or fail signal.

Reviewer prompt:
- What would still be ambiguous to someone who missed the planning meeting?
- What evidence would make this safe enough to ship?
Flagship Use Path
This is one of the primary Spec Coding references for the pre-coding review gate. Use it with a real ticket, pull request, or release review instead of treating it as background reading.
- Start here when: implementation is about to start and the team needs a blocker checklist.
- Copy this: the review gate and accepted-risk record.
- Evidence to attach: explicit approval for scope, rollback, testing, and unresolved assumptions.
- Pair it with: Spec Review Checklist and Downloads.
Flagship review path:
- Open this page during planning or review.
- Copy the relevant artifact into the work item.
- Replace example values with your system, owner, and failure mode.
- Block implementation if the evidence line is still blank.
Second-pass reviewer note: a checklist should block something
This article leans hard on rejection criteria on purpose: a checklist that cannot stop implementation is just reading material. The goal is to help a reviewer say no without sounding subjective.
Blocking language to copy:
- Blocked until the rollback trigger names a metric and owner.
- Blocked until AC-3 has a fixture or manual verification step.
- Blocked until the API consumer list is complete.
- Blocked until the non-goal excludes migration of existing records.
Editorial Review Note
Reviewed Apr 29, 2026. This update added a reusable artifact, checked the article against the related topic hub, and tightened the next-step links so the page works as a practical reference rather than a standalone essay.
Topic Path
This article belongs to the Spec-First Development track. Start with the hub, then apply the checklist and template to a real project.