10 Common Mistakes in Software Specifications
Every failed project I've reviewed had a spec. The specs just didn't say the things that mattered. These ten mistakes keep showing up in the specs I review — mine included. None of them are hard to fix, but all of them are easy to miss when you're writing under deadline pressure.
1. Describing features instead of decisions
A feature description tells the reader what the system will do. A decision tells the reader what the system will do instead of something else. "Users can reset their password via email" describes a feature. "Users reset via email — not SMS, not security questions, not magic links — because email is our only verified contact channel for unauthenticated users" describes a decision.
If your spec is a tour of the feature without any "because," you're writing a brochure, not a spec. The decisions are where ambiguity lives. Surface them.
2. Adjectives where numbers belong
"Fast," "reliable," "reasonable," "acceptable performance": these words mean something different to every reader, every time. The spec should not ship until every adjective describing system behavior has been replaced with a number and a unit.
- "Fast response" → "p95 under 300ms for 10KB payloads, measured at the load balancer"
- "Reasonable retry" → "3 retries with 1s, 2s, 4s backoff"
- "Reliable delivery" → "99.9% delivery within 60s, measured over 30-day windows"
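The "3 retries with 1s, 2s, 4s backoff" criterion above is specific enough to implement directly. As a minimal sketch (the function name and the injectable `sleep` parameter are illustrative, not from the article):

```python
import time

def send_with_retry(send, retries=3, base_delay=1.0, sleep=time.sleep):
    """Retry a failing call with exponential backoff: 1s, 2s, 4s by default.

    `send` is any zero-argument callable that raises on failure; `sleep`
    is injectable so tests don't have to actually wait.
    """
    for attempt in range(retries):
        try:
            return send()
        except Exception:
            sleep(base_delay * 2 ** attempt)  # 1s, then 2s, then 4s
    return send()  # final attempt; a failure here propagates to the caller
```

Note that the spec sentence answers the questions the code forces: how many attempts, what delays, and what happens after the last failure.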
3. Missing non-goals
Every project I've seen ship late had either no non-goals or one-liners that didn't commit to anything. Non-goals prevent scope creep during implementation. If the spec has a goal section but no non-goals section, that's the first thing to fix.
Aim for 3-6 explicit non-goals per mid-sized feature. Each with a reason — deferred, rejected, or out-of-domain.
4. Acceptance criteria that aren't criteria
"The user should be able to see their order history" is a feature request, not an acceptance criterion. QA can't test "should be able to." They need input, trigger, and specific observable output.
Test yourself: rewrite the sentence without the word "should." If what remains is a checkable statement of fact ("Given a signed-in user with past orders, the order history page lists them, newest first"), you have a criterion. If it collapses into a wish, you have more work to do.
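A rewritten criterion maps directly onto a check. This is a self-contained stand-in, not a real API: `ORDERS` and `order_history` are hypothetical names, and a real system would go through an HTTP client, but the shape of input, trigger, and observable output is the same.

```python
# In-memory stand-in for: "Given a signed-in user with past orders,
# opening the order history lists them, newest first."
ORDERS = {42: ["A-100", "A-101"]}  # user_id -> order numbers, oldest first

def order_history(user_id):
    """Trigger: the user opens their order history.

    Observable output: 200 with orders newest-first, or 403 for a user
    the system can't authorize. Both outcomes are stated, so both are testable.
    """
    if user_id not in ORDERS:
        return {"status": 403, "body": []}
    return {"status": 200, "body": list(reversed(ORDERS[user_id]))}
```

QA can run this kind of check without interpreting anyone's intent; "should be able to" offers them nothing to assert on.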
5. Skipping the failure paths
Every happy-path acceptance criterion has at least three failure paths that need writing down: validation failure, authorization failure, and downstream dependency failure. Specs that only cover the happy path are the reason your error handling becomes a series of 500s discovered in production.
For each AC, ask: what does the user see when this fails? If the answer is "I don't know," the spec isn't finished.
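The three failure paths can be written down as concretely as the happy path. A sketch, with illustrative names and messages (the `export_report` endpoint and its scopes are assumptions for the example, not the article's feature):

```python
def export_report(user, report_id, fetch):
    """One AC with its three failure paths spelled out.

    `fetch` stands in for the downstream dependency (database, service).
    Each branch answers: what does the user see when this fails?
    """
    # 1. Validation failure: a field-level message, not a generic error.
    if not str(report_id).isdigit():
        return {"status": 422, "user_sees": "Report ID must be numeric."}
    # 2. Authorization failure: a denial the user can act on.
    if "reports:read" not in user.get("scopes", []):
        return {"status": 403, "user_sees": "You don't have access to reports."}
    # 3. Downstream dependency failure: degrade with a retry hint, not a 500.
    try:
        data = fetch(report_id)
    except Exception:
        return {"status": 503, "user_sees": "Export is unavailable. Try again in a minute."}
    return {"status": 200, "data": data}
```

Four branches, four testable user-visible outcomes; a spec that names all four leaves nothing for the error handler to invent.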
6. Vague scope ownership
"The team will decide" isn't ownership — it's a deferred argument. Every scope question needs a named person who can say yes or no, not a group that will discuss. When the feature expands mid-implementation (and it will), you need to know who has the authority to say "no, not this milestone."
One person. Written in the spec. Even if that person is you.
7. Implementation details dressed as requirements
"The system uses Redis for session storage" is an implementation detail. It does not belong in a spec unless Redis is a user-observable behavior (it isn't). Specs should describe observable behavior: session persists across reloads, session expires after 30 minutes of inactivity, session survives server restart.
Whether that's Redis, Postgres, or an in-memory cache is engineering's call. Putting it in the spec constrains the team without adding clarity for reviewers.
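One way to see the difference: the observable behaviors above can be tested against any backend. This in-memory version is a deliberate stand-in (class and method names are illustrative); Redis or Postgres would have to pass the same checks.

```python
import time

class InMemorySessions:
    """Stand-in backend for the spec's observable session rules.

    The spec constrains behavior ("expires after 30 minutes of inactivity"),
    not storage; any real backend must satisfy the same checks.
    """
    IDLE_TIMEOUT_S = 30 * 60

    def __init__(self, clock=time.monotonic):
        self._clock = clock          # injectable so tests control time
        self._last_seen = {}

    def touch(self, session_id):
        self._last_seen[session_id] = self._clock()

    def is_active(self, session_id):
        last = self._last_seen.get(session_id)
        return last is not None and self._clock() - last < self.IDLE_TIMEOUT_S
```

Swapping the storage engine later changes the constructor, not the spec.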
8. Rollout plan as an afterthought
I see this one almost every week. A 5-page spec with detailed acceptance criteria, then a single line at the bottom: "Rollout: feature flag." That tells nobody anything useful at 2am when something goes wrong.
A rollout section needs named stages with explicit gates, numeric stop-loss thresholds, and a rollback mechanism described by type (flag flip vs. code revert vs. migration reversal). Each of those takes minutes to write and saves hours during incidents.
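A rollout section with those three parts is small enough to express as data. The stage names, percentages, and thresholds below are examples, not recommendations; the point is that every gate and stop-loss is a number someone can check at 2am.

```python
# Hypothetical rollout plan for a flag-gated feature.
ROLLOUT = {
    "stages": [
        {"name": "internal", "traffic_pct": 0,   "gate": "QA signoff on AC list"},
        {"name": "canary",   "traffic_pct": 5,   "gate": "24h, error rate < 0.5%"},
        {"name": "half",     "traffic_pct": 50,  "gate": "48h, p95 latency < 300ms"},
        {"name": "full",     "traffic_pct": 100, "gate": None},
    ],
    "stop_loss": {"error_rate_pct": 1.0, "p95_ms": 500},
    "rollback": "flag flip",  # vs. code revert vs. migration reversal
}

def should_halt(metrics, stop_loss=ROLLOUT["stop_loss"]):
    """Numeric stop-loss check an on-call runbook can reference directly."""
    return (metrics["error_rate_pct"] >= stop_loss["error_rate_pct"]
            or metrics["p95_ms"] >= stop_loss["p95_ms"])
```

Compare this with "Rollout: feature flag": same page space, but now the incident responder knows the threshold, the stage, and the rollback type without waking anyone up.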
9. No concurrency story
If two users can hit the same record, you have a concurrency problem whether the spec acknowledges it or not. Most specs I review skip concurrency entirely. The result is that implementation picks an approach silently — usually last-write-wins with no user notification — and the first real race condition surfaces as a customer complaint.
State the concurrency rule explicitly. "Last-write-wins, no conflict banner" is a valid choice. So is "optimistic concurrency with 409 on conflict." What's not valid is leaving it for implementation to invent.
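The "optimistic concurrency with 409 on conflict" option reduces to a version check. A minimal sketch (the `Record` shape and exception name are illustrative; a real API layer would translate `Conflict` into an HTTP 409):

```python
class Conflict(Exception):
    """Maps to HTTP 409 at the API layer."""

class Record:
    def __init__(self, value):
        self.value = value
        self.version = 1

def update(record, new_value, expected_version):
    """Optimistic concurrency: the client sends back the version it read.

    A stale version is rejected loudly instead of silently losing the
    other writer's change (last-write-wins).
    """
    if record.version != expected_version:
        raise Conflict(f"expected v{expected_version}, found v{record.version}")
    record.value = new_value
    record.version += 1
    return record.version
```

Either rule fits in two spec sentences; the six lines above exist only because someone wrote those sentences down first.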
10. Specs that pretend the past doesn't exist
Specs for new features often describe the feature as if it's shipping into a vacuum. In reality, most features interact with existing flows, existing data, existing user expectations. The spec needs a section — even a paragraph — that names what's changing in relation to what already exists.
- What existing behaviors is this feature relying on?
- What existing data does it read or write?
- What existing flows might be affected by this change?
- Are any existing behaviors being changed, even subtly?
Features that "just add" something still need to answer these questions. The "just" is usually where production surprises hide.
The one thing these all share
Notice the pattern: every mistake is a place where the spec left a decision for later. Not "wrong decision" — missing decision. The spec-first discipline isn't about being right on the first try. It's about not punting decisions to implementation, where they're expensive to change.
When I review my own specs, I scan for exactly these ten things. Not because they're the only mistakes possible, but because they're the ones I personally keep making. Your list may be different. Finding your own ten is the real skill.
Review drill
Use this list to audit one draft spec before implementation starts. The useful question is not whether the document looks complete; it is which decisions a developer, QA reviewer, or release owner would still have to invent.
- Scope: replace vague verbs such as "support" or "handle" with the exact workflow, boundary, and excluded case.
- Evidence: add an observable check for each acceptance criterion: test case, example payload, screenshot, log line, or manual verification note.
- Failure paths: write the error, retry, concurrency, and compatibility behavior before code review becomes the first place those choices appear.
Add a short "decisions closed in review" note to the spec. It should name the ambiguity that was fixed and any risk the team consciously accepted.
Example: a phrase like "admins can manage users" hides at least four decisions: which roles count as admin, which user states are editable, what audit entry is written, and what happens when the target user is locked.
Worked Review Example
When a spec says "users can invite teammates," slow down. The review should ask who can invite, whether guests count, how duplicate invites behave, when links expire, whether disabled domains are blocked, and what audit entry is written. None of those questions are edge trivia. They are the places where support tickets and security exceptions appear after launch.
Spec Writing Block to Copy
Use this when a ticket sounds clear but still needs acceptance language. It forces the author to name the actor, trigger, result, and evidence.
Spec writing review block: 10 Common Mistakes in Software Specifications

Decision to make:
- Avoid ten specification mistakes that hide decisions, blur acceptance criteria, skip failure paths, and push scope arguments into implementation.

Owner check:
- Product owner:
- Engineering owner:
- QA or operations reviewer:

Scope boundary:
- In scope:
- Out of scope:
- Assumption that still needs approval:

Acceptance evidence:
- Test or fixture:
- Log, metric, or screenshot:
- Manual review step:

Writing boundary: avoid vague verbs; every criterion needs a visible pass or fail signal.

Reviewer prompt:
- What would still be ambiguous to someone who missed the planning meeting?
- What evidence would make this safe enough to ship?
Editorial Review Note
Reviewed Apr 28, 2026. This update added a reusable artifact, checked the article against the related topic hub, and tightened the next-step links so the page works as a practical reference rather than a standalone essay.
Reviewer pass: one bad paragraph rewritten
This is the kind of rewrite that makes a spec feel less like planning theater and more like an engineering artifact.
Before: "The new export flow should be fast, reliable, and easy to use. Users should be notified if something goes wrong."

After: "The CSV export starts within 2 seconds for reports under 50k rows. Reports over 50k rows move to an async job and show a job_id. If the job fails, the user sees the failed state, retry action, and last_error_code. The release is blocked unless QA verifies timeout, retry, duplicate request, and permission-denied cases."
The second version names numbers, states, errors, and release evidence. It is still product-friendly, but it no longer hides decisions inside adjectives.