Learn the method
Understand what belongs in a spec, what waits for implementation, and how to keep decisions testable. Use this page when you are new to Spec Coding or when a teammate needs the shortest route from an unclear request to a reviewable spec.
Open the spec-first hub
Use this path for schemas, error taxonomy, compatibility, versioning, webhooks, SDKs, and client safety.
Open the API contracts hub

Turn vague success statements into Given/When/Then criteria, failure paths, and release evidence.
Open acceptance criteria

Connect prompts, risk registers, allowed files, test evidence, and review gates before generated code merges.
Open AI governance

- Write the behavior the team is committing to, not the task list or UI preference.
- Define what this release will not change so implementation cannot expand silently.
- Use examples, failure paths, fixtures, logs, screenshots, or metrics reviewers can inspect.
- Pick a feature, API, or DB template. Use a generator only to draft the first structured version.
- Do not treat approval as proof. Link tests, logs, rollout gates, or manual checks to the spec.
A one-page structure for goals, non-goals, acceptance criteria, edge cases, and rollback notes.
Copy template

Turn rough notes into editable Markdown, then tighten the evidence and owner fields by hand.
Open generator

Check scope, dependencies, rollout, rollback, and test evidence before implementation starts.
Use checklist

Write a spec when a change can fail in more than one reasonable way. That usually means the work touches customer data, money, permissions, public API behavior, migrations, background jobs, release flags, or AI-generated code. A short copy change can use a checklist. A refund workflow, schema migration, partner endpoint, notification preference model, or queue consumer deserves a written decision path before implementation starts.
The fastest test is simple: if a reviewer could approve the pull request while still disagreeing with the intended behavior, write the spec first. The spec does not need to predict the implementation. It should name the behavior, the boundaries, the owner, and the evidence that will prove the change is safe enough to ship.
A spec is reviewed from four angles before implementation:
- Product review confirms the goal, non-goals, user-facing behavior, and which trade-offs should be rejected in this release.
- Engineering review checks API, data, permissions, compatibility, rollout order, rollback conditions, and implementation constraints.
- QA and support review turns acceptance criteria into test cases, fixtures, manual checks, and known failure paths that support teams can recognize.
- AI code review checks that generated code stayed inside allowed files, followed non-goals, and produced the evidence requested in the spec.
A refund workflow is a good first practice case because the failure modes are concrete. A vague ticket says, "Allow users to refund orders." A reviewable spec decides the refund window, duplicate request behavior, provider timeout state, support override, event emission, audit log, and the rollout signal that should stop the release.
The spec-first version stays compact. It might say: refunds are allowed for captured charges within 90 days; repeated requests with the same idempotency key return the same refund id; provider timeouts move the refund to pending confirmation; and rollout stops if duplicate refund attempts exceed the threshold for 15 minutes. That is enough for engineering, QA, support, and AI coding tools to work from the same boundary.
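The rollout signal can be made mechanical. Below is a minimal sketch of such a guard, assuming a sliding 15-minute window; the class name, the default threshold, and the metric plumbing are all illustrative, since the spec names the signal but not its implementation.

```python
import time
from collections import deque


class DuplicateRefundGuard:
    """Halt signal: too many duplicate refund attempts inside one window.

    Illustrative sketch of the spec's rollout rule ("rollout stops if
    duplicate refund attempts exceed the threshold for 15 minutes").
    """

    def __init__(self, threshold: int = 25, window_seconds: int = 15 * 60):
        self.threshold = threshold   # placeholder; the spec only says "the threshold"
        self.window_seconds = window_seconds
        self._attempts: deque[float] = deque()

    def record_duplicate_attempt(self, now: float | None = None) -> None:
        self._attempts.append(time.time() if now is None else now)

    def should_halt_rollout(self, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        # Forget attempts older than the window, then compare the rest
        # against the threshold.
        while self._attempts and now - self._attempts[0] > self.window_seconds:
            self._attempts.popleft()
        return len(self._attempts) > self.threshold
```

A deployment job could record each duplicate attempt it observes and stop the release once should_halt_rollout() returns True. The acceptance slice below states the same refund boundary in Given/When/Then form.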
```markdown
# Refund workflow acceptance slice

Given a captured charge less than 90 days old
When a refund request arrives with a new idempotency key
Then create one refund record, emit one refund_requested event, and write an audit entry with actor, charge_id, refund_id, and reason.

Given the same request is replayed with the same idempotency key
When the service receives it again
Then return the existing refund_id and do not emit another event.
```
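That slice maps almost one-to-one onto test cases. Here is a hedged pytest sketch with a minimal in-memory stand-in for the real service; every name in it (RefundService, refund, the event and audit lists) is hypothetical, introduced only so the two scenarios can run end to end.

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class RefundService:
    """In-memory stand-in for the real refund service (hypothetical API)."""
    refunds: dict = field(default_factory=dict)    # idempotency key -> refund_id
    events: list = field(default_factory=list)     # emitted event names
    audit_log: list = field(default_factory=list)  # written audit entries

    def refund(self, charge_id: str, idempotency_key: str,
               actor: str, reason: str, charge_age_days: int) -> str:
        if charge_age_days >= 90:
            raise ValueError("refund window exceeded")
        if idempotency_key in self.refunds:
            # Replay: return the existing refund_id, emit nothing new.
            return self.refunds[idempotency_key]
        refund_id = str(uuid.uuid4())
        self.refunds[idempotency_key] = refund_id
        self.events.append("refund_requested")
        self.audit_log.append({"actor": actor, "charge_id": charge_id,
                               "refund_id": refund_id, "reason": reason})
        return refund_id


def test_new_key_creates_one_refund_one_event_and_an_audit_entry():
    svc = RefundService()
    refund_id = svc.refund("ch_1", "key-1", actor="support",
                           reason="damaged item", charge_age_days=10)
    assert svc.events == ["refund_requested"]
    assert svc.audit_log[0]["refund_id"] == refund_id


def test_replayed_key_returns_same_refund_and_emits_nothing():
    svc = RefundService()
    first = svc.refund("ch_1", "key-1", actor="support",
                       reason="damaged item", charge_age_days=10)
    second = svc.refund("ch_1", "key-1", actor="support",
                        reason="damaged item", charge_age_days=10)
    assert first == second
    assert svc.events == ["refund_requested"]  # still exactly one event
```

Each test traces back to one Given/When/Then pair, so a reviewer can check evidence against the slice without reading the implementation.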
When you use an AI coding tool, paste the spec before asking for code. Make the boundaries explicit enough that the model can be judged against them. The prompt should say which files are allowed, which behavior is out of scope, which tests count as evidence, and what the model should do if the spec is ambiguous.
```text
Use the spec below as the source of truth.

Rules:
- Do not add behavior that the non-goals exclude.
- Only modify the files listed under Allowed files.
- If a requirement is ambiguous, ask before implementing.
- Add tests for every acceptance criterion.
- Return a short evidence summary with changed files and test commands.

Spec:
[paste reviewed spec here]
```
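If you drive the tool from a script rather than a chat window, the same boundary can be assembled mechanically. A minimal sketch, assuming the reviewed spec lives in a file such as spec.md (the path and function name are placeholders):

```python
from pathlib import Path

# Rules copied from the reviewed prompt template above.
RULES = """\
- Do not add behavior that the non-goals exclude.
- Only modify the files listed under Allowed files.
- If a requirement is ambiguous, ask before implementing.
- Add tests for every acceptance criterion.
- Return a short evidence summary with changed files and test commands."""


def build_prompt(spec_path: str) -> str:
    # The spec must already be reviewed; this function only wraps it.
    spec = Path(spec_path).read_text()
    return (
        "Use the spec below as the source of truth.\n\n"
        f"Rules:\n{RULES}\n\n"
        f"Spec:\n{spec}"
    )


prompt = build_prompt("spec.md")  # placeholder path to the reviewed spec
```

Keeping the rules in one constant means every generation request carries the same boundary, instead of each engineer retyping it.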
- Start with the spec-first development hub if you are new to the method, then choose a template for the work in front of you.
- A one-page spec with a goal, non-goals, acceptance criteria, edge cases, owner, and evidence is enough for most low-risk work.
- Use tools to draft structure quickly, then edit the output so it names real owners, test evidence, and release boundaries.
- Give the AI the spec, non-goals, allowed files, and acceptance evidence. Do not ask for implementation until the review boundary is clear.
- For the next real ticket, choose a template first, then use the review checklist before code generation or implementation.