---
name:        test-coverage-baseline
description: "Establishes a committed test coverage baseline and CI gate on the module being migrated — before any migration skill runs — so regressions are detectable."
metadata:
  phase:           3
  source_stack:    "Any — this skill is stack-agnostic"
  target_stack:    "N/A — runs before the target stack is introduced"
  effort_estimate: M
  last_updated:    2026-04-04
---

# 1. Purpose

This skill establishes a behavioral baseline — a committed coverage report and a CI gate —
on the module being migrated, before any Phase 4 migration skill touches the code. The
core principle is: you cannot know whether a migration broke something unless you had a
passing test before you started. The baseline is not about achieving perfect coverage; it
is about capturing enough behavioral evidence on the current stack that regressions on the
new stack produce a signal. The focus is deliberately on integration and contract tests
rather than unit tests: unit tests are typically implementation-coupled and will need to be
rewritten anyway when the stack changes, whereas a test that exercises a route handler or
a service boundary end-to-end survives a rewrite because it tests observable behavior, not
internal structure. When coverage cannot be achieved through tests (external APIs, legacy
code with no seams), this skill provides a protocol for snapshotting API behavior using
recording tools (Polly.js, VCR, nock-record, supertest snapshots) so that the snapshot
library itself becomes the behavioral contract.

---

# 2. Trigger Conditions

**Use when:**
- A Phase 2 migration manifest has been produced and identifies a specific module, service, or route group as the next migration target.
- The current test suite for `target_module` passes on the current commit but has no committed coverage report — there is no way to detect coverage regression in CI today.
- The coverage percentage for `target_module` is unknown or below 70% line coverage — migration would begin on an untested foundation.
- The team is about to run any Phase 4 skill (`/migrate`, `express-to-fastify`, `js-to-typescript`, `cra-to-vite`, etc.) — this skill is the mandatory gate before Phase 4 begins for any module.

**Do NOT use when:**
- A coverage baseline for `target_module` already exists as a committed artifact (`coverage/baseline-<module>.json`) and the CI gate is already configured — check with `grep coverage <ci_config_path>` before running.
- You have already started migrating `target_module` — running this skill after migration has begun defeats its purpose. Write tests against the original stack, not the in-progress migration. If migration has started, stop, revert to the pre-migration state, run this skill, then resume.
- The intent is to improve test quality rather than establish a baseline — use a dedicated test-quality skill. This skill writes only the minimum tests needed to detect regressions; it deliberately does not refactor, optimize, or clean up the existing test suite.
- The coverage tool cannot run against `target_module` in isolation (e.g., circular dependency prevents loading the module in the test environment) — resolve the dependency issue first.

---

# 3. Inputs

**Required:**

| Input | Type | Description |
|-------|------|-------------|
| `target_module` | string | Identifier for the module, service, or route group being baselined. Used to scope coverage collection and name output artifacts. Example: `src/services/AuthService` or `api/users`. |
| `repo_root` | file-path | Absolute path to the repository root. All commands run from here. |
| `coverage_tool` | enum(`c8`, `istanbul`, `pytest-cov`, `coverage.py`, `simplecov`, `other`) | Coverage tool for the repo's language and test framework. Determines the exact CLI commands used in Steps 2 and 9. |
| `ci_config_path` | file-path | Repo-root-relative path to the CI pipeline configuration file where the coverage gate will be added (e.g., `.github/workflows/test.yml`, `Jenkinsfile`, `Makefile`). |

**Optional:**

| Input | Type | Default | Description |
|-------|------|---------|-------------|
| `line_coverage_threshold` | integer | `70` | Minimum line coverage percentage required on `target_module` before this skill declares done. Raise to 80+ for high-risk modules (auth, payments); 70 is the floor, not the target. |
| `api_coverage_threshold` | integer | `100` | Minimum line coverage required specifically on the public API surface (route handlers, exported service methods, public class methods). Always 100 unless the surface has untestable branches (e.g., vendor callbacks with no seam). |
| `test_dir` | file-path | `tests/` or `src/__tests__/` (auto-detect) | Directory where new test files are written. Defaults to the first existing test directory found under `repo_root`. |
| `snapshot_tool` | enum(`polly`, `nock-record`, `msw`, `vcr`, `supertest-snapshots`, `none`) | `none` | API behavior snapshotting tool to configure when coverage cannot be reached through pure tests. See Step 7 for detail. Set to `none` if all code paths can be exercised without external calls. |
| `coverage_reporter` | string | `text-summary,json,json-summary,lcov` | Comma-separated list of coverage reporters to pass to the coverage tool. `json` (the full statement map) feeds the Section 6 surface check; `json-summary` provides the `total.lines.pct` rollup the CI gate reads; `lcov` is required if the CI provider (Codecov, Coveralls) reads LCOV files. |

<!--
  STOP CONDITIONS:
  - If `target_module` path does not exist under `repo_root`, halt:
    "target_module '<value>' does not exist. Verify the path relative to <repo_root>."
  - If `ci_config_path` does not exist, halt:
    "CI configuration file not found at <ci_config_path>. Provide the correct path or
     create the file before running this skill. Without a CI config, the coverage gate
     cannot be enforced automatically."
  - If the existing test suite fails (non-zero exit from the test runner), halt:
    "The current test suite is failing before any baseline work begins. Fix the failing
     tests on the current stack before establishing a baseline — a baseline built on a
     broken suite is meaningless."
-->

---

# 4. Steps

1. → Hand off to `code-archaeologist` (see Section 5. Agent Handoffs) to produce the public API surface inventory for `target_module`. Wait for `output/test-coverage-baseline-api-surface-<timestamp>.md` before continuing.

2. Run the existing test suite with coverage collection scoped to `target_module`. Use the command appropriate for `coverage_tool`:

   **c8 (Node.js):**
   ```bash
   npx c8 \
     --include='<target_module>/**' \
     --reporter=text-summary \
     --reporter=json \
     --reporter=json-summary \
     --reporter=lcov \
     --report-dir=coverage/current \
     npx jest --testPathPattern='<target_module>'
   ```

   **istanbul / nyc (Node.js):**
   ```bash
   npx nyc \
     --include='<target_module>/**' \
     --reporter=text-summary \
     --reporter=json \
     --reporter=json-summary \
     --reporter=lcov \
     --report-dir=coverage/current \
     npx jest --testPathPattern='<target_module>'
   ```

   **pytest-cov (Python):**
   ```bash
   pytest \
     --cov=<target_module> \
     --cov-report=term-missing \
     --cov-report=json:coverage/current/coverage.json \
     --cov-report=lcov:coverage/current/lcov.info \
     tests/
   ```

   **coverage.py (Python, without pytest):**
   ```bash
   coverage run --source=<target_module> -m unittest discover tests/
   coverage report --format=text
   coverage json -o coverage/current/coverage.json
   coverage lcov -o coverage/current/lcov.info
   ```

   Record: (a) current overall line coverage percentage for `target_module`, (b) current coverage percentage for each file in the public API surface (from Step 1), (c) which lines are uncovered.
   - If this fails: check that the test runner can find tests scoped to `target_module`. If no tests exist yet, coverage will be 0% — record that and proceed to Step 3.

3. Read `output/test-coverage-baseline-api-surface-<timestamp>.md`. For each item in the public API surface (exported functions, route handlers, public class methods), cross-reference with the coverage output from Step 2. Produce two lists:
   - **Surface gaps**: API surface items with <100% line coverage. These are mandatory to close.
   - **Module gaps**: lines in `target_module` outside the API surface with <`line_coverage_threshold`% coverage. These are advisory — close as many as practical without writing implementation-coupled unit tests.

   Write both lists to `output/test-coverage-baseline-gaps-<timestamp>.md`. If both lists are empty (all thresholds already met), skip to Step 8.
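   The Step 3 cross-reference can be sketched as a small script. This is an illustrative sketch, not part of the skill's required tooling: it assumes istanbul-style `json-summary` output, and the file paths and percentages are invented.

   ```javascript
   // Sketch of the Step 3 gap classification. Assumes an istanbul-style
   // json-summary object (per-file line-coverage percentages plus a
   // "total" rollup). Paths and numbers are hypothetical.
   const coverageSummary = {
     total: { lines: { pct: 74.2 } },
     'src/services/AuthService/index.ts':   { lines: { pct: 100 } },
     'src/services/AuthService/tokens.ts':  { lines: { pct: 62.5 } },
     'src/services/AuthService/helpers.ts': { lines: { pct: 41.0 } },
   };
   // Public API surface files, taken from the Step 1 inventory.
   const surfaceFiles = [
     'src/services/AuthService/index.ts',
     'src/services/AuthService/tokens.ts',
   ];

   function classifyGaps(summary, surface, moduleThreshold) {
     const files = Object.keys(summary).filter((k) => k !== 'total');
     // Surface gaps: public API files below 100% line coverage (mandatory).
     const surfaceGaps = surface.filter((f) => summary[f].lines.pct < 100);
     // Module gaps: non-surface files below the module threshold (advisory).
     const moduleGaps = files.filter(
       (f) => !surface.includes(f) && summary[f].lines.pct < moduleThreshold
     );
     return { surfaceGaps, moduleGaps };
   }

   const gaps = classifyGaps(coverageSummary, surfaceFiles, 70);
   console.log(gaps.surfaceGaps); // surface gap: tokens.ts at 62.5%
   console.log(gaps.moduleGaps);  // module gap: helpers.ts at 41%
   ```

   The same classification works for any tool whose report can be reduced to a per-file line-coverage percentage.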

4. For each item in the **surface gaps** list: write an integration or contract test that exercises the item through its observable interface — not by calling internal helpers directly. Follow these rules:

   - **Prefer integration depth over unit isolation.** A test that calls `POST /api/users` through a real HTTP handler (supertest, Flask's `test_client`, Ruby's `rack-test`) is more migration-resilient than one that calls `UserService.create()` directly, because it survives a handler rewrite.
   - **Test every status code / return type variant.** If a handler returns 200 and 422, there must be at least one test for each. Untested branches are invisible to equivalence tests.
   - **Never mock the database for surface-gap tests** unless the database is genuinely unavailable. A test that mocks the DB teaches nothing about what the system actually does; it will pass on the new stack even if the queries are wrong.
   - **Do mock external services** (payment gateways, email providers, third-party APIs) — but use recording-based mocks (Step 7), not hand-written stubs, so the mock captures real behavior.
   - **Name tests to describe observable behavior**, not implementation: `it('returns 401 when the session cookie is missing')`, not `it('calls requireAuth middleware')`.

   Write each new test file to `<test_dir>` using the naming convention `<module-name>.baseline.test.<ext>`. This suffix distinguishes baseline tests from pre-existing tests and makes them grep-able in CI.
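   The variant-coverage and naming rules above can be illustrated with a framework-free sketch. The `createUserHandler` function and its response shapes are invented for illustration; in the real suite each variant would be exercised through the HTTP boundary (supertest or the stack's equivalent), not by calling a handler function directly.

   ```javascript
   // Hypothetical handler with two observable variants: 201 and 422.
   // The baseline rule: at least one behavioral test per variant, each
   // named for what the caller observes, not for internal mechanics.
   function createUserHandler(body) {
     if (!body.email || !body.email.includes('@')) {
       return { status: 422, body: { error: 'invalid email' } };
     }
     return { status: 201, body: { id: 'user_1', email: body.email } };
   }

   // One test per status-code variant.
   const tests = [
     ['returns 201 and the created user for a valid email', () => {
       const res = createUserHandler({ email: 'alice@example.com' });
       return res.status === 201 && res.body.email === 'alice@example.com';
     }],
     ['returns 422 when the email is malformed', () => {
       const res = createUserHandler({ email: 'not-an-email' });
       return res.status === 422;
     }],
   ];

   for (const [name, fn] of tests) {
     if (!fn()) throw new Error(`FAIL: ${name}`);
     console.log(`PASS: ${name}`);
   }
   ```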

5. Run coverage again (same command as Step 2) after writing the new tests. Check whether surface gaps are closed. For any surface item still below 100%:
   - If the gap is in an error path that requires infrastructure unavailable in the test environment (e.g., a DB failure branch): proceed to Step 7 (API snapshotting) for that path.
   - If the gap is reachable but complex: write the test. Do not skip.
   - If the gap is in dead code that is never reachable in production: document it in the migration log as confirmed-dead and exclude it from coverage collection using the tool's ignore directive (`/* c8 ignore next */`, `# pragma: no cover`). Do not add ignore directives speculatively.
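   Ignore-directive placement can be sketched as follows (JavaScript/c8 shown; the Python equivalent is `# pragma: no cover` on the dead line). The function and its dead branch are invented; the point is that every directive carries a pointer to its migration-log entry, so exclusions stay auditable rather than speculative.

   ```javascript
   // Confirmed-dead branch excluded from coverage collection. The comment
   // linking to the migration log is what keeps the exclusion auditable.
   function resolveRegion(code) {
     if (code === 'eu') return 'europe';
     if (code === 'us') return 'americas';
     // Dead since the 2019 region migration; see migration log entry #4.
     /* c8 ignore next */
     if (code === 'legacy') return 'deprecated-cluster';
     return 'unknown';
   }

   console.log(resolveRegion('eu')); // prints "europe"
   ```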

6. Check coverage for **module gaps**. Write additional integration-level tests to raise line coverage toward `line_coverage_threshold`. Apply the same rules as Step 4. Stop when:
   - The overall line coverage for `target_module` meets or exceeds `line_coverage_threshold`, or
   - All remaining uncovered lines are confirmed dead code (documented in the migration log), or
   - The only way to cover remaining lines would require writing implementation-coupled unit tests (testing private methods, mocking internal state). If so: document each uncovered line in the migration log as `impl-coupled — acceptable gap`, and reduce the effective threshold accordingly. Do not write implementation-coupled tests to hit a coverage number.

7. **API Behavior Snapshotting** — for any code path that cannot be covered by tests (external API calls, vendor webhooks, legacy code with no injectable seam): configure the snapshot tool specified in `snapshot_tool` to record real interactions and replay them as behavioral contracts.

   **Polly.js (Node.js — records HTTP interactions at the adapter level):**
   ```typescript
   // tests/baseline/AuthService.snapshot.test.ts
   import { Polly } from '@pollyjs/core';
   import NodeHttpAdapter from '@pollyjs/adapter-node-http';
   import FSPersister from '@pollyjs/persister-fs';
   // Illustrative import; adjust to the module's real export path.
   import { AuthService } from '../../src/services/AuthService';

   Polly.register(NodeHttpAdapter);
   Polly.register(FSPersister);

   describe('AuthService — OAuth callback (snapshot)', () => {
     let polly: Polly;

     beforeEach(() => {
       polly = new Polly('auth-oauth-callback', {
         adapters: ['node-http'],
         persister: 'fs',
         persisterOptions: {
           fs: { recordingsDir: 'tests/recordings' },
         },
         // RECORD mode on first run; REPLAY on subsequent runs.
         // Commit recordings/ to the repo — they ARE the behavioral contract.
         mode: process.env.POLLY_MODE === 'record' ? 'record' : 'replay',
       });
     });

     afterEach(() => polly.stop());

     it('exchanges OAuth code for tokens and returns user profile', async () => {
       const result = await AuthService.handleOAuthCallback({ code: 'test-code', state: 'csrf' });
       expect(result).toMatchObject({ userId: expect.any(String), email: expect.any(String) });
     });
   });
   ```

   **nock-record (Node.js — simpler, records nock interceptors as JSON):**
   ```typescript
   // tests/baseline/payments.snapshot.test.ts
   import { setupRecorder } from 'nock-record';
   // Illustrative import; adjust to the module's real export path.
   import { PaymentService } from '../../src/services/PaymentService';

   describe('PaymentService — charge (snapshot)', () => {
     const record = setupRecorder();

     it('charges a card and returns a transaction ID', async () => {
       // First run (no fixture on disk): makes real HTTP calls and saves
       // a fixture named after the recording. Subsequent runs replay it.
       const { completeRecording, assertScopesFinished } = await record('payments-charge');
       const result = await PaymentService.charge({ amount: 100, token: 'tok_test' });
       completeRecording();
       assertScopesFinished();
       expect(result.transactionId).toMatch(/^txn_/);
     });
   });
   ```

   **msw (Mock Service Worker — works in Node.js test environments via `msw/node`):**
   ```typescript
   // tests/baseline/notifications.snapshot.test.ts
   import { setupServer } from 'msw/node';
   import { http, HttpResponse } from 'msw';
   // Illustrative import; adjust to the module's real export path.
   import { NotificationService } from '../../src/services/NotificationService';
   // Recorded response fixtures — committed to the repo.
   import sendgridFixture from '../fixtures/sendgrid-send-202.json';

   const server = setupServer(
     http.post('https://api.sendgrid.com/v3/mail/send', () => {
       return HttpResponse.json(sendgridFixture, { status: 202 });
     })
   );

   beforeAll(() => server.listen({ onUnhandledRequest: 'error' }));
   afterAll(() => server.close());

   it('sends a transactional email and returns 202', async () => {
     const result = await NotificationService.sendWelcomeEmail('alice@example.com');
     expect(result.statusCode).toBe(202);
   });
   ```

   **supertest snapshots (for route-level behavioral contracts):**
   ```typescript
   // tests/baseline/users-api.snapshot.test.ts
   import request from 'supertest';
   import app from '../../src/app';

   describe('GET /api/users/:id — behavioral snapshot', () => {
     it('matches the committed response snapshot', async () => {
       const res = await request(app)
         .get('/api/users/user_123')
         .set('Authorization', 'Bearer test-token');
       // Jest snapshot stored in __snapshots__/users-api.snapshot.test.ts.snap
       // This IS the behavioral contract — commit it, review diffs in PRs.
       expect({
         status:  res.status,
         body:    res.body,
         headers: {
           'content-type': res.headers['content-type'],
           'cache-control': res.headers['cache-control'],
         },
       }).toMatchSnapshot();
     });
   });
   ```

   **VCR (Ruby — cassette-based HTTP recording):**
   ```ruby
   # spec/baseline/auth_service_spec.rb
   require 'vcr'

   VCR.configure do |config|
     config.cassette_library_dir = 'spec/cassettes'
     config.hook_into :webmock
     # Needed for the `:vcr` example-metadata tag used below.
     config.configure_rspec_metadata!
     # :once records only when the cassette is missing, then always replays,
     # matching the "record once, do not re-record" rules below.
     config.default_cassette_options = { record: :once }
   end

   RSpec.describe AuthService, '#oauth_callback' do
     it 'exchanges code for a user profile', :vcr do
       # First run records the HTTP interaction to spec/cassettes/AuthService_oauth_callback.yml
       # Subsequent runs replay from the cassette — commit cassettes/ to the repo.
       result = AuthService.oauth_callback(code: 'test-code')
       expect(result).to include(:user_id, :email)
     end
   end
   ```

   **Rules for all snapshot/recording approaches:**
   - **Commit recordings to the repo.** The recordings directory (`tests/recordings/`, `spec/cassettes/`, `__snapshots__/`) is the behavioral contract. It must be reviewed in PRs like code — a recording change means behavior changed.
   - **Record once against real external services in a controlled environment.** Use a test account, sandbox credentials, or a staging external service. Never record against production.
   - **Document the recording date** in a `RECORDING_NOTES.md` in the recordings directory. Recordings become stale when external APIs change; flag them for re-recording in Phase 7 (Stabilize).
   - **Do not re-record during migration.** If an external service returns different data during migration, that is a real behavioral change — investigate before updating the recording.

8. Run the full test suite with coverage one final time. Confirm:
   - Overall line coverage for `target_module` ≥ `line_coverage_threshold`.
   - Coverage for every public API surface item = 100% (or documented as confirmed-dead / impl-coupled gap).
   - All tests pass (exit code 0).
   Write the final coverage JSON to `coverage/baseline-<target_module_slug>.json`. This file is the committed baseline artifact.
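   The skill does not prescribe a slug scheme for `<target_module_slug>`. One reasonable convention, sketched below, lowercases the module path and collapses separators into hyphens; the copy step assumes the final run wrote an istanbul-style `coverage-summary.json` under `coverage/current/`.

   ```javascript
   // Sketch: derive the baseline artifact name from target_module and
   // copy the final coverage summary into place. The slug scheme is a
   // convention chosen here, not mandated by the skill.
   const fs = require('node:fs');
   const path = require('node:path');

   function slugify(targetModule) {
     return targetModule
       .toLowerCase()
       .replace(/[^a-z0-9]+/g, '-') // path separators etc. become hyphens
       .replace(/^-+|-+$/g, '');    // trim leading/trailing hyphens
   }

   function writeBaselineArtifact(targetModule, srcDir = 'coverage/current') {
     const slug = slugify(targetModule);
     const dest = path.join('coverage', `baseline-${slug}.json`);
     fs.copyFileSync(path.join(srcDir, 'coverage-summary.json'), dest);
     return dest;
   }

   console.log(slugify('src/services/AuthService'));
   // prints "src-services-authservice"
   ```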

9. Configure the CI coverage gate. Add a coverage threshold check to `ci_config_path` that fails the build if coverage for `target_module` drops below the baseline:

   **GitHub Actions (c8 / jest):**
   ```yaml
   # In .github/workflows/test.yml — add after the test step
   - name: Check coverage baseline for <target_module>
     run: |
       npx c8 --check-coverage \
         --lines <line_coverage_threshold> \
         --include '<target_module>/**' \
         --reporter=text-summary \
         npx jest --testPathPattern='<target_module>' --passWithNoTests
   ```

   (The `--check-coverage` run-mode flag executes the wrapped command and exits non-zero when a threshold is unmet; the bare `c8 check-coverage` subcommand only inspects a previously written report and does not run a command.)

   **GitHub Actions (pytest-cov):**
   ```yaml
   - name: Check coverage baseline for <target_module>
     run: |
       pytest \
         --cov=<target_module> \
         --cov-fail-under=<line_coverage_threshold> \
         --cov-report=term-missing \
         tests/
   ```

   **Makefile target (language-agnostic wrapper):**
   ```makefile
   .PHONY: coverage-gate
   coverage-gate:
   	npx c8 --check-coverage --lines $(COVERAGE_THRESHOLD) --include '$(TARGET_MODULE)/**' \
   		npx jest --testPathPattern='$(TARGET_MODULE)' --passWithNoTests
   ```

   Run the gate command locally to confirm it exits 0 on the current baseline. Then commit the CI config change.
   - If the gate fails on the current baseline: the threshold is set higher than the current coverage. Lower the threshold to match the actual current coverage, document this in the migration log as "starting below target — raise incrementally as tests improve," and proceed.
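   The fallback in the bullet above can be sketched as a tiny helper: floor the measured coverage and gate on that until the team deliberately raises it. The `measuredPct` value would come from the baseline JSON's line-coverage rollup; the function name is illustrative.

   ```javascript
   // Sketch: pick the effective gate threshold when current coverage is
   // below the target. Flooring the measured value makes the gate pass
   // today and ratchet upward only when raised deliberately.
   function effectiveThreshold(measuredPct, targetPct) {
     return measuredPct >= targetPct ? targetPct : Math.floor(measuredPct);
   }

   console.log(effectiveThreshold(63.4, 70)); // prints 63
   console.log(effectiveThreshold(84.0, 70)); // prints 70
   ```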

10. Commit the baseline artifacts. The commit must include exactly:
    - All new `*.baseline.test.*` files in `<test_dir>`.
    - All recording/cassette/snapshot files in the recordings directory (if `snapshot_tool` ≠ `none`).
    - `coverage/baseline-<target_module_slug>.json` — the coverage JSON artifact.
    - The updated `ci_config_path` with the new coverage gate.
    - `output/test-coverage-baseline-gaps-<timestamp>.md` — the gap analysis.
    - `RECORDING_NOTES.md` if snapshot tools were configured.

    Commit message format: `test(<target_module>): establish coverage baseline before migration`

    Do not squash this commit with migration work. The baseline commit must remain a distinct, revertable point in git history — it is the "before" that makes migration equivalence verifiable.

11. Write all outputs declared in Section 7. Run every item in Section 6 Equivalence Tests and record results. Evaluate every item in Section 9 Done Criteria; report pass/fail inline, then print the final verdict.

---

# 5. Agent Handoffs

## code-archaeologist

- **File:** `agents/code-archaeologist.md`
- **Triggered by:** Step 1
- **Prompt template:**
  ```
  TASK:        Inventory the public API surface of <target_module>. Report every:
                 - Exported function (name, parameter names and types if typed, return type)
                 - Route handler (HTTP method, path, expected request body shape, response shapes)
                 - Public class method (class name, method name, signature)
                 - Re-exported symbol from an index file (trace to its origin file and line)
               For each item, also report: file path, line number, and whether it has any
               existing test coverage (search <test_dir> for imports of the file or calls
               to the exported name).
               Flag any item that calls an external HTTP service — these are candidates for
               snapshot-based testing (Step 7).
  REPO_ROOT:   <repo_root>
  SCOPE:       <repo_root>/<target_module>
  OUTPUT_FILE: output/test-coverage-baseline-api-surface-<timestamp>.md
  FORMAT:      markdown
  ```

---

# 6. Equivalence Tests

<!--
  This skill IS the baseline that makes equivalence testing possible later.
  Section 6 here does not compare old vs. new stacks (there is no new stack yet).
  Instead, it verifies that the baseline itself is valid: tests run, coverage is real,
  the CI gate enforces the threshold, and the committed artifact matches the live run.
-->

| Test Name | Input | Expected Output | Tool |
|-----------|-------|-----------------|------|
| `baseline-tests-green` | Full test run scoped to `target_module` (Step 8 final run) | Exit code 0. Zero test failures. Any failure means the baseline captures broken behavior — unacceptable. | Bash: `npx jest --testPathPattern='<target_module>'` or `pytest tests/` |
| `line-coverage-met` | Coverage JSON at `coverage/baseline-<target_module_slug>.json`, line-coverage rollup field (`total.lines.pct` for istanbul `json-summary` output; `totals.percent_covered` for pytest-cov JSON) | Value ≥ `line_coverage_threshold`. Read the JSON and compare — do not rely on the test runner's exit code alone. | Bash: `node -e "const r=require('./coverage/baseline-<slug>.json'); process.exit(r.total.lines.pct >= <threshold> ? 0 : 1)"` |
| `api-surface-covered` | For each item in `output/test-coverage-baseline-api-surface-<timestamp>.md`: grep its file path in the coverage JSON's `covered` lines | Coverage for every public API surface file is 100%, or each gap is documented in the migration log with one of: `confirmed-dead`, `impl-coupled`, or `snapshot-covered`. | Read + Bash: cross-reference coverage JSON `s` (statement) map against surface inventory. |
| `ci-gate-enforces-threshold` | Mutate one line of `target_module` source to an uncovered branch and run `coverage-gate` in CI dry-run mode | CI gate exits non-zero — the threshold is enforced, not just logged. Restore the mutation immediately after. | Bash: `sed` or `Edit` a dummy uncovered line → run gate → confirm non-zero → `git checkout -- <file>`. |
| `baseline-artifact-committed` | `git log --oneline -1 -- coverage/baseline-<target_module_slug>.json` | The baseline JSON exists in the git history as a distinct commit with message matching `test(<target_module>): establish coverage baseline before migration`. | Bash: git log command above returns a non-empty line. |
| `recordings-committed` | Only if `snapshot_tool` ≠ `none`: `git log --oneline -1 -- tests/recordings/` (or equivalent recordings directory) | Recordings directory appears in the same baseline commit as the coverage artifact. | Bash: git log scoped to recordings directory. |

---

# 7. Outputs

| Artifact | Path Pattern | Format | Description |
|----------|-------------|--------|-------------|
| API surface inventory | `output/test-coverage-baseline-api-surface-<timestamp>.md` | markdown | Produced by `code-archaeologist`. Complete list of exported functions, route handlers, and public methods with existing coverage status. Consumed in Steps 3–6 to prioritize test writing. |
| Coverage gap analysis | `output/test-coverage-baseline-gaps-<timestamp>.md` | markdown | Surface gaps (mandatory) and module gaps (advisory) identified in Step 3, with per-item disposition (closed / confirmed-dead / impl-coupled / snapshot-covered). Committed to the repo; reviewed by the engineer before Phase 4 begins. |
| Baseline coverage artifact | `coverage/baseline-<target_module_slug>.json` | json | Coverage JSON from the final run in Step 8. The authoritative before-state for equivalence testing in Phase 5. Committed to the repo; referenced by the CI gate. |
| Baseline test files | `<test_dir>/<module-name>.baseline.test.<ext>` | varies | One or more test files written in Steps 4 and 6. Suffixed `.baseline.test` to distinguish from pre-existing tests. Committed. |
| Recording artifacts | `tests/recordings/<cassette-name>.*` | varies | Polly.js HAR files, nock JSON fixtures, VCR YAML cassettes, or Jest `__snapshots__` files produced in Step 7. Committed. Only present if `snapshot_tool` ≠ `none`. |
| Recording notes | `tests/recordings/RECORDING_NOTES.md` | markdown | Documents when recordings were captured, against which environment, and which tests they cover. Flags recordings that will need refreshing after migration. Only present if `snapshot_tool` ≠ `none`. |
| Migration log | `output/test-coverage-baseline-log-<timestamp>.md` | markdown | All gap dispositions, impl-coupled exclusions, recording decisions, the final achieved coverage percentage, confidence level, and numbered assumptions list. Consumed by Phase 4 skills as proof that this skill ran. |
| Equivalence test results | `output/test-coverage-baseline-equiv-<timestamp>.md` | markdown | Pass/fail verdict for every row in Section 6. Required by Section 9 Done Criteria. |

---

# 8. References

- `references/migration-anti-patterns.md` — "Skipping Equivalence Validation" (§3) is the failure mode this skill exists to prevent. "Confidence Without Evidence" (§7) explains why recordings are committed, not regenerated.
- `references/strangler-fig-pattern.md` — the baseline established here is what makes the Phase 5 shadow-mode comparison meaningful; without a committed baseline, shadow-mode divergences cannot be attributed to the migration.
- `skills/04-migrate/frontend/js-to-typescript/` — requires this skill to have been run first; the `jest-pre` equivalence test in that skill uses the baseline test suite as its source of truth.
- `skills/04-migrate/frontend/cra-to-vite/` — the `jest-pre` / `vitest-suite` equivalence tests compare against the baseline established here.
- `skills/04-migrate/backend/express-to-fastify/` — the `express-contract` baseline tests are a direct output of running this skill on the Express route group.
- `skills/05-validate/` — the equivalence validator reads `coverage/baseline-<slug>.json` as the "before" artifact. Without it, Phase 5 cannot produce a confidence level higher than Low.
- `https://istanbul.js.org/docs/advanced/alternative-reporters/` — Istanbul/c8 reporter options; the `json` reporter format used for the baseline artifact is documented here.
- `https://pollyjs.dev/docs/configuration` — Polly.js configuration reference for `mode` (record / replay / passthrough) and persister options.

---

# 9. Done Criteria

<!--
  Claude evaluates each item and reports pass/fail before declaring this skill complete.
  Any unchecked item means the skill is NOT complete — do not let Phase 4 begin.
  This is a Phase 3 gate: its entire value is being a hard prerequisite for migration.
-->

- [ ] The full test suite for `target_module` passes with exit code 0 on the current (pre-migration) stack — `baseline-tests-green` equivalence test recorded as pass.
- [ ] Overall line coverage for `target_module` ≥ `line_coverage_threshold` — `line-coverage-met` equivalence test recorded as pass. Any shortfall must be documented as confirmed-dead or impl-coupled in the migration log; undocumented shortfalls are a fail.
- [ ] Every public API surface item (from the code-archaeologist inventory) is at 100% line coverage, confirmed-dead, impl-coupled, or snapshot-covered — `api-surface-covered` equivalence test recorded as pass. No surface item may be left as simply "untested" without a documented disposition.
- [ ] The CI gate enforces the coverage threshold — `ci-gate-enforces-threshold` equivalence test recorded as pass. A gate that only logs but does not fail the build does not count.
- [ ] `coverage/baseline-<target_module_slug>.json` is committed in a standalone git commit — `baseline-artifact-committed` equivalence test recorded as pass.
- [ ] If `snapshot_tool` ≠ `none`: all recording artifacts are committed in the same baseline commit — `recordings-committed` equivalence test recorded as pass.
- [ ] If `snapshot_tool` ≠ `none`: `tests/recordings/RECORDING_NOTES.md` exists and lists the recording date, environment, and which tests each recording covers.
- [ ] No new test uses `jest.mock()` / `unittest.mock.patch()` / equivalent to mock the database or internal state of `target_module` — grep `<test_dir>` for `jest.mock\|patch\|MagicMock` in `.baseline.test.` files; any match that mocks a DB call or internal state is a fail. Mocks of external HTTP services via recording tools are acceptable.
- [ ] The gap analysis at `output/test-coverage-baseline-gaps-<timestamp>.md` exists and every gap has a disposition — grep for lines without one of `closed`, `confirmed-dead`, `impl-coupled`, `snapshot-covered`; zero undispositioned gaps.
- [ ] All output files listed in Section 7 exist at their declared paths — verify each with a file read. Recording artifacts and RECORDING_NOTES.md are only required if `snapshot_tool` ≠ `none`.
- [ ] Every equivalence test in Section 6 has a recorded result in `output/test-coverage-baseline-equiv-<timestamp>.md` — no test name is missing. If `snapshot_tool` is `none`, `recordings-committed` is marked N/A, not absent.
- [ ] No equivalence test in Section 6 is recorded as **fail** — grep the results file for `fail`; N/A entries do not count as fail.
- [ ] The migration log includes a confidence level (High / Medium / Low) — grep `output/test-coverage-baseline-log-<timestamp>.md` for `Confidence:`.
- [ ] The migration log includes a numbered assumptions list — grep `output/test-coverage-baseline-log-<timestamp>.md` for `Assumptions:`.
