---
name: meta-analysis-forge
description: Designs and audits first-order meta-analyses of primary studies. Use for effect-size extraction, effect-size harmonization, fixed/random/multilevel models, robust variance estimation, heterogeneity, prediction intervals, meta-regression, publication-bias diagnostics, sensitivity checks, coding sheets, and reproducible meta-analysis reports.
---

# Meta-Analysis Forge

Use this skill when evidence synthesis requires statistical pooling of primary-study effects.

## Core Principle

A meta-analysis is valid only when the effect sizes being combined are conceptually and statistically comparable enough for the target inference.

Separate:

- effect-size extraction;
- effect-size conversion;
- dependence among effects;
- model choice;
- heterogeneity interpretation;
- publication-bias diagnostics;
- substantive conclusion.

## Intake

Identify:

- outcome construct;
- effect-size metric;
- availability of standard errors, confidence intervals, p-values, or sample sizes;
- number of studies;
- multiple effects per study;
- study designs;
- expected heterogeneity;
- moderators;
- field norms;
- whether raw, participant-level, sample-level, or harmonized derived data are available.

Load:

- `references/effect-sizes.md` for effect metrics and extraction.
- `references/ipd-and-mega-analysis.md` when the task involves individual participant data, multi-site raw/derived data harmonization, small-sample dataset integration, or mega-analysis.
- `references/synthesis-models.md` for model choice and diagnostics.
- `references/meta-analysis-quality-gates.md` for pre-pooling checks.
- `templates/coding-schema.csv` and `templates/validation-rules.md` for machine-readable coding-sheet structure and validation.
- `scripts/validate_coding_sheet.py` before statistical execution.
- `scripts/effect_size_helpers.R` for transparent mechanical conversions during extraction.
- `scripts/run_meta_analysis.R` only after coding validity and pooling appropriateness have been checked.
- `scripts/install_r_packages.R` when setting up the minimal R environment.

## Workflow

1. Define the effect-size family.
2. Build the coding sheet.
3. Convert or preserve metrics with justification.
4. Identify dependence: multiple outcomes, time points, samples, or models per study.
5. Pass the quality gates before pooling.
6. Choose an approach: fixed-effect, random-effects, multilevel, robust-variance, or Bayesian models, or fall back to narrative synthesis.
7. Report heterogeneity: tau2, I2, prediction interval.
8. Assess small-study effects or publication bias when feasible.
9. Run sensitivity checks.
10. Write interpretation with limits.

For IPD or mega-analysis, first build a dataset inventory, harmonization plan, quality-control ledger, and study/site heterogeneity model before any pooled interpretation.
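Steps 6–7 can be sketched as a minimal inverse-variance random-effects pool. This is an illustrative sketch only, with invented effect sizes and a z-based prediction interval; it uses the DerSimonian-Laird tau² estimator, whereas `scripts/run_meta_analysis.R` (via metafor) may use REML and a t-based interval.

```python
import math

# Hypothetical extracted effects on one harmonized metric: (estimate, standard error).
effects = [(0.30, 0.10), (0.15, 0.12), (0.45, 0.09), (0.05, 0.15), (0.25, 0.11)]

y = [e for e, _ in effects]
se = [s for _, s in effects]
w = [1 / s**2 for s in se]  # fixed-effect (inverse-variance) weights
k = len(y)

# DerSimonian-Laird tau^2 from Cochran's Q
mu_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - mu_fe) ** 2 for wi, yi in zip(w, y))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects pooled estimate, its SE, and I^2
w_re = [1 / (s**2 + tau2) for s in se]
mu_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0

# Approximate 95% prediction interval for a new study's true effect
pi_half = 1.96 * math.sqrt(tau2 + se_re**2)
print(f"mu={mu_re:.3f} tau2={tau2:.4f} I2={i2:.1f}% "
      f"PI=({mu_re - pi_half:.3f}, {mu_re + pi_half:.3f})")
```

Note that the prediction interval, not I², carries the substantive heterogeneity message: it describes where a new study's true effect is expected to fall.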

## Output Modes

### Coding Sheet

Use:

- `templates/coding-sheet.md` for a human-readable table.
- `templates/coding-schema.csv` for field definitions.
- `templates/example-coding-sheet.csv` for a minimal machine-readable example.

### Analysis Plan

```text
Effect-size metric:
Inclusion for pooling:
Model:
Dependence handling:
Heterogeneity:
Bias diagnostics:
Sensitivity checks:
Software:
Interpretation limits:
```

### Minimal R Run

Use `scripts/run_meta_analysis.R` for a small reproducible demonstration when the coding sheet has one harmonized effect metric and valid standard errors.

```text
Input CSV:
Output directory:
Effect metric:
Pre-pooling checks passed:
Known limits:
```

### Validation and Conversion

Use `scripts/validate_coding_sheet.py` to check required fields, numeric estimates, positive standard errors, duplicate effect IDs, and mixed effect metrics.
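The kinds of checks that script performs can be sketched as follows. This is a hypothetical illustration: the field names (`effect_id`, `study_id`, `effect_metric`, `estimate`, `se`) and the deliberately broken sample rows are assumptions, and the real `scripts/validate_coding_sheet.py` may differ in fields and behavior.

```python
import csv
import io

REQUIRED = ["effect_id", "study_id", "effect_metric", "estimate", "se"]

# In-memory CSV with three seeded problems: a duplicate ID, a negative SE,
# and a second effect metric mixed in without conversion.
raw = io.StringIO(
    "effect_id,study_id,effect_metric,estimate,se\n"
    "e1,s1,SMD,0.30,0.10\n"
    "e1,s2,SMD,0.15,-0.12\n"
    "e3,s3,lnOR,0.45,0.09\n"
)

rows = list(csv.DictReader(raw))
problems = []
seen = set()
metrics = set()
for i, r in enumerate(rows, start=2):  # line 1 is the header
    for field in REQUIRED:
        if not r.get(field):
            problems.append(f"line {i}: missing {field}")
    if r["effect_id"] in seen:
        problems.append(f"line {i}: duplicate effect_id {r['effect_id']}")
    seen.add(r["effect_id"])
    metrics.add(r["effect_metric"])
    try:
        if float(r["se"]) <= 0:
            problems.append(f"line {i}: non-positive se")
    except ValueError:
        problems.append(f"line {i}: non-numeric se")
if len(metrics) > 1:
    problems.append(f"mixed effect metrics: {sorted(metrics)}")

for p in problems:
    print(p)
```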

Use `scripts/effect_size_helpers.R` only for transparent mechanical helpers such as CI-to-SE, log-ratio transforms, Fisher z, approximate SMD SE, and lnROM. Record formulas and assumptions in the coding sheet notes.
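These mechanical conversions look roughly like the sketch below. The formulas are the standard textbook ones and are shown for orientation only; they are not the actual R helpers, and each assumes conditions (symmetric Wald intervals, large samples) that must be recorded in the coding-sheet notes.

```python
import math

def se_from_ci(lower, upper, z=1.96):
    """SE recovered from a symmetric Wald CI: (upper - lower) / (2 * z)."""
    return (upper - lower) / (2 * z)

def fisher_z(r):
    """Fisher z transform of a correlation; its variance is 1 / (n - 3)."""
    return 0.5 * math.log((1 + r) / (1 - r))

def ln_ratio(ratio):
    """Log transform for ratio metrics (OR, RR, ROM) before pooling."""
    return math.log(ratio)

def smd_se_approx(d, n1, n2):
    """Common large-sample approximation for the SE of a standardized mean difference."""
    return math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
```

Whichever helper is applied, preserve the originally reported values alongside the derived ones so the conversion can be audited.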

### IPD / Mega-Analysis

Use:

- `references/ipd-and-mega-analysis.md` for workflow and guardrails.
- `templates/mega-analysis-dataset-inventory.csv` for data access and harmonization.
- `templates/mega-analysis-audit-report.md` for audit output.

### Audit

Flag:

- incompatible outcomes;
- mixed effect metrics without conversion;
- missing uncertainty;
- multiple effects treated as independent;
- overuse of I2 without prediction interval;
- meta-regression overclaiming;
- publication-bias tests with too few studies.

## Guardrails

- Do not invent effect sizes.
- Do not pool effects solely because they are numerically available.
- Do not interpret meta-regression causally unless design supports it.
- Do not ignore within-study dependence.
- Do not treat a high pooled N as proof of high evidence quality.
- Do not use vote-counting as a substitute for effect-size synthesis.
- Do not treat the minimal R script as a full meta-analysis pipeline; it does not solve effect conversion, dependence, or certainty assessment.
- Do not run effect-size helper conversions without preserving original reported values and source anchors.
- Do not call a project a mega-analysis unless raw, participant-level, sample-level, or harmonized derived data are reprocessed or remodeled under a common framework.
