---
name: vera-stat-timeseries-analyzing
description: >-
  Server-side extension that completes the full analysis pipeline for
  time series data after vera-stat-timeseries-testing has run. Adds
  SARIMA, exponential smoothing (ETS/Holt-Winters), structural break
  tests, Granger causality, subseries and rolling window analysis,
  regime detection, full classical model suite (ARIMA, SARIMA, ETS,
  GARCH, VAR, spectral, regression with ARIMA errors), ML-based
  forecasting (RF + LightGBM on lagged features), and cross-method
  forecast comparison. Generates manuscript-ready methods.md and
  results.md with formatted tables, publication-quality figures, and
  references.bib. Applies output variation and code style variation for
  natural, non-repetitive output. Triggered after
  vera-stat-timeseries-testing completes, or by direct request with
  time series data.
user-invocable: true
allowed-tools: Read, Bash, Write, Edit, Grep, Glob
---

# Time Series — Full Analysis & Manuscript Generation

Open-source skill. Read `reference/specs/output-variation-protocol.md`
before every generation and apply all variation layers.

## Workflow

Continues from where `vera-stat-timeseries-testing` stopped (PARTs 0-2 complete).

| Step | File | Executor | Output |
|---|---|---|---|
| Additional tests | `workflow/04-run-additional-tests.md` | Main Agent | PART 3 code + prose |
| Subseries | `workflow/05-analyze-subgroups.md` | Main Agent | PART 4 code + prose |
| Modeling | `workflow/06-fit-models.md` | Main Agent | PART 5 code + prose |
| Comparison | `workflow/07-compare-models.md` | Main Agent | PART 6 code + prose |
| Manuscript | `workflow/08-generate-manuscript.md` | Main Agent | methods.md + results.md |

## Additional Inputs

Collect if not already provided:
- Target discipline (for reporting conventions)
- Target journal or style (APA 7th, STROBE, etc.)
- Research question / hypothesis
- Forecast horizon (if not set in initial testing)
- Whether volatility modeling is relevant
- Whether cross-series relationships exist

## Output Structure

```
output/
├── methods.md
├── results.md
├── tables/             ← Markdown + CSV per table
├── figures/            ← PNGs, 300 DPI
├── references.bib
├── code.R              ← Style-varied
└── code.py             ← Style-varied
```

## Key References (read before generation)

| File | Purpose |
|---|---|
| `reference/specs/output-variation-protocol.md` | Output quality variation layers |
| `reference/specs/code-style-variation.md` | Seven-dimension code style diversity |
| `reference/patterns/sentence-bank.md` | 4-6 phrasings per result type |
| `reference/rules/reporting-standards.md` | Hard rules for statistical reporting |

## Reporting Standards

Same as `vera-stat-timeseries-testing`, plus:
- Model order notation: always ARIMA(p,d,q)(P,D,Q)[s] for seasonal models
- AIC/BIC: report for all fitted models for comparability
- Ljung-Box: report on residuals for every fitted model
- GARCH: report the ARCH-LM test before fitting, and state the conditional variance equation of the fitted model
- VAR: report lag selection criteria (AIC, BIC, HQ) and Granger causality p-values
- Spectral: report dominant frequency, corresponding period, and power
- Forecast accuracy: report RMSE, MAE, and MAPE on the hold-out set; frame results as "which assumptions fit the data" rather than "which model wins"
- Tree-based models on time series: frame as "exploratory"; never claim superiority over statistical models
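The hold-out accuracy metrics named above can be sketched in plain Python. This is a minimal illustration, not the skill's actual implementation: the series values and model names are made up, and MAPE as written assumes the hold-out series contains no zeros.

```python
import math

def forecast_accuracy(actual, predicted):
    """Compute RMSE, MAE, and MAPE (%) on a hold-out set.

    MAPE divides by each actual value, so it assumes no zeros in `actual`.
    """
    errors = [a - p for a, p in zip(actual, predicted)]
    n = len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    mape = 100.0 * sum(abs(e / a) for e, a in zip(errors, actual)) / n
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape}

# Illustrative only: compare two candidate models on the same hold-out window,
# reporting all three metrics side by side rather than declaring a "winner".
holdout = [102.0, 98.5, 110.2, 105.7]
candidates = {
    "SARIMA": [100.1, 99.0, 108.8, 107.2],
    "ETS": [101.5, 97.0, 111.0, 104.9],
}
for name, fc in candidates.items():
    m = forecast_accuracy(holdout, fc)
    print(f"{name}: RMSE={m['RMSE']:.2f} MAE={m['MAE']:.2f} MAPE={m['MAPE']:.2f}%")
```

Reporting all three metrics together matters because they penalize errors differently: RMSE weights large misses, MAE treats errors linearly, and MAPE rescales by level.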

## Cross-Skill Interface

```
Method Unit Contract:
├── code_r           → .R script (style-varied)
├── code_python      → .py script (style-varied)
├── methods_md       → methods.md (varied structure)
├── results_md       → results.md (varied phrasing)
├── tables/          → Markdown + CSV
├── figures/         → PNGs 300 DPI (varied layout)
├── references_bib   → .bib with cited references
└── comparison       → cross-method narrative (in results.md)
```

Invoked directly after `vera-stat-timeseries-testing` or orchestrated by `vera-stat-application-pipeline`.
