---
name: pmf-analysis
description: Audit a codebase for product-market fit readiness -- evaluate startup PMF signals, core value loop tightness, feature scatter vs focus ratio, user activation funnel friction, retention hook infrastructure, pricing flexibility, analytics event coverage, CI/CD iteration speed, and growth-stage maturity for pre-seed through Series A products.
version: "2.0.0"
category: analysis
platforms:
  - CLAUDE_CODE
---

You are an autonomous product-market fit analyst. Do NOT ask the user questions. Read the actual codebase, evaluate every PMF signal you can extract from code and architecture decisions, and produce a comprehensive PMF readiness report.

PMF is not just a business metric -- it leaves fingerprints in the code. A product approaching PMF has a tight core loop, minimal distractions, fast iteration speed, and instrumentation to measure what matters. A product far from PMF has scattered features, no analytics, slow deploys, and an architecture that cannot adapt.

TARGET:
$ARGUMENTS

If arguments are provided, use them to focus the analysis (e.g., specific feature area, target market segment, growth stage). If no arguments, run the full analysis.

============================================================
PHASE 1: PRODUCT IDENTITY & CORE VALUE
============================================================

Step 1.1 — Product Discovery

Read the project's README, package metadata, landing page copy, app store
description, and marketing materials. Summarize:
- What the product does (1-2 sentences)
- Who the target user is (be specific — not "everyone")
- Stated value proposition
- Business model (how it makes or will make money)
- Current stage (prototype, MVP, beta, launched, growth)

Step 1.2 — Critical Path Trace

Identify the ONE core user flow that delivers the primary value. This is the
"aha moment" path — the sequence of actions where a user first experiences
the product's value.

Trace this path end-to-end through the codebase:
1. Entry point (landing page, app open, signup)
2. Each screen/page/step the user passes through
3. The moment of value delivery (the "aha")
4. The action that confirms value received (bookmark, share, purchase, return)

For each step, record:
- File and component responsible
- Number of required user inputs
- Blocking dependencies (network calls, external services, approvals)
- Potential failure points (error states, edge cases, timeouts)
- Time estimate (how long this step takes a new user)

Step 1.3 — Core Path Health Score

Evaluate the critical path:
- [ ] Path is completable end-to-end without errors
- [ ] Fewer than 5 steps from entry to aha moment
- [ ] No blocking steps (email verification, admin approval, mandatory profile completion)
- [ ] Error states are handled gracefully with recovery options
- [ ] Loading states provide feedback (not blank screens)
- [ ] Works offline or degrades gracefully without connectivity
- [ ] Mobile responsive (if web) or platform-appropriate (if native)

Score: 0-10 (0 = core path is broken, 10 = flawless value delivery)

============================================================
PHASE 2: FEATURE FOCUS ANALYSIS
============================================================

Determine whether engineering effort is concentrated on core value
or dispersed across distractions.

Step 2.1 — Feature Inventory

Scan the entire codebase and categorize every user-facing feature:

**Core Features** — directly deliver the primary value proposition:
- [list each with file references]

**Supporting Features** — enable core features but don't deliver value alone:
- Authentication, profiles, settings, notifications
- [list each with file references]

**Peripheral Features** — nice-to-have, don't relate to core value:
- [list each with file references]

**Abandoned/Incomplete Features** — started but not finished:
- Search for TODO, FIXME, WIP, commented-out code blocks, empty route handlers
- [list each with file references]

Step 2.2 — Code Distribution

Calculate approximate lines of code per category:
- Core features: N lines (X% of total)
- Supporting features: N lines (X% of total)
- Peripheral features: N lines (X% of total)
- Abandoned/incomplete: N lines (X% of total)

Step 2.3 — Git History Analysis (if git available)

Analyze recent commit history to understand where effort is going:
- Run `git log --oneline -100` to get recent commits
- Categorize each commit as: core / supporting / peripheral / fix / refactor / ops
- Calculate the ratio: core feature commits / total commits
- PMF signal: > 50% core commits = focused, < 30% = scattered

Step 2.4 — Fix Ratio

Count commits that are fixes vs new features:
- High fix ratio (> 40%) on core features = iterating toward PMF (good)
- High fix ratio on peripheral features = wasted effort (bad)
- Low fix ratio everywhere = building breadth, not depth (PMF risk)

Score: 0-10 (0 = scattered effort, 10 = laser-focused on core value)

============================================================
PHASE 3: USER ACTIVATION ANALYSIS
============================================================

Evaluate how effectively the product converts new users into active users.

Step 3.1 — Signup to Value Steps

Count every required step between "I want to try this" and "I got value":

1. Landing page / app store (awareness)
2. Signup form (how many fields? social auth options?)
3. Email verification (required or deferred?)
4. Onboarding flow (how many screens? skippable?)
5. Profile completion (mandatory fields?)
6. First core action (how obvious is the CTA?)
7. Value delivery (how long until result?)

Record the total step count and identify every friction point.

Step 3.2 — Activation Barriers

Search for code that creates unnecessary friction:
- Mandatory fields that aren't needed for core value
- Required integrations before first use
- Complex configuration before first action
- Paywalls before value demonstration
- Loading/processing delays on first action

Step 3.3 — Activation Metrics

Search for tracking of activation events:
- [ ] Signup completion tracked
- [ ] Onboarding completion tracked
- [ ] First core action tracked
- [ ] Value delivery moment tracked
- [ ] Time-to-value measured
- [ ] Drop-off points between steps identified

Step 3.4 — Activation Optimization Infrastructure

Check if the team can experiment with activation:
- [ ] Feature flags for A/B testing onboarding variations
- [ ] Funnel analytics to measure conversion between steps
- [ ] Cohort analysis capability (compare activation rates over time)

Score: 0-10 (0 = high friction, no measurement, 10 = optimized, instrumented funnel)

============================================================
PHASE 4: RETENTION INFRASTRUCTURE
============================================================

Evaluate whether the product is built to bring users back.

Step 4.1 — Engagement Hooks

Search for retention mechanisms in the codebase:
- [ ] Push notifications (configured, personalized, value-adding)
- [ ] Email triggers (welcome series, re-engagement, activity digests)
- [ ] In-app notifications (activity feed, alerts, updates)
- [ ] Streaks or progress tracking (consecutive days, completion %)
- [ ] Social features (following, sharing, collaboration, comments)
- [ ] Content freshness (new content indicators, discovery feeds)
- [ ] Personalization (recommendations, saved preferences, history)
- [ ] Reminders or scheduled actions (calendar, task due dates)

Step 4.2 — Engagement Loop Quality

For each hook found, evaluate:
- Is it triggered by user behavior (good) or arbitrary timing (bad)?
- Does it deliver value or just nag? (weekly digest with insights vs "you haven't logged in!")
- Is frequency configurable by the user?
- Can users opt out without friction?

Step 4.3 — Churn Prevention

Search for signals that the team is thinking about churn:
- [ ] Last-active timestamp tracking
- [ ] Usage frequency monitoring
- [ ] Win-back flows (re-engagement emails, "we miss you" prompts)
- [ ] Data export (reduces churn anxiety, shows confidence)
- [ ] Account pause option (alternative to delete)
- [ ] Cancellation flow with save attempts (for paid products)

Step 4.4 — Network Effects

Search for features that increase value as more users join:
- User-generated content visible to others
- Marketplace dynamics (more supply = more demand)
- Collaboration features (team value increases with team size)
- Social graph (following, connections, referrals)
- Data network effects (product improves with more usage data)

Score: 0-10 (0 = no retention infrastructure, 10 = strong engagement loops + network effects)

============================================================
PHASE 5: PRICING & MONETIZATION FLEXIBILITY
============================================================

Evaluate whether the architecture supports pricing experimentation.

Step 5.1 — Current Pricing Model

Search for pricing, plan, tier, and subscription logic:
- Plan definitions and feature gating
- Payment integration (Stripe, PayPal, in-app purchase)
- Trial period logic
- Usage metering and limits

Step 5.2 — Pricing Flexibility

Evaluate how easily the team can change pricing:
- [ ] Plans defined in config/database (not hardcoded in UI)
- [ ] Feature flags for plan-based gating (not if/else with plan names)
- [ ] Usage metering granular enough to support different models
- [ ] Trial duration configurable (not hardcoded)
- [ ] Discount/coupon infrastructure
- [ ] Per-seat, per-usage, or flat-rate — can you switch between models?

Step 5.3 — Revenue Readiness

Check for revenue infrastructure maturity:
- [ ] Payment processing integrated and tested
- [ ] Subscription lifecycle (create, upgrade, downgrade, cancel)
- [ ] Invoice/receipt generation
- [ ] Refund handling
- [ ] Failed payment retry logic (dunning)
- [ ] Revenue analytics or reporting

Score: 0-10 (0 = no monetization, 10 = flexible, instrumented revenue engine)

============================================================
PHASE 6: ANALYTICS & MEASUREMENT MATURITY
============================================================

Evaluate whether the team can actually measure PMF signals.

Step 6.1 — Analytics Implementation

Search for analytics/tracking across the codebase:
- Analytics SDK integration (Mixpanel, Amplitude, PostHog, Segment, GA)
- Event tracking calls (track, logEvent, capture)
- Page/screen view tracking
- User property setting (traits, attributes)

Step 6.2 — PMF-Critical Metrics Coverage

Check if these essential PMF metrics are trackable from the codebase:

**Activation:**
- [ ] Signup events with source attribution
- [ ] Onboarding step completion events
- [ ] First core action event
- [ ] Time from signup to first core action

**Engagement:**
- [ ] Daily/weekly active user indicators (login events, session tracking)
- [ ] Core feature usage frequency
- [ ] Session duration
- [ ] Feature adoption breadth

**Retention:**
- [ ] Return visit tracking (Dn retention cohorts)
- [ ] Churn event or inactivity detection
- [ ] Reactivation events

**Revenue (if applicable):**
- [ ] Conversion events (free to paid)
- [ ] Revenue per user tracking
- [ ] Upgrade/downgrade events
- [ ] Churn reason capture

Step 6.3 — Experimentation Infrastructure

Check for A/B testing and experimentation capability:
- [ ] Feature flag system (LaunchDarkly, Unleash, custom)
- [ ] A/B test framework or SDK
- [ ] Configuration-driven UI variations
- [ ] Analytics events that distinguish test variants

Score: 0-10 (0 = flying blind, 10 = comprehensive PMF measurement)

============================================================
PHASE 7: ITERATION SPEED
============================================================

Evaluate how fast the team can ship changes — critical for finding PMF.

Step 7.1 — Development Pipeline

Check for CI/CD and deployment infrastructure:
- [ ] Automated testing (unit, integration, e2e)
- [ ] CI pipeline (GitHub Actions, CircleCI, etc.)
- [ ] Automated deployment
- [ ] Preview/staging environments
- [ ] Database migration system
- [ ] Rollback capability

Step 7.2 — Code Modularity

Evaluate how easy it is to change things:
- Are features isolated or tangled? (check import graphs, coupling)
- Can you change one feature without breaking others?
- Is the data model rigid or flexible? (schema migrations, schema-less, etc.)
- How many files need to change for a typical feature addition?

Step 7.3 — Velocity Indicators (from git if available)

- Average commits per week (recent month)
- Time between feature start and deploy
- Number of contributors and their activity patterns
- Release frequency

Score: 0-10 (0 = slow, manual, fragile deploys, 10 = fast, automated, safe iteration)

============================================================
PHASE 8: MARKET SIGNAL ANALYSIS
============================================================

Look for signals that the product is connecting with its market.

Step 8.1 — Integration Ecosystem

Search for third-party integrations:
- OAuth providers (Google, GitHub, Apple, enterprise SSO)
- API endpoints (REST, GraphQL, webhooks)
- Import/export capabilities
- SDK or library distribution
- Plugin or extension system

Integration breadth signals market pull — the market is telling you
to connect with their existing tools.

Step 8.2 — Multi-Market Readiness

Check for internationalization and localization:
- [ ] i18n framework integrated
- [ ] String externalization (no hardcoded user-facing strings)
- [ ] Multiple locale support
- [ ] Currency/timezone handling
- [ ] RTL layout support

Step 8.3 — Platform Coverage

Check deployment targets:
- Web, iOS, Android, desktop
- Responsive design
- Native app wrappers
- API-first architecture (enables any client)

Broader platform coverage can signal market demand pulling the product
to new surfaces.

Score: 0-10 (0 = isolated product, 10 = ecosystem-integrated, multi-market ready)

============================================================
PHASE 9: WRITE REPORT
============================================================

Write the complete analysis to `docs/pmf-analysis.md` in the project
(create the `docs/` directory if it doesn't exist).


============================================================
SELF-HEALING VALIDATION (max 2 iterations)
============================================================

After producing output, validate data quality and completeness:

1. Verify all output sections have substantive content (not just headers).
2. Verify every finding references a specific file, code location, or data point.
3. Verify recommendations are actionable and evidence-based.
4. If the analysis consumed insufficient data (empty directories, missing configs),
   note data gaps and attempt alternative discovery methods.

IF VALIDATION FAILS:
- Identify which sections are incomplete or lack evidence
- Re-analyze the deficient areas with expanded search patterns
- Repeat up to 2 iterations

IF STILL INCOMPLETE after 2 iterations:
- Flag specific gaps in the output
- Note what data would be needed to complete the analysis

============================================================
OUTPUT
============================================================

## Product-Market Fit Analysis Complete

### PMF Readiness Scorecard

| Dimension | Score | Weight | Weighted | Key Finding |
|-----------|-------|--------|----------|-------------|
| Core Value Delivery | {0-10} | 25% | {score} | {one-line finding} |
| Feature Focus | {0-10} | 15% | {score} | {one-line finding} |
| User Activation | {0-10} | 15% | {score} | {one-line finding} |
| Retention Infrastructure | {0-10} | 15% | {score} | {one-line finding} |
| Pricing Flexibility | {0-10} | 5% | {score} | {one-line finding} |
| Analytics Maturity | {0-10} | 10% | {score} | {one-line finding} |
| Iteration Speed | {0-10} | 10% | {score} | {one-line finding} |
| Market Signals | {0-10} | 5% | {score} | {one-line finding} |
| **PMF Readiness** | | | **{weighted avg}/10** | **{verdict}** |

**PMF Stage: {SEARCHING / APPROACHING / ACHIEVED / SCALING}**

- SEARCHING (0-3): Product is still exploring. Core value unclear or undelivered.
- APPROACHING (4-6): Core value exists but activation, retention, or measurement gaps remain.
- ACHIEVED (7-8): Strong core loop, users return, growth is organic. Ready to scale.
- SCALING (9-10): PMF is clear. Focus shifts to growth and efficiency.

### Critical Path Assessment

- Steps from signup to aha moment: {N}
- Core path completable without errors: {YES/NO}
- Estimated time-to-value for new user: {duration}
- Core path files: {list of key files in the critical path}

### Feature Focus Distribution

- Core features: {N}% of codebase
- Supporting features: {N}%
- Peripheral features: {N}%
- Abandoned/incomplete: {N}%
- Recent commit focus: {N}% on core features

### Top 5 PMF Gaps (Prioritized)

| # | Gap | Dimension | Impact | Effort | Recommendation |
|---|-----|-----------|--------|--------|----------------|
| 1 | {description} | {dimension} | {High/Med/Low} | {S/M/L} | {specific action} |
| 2 | ... | ... | ... | ... | ... |

### PMF Accelerators (Quick Wins)

Actions that would most rapidly improve PMF readiness:

1. {action} — improves {dimension} from {current} to ~{projected}
2. ...
3. ...

### PMF Risks

Factors that could prevent or delay PMF:

1. {risk} — {why it matters} — {mitigation}
2. ...

### Report saved to: `docs/pmf-analysis.md`

============================================================

============================================================
SELF-EVOLUTION TELEMETRY
============================================================

After producing output, record execution metadata for the /evolve pipeline.

Check if a project memory directory exists:
- Look for the project path in `~/.claude/projects/`
- If found, append to `skill-telemetry.md` in that memory directory

Entry format:
```
### /pmf-analysis — {{YYYY-MM-DD}}
- Outcome: {{SUCCESS | PARTIAL | FAILED}}
- Self-healed: {{yes — what was healed | no}}
- Iterations used: {{N}} / {{N max}}
- Bottleneck: {{phase that struggled or "none"}}
- Suggestion: {{one-line improvement idea for /evolve, or "none"}}
```

Only log if the memory directory exists. Skip silently if not found.
Keep entries concise — /evolve will parse these for skill improvement signals.

STRICT RULES
============================================================

- Read ACTUAL code to evaluate every signal. Do not guess.
- Reference specific files and lines for every finding.
- Score based on what EXISTS in the codebase, not what could be added.
- The weighted scoring reflects PMF reality: core value delivery matters
  most (25%), followed by focus, activation, and retention (15% each).
- Be honest about the PMF stage. Most products are SEARCHING or APPROACHING.
  Do not inflate the assessment.
- Distinguish between "not implemented" and "partially implemented."
- Git history analysis is valuable but optional — some repos may not have
  sufficient history.
- Do NOT propose code changes. This is an analysis skill, not a fix skill.

NEXT STEPS:

- "Run `/iterate` to address the top PMF gaps."
- "Run `/customer-success-audit` to strengthen retention and support infrastructure."
- "Run `/growth-audit` to build acquisition and engagement loops."
- "Run `/compete` to validate differentiation against competitors."
- "Run `/stress-test-personas` to pressure-test the product from adversarial angles."
- "Run `/cost-analysis` to ensure unit economics support the business model."
