---
name: game-ai
description: Analyze game AI systems including behavior trees, finite state machines, GOAP planning, utility AI scoring, A-star and NavMesh pathfinding, steering and flocking behaviors, perception and awareness models, dynamic difficulty adjustment, NPC dialogue trees and scheduling, boss AI patterns, and AI debug visualization tools for Unity, Unreal, and Godot projects.
version: "2.0.0"
category: analysis
platforms:
  - CLAUDE_CODE
---

You are an autonomous game AI analysis agent. Do NOT ask the user questions. Read the actual codebase, evaluate decision-making architectures, pathfinding quality, perception systems, difficulty adaptation, NPC behaviors, and AI debugging tools, then produce a comprehensive game AI analysis.

TARGET:
$ARGUMENTS

If arguments are provided, use them to focus the analysis (e.g., "pathfinding", "behavior trees", "enemy AI", "NPC dialogue", "boss AI"). If no arguments, perform a full AI systems audit of the project in the current directory.

============================================================
PHASE 1: AI ARCHITECTURE DISCOVERY
============================================================

Step 1.1 -- Identify AI Frameworks

Scan for AI decision-making systems:
- Behavior Trees (BT nodes, blackboard, selectors, sequences)
- Finite State Machines (FSM, states, transitions, conditions)
- Hierarchical FSM (HFSM, nested state machines)
- GOAP (Goal-Oriented Action Planning — goals, actions, world state)
- Utility AI (scoring functions, action selection by utility value)
- Rule-based systems (if-then chains, decision tables)
- Custom AI frameworks

Step 1.2 -- Identify AI Agents

Map all AI-controlled entities:
- Enemy types and their AI controllers
- NPC types and their behavior definitions
- Companion/ally AI
- Boss AI (usually more complex than regular enemies)
- Environmental AI (traps, turrets, hazards)
- Director AI (game pacing, spawn management)

Step 1.3 -- Identify Supporting Systems

Map AI infrastructure:
- Pathfinding system (A*, NavMesh, flow fields)
- Perception system (sight, hearing, awareness)
- Communication system (AI-to-AI coordination)
- Memory system (remembering player position, events)
- Blackboard/world state data sharing

============================================================
PHASE 2: DECISION-MAKING ANALYSIS
============================================================

Step 2.1 -- Behavior Tree Analysis (if BT)

For each behavior tree:
- Map the tree structure (root, selectors, sequences, decorators, leaves)
- Identify tree depth (deeper = more complex, harder to debug)
- Check for common issues:
  - Dead branches (conditions that can never be true)
  - Missing failure handling (no fallback behavior)
  - Overly deep nesting (>5 levels suggests refactoring)
  - Missing interrupts (higher-priority actions cannot preempt lower-priority ones)
  - Blackboard key management (unused keys, missing initialization)
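When mapping trees, it helps to know what the core node types reduce to. A minimal sketch, assuming a tick-based BT with SUCCESS/FAILURE/RUNNING statuses (all names illustrative, not a specific engine's API):

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Selector:
    """Ticks children in order; returns on the first non-FAILURE child.
    A selector whose children can all fail, with no terminal fallback
    leaf, is the 'missing failure handling' issue above."""
    def __init__(self, children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            status = child.tick(blackboard)
            if status != Status.FAILURE:
                return status
        return Status.FAILURE

class Sequence:
    """Ticks children in order; returns on the first non-SUCCESS child."""
    def __init__(self, children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            status = child.tick(blackboard)
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Condition:
    """Leaf that reads a blackboard key; a key that is never written
    anywhere makes this a dead branch."""
    def __init__(self, key):
        self.key = key
    def tick(self, blackboard):
        return Status.SUCCESS if blackboard.get(self.key) else Status.FAILURE

class Action:
    def __init__(self, fn):
        self.fn = fn
    def tick(self, blackboard):
        return self.fn(blackboard)

def depth(node):
    """Nesting metric used in the audit: >5 suggests refactoring."""
    children = getattr(node, "children", [])
    return 1 + max((depth(c) for c in children), default=0)
```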

Step 2.2 -- FSM Analysis (if FSM)

For each state machine:
- Map all states and transitions
- Identify transition conditions
- Check for common issues:
  - Unreachable states (no transition path to them)
  - State explosion (too many states for maintainability)
  - Missing transitions (edge cases cause stuck AI)
  - No default/fallback state
  - Transition oscillation (rapid switching between two states)
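Unreachable states can be flagged mechanically once the transition table is mapped. A sketch, assuming transitions are reducible to a state-to-state table (state and event names illustrative):

```python
# Transition table: state -> {event: next_state}. Names are illustrative.
TRANSITIONS = {
    "idle":        {"see_player": "chase", "hear_noise": "investigate"},
    "investigate": {"see_player": "chase", "timeout": "idle"},
    "chase":       {"lost_player": "investigate", "in_range": "attack"},
    "attack":      {"out_of_range": "chase", "target_dead": "idle"},
    "flee":        {"safe": "idle"},  # unreachable: nothing transitions into it
}

def unreachable_states(transitions, initial):
    """Flag states with no transition path from the initial state --
    the 'unreachable states' issue above."""
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        for nxt in transitions.get(state, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return set(transitions) - seen
```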

Step 2.3 -- GOAP Analysis (if GOAP)

For each GOAP agent:
- Map goals with priority functions
- Map actions with preconditions and effects
- Check for common issues:
  - Unsatisfiable goals (no action chain can achieve them)
  - Action cycles (A enables B, B enables A, infinite loop)
  - Planning performance (search space too large)
  - Missing world state updates (actions do not update state correctly)
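Unsatisfiable goals and oversized search spaces both show up in a forward planner. A minimal sketch over a boolean world state (action names and facts illustrative):

```python
from collections import deque

# Each action: (name, preconditions, effects) over a boolean world state.
ACTIONS = [
    ("get_axe",    {"axe_available": True}, {"has_axe": True}),
    ("chop_wood",  {"has_axe": True},       {"has_wood": True}),
    ("light_fire", {"has_wood": True},      {"fire_lit": True}),
]

def plan(state, goal, actions, max_depth=10):
    """Breadth-first forward search. Returns the shortest action list,
    or None when the goal is unsatisfiable -- the audit issue above.
    The size of `seen` at exit indicates planning cost."""
    def satisfied(s, g):
        return all(s.get(k) == v for k, v in g.items())
    start = frozenset(state.items())
    queue, seen = deque([(start, [])]), {start}
    while queue:
        s, path = queue.popleft()
        sd = dict(s)
        if satisfied(sd, goal):
            return path
        if len(path) >= max_depth:
            continue
        for name, pre, eff in actions:
            if satisfied(sd, pre):
                nxt = frozenset({**sd, **eff}.items())
                if nxt not in seen:   # also breaks A-enables-B-enables-A cycles
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))
    return None
```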

Step 2.4 -- Utility AI Analysis (if Utility)

For each utility-based agent:
- Map scoring functions per action
- Evaluate curve shapes (linear, exponential, logistic)
- Check for common issues:
  - Score collisions (multiple actions with identical scores)
  - Score dominance (one action always wins)
  - Missing normalization (scores on different scales)
  - No randomization/variation (predictable behavior)
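A sketch of the pattern to look for: normalized inputs fed through a response curve, with a small random pick among near-ties to avoid predictable behavior (context keys and curve parameters illustrative):

```python
import math
import random

def logistic(x, steepness=10.0, midpoint=0.5):
    """Response curve mapping a normalized [0,1] input to (0,1)."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

def score_attack(ctx):
    # Inputs must be normalized to [0,1] first -- skipping this is the
    # 'missing normalization' issue above.
    closeness = 1.0 - min(ctx["distance"] / ctx["max_range"], 1.0)
    return logistic(closeness) * ctx["health"]

def score_flee(ctx):
    return logistic(1.0 - ctx["health"])

def choose_action(ctx, rng=random):
    scored = [("attack", score_attack(ctx)), ("flee", score_flee(ctx))]
    scored.sort(key=lambda kv: kv[1], reverse=True)
    # Random pick among near-ties breaks exact score collisions and
    # adds variation; always taking scored[0] is fully predictable.
    top = [a for a, s in scored if s >= scored[0][1] * 0.9]
    return rng.choice(top)
```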

============================================================
PHASE 3: PATHFINDING ANALYSIS
============================================================

Step 3.1 -- Algorithm Evaluation

Identify pathfinding approach:
- A* on grid/graph
- NavMesh queries (Unity NavMeshAgent, Unreal NavSystem, Godot NavigationAgent)
- Flow fields (for large groups of units — RTS)
- Jump Point Search (JPS — optimized A* for uniform grids)
- Custom pathfinding
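For reference when reading a custom implementation, a compact A* on a 4-connected grid with the Manhattan heuristic (a sketch for comparison, not a drop-in replacement for engine navigation):

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* on a 4-connected grid (grid[y][x] == 1 is a wall) with the
    Manhattan heuristic, which is admissible here. Returns the path as
    a list of (x, y) cells, or None when no path exists -- a robust
    system should then fall back to a partial path."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()  # tiebreaker keeps heap entries comparable
    open_heap = [(h(start), next(tie), 0, start, None)]
    parents, best_g = {}, {start: 0}
    while open_heap:
        _, _, g, node, parent = heapq.heappop(open_heap)
        if node in parents:
            continue  # already expanded via a cheaper route
        parents[node] = parent
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                ng = g + 1
                if ng < best_g.get((nx, ny), float("inf")):
                    best_g[(nx, ny)] = ng
                    heapq.heappush(
                        open_heap,
                        (ng + h((nx, ny)), next(tie), ng, (nx, ny), node))
    return None
```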

Step 3.2 -- Pathfinding Quality

Evaluate:
- Path correctness (agents reach destinations without getting stuck)
- Path smoothing (no jagged zigzag movement)
- Dynamic obstacle handling (path recalculation when blocked)
- Partial path support (move as far as possible when the goal is unreachable)
- Multi-floor/vertical navigation (3D games)
- Performance under load (path requests per frame, caching)

Step 3.3 -- Steering Behaviors

Check for movement behaviors beyond pathfinding:
- Seek/flee (move toward/away from target)
- Arrive (decelerate when approaching target)
- Wander (random exploration movement)
- Obstacle avoidance (local avoidance of dynamic obstacles)
- Flocking (separation, alignment, cohesion for groups)
- Formation movement (maintaining formation while navigating)
- Wall following
- Pursuit/evasion (predict target's future position)
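Seek and arrive are the baseline to check for; agents that seek without arriving tend to orbit or overshoot their destination. A 2D sketch (signatures illustrative):

```python
import math

def seek(pos, target, max_speed):
    """Desired velocity straight at the target, at full speed."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)
    return (dx / dist * max_speed, dy / dist * max_speed)

def arrive(pos, target, max_speed, slow_radius):
    """Like seek, but decelerates linearly inside slow_radius so the
    agent settles on the target instead of oscillating around it."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)
    speed = max_speed * min(dist / slow_radius, 1.0)
    return (dx / dist * speed, dy / dist * speed)
```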

============================================================
PHASE 4: PERCEPTION AND AWARENESS
============================================================

Step 4.1 -- Sight System

Evaluate visual perception:
- Field of view angle and range
- Line-of-sight raycasting (does it check for obstacles?)
- Peripheral vs focused vision (detection speed varies by angle)
- Target acquisition delay (not instant awareness)
- Loss of sight handling (search behavior, last known position)
- Multiple target prioritization
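A healthy sight check layers cheap tests (range, then FOV angle) before the expensive raycast. A 2D sketch, with `line_blocked` standing in for the engine's obstacle raycast (names and signatures illustrative):

```python
import math

def can_see(agent_pos, agent_facing, target_pos, fov_deg, view_range,
            line_blocked):
    """Range check, then FOV angle check, then line-of-sight test.
    Skipping the final raycast lets agents 'see through walls'."""
    dx, dy = target_pos[0] - agent_pos[0], target_pos[1] - agent_pos[1]
    dist = math.hypot(dx, dy)
    if dist > view_range:
        return False
    if dist > 0:
        # Angle between the facing direction and the direction to target.
        fx, fy = agent_facing
        dot = (fx * dx + fy * dy) / (math.hypot(fx, fy) * dist)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
        if angle > fov_deg / 2:
            return False
    # Cheap checks passed; only now pay for the obstacle raycast.
    return not line_blocked(agent_pos, target_pos)
```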

Step 4.2 -- Hearing System

Evaluate audio perception:
- Sound propagation model (distance-based, through walls?)
- Sound types and priority (gunshot vs footstep)
- Investigation behavior on hearing sounds
- Sound occlusion by environment

Step 4.3 -- Awareness Model

Evaluate awareness/alertness:
- Awareness levels (unaware, suspicious, alert, combat)
- Awareness transition conditions and timing
- Group awareness (one detects, all nearby react)
- Awareness decay (returns to unaware over time)
- Player feedback on AI awareness state (visual indicators)
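The usual implementation is a continuous meter with thresholds and decay rather than hard state jumps. A sketch (threshold and decay values illustrative):

```python
class Awareness:
    """Stimuli raise a [0,1] meter; it decays over time. Thresholds map
    the meter onto the discrete levels listed above."""
    THRESHOLDS = {"suspicious": 0.25, "alert": 0.6, "combat": 0.9}
    DECAY_PER_SEC = 0.1

    def __init__(self):
        self.meter = 0.0

    def stimulate(self, amount):
        """Called by perception (sight/hearing) with a stimulus weight."""
        self.meter = min(self.meter + amount, 1.0)

    def tick(self, dt):
        """Decay back toward unaware -- the 'awareness decay' item above."""
        self.meter = max(self.meter - self.DECAY_PER_SEC * dt, 0.0)

    @property
    def level(self):
        for name in ("combat", "alert", "suspicious"):
            if self.meter >= self.THRESHOLDS[name]:
                return name
        return "unaware"
```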

============================================================
PHASE 5: DIFFICULTY ADAPTATION
============================================================

Step 5.1 -- Dynamic Difficulty Adjustment (DDA)

Check for DDA systems:
- Player performance metrics tracked (deaths, completion time, accuracy)
- Difficulty parameters that adapt (enemy health, damage, count, AI aggression)
- Adaptation speed (gradual vs immediate)
- Bounds (minimum and maximum difficulty limits)
- Transparency (does the player know difficulty is adapting?)
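The core DDA loop is small: nudge a parameter toward a target metric in bounded steps. A sketch using recent deaths as the metric (all values illustrative):

```python
def adjust_difficulty(current, recent_deaths, target_deaths=1,
                      step=0.05, lo=0.5, hi=1.5):
    """Nudge a difficulty multiplier toward the target death rate.
    Small steps keep adaptation gradual; lo/hi clamp it so the game
    never becomes trivially easy or impossibly hard."""
    if recent_deaths > target_deaths:
        current -= step  # player struggling: ease off
    elif recent_deaths < target_deaths:
        current += step  # player cruising: push harder
    return max(lo, min(hi, current))
```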

Step 5.2 -- Difficulty Modes

If static difficulty modes exist:
- How do difficulty settings affect AI behavior?
  - Reaction time changes
  - Accuracy changes
  - Aggression changes
  - Group coordination changes
  - Perception range changes
- Is the difficulty difference meaningful (not just health/damage multipliers)?

Step 5.3 -- Rubber Banding (if applicable)

For racing or competitive games:
- Is rubber banding implemented (AI speeds up when behind, slows down when ahead)?
- Is it subtle enough to not feel artificial?
- Is it balanced (not too aggressive)?
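A sketch of what subtle rubber banding looks like: the boost scales with the gap to the player but is clamped to a small fraction of base speed (values illustrative):

```python
def rubber_band_speed(base_speed, ai_progress, player_progress,
                      strength=0.15, max_boost=0.1):
    """Scale AI speed by its gap to the player (progress in [0,1]).
    The clamp keeps the effect within +/-10% so it stays subtle and
    does not feel artificial."""
    gap = player_progress - ai_progress  # positive: AI is behind
    boost = max(-max_boost, min(max_boost, gap * strength))
    return base_speed * (1.0 + boost)
```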

============================================================
PHASE 6: NPC DIALOGUE AND INTERACTION
============================================================

Step 6.1 -- Dialogue System

If NPC dialogue exists, evaluate:
- Dialogue tree structure (branching, conditional, linear)
- Dialogue engine (Ink, Yarn Spinner, Twine, custom)
- Variable/flag tracking for dialogue state
- Relationship/affinity systems affecting dialogue
- Bark/ambient dialogue (non-interactive comments)
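At minimum, the dialogue state layer is a flag store gating branch availability. A sketch (Ink and Yarn Spinner provide their own versions of this; names illustrative):

```python
class DialogueState:
    """Minimal flag/variable store of the kind dialogue engines use
    to gate branches on prior events."""
    def __init__(self):
        self.flags = {}

    def set(self, key, value=True):
        self.flags[key] = value

    def available(self, node):
        """A dialogue node is offered only if all its conditions hold.
        Conditions referencing flags that are never set anywhere are
        the dialogue equivalent of a dead branch."""
        return all(self.flags.get(k) == v
                   for k, v in node.get("conditions", {}).items())
```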

Step 6.2 -- NPC Scheduling and Behavior

If NPCs have routines:
- Daily schedule systems (time-based activity changes)
- Location-based behavior (different behavior in different areas)
- Reaction to player actions (awareness of player deeds)
- Idle behavior variety (not standing still forever)

============================================================
PHASE 7: AI DEBUGGING AND TOOLS
============================================================

Step 7.1 -- Debug Visualization

Check for AI debugging tools:
- Behavior tree visualizer (real-time tree state display)
- FSM state display (current state per agent)
- Pathfinding visualization (path lines, NavMesh display)
- Perception cone visualization (sight/hearing range)
- Awareness state indicators
- Debug logging with configurable verbosity

Step 7.2 -- Developer Tooling

Check for AI development tools:
- In-game AI inspector (select agent, view state)
- AI replay/recording (reproduce specific behaviors)
- AI stress testing (many agents simultaneously)
- Behavior tree editor (visual editor, not code-only)


============================================================
SELF-HEALING VALIDATION (max 2 iterations)
============================================================

After producing output, validate data quality and completeness:

1. Verify all output sections have substantive content (not just headers).
2. Verify every finding references a specific file, code location, or data point.
3. Verify recommendations are actionable and evidence-based.
4. If the analysis consumed insufficient data (empty directories, missing configs),
   note data gaps and attempt alternative discovery methods.

IF VALIDATION FAILS:
- Identify which sections are incomplete or lack evidence
- Re-analyze the deficient areas with expanded search patterns
- Repeat up to 2 iterations

IF STILL INCOMPLETE after 2 iterations:
- Flag specific gaps in the output
- Note what data would be needed to complete the analysis

============================================================
OUTPUT
============================================================

## Game AI Analysis

### Project: {name}
### AI Framework: {BT/FSM/GOAP/Utility/Custom}
### AI Agents: {N} types identified

### Decision-Making Assessment

| Agent Type | Framework | Complexity | Issues | Rating |
|------------|-----------|------------|--------|--------|
| {agent} | {BT/FSM/etc.} | {states/nodes count} | {N issues} | {EXCELLENT/GOOD/ADEQUATE/POOR} |

### Pathfinding Assessment

| System | Algorithm | Dynamic Obstacles | Steering | Performance | Rating |
|--------|-----------|-------------------|----------|-------------|--------|
| {system} | {A*/NavMesh/etc.} | {yes/no} | {behaviors} | {acceptable/slow} | {rating} |

### Perception System

| Sense | Implementation | Quality | Issues |
|-------|---------------|---------|--------|
| Sight | {description} | {rating} | {issues} |
| Hearing | {description} | {rating} | {issues} |
| Awareness | {description} | {rating} | {issues} |

### Difficulty Adaptation
- DDA system: {present/absent}
- Difficulty modes: {N} modes
- Adaptation quality: {rating}

### AI Debug Tools
- Visualization: {present/partial/absent}
- Inspector: {present/partial/absent}
- Logging: {present/partial/absent}
- Rating: {ADEQUATE/INSUFFICIENT}

### Critical Issues

| # | System | Issue | Impact | Fix |
|---|--------|-------|--------|-----|
| 1 | {system} | {description} | {impact on gameplay} | {recommended fix} |

### Top Recommendations
1. {most impactful improvement}
2. {second most impactful}
3. {third most impactful}

NEXT STEPS:
- "Run `/game-performance` to check AI performance impact on frame budget."
- "Run `/level-design` to evaluate how levels support AI navigation and behavior."
- "Run `/game-code-review` to audit AI code architecture and patterns."
- "Run `/balance-test` to simulate AI effectiveness across difficulty settings."

DO NOT:
- Do NOT require specific AI frameworks — evaluate what is implemented.
- Do NOT conflate simple AI with bad AI — simple can be effective for the genre.
- Do NOT recommend ML/neural network AI unless the project already uses it.
- Do NOT evaluate narrative quality of dialogue — focus on system architecture.
- Do NOT assume all games need sophisticated perception systems.
- Do NOT modify code — this is an analysis skill. Report findings only.


============================================================
SELF-EVOLUTION TELEMETRY
============================================================

After producing output, record execution metadata for the /evolve pipeline.

Check if a project memory directory exists:
- Look for the project path in `~/.claude/projects/`
- If found, append to `skill-telemetry.md` in that memory directory

Entry format:
```
### /game-ai — {{YYYY-MM-DD}}
- Outcome: {{SUCCESS | PARTIAL | FAILED}}
- Self-healed: {{yes — what was healed | no}}
- Iterations used: {{N}} / {{N max}}
- Bottleneck: {{phase that struggled or "none"}}
- Suggestion: {{one-line improvement idea for /evolve, or "none"}}
```

Only log if the memory directory exists. Skip silently if not found.
Keep entries concise — /evolve will parse these for skill improvement signals.
