---
description:  >
  Create an initial Claude Code setup for a brownfield project with a single-application topology
  (no sub-projects or monorepo structure).
disable-model-invocation: true
user-invocable: true
---

# Claude Code Standalone Project Scaffolding

Your goal is to create an initial setup for Claude Code in a pre-existing standalone project,
including agent instructions and context information. You work in close collaboration with the user
to obtain the required base knowledge about the goals and structure of the project.

**Additional user arguments**: $ARGUMENTS

**Language hint**: Always create all generated document content in English,
while continuing to speak to the user in the language of their choice.

**Platform hint**: Instructions and templates assume a Linux host with GNU coreutils. Adapt to the detected user OS.
- macOS   — Substitute BSD equivalents for GNU-only utilities.
- Windows — Still use `.sh` files (skip irrelevant `chmod +x`), assuming Git Bash is available at runtime.
            Highlight this requirement in the Debriefing. Set `"shell": "bash"` on command hooks in `settings.json`.

## Agent Content Principles

When generating content for `.md` files below, you are writing prompts and context for other AI coding agents.
Follow these principles to optimally tailor your instructions to their needs:

- **Concise**     — Minimize token usage. Prefer keywords and terse bullet points over prose.
- **Structured**  — Use compact Markdown to delineate connected aspects.
- **Actionable**  — Generate concrete operational directives, not abstract guidelines.
                    Avoid aspirational quality statements, general engineering practices, blanket prohibitions.
- **Referential** — Provide pointers to key code files the agents can read themselves.
                    Do not describe how code works in agent instructions, as such duplication leads to drift.
- **Scoped**      — Context is hierarchical. The CLAUDE.md must only contain core project identity and semantics.
                    Rules and agent instructions progressively disclose domain- and task-specific knowledge.

# Workflow

1. Begin execution by creating a formal task list for progress tracking using the `TaskCreate` tool.
   Create a task for each of the following phases (##) and sub-phases (###).
   Do not duplicate this skill's contents in the task descriptions; only reference this skill (`abc-init:standalone`) and the workflow item.
2. Create a dependency chain between all tasks using `TaskUpdate`, setting `addBlockedBy` to the predecessor task.
3. Work through the `TaskList` using `TaskUpdate` to mark tasks as in_progress and completed as you go.

## Phase 1: Reconnaissance

1. Use the `Explore` agent to scan the repository and build an initial understanding of its structure:
   - Top-level directory content that hints at used technologies (e.g. `package.json`, `composer.json`, `Cargo.toml`, `go.mod`, `Makefile`, `Dockerfile`)
   - Existing documentation (e.g. `README.md`, `CONTRIBUTING.md` or `docs/`)
2. Read any discovered documentation and technology manifest files
3. Check for an existing `CLAUDE.md` or `.claude/` directory — if found, establish whether the user wants to amend or replace them.
4. Summarize your findings and conclusions briefly for the user and ask if they want to comment or add information.

## Phase 2: User Interview

Interview the user to establish the project's base details.
Use `AskUserQuestion` where appropriate to keep the conversation structured.
Offer pre-defined choice options if likely answers to a question are already known from context.

### Question Catalogue

1. What is the name of the project?
2. Who is the project creator and/or maintainer (company/organization)?
3. What is the overall purpose of the project (one-sentence summary)?
4. What are the main technologies used (programming language, framework, deployment...)?
5. What are key concepts or vocabulary that every developer needs to learn on their first day?
6. What are the key source directories?
7. How are automated tests organized and run?
8. Are there tools for linting or other automated code quality control?

## Phase 3: Generate Artifacts

### 3a — Claude Code Settings

1. Copy the [settings template](./templates/settings-template.json) to `<project-dir>/.claude/settings.json`
2. Copy the [statusline template](./templates/statusline.sh) to `<project-dir>/.claude/statusline.sh` and make it executable (`chmod +x`).
3. Replace `{{PLACEHOLDERS}}` with answers from the user interview.
4. Inject `{{GITIGNORE-EXCLUSIONS}}` into the sandbox config, limiting write access to version-controlled files only.
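The exclusion list can be derived mechanically from the project's `.gitignore`. A minimal sketch, assuming a plain top-level `.gitignore` without negation (`!`) patterns (the exact sandbox schema depends on the settings template):

```shell
# Sketch: turn top-level .gitignore patterns into a JSON string array that can
# be spliced into the {{GITIGNORE-EXCLUSIONS}} placeholder.
# Assumes no negation (!) patterns and no nested .gitignore files.
if [ -f .gitignore ]; then
  grep -vE '^[[:space:]]*(#|$)' .gitignore | jq -R . | jq -s .
else
  echo '[]'
fi
```

Each non-comment, non-blank pattern becomes one JSON string; review the result before injecting it, since real `.gitignore` semantics (negation, directory scoping) are richer than this filter.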

### 3b — Central CLAUDE.md

1. Copy the [template](./templates/CLAUDE-template.md) to `<project-dir>/CLAUDE.md`
2. Fill in the `{{PLACEHOLDERS}}` with answers from the user interview.
3. For placeholders that do not have corresponding answers,
   ask the user whether they want to provide an answer, generate an answer from code exploration, or omit the section.

The content is written for AI, not humans. There is no need for verbose introductions or explanations.
Keep this file as brief as possible to preserve tokens. Prefer keywords and enumeration over continuous text.
Use clear section headers and other Markdown formatting to demarcate connected aspects.

### 3c — Local Override Files

If a `.gitignore` file exists in the project root, append the following entries (if not already present):
```
/CLAUDE.local.md
/.claude/settings.local.json
```
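The append can be made idempotent with an exact full-line match, for example:

```shell
# Append each entry to .gitignore only if not already present
# (-x: match whole line, -F: fixed string, -q: quiet). Safe to re-run.
for entry in '/CLAUDE.local.md' '/.claude/settings.local.json'; do
  grep -qxF "$entry" .gitignore || printf '%s\n' "$entry" >> .gitignore
done
```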

### 3d — Explorer Agent

1. Copy the [template](./templates/explorer-agent.md) to `<project-dir>/.claude/agents/<project-slug>-explorer.md`
2. Fill in the `{{PLACEHOLDERS}}` with known answers from the interview.
3. Use a general-purpose `Explore` agent to explore the project's code more thoroughly,
   then add context information and instructions that help agents navigate the code structure,
   along with common conventions and nomenclature.
4. Modify `.claude/settings.json`, add `"Agent(Explore)"` to the `permissions.deny` array (create it if it does not exist).
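Step 4 can be sketched with `jq`; the helper name `deny_explore_agent` below is illustrative, not part of any template, and `unique` keeps repeated runs idempotent:

```shell
# Add "Agent(Explore)" to permissions.deny in a settings file,
# creating the array if it does not exist yet.
deny_explore_agent() {
  local settings="$1" tmp
  tmp=$(mktemp)
  jq '.permissions.deny = ((.permissions.deny // []) + ["Agent(Explore)"] | unique)' \
    "$settings" > "$tmp" && mv "$tmp" "$settings"
}
```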

### 3e — Rules

1. Create a `.claude/rules/` directory in the project root.
2. For each programming language used in the project, create a code style rule
   from the [template](./templates/rule-code-style.md) at `<project-dir>/.claude/rules/<language>-code-style.md`
    - Fill in `{{PLACEHOLDERS}}` according to the aspects of the programming language.
    - Populate the style rules from linting tool configuration if discovered in Phase 1,
      or from conventions observed during code exploration.
3. For each testing framework used in the project, create a testing rule
   from the [template](./templates/rule-testing.md) at `<project-dir>/.claude/rules/testing.md`
    - Determine a glob pattern matching only existing test files (e.g. `**/*.test.ts`, `**/*Test.php`, `**/test_*.py`, `**/*_test.go`).
    - Populate with concrete test conventions (file placement, naming, assertion style, setup patterns)
      derived from test files discovered in Phase 1 or from the interview answers about test organization.
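One way to pick a glob is to count matches per common candidate pattern; the list below is illustrative, not exhaustive:

```shell
# Count existing files matching each candidate test-file pattern.
for pat in '*.test.ts' '*Test.php' 'test_*.py' '*_test.go'; do
  printf '%s %s\n' "$pat" "$(find . -type f -name "$pat" | wc -l)"
done
```

Pick the pattern (or patterns) with non-zero counts that match only test files.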

If any of these steps seem inapplicable to the given project, skip them and note this during the summary.

### 3f — Quality Gate Hooks

The following hooks are pre-registered in the settings template.
They depend on `bash 4+`, `jq`, and `tac` — check that these are on PATH and report any missing one in the debriefing.
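A minimal pre-flight check along these lines collects all missing dependencies instead of failing on the first one:

```shell
# Check that the hook dependencies (bash 4+, jq, tac) are available.
missing=()
[ "${BASH_VERSINFO[0]:-0}" -ge 4 ] || missing+=('bash 4+')
command -v jq  > /dev/null 2>&1 || missing+=('jq')
command -v tac > /dev/null 2>&1 || missing+=('tac')
if [ "${#missing[@]}" -eq 0 ]; then
  echo 'hook dependencies: OK'
else
  echo "hook dependencies missing: ${missing[*]}"
fi
```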

If the project's quality tooling is unclear or not yet set up, place illustrative example comments in the output file.
The project owner can fill in the correct code later.

#### Post-Edit hook

1. Copy the [template](./templates/post-edit-hook.sh) to `<project-dir>/.claude/hooks/post-edit.sh` and `chmod +x`.
2. Replace `{{FILE-TYPE-CASES}}` with dispatching logic using the linting/formatting tools from Q8.
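A hypothetical shape for the `{{FILE-TYPE-CASES}}` dispatch; the tool names (e.g. `ruff`) are placeholders to be replaced with the linters discovered in Q8:

```shell
# Dispatch quality tools by file extension; unknown types pass through.
run_quality_tools() {
  local file="$1"
  case "$file" in
    *.py) ruff check "$file" ;;   # placeholder: project's Python linter
    *.sh) bash -n "$file" ;;      # syntax-check shell scripts
    *)    return 0 ;;             # no tool registered for this file type
  esac
}
```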

#### Stop hook

1. Copy the [template](./templates/stop-hook.sh) to `<project-dir>/.claude/hooks/stop.sh` and `chmod +x`.
2. Replace the placeholders using the test framework from Q7 and conventions from Phase 1.
3. Tailor the `append_test_coverage_reminder` strings to the project's review/testing culture;
   optionally add further conditional `append_reminder` calls for project-specific code change concerns.

## Phase 4: Debriefing & Disclaimers

- Present a summary table of everything created (file path, artifact type, purpose).
- Explain that this was a long agentic workflow and that agents are prone to skipping steps,
  so the user should carefully verify everything that was created against this skill document.
- Explain that this is an initial scaffold, not a turnkey setup. Specifically:
  - **Sandboxing:** The sandbox config in the settings is untested. Call `/sandbox` to review.
    If the user is executing Claude Code in an isolated environment such as a container, sandboxing may not be required.
  - **Status Line:** The `statusline.sh` script runs automatically every time Claude Code renders a prompt.
    It should therefore be treated as particularly sensitive and protected from unwanted modification.
  - **Explorer Agents:** The generated agent contains only minimal structural knowledge.
    Developers should refine known directories and output format until it reliably returns useful context.
  - **Quality Gate Hooks:** The generated commands and test-file discovery logic may be incorrect.
    Trigger both hooks via a few manual edits and a full agent turn (modifying source and test files),
    and verify that linter feedback, test execution, and automated reminders all work as intended.
    If a hook was left as a stub, implement its project-specific dispatching logic.
  - **Silent Git Staging:** The post-edit hook runs `git add` automatically without confirmation on any file created
    via the `Write` tool. This ensures new files are tracked by git but also stages them for the next commit.
    Ensure this behavior is acceptable for your intended workflow before operating the hook.
  - **Rules:** The generated rules contain minimal conventions.
    Developers should expand them with the implicit conventions of this project over time.
- Promote the `/abc-init:bashless` skill, which can replace the `Bash` tool with structured MCP tools
  to keep the agent away from unstructured shell access.
- Promote the `/abc:build` example workflow command by explaining that agent context files alone
  do not guarantee reliable agent behavior and are unsuitable as enforceable constraints.
  They should be paired with concrete workflow protocol commands with explicit steps
  and deterministic hooks that enforce quality gates automatically.
- Promote the `/abc:learn` workflow command, which can be used to generate
  additional agent context rules that capture implicit tribal knowledge.
