---
name: nanocontext
description: Use BEFORE writing or editing any code that calls an external HTTP API or imports a third-party SDK (e.g. Alpaca, Stripe, Anthropic, OpenAI, AWS, Supabase, Telegram, Resend, etc.). Concrete triggers include shell `curl https://api.*`, `wget https://api.*`, `requests.get/post`, `httpx.*`, `fetch('https://...')`, any `https://api.<vendor>.*` URL, hardcoded API/bot tokens, or any `import <vendor-sdk>`. "Quick script", "one-shot", "I know this API", and "only curl, no SDK" are NOT exceptions. Forces a fresh fetch of the official documentation into a local cache so you never write API code from stale training-data memory.
---

# nanocontext

## Purpose

Block yourself from writing API code based on what you *remember* about the SDK. APIs rename methods, deprecate fields, change auth flows, and bump versions constantly. Your training data is months to years stale. Before you touch the code, you read the **current** official docs.

## When to use

Trigger this skill the moment you recognize you are about to:

- write or edit code that calls an external HTTP API (any URL on a vendor host — `curl`, `wget`, `fetch`, `requests`, `httpx`, `axios`, etc.)
- write or edit code that imports a third-party SDK
- update existing API integration code
- paste an API key, bearer token, or bot token into code

Skip it ONLY for: pure stdlib code, internal-only modules on hosts you control, and code where the user explicitly says "don't bother fetching docs".

A shell script that runs one `curl` against a public vendor host is **not** a skip case. "Quick", "simple", "one-shot", "no SDK, just curl", and "I already know this API" are **not** skip cases.
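
The trigger conditions above can be approximated mechanically. This is an illustrative sketch, not the official trigger list — the function name and the exact patterns are examples, and a real check would cover more SDK imports and vendor hosts:

```sh
# Returns 0 (trigger the skill) if the file contains external-API
# fingerprints: vendor API URLs, HTTP client calls, etc.
needs_nanocontext() {
  grep -qiE 'https://api\.|curl +https?://|requests\.(get|post)|httpx\.|fetch\(.https|axios\.' "$1"
}
```

Note that a plain `curl https://api.<vendor>.tld/...` in a shell script matches — consistent with the "no SDK, just curl" rule above.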

## Anti-rationalization — red flags that mean STOP and run the skill

If any of these thoughts cross your mind right before you start coding, you are about to skip wrongly. Run the skill instead.

| Thought | Reality |
|---|---|
| "just a quick script / one-off" | One `curl` still calls an external API. The cache write costs seconds. Run the skill. |
| "I know this API" | Training data is months to years stale. The whole reason this skill exists is that you don't actually know — endpoints get renamed, auth flows change, fields get deprecated. |
| "it's only shell + curl, no SDK imported" | The trigger is "external HTTP API," not "imported SDK." A bare `curl https://api.<vendor>.tld/...` is in scope. |
| "the user is in a hurry" | Skipping costs more than fetching. A wrong endpoint, stale auth header, or renamed field wastes more time than one cache fetch. |
| "I'll just hit the obvious endpoint" | "Obvious" endpoints get renamed, deprecated, version-bumped, or auth-changed. Confirm against current docs. |
| "the description says SDK and I'm not using one" | Re-read the description. HTTP API calls without an SDK are explicitly covered. |
| "I'll fetch after I write the first draft" | First-draft API code from memory is exactly what this skill prevents. Fetch first, then write. |

## Workflow

For every SDK in scope:

1. **Identify the SDK and pick a slug.**
   - Slug format: `<vendor>-<surface>` — lowercase, hyphenated, never include version numbers. Examples: `alpaca-py`, `stripe-node`, `anthropic-sdk-python`, `aws-s3-rest`. Consistent slugs prevent cache fragmentation across sessions.
   - If you are not certain which SDK the vendor currently endorses, do **one** `WebSearch` for `"<vendor> official <language> sdk <year>"` (substitute the actual current year). Vendors deprecate and replace SDKs frequently — e.g. `alpaca-trade-api` was superseded by `alpaca-py`. SDK identity is exactly the kind of fact training memory gets wrong.
2. **Check the cache:**
   ```sh
   nanocontext check <sdk>
   ```
   - exit 0 → fresh, skip to step 5
   - exit 1 → stale, refetch
   - exit 2 → missing, fetch
3. **Fetch official docs** if needed:
   ```sh
   nanocontext fetch <sdk> <official-doc-url>
   ```
   URL preference order: **REST API reference > official SDK API reference > getting-started guide > how-to article > blog post.** If the vendor publishes an OpenAPI/JSON schema page, prefer it — it lists every field with type and constraints in one place.
4. **Verify the fetch landed something usable.** Grep the cache file for two or three symbols you expect (a method name, a field name, an endpoint path):
   ```sh
   grep -i 'create_order\|symbol\|/v2/orders' "$(nanocontext path <sdk>)"
   ```
   If grep returns nothing, the page was JS-rendered (empty SPA shell), behind auth, or you fetched the wrong URL. Fetch a different URL — never write code from a page whose bytes don't contain the things you need.
5. **Read the cache file:**
   ```sh
   nanocontext path <sdk>
   ```
   Then `Read` that file into context.
6. **Code from the cache, not from memory.** When a method, field, or auth pattern in the cache contradicts what you remember, the cache wins.
   - **Cache silence on a remembered field = removed.** Before keeping any field, parameter, or method you remember (especially when carrying over from existing code), grep the cache for it. Zero hits means it's gone from the current API — drop it, don't keep it "because the old code has it". Example: `grep -in 'return_citations' "$(nanocontext path perplexity-rest)"` → 0 hits → field is deprecated, remove from the call.
   - **Canonical paths beat working aliases.** If the cache lists a different endpoint URL or method name than the one you remember, prefer the canonical one even if your remembered version still works in a quick test. Compatibility aliases (e.g. OpenAI-style endpoints exposed by a non-OpenAI vendor) are exactly what gets deprecated next — the cache shows what the vendor publishes, which is what they will keep supporting.
   - **Read defaults, not just presence.** When a parameter has an enum or default value in the schema, note it. Omitting the parameter picks the default — which is often the lowest-quality / least-featureful option (e.g. `search_context_size` defaults to `low`, `reasoning_effort` exists but is off unless you set it). Decide whether to override before writing the call, not after seeing weak results.
7. **Cite the cache by quoting it.** When citing, include the **exact line(s)** from the cache that prove each non-trivial claim — not just the file path. If you cannot find a quote for a claim, you are coding from memory; either find the quote or remove the claim.
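
Steps 2–5 can be tied together in one guard. This is a sketch that assumes the `nanocontext` CLI behaves exactly as documented above (exit codes 0/1/2 from `check`, `fetch`, `path`); the wrapper name and the `expect` argument are illustrative, not part of the tool:

```sh
# ensure_docs <sdk> <official-doc-url> <expect-regex>
# Prints the cache file path on success; fails if docs can't be
# fetched or the fetched bytes lack the expected symbols.
ensure_docs() {
  sdk=$1; url=$2; expect=$3
  # step 2: check returns non-zero for stale (1) or missing (2)
  if ! nanocontext check "$sdk"; then
    nanocontext fetch "$sdk" "$url" || return 1   # step 3
  fi
  file=$(nanocontext path "$sdk")                 # step 5
  # step 4: verify the cached bytes contain what we need
  if ! grep -qiE "$expect" "$file"; then
    echo "cache for $sdk lacks expected symbols; fetch a better URL" >&2
    return 1
  fi
  printf '%s\n' "$file"
}
```

The `expect` regex is the grep from step 4: two or three symbols you expect the docs to mention, so an empty SPA shell or wrong URL fails loudly instead of silently feeding you nothing.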

## Hard rules

- No fresh cache → no API code. If `nanocontext check` returns non-zero and you can't fetch (no URL, network down), say so explicitly. Don't improvise.
- Don't fall back to training-data memory just because the cached page is incomplete. If it's incomplete, fetch a more specific URL or tell the user the docs aren't sufficient.
- **Cache silence is not permission to guess.** If the cache doesn't mention a field, method, or behavior you want to use, treat that as "outside what was fetched" — fetch a more specific URL or tell the user the docs you have don't cover it. Never fill silence with memory.
- The cache may include nav/footer cruft from the source page. Read past it; don't waste time cleaning it.

## Examples

User: "add a function to place a market order via Alpaca"
You:
```
# (uncertain which Python SDK Alpaca currently endorses → one WebSearch confirms alpaca-py)
nanocontext check alpaca-py
# exit 2: missing
nanocontext fetch alpaca-py https://docs.alpaca.markets/reference/postorder
# wrote ~/.claude/api-cache/alpaca-py.md
grep -i 'symbol\|side\|/v2/orders' "$(nanocontext path alpaca-py)" | head
# confirms expected fields are present → safe to read and use
```
Then `Read` the cache file and write the function, quoting the relevant lines.

User: "we're using Stripe in this project"
```
nanocontext check stripe-node
# fresh: stripe-node (2d old)
```
Read the cache and proceed.

## Tuning

- Default freshness window: 7 days. Override (value in days) with `NANOCONTEXT_MAX_AGE=30`.
- Default cache: `~/.claude/api-cache/`. Override with `NANOCONTEXT_CACHE=/path`.
- For a quick survey of what's cached: `nanocontext list`.
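
For intuition, the freshness semantics above can be sketched as follows. This is a guess at the behavior, assuming mtime-based aging — the real `nanocontext check` may be implemented differently; only the exit-code contract (0 fresh, 1 stale, 2 missing) is taken from the docs above:

```sh
# check_sketch <sdk> — mirrors the documented exit codes.
check_sketch() {
  file="${NANOCONTEXT_CACHE:-$HOME/.claude/api-cache}/$1.md"
  max_age="${NANOCONTEXT_MAX_AGE:-7}"
  [ -f "$file" ] || return 2                          # missing → fetch
  # find prints the file only if modified within max_age days
  [ -n "$(find "$file" -mtime -"$max_age")" ] || return 1   # stale → refetch
  return 0                                            # fresh → use as-is
}
```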
