Claude Engineering Skills (Page 119 of 163)

Code review, refactoring, testing, DevOps, CI/CD, databases, cloud platforms, and full-stack development skills for Claude Code.

9,773 skills · updated 2026-05-03 · showing 7081–7140 of 9,773 by quality score

Testing patterns for litefs-py and litefs-django. Use when writing tests, setting up fixtures, understanding test organization, or configuring pytest marks.
Score 70/100
Unified LLM API with LiteLLM. Call 100+ LLM providers with one interface. Use for multi-provider AI, cost optimization, fallbacks, and LLM gateway deployment.
Score 70/100
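
As a rough sketch of the unified interface the LiteLLM entry above covers, the same call shape works across providers and only the model string changes; the model names below are examples, and the matching API keys are assumed to be set in the environment.

```python
# Minimal LiteLLM sketch: one function signature for multiple providers.
# Model names are placeholders; OPENAI_API_KEY / ANTHROPIC_API_KEY are assumed set.
from litellm import completion

messages = [{"role": "user", "content": "Summarize retrieval-augmented generation in one sentence."}]

# Same call shape regardless of provider; only the model string changes.
openai_resp = completion(model="gpt-4o-mini", messages=messages)
anthropic_resp = completion(model="anthropic/claude-3-5-haiku-20241022", messages=messages)

print(openai_resp.choices[0].message.content)
print(anthropic_resp.choices[0].message.content)
```
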
Use when calling LLM APIs from Python code, connecting to llamafile or local LLM servers, or switching between OpenAI/Anthropic/local providers.
Score 70/100
Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral).
Score 70/100
Delivering real-time updates to users via WebSocket, SSE, or Push API for live notification systems with proper architecture, queuing, and delivery mechanisms.
Score 70/100
Comprehensive guide for building functional tools for LiveKit voice agents using the @function_tool decorator.
Score 70/100
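
A minimal sketch of the @function_tool pattern this skill covers might look like the following; the import path and signatures are assumptions based on the LiveKit Agents Python SDK rather than a definitive reference.

```python
# Sketch of a LiveKit voice-agent tool using the @function_tool decorator.
# The import path and exact signatures are assumptions; consult the SDK docs
# for the current surface.
from livekit.agents import Agent, RunContext, function_tool


class WeatherAgent(Agent):
    def __init__(self) -> None:
        super().__init__(instructions="You are a helpful voice assistant.")

    @function_tool
    async def lookup_weather(self, context: RunContext, location: str) -> str:
        """Look up the current weather for a location."""
        # A real tool would call a weather API here; this is a placeholder.
        return f"It is sunny in {location}."
```
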
Build and review production-grade web and mobile frontends using LiveKit with Next.js. Covers real-time video/audio/data communication, WebRTC connections, track management, and…
Score 70/100
LiveKit is an open-source, scalable WebRTC-based real-time communication server written in Go. It provides multi-user conferencing, streaming, and data channels with client SDKs…
Score 70/100
Build voice AI agents with LiveKit Agents SDK. Use when the user asks to "build a voice agent", "create a LiveKit agent", "add voice AI", "implement handoffs", "structure agent…
Score 70/100
Principles for writing simple, maintainable Laravel/Livewire code. Use when writing Livewire components, tests, or Blade views. Focuses on avoiding over-engineering.
Score 70/100
Navigate and load project living documentation for context from .specweave/docs/internal/. Use when implementing features and needing project context, referencing ADRs for design…
Score 70/100
Runs LLM inference on CPU, Apple Silicon, and consumer GPUs without NVIDIA hardware. Use for edge deployment, M1/M2/M3 Macs, AMD/Intel GPUs, or when CUDA is unavailable.
Score 70/100
Automates LLDB debugging sessions with scripted breakpoint management and expression evaluation. Uses the LLDB Python SB API (lldb.SBDebugger, SBTarget, SBProcess) for…
Score 70/100
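
For orientation, a scripted session with the SB API classes named above could look roughly like this; the binary path and breakpoint symbol are placeholders.

```python
# Sketch of a scripted LLDB session via the Python SB API.
import os
import lldb

debugger = lldb.SBDebugger.Create()
debugger.SetAsync(False)  # run synchronously so state can be inspected after launch

target = debugger.CreateTarget("./a.out")           # placeholder binary
breakpoint = target.BreakpointCreateByName("main")  # stop at main

process = target.LaunchSimple(None, None, os.getcwd())
if process and process.GetState() == lldb.eStateStopped:
    thread = process.GetSelectedThread()
    frame = thread.GetFrameAtIndex(0)
    value = frame.EvaluateExpression("1 + 1")
    print("result:", value.GetValue())

lldb.SBDebugger.Destroy(debugger)
```
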
Automatically applies when building LLM applications. Ensures proper async patterns for LLM calls, streaming responses, token management, retry logic, and error handling.
Score 70/100
Production-ready patterns for building LLM applications. Covers RAG pipelines, agent architectures, prompt IDEs, and LLMOps monitoring.
Score 70/100
Expert LLM architect specializing in large language model architecture, deployment, and optimization.
Score 70/100
Use when the user needs LLM system architecture, model deployment, optimization strategies, and production serving infrastructure.
Score 70/100
LLM architecture, tokenization, transformers, and inference optimization. Use for understanding and working with language models.
Score 70/100
Multi-level caching strategies for LLM applications - semantic caching (Redis), prompt caching (Claude/OpenAI native), cache hierarchies, cost optimization, and Langfuse cost…
Score 70/100
Reduce LLM API costs without sacrificing quality. Covers prompt caching (Anthropic), local response caching, prompt compression, debouncing triggers, and cost analysis.
Score 70/100
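
As one illustration of the prompt-caching approach described above, Anthropic's API lets a large, stable system block be marked for caching so repeated calls reuse it; the model name and document below are placeholders.

```python
# Sketch of Anthropic prompt caching: mark a stable system block with cache_control.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
long_reference_doc = open("style_guide.md").read()  # placeholder stable context

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # example model name
    max_tokens=512,
    system=[
        {
            "type": "text",
            "text": long_reference_doc,
            "cache_control": {"type": "ephemeral"},  # cache this block across calls
        }
    ],
    messages=[{"role": "user", "content": "Review this paragraph against the style guide: ..."}],
)
print(response.content[0].text)
```
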
Use when you need to reduce LLM API spend, control token usage, route between models by cost/quality, implement prompt caching, or build cost observability for AI features.
Score 70/100
Diagnoses LLM output failures including hallucinations, constraint violations, format errors, and reasoning issues.
Score 70/100
Master comprehensive evaluation strategies for LLM applications, from automated metrics to human evaluation and A/B testing.
Score 70/100
Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking.
Score 70/100
Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking.
Score 70/100
LLM gateway and routing configuration using OpenRouter and LiteLLM. Invoke when setting up multi-model access (OpenRouter, LiteLLM), configuring model fallbacks and…
Score 70/100
LLM inference infrastructure, serving frameworks (vLLM, TGI, TensorRT-LLM), quantization techniques, batching strategies, and streaming response patterns.
Score 70/100
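
A small sketch of the vLLM offline path mentioned above, assuming a GPU with enough memory to hold the example model:

```python
# Offline batched inference with vLLM; the model name is an example.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # example model
params = SamplingParams(temperature=0.7, max_tokens=128)

# vLLM batches these prompts internally via continuous batching.
outputs = llm.generate(["What is speculative decoding?", "Explain KV-cache paging."], params)
for out in outputs:
    print(out.outputs[0].text)
```
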
Build production LLM streaming UIs with Server-Sent Events, real-time token display, cancellation, error recovery. Handles OpenAI/Anthropic/Claude streaming APIs.
Score 70/100
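
A minimal sketch of consuming a streamed completion token by token with the OpenAI Python SDK; in a web UI each delta would be forwarded to the browser as an SSE event. The model name is an example and OPENAI_API_KEY is assumed to be set.

```python
# Stream chat-completion deltas as they arrive.
from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Stream a haiku about SSE."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)  # a web backend would emit this as an SSE event
print()
```
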
Use the CLI info command to summarize what llms-txt-php-cli detects/configures for the current repository.
Score 70/100
Guide the user to generate an initial llms.txt for a repository using llms-txt-php-cli, choosing sensible defaults and verifying output.
Score 70/100
Validate an existing llms.txt with llms-txt-php-cli and guide the user through fixing validation errors.
Score 70/100
Builds and queries code knowledge graph for dependency analysis, references, implementations, and architecture overview.
Score 70/100
Creates core project docs (requirements, architecture, tech stack, patterns catalog). Use for any project regardless of type.
Score 70/100
Creates infrastructure.md and runbook.md (Docker-conditional). Use for DevOps documentation in any project.
Score 70/100
Creates reference docs (ADRs, guides, manuals) for nontrivial tech stack choices. Use when project needs justified architecture decision records.
Score 70/100
Creates test documentation (testing-strategy.md, tests/README.md) with Risk-Based Testing philosophy. Use when setting up test strategy for a project.
Score 70/100
Executes test tasks (label 'tests') through Todo to To Review with risk-based limits. Use for test task execution. Not for implementation tasks.
Score 70/100
Worker that checks DRY/KISS/YAGNI/architecture compliance with quantitative Code Quality Score. Validates architectural decisions via MCP Ref: (1) Optimality - is chosen approach…
Score 70/100
Orchestrates test planning pipeline (research → manual → auto tests). Coordinates ln-511, ln-512, ln-513. Invoked by ln-500-story-quality-gate.
Score 70/100
Checks DRY/KISS/YAGNI/architecture compliance with quantitative Code Quality Score. Use when implementation tasks are Done and need quality scoring.
Score 70/100
Performs manual testing of Story AC via executable bash scripts saved to tests/manual/. Creates reusable test suites per Story. Worker for ln-510.
Score 70/100
Auto-fixes low-risk tech debt (unused imports, dead code, commented-out code) with >=90% confidence. Use when audit findings need safe automated cleanup.
Score 70/100
Plans automated tests (E2E/Integration/Unit) using Risk-Based Testing after manual testing. Calculates priorities, delegates to ln-301-task-creator. Worker for ln-510.
Score 70/100
Analyzes application logs: classifies errors, checks log quality, maps stack traces to source. Use when logs need review after test runs or during development.
Score 70/100
Orchestrates test planning pipeline: research, manual testing, automated test planning. Use when Story needs comprehensive test coverage planning.
Score 70/100
Performs manual testing of Story AC via executable bash scripts in tests/manual/. Use when Story implementation needs hands-on AC verification.
Score 70/100
Plans automated tests (E2E/Integration/Unit) using Risk-Based Testing after manual testing. Use when Story needs a test task with prioritized scenarios.
Score 70/100
Audit code comment and docstring quality across 6 categories (WHY-not-WHAT, Density, Forbidden Content, Docstrings, Actuality, Legacy).
Score 70/100
Architecture audit worker (L3). Checks DRY (7 types), KISS/YAGNI, layer breaks, error handling, DI patterns. Returns findings with severity, location, effort, recommendations.
Score 70/100
Use when auditing the test surface through the evaluation platform with mandatory research, coordinated test audit workers, and structured summaries.
Score 70/100
Detects tests validating framework/library behavior instead of project code. Use when auditing test business logic focus.
Score 70/100
Validates E2E coverage for critical paths (money, security, data integrity). Risk-based prioritization. Use when auditing E2E test coverage.
Score 70/100
Scores each test by Impact x Probability, returns KEEP/REVIEW/REMOVE decisions. Use when auditing test value and pruning low-value tests.
Score 70/100
Identifies missing tests for critical paths (money, security, data integrity, core flows). Use when auditing test coverage gaps.
Score 70/100
Checks test isolation (API/DB/FS/Time/Network), determinism, flaky tests, order-dependency, anti-patterns. Use when auditing test isolation.
Score 70/100
Checks manual test scripts for harness adoption, golden files, fail-fast, config sourcing, idempotency. Use when auditing manual test quality.
Score 70/100
Checks test file organization, directory layout, test-to-source mapping, domain grouping, co-location. Use when auditing test structure.
Score 70/100
Checks layer boundary violations, transaction boundaries, session ownership, cross-layer consistency. Use when auditing architecture layers.
Score 70/100
Checks redundant fetches, N+1 loops, over-fetching, missing bulk operations, wrong caching scope. Use when auditing query efficiency.
Score 70/100
Scaffolds new or restructures existing projects to Clean Architecture. Use when setting up project structure.
Score 70/100