Claude Engineering Skills (Page 64 of 165)

Code review, refactoring, testing, DevOps, CI/CD, databases, cloud platforms, and full-stack development skills for Claude Code.

9,900 skills · updated 2026-05-03 · showing 3781–3840 of 9,900 by quality score

MongoDB and PostgreSQL database administration. Databases: MongoDB (document store, aggregation, Atlas), PostgreSQL (relational, SQL, psql).
Score 70/100
RDBMS access patterns for DuckDB, MySQL (keycloak), PostgreSQL (dw, x3rocs), SQL Server (sage1000, x3), and DBISAM (Exportmaster) using ODBC and native drivers
Score 70/100
Work with MongoDB (document database, BSON documents, aggregation pipelines, Atlas cloud) and PostgreSQL (relational database, SQL queries, psql CLI, pgAdmin).
Score 70/100
Modern deployment with Databricks Asset Bundles (DAB), supporting multi-environment configurations and CI/CD integration.
Score 70/100
Configure Databricks CI/CD integration with GitHub Actions and Asset Bundles. Use when setting up automated testing, configuring CI pipelines, or integrating Databricks…
Score 70/100
Diagnose and fix Databricks common errors and exceptions. Use when encountering Databricks errors, debugging failed jobs, or troubleshooting cluster and notebook issues.
Score 70/100
Execute Databricks primary workflow: Delta Lake ETL pipelines. Use when building data ingestion pipelines, implementing medallion architecture, or creating Delta Lake…
Score 70/100
Execute Databricks secondary workflow: MLflow model training and deployment. Use when building ML pipelines, training models, or deploying to production.
Score 70/100
Collect Databricks debug evidence for support tickets and troubleshooting. Use when encountering persistent issues, preparing support tickets, or collecting diagnostic information…
Score 70/100
Deploy Databricks jobs and pipelines with Databricks Asset Bundles. Use when deploying jobs to different environments, managing deployments, or setting up deployment…
Score 70/100
Create a minimal working Databricks example with cluster and notebook. Use when starting a new Databricks project, testing your setup, or learning basic Databricks patterns.
Score 70/100
Interactive code execution on Databricks clusters via dbx.py. Provides a stateful Python REPL where variables persist across commands.
Score 70/100
Configure Databricks across development, staging, and production environments. Use when setting up multi-environment deployments, configuring per-environment secrets, or…
Score 70/100
Create Databricks Python notebooks, push to workspace, run on cluster, and verify outputs using dbx.py.
Score 70/100
Execute Databricks production deployment checklist and rollback procedures. Use when deploying Databricks jobs to production, preparing for launch, or implementing go-live…
Score 70/100
Databricks development guidance including Python SDK, Databricks Connect, CLI, and REST API. Use when working with databricks-sdk, databricks-connect, or Databricks APIs.
Score 70/100
Implement Databricks API rate limiting, backoff, and idempotency patterns. Use when handling rate limit errors, implementing retry logic, or optimizing API request throughput for…
Score 70/100
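The retry pattern this entry names can be sketched in a few lines of plain Python. This is an illustrative sketch, not the skill's actual code: `send` is a hypothetical API callable, `RuntimeError` stands in for a real rate-limit error class, and the idempotency-key convention (one key reused across retries so a retried request is not applied twice server-side) is the general pattern, not a specific Databricks API.

```python
import random
import time
import uuid

def call_with_backoff(send, payload, max_attempts=5, base=0.5):
    """Retry `send` on rate-limit failures with exponential backoff plus
    full jitter, reusing one idempotency key across all attempts.

    `send(payload, idempotency_key=...)` is a hypothetical callable;
    swap in a real API client and its rate-limit exception type.
    """
    key = str(uuid.uuid4())  # same key for every retry of this request
    for attempt in range(max_attempts):
        try:
            return send(payload, idempotency_key=key)
        except RuntimeError:  # stand-in for a 429/rate-limit error class
            if attempt == max_attempts - 1:
                raise
            # exponential backoff: base * 2^attempt, with full jitter
            time.sleep(random.uniform(0, base * 2 ** attempt))
```

Reusing the key is what makes the retries safe: if the server applied the first attempt but the response was lost, the retry is deduplicated rather than re-executed.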
Implement Databricks reference architecture with best-practice project layout. Use when designing new Databricks projects, reviewing architecture, or establishing standards for…
Score 70/100
Python dataclass best practices: slots, frozen, validation. Trigger when optimizing dataclasses or creating config classes.
Score 70/100
Datadog CLI for searching logs, querying metrics, tracing requests, and managing dashboards. Use this when debugging production issues or working with Datadog observability.
Score 70/100
Interfaces with the Datadog API v2 monitors and dashboards endpoints to programmatically create and manage monitors.
Score 70/100
Synchronizes Datadog monitor definitions between environments using the Datadog API v2 monitors endpoint.
Score 70/100
Use when developing BigQuery Dataform transformations, SQLX files, source declarations, or troubleshooting pipelines - enforces TDD workflow (tests first), ALWAYS use ${ref()}…
Score 70/100
Use when building or debugging DataRaptor Extract or DataRaptor Load operations in OmniStudio: designing multi-object extracts, configuring load upserts, handling iferror…
Score 70/100
Writing Datasette plugins using Python and the pluggy plugin system. Use when Claude needs to: (1) Create a new Datasette plugin, (2) Implement plugin hooks like…
Score 70/100
Generate production code for Dataverse SDK using advanced patterns, error handling, and optimization techniques.
Score 70/100
Generate production-ready Python code using Dataverse SDK with error handling, optimization, and best practices
Score 70/100
Generate Python SDK setup + CRUD + bulk + paging snippets using official patterns.
Score 70/100
Generate complete solutions for specific Dataverse SDK use cases with architecture recommendations
Score 70/100
Helps work with the b00t datum system - TOML-based configuration for AI models, providers, and services.
Score 70/100
DAW-specific quirks, known issues, and workarounds for Logic Pro, Ableton Live, Pro Tools, Cubase, Reaper, FL Studio, Bitwig with format-specific requirements (AU/VST3/AAX).
Score 70/100
Expert database architecture including schema design, partitioning, replication, and performance optimization
Score 70/100
Use when setting up database connections, especially for Neon PostgreSQL. Triggers for: Neon Postgres connection, connection pooling configuration, connection string management,…
Score 70/100
MongoDB database exploration and querying. Use when you need to understand database structure, view existing data, check collection schemas, count documents, or run queries to…
Score 70/100
Senior specialist in MongoDB, data security, migrations, backup/recovery, and data integrity. Guardian of the Super Cartola Manager data, focused on safe operations,…
Score 70/100
Lint PostgreSQL functions against schema, analyze usage, and generate fix reports; use when detecting broken functions, validating schema contracts, or cleaning up unused database…
Score 70/100
PostgreSQL database operations specialist using Drizzle ORM for schema management, queries, and migrations
Score 70/100
Patterns for optimizing database queries and preventing connection pool exhaustion. Use when writing batch operations, debugging slow queries, or reviewing code for performance.
Score 70/100
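The batching pattern this entry describes can be sketched without any particular database driver. A minimal sketch under stated assumptions: `query` is a hypothetical callable that takes a list of ids and returns rows (swap in your ORM or driver's `IN (...)` lookup), so N ids cost a handful of round-trips instead of N, which is the usual fix for connection-pool exhaustion from per-row queries.

```python
from itertools import islice

def chunked(ids, size=100):
    """Yield fixed-size batches from any iterable of ids."""
    it = iter(ids)
    while batch := list(islice(it, size)):
        yield batch

def fetch_users(ids, query, batch_size=100):
    """Run `query` once per batch instead of once per id.

    `query(batch)` is a hypothetical callable returning a list of rows;
    in real code it would issue a single SELECT ... WHERE id IN (...).
    """
    rows = []
    for batch in chunked(ids, batch_size):
        rows.extend(query(batch))
    return rows
```

Keeping `batch_size` bounded matters too: one giant `IN` list trades pool exhaustion for slow query plans and parameter-count limits on some databases.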
Database query and analysis support. Creates and runs SQL queries and analyzes the results. Supports BigQuery, PostgreSQL, and MySQL. Triggers: /db-query, SQL, query, data analysis, BigQuery
Score 70/100
Triggers: "DB design", "table design", "create an ERD", "database design", "design a schema", "lay out a table structure". Action: analyzes requirement sentences and generates an ERD (text/Mermaid format) plus DDL SQL.
Score 70/100
Validate Laravel database configuration and test actual connections. Use when user reports "database connection error", "hosts array is empty", "could not connect to database", or…
Score 70/100
Guide for building reliable, fault-tolerant Go applications with DBOS durable workflows. Use when adding DBOS to existing Go code, creating workflows and steps, or using queues…
Score 70/100
Guide for building reliable, fault-tolerant Python applications with DBOS durable workflows. Use when adding DBOS to existing Python code, creating workflows and steps, or using…
Score 70/100
Guide for building reliable, fault-tolerant TypeScript applications with DBOS durable workflows. Use when adding DBOS to existing TypeScript code, creating workflows and steps, or…
Score 70/100
dbt (data build tool) patterns for model organization, incremental strategies, and testing.
Score 70/100
Parses dbt project artifacts (manifest.json and catalog.json) to build a lineage graph and identify models with no tests, stale documentation, or missing uniqueness assertions.
Score 70/100
Comprehensive guide to dbt (data build tool) patterns, modeling best practices, testing strategies, and production workflows for modern data transformation
Score 70/100
ALWAYS USE when working with dbt models, SQL transformations, tests, snapshots, or macros. Use IMMEDIATELY when editing dbt_project.yml, profiles.yml, or creating SQL models.
Score 70/100
dbt Test Creator: auto-activating skill for data pipelines. Triggers on: dbt test creator. Part of the Data Pipelines skill category.
Score 70/100
dbt testing strategies using dbt_constraints for database-level enforcement, generic tests, and…
Score 70/100
Production-ready patterns for dbt (data build tool) including model organization, testing strategies, documentation, and incremental processing.
Score 70/100
Use this agent when you need expert guidance on Domain-Driven Design including bounded contexts, context mapping, and aggregate design.
Score 70/100
Strategic and Tactical expertise in Gravito DDD. Trigger this for complex domains requiring Bounded Contexts, Aggregates, and Event-Driven architecture.
Score 70/100
Domain-Driven Design planning and architecture skill. Supports the full DDD process: project domain analysis, Bounded Context definition, Aggregate design, Context Mapping, Event Storming, and Ubiquitous Language definition.
Score 70/100
Analyzes and refactors code using Domain-Driven Design principles. Use when refactoring domain models, identifying DDD anti-patterns, improving domain clarity, or applying…
Score 70/100
Exhaustive testing standards for DDD bounded contexts (Domain, Application, Integration). Use when writing backend tests, unit tests, tests…
Score 70/100
Validate DDD architecture compliance including layer separation, dependency rules, and patterns. Use when validating context before commit, checking new features, reviewing…
Score 70/100
A comprehensive skill for creating new ddd4j (Domain-Driven Design for Java) projects based on ddd4j-boot framework.
Score 70/100
Designing Data-Intensive Applications (DDIA) distilled reference guide by Martin Kleppmann. MUST be loaded when: designing database schemas, choosing storage engines, implementing…
Score 70/100