General-Purpose Claude Skills (Page 157 of 396)

Productivity, automation, knowledge management, integrations, AI tooling, and general-purpose skills for Claude Code.

23,726 skills · updated 2026-05-04 · showing 9361–9420 of 23,726 by quality score

Use when working with ES/NQ futures market data. Before calling any Databento API, follow the mandatory four-step workflow (cost check, availability check, fetch, validate); prevents…
Score 70/100
Guide for authenticating with Databricks Apps using cookie-based auth when OAuth/PAT tokens don't work. Use when connecting to Databricks Apps with User Authorization enabled.
Score 70/100
Discover, inspect, start, stop, and monitor Databricks clusters using dbx.py. Use before any skill that needs a running cluster.
Score 70/100
Configure Databricks profile and authenticate for Databricks Connect, Databricks CLI, and Databricks SDK.
Score 70/100
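
For reference, a minimal sketch of what this setup produces: a profile in ~/.databrickscfg shared by the CLI, SDK, and Databricks Connect. Host and token values below are placeholders.

```python
from databricks.sdk import WorkspaceClient

# Assumes ~/.databrickscfg contains a profile such as:
#   [DEFAULT]
#   host  = https://<workspace>.cloud.databricks.com
#   token = <personal-access-token>
w = WorkspaceClient(profile="DEFAULT")
print(w.current_user.me().user_name)  # sanity check: prints the authenticated user
```
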
Optimize Databricks costs with cluster policies, spot instances, and monitoring. Use when reducing cloud spend, implementing cost controls, or analyzing Databricks usage costs.
Score 70/100
Implement Delta Lake data management patterns including GDPR, PII handling, and data lifecycle. Use when implementing data retention, handling GDPR requests, or managing data…
Score 70/100
Databricks documentation reference. Use as a lookup resource alongside other skills and MCP tools for comprehensive guidance.
Score 70/100
Configure Databricks enterprise SSO, Unity Catalog RBAC, and organization management. Use when implementing SSO integration, configuring role-based permissions, or setting up…
Score 70/100
Execute Databricks incident response procedures with triage, mitigation, and postmortem. Use when responding to Databricks-related outages, investigating job failures, or running…
Score 70/100
Install and configure Databricks CLI and SDK authentication. Use when setting up a new Databricks integration, configuring tokens, or initializing Databricks in your project.
Score 70/100
Run Databricks jobs and notebooks using dbx.py -- submit, wait for completion, get output, and manage runs. Covers both one-off runs and existing job triggers.
Score 70/100
Configure Databricks local development with Databricks Connect, Asset Bundles, and IDE. Use when setting up a local dev environment, configuring test workflows, or establishing a…
Score 70/100
Execute comprehensive platform migrations to Databricks from legacy systems. Use when migrating from on-premises Hadoop, other cloud platforms, or legacy data warehouses to…
Score 70/100
Create and manage Databricks notebooks programmatically. Use when generating ingestion code, creating ETL notebooks, executing Databricks workflows, or when user mentions notebook…
Score 70/100
Set up comprehensive observability for Databricks with metrics, traces, and alerts. Use when implementing monitoring for Databricks jobs, setting up dashboards, or configuring…
Score 70/100
Optimize Databricks cluster and query performance. Use when jobs are running slowly, optimizing Spark configurations, or improving Delta Lake query performance.
Score 70/100
Execute SQL queries against Databricks using the DBSQL MCP server. Use when querying Unity Catalog tables, running SQL analytics, exploring Databricks data, or when user mentions…
Score 70/100
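
This skill routes queries through the DBSQL MCP server; as a rough equivalent for orientation, a direct-query sketch with the databricks-sql-connector package (hostname, HTTP path, token, and table name are placeholders):

```python
from databricks import sql  # pip install databricks-sql-connector

with sql.connect(
    server_hostname="<workspace>.cloud.databricks.com",
    http_path="/sql/1.0/warehouses/<warehouse-id>",
    access_token="<personal-access-token>",
) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM main.sales.orders LIMIT 10")
        for row in cur.fetchall():
            print(row)
```
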
Generate realistic synthetic data using Spark + Faker (strongly recommended). Supports serverless execution, multiple output formats (Parquet/JSON/CSV/Delta), and scales from…
Score 70/100
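
A minimal sketch of the Spark + Faker pattern, assuming pyspark and faker are installed; field names and the output path are arbitrary:

```python
from faker import Faker
from pyspark.sql import SparkSession

fake = Faker()
spark = SparkSession.builder.getOrCreate()

# Driver-side generation is fine for small samples; for millions of rows,
# move generation into mapInPandas so each executor runs its own Faker.
rows = [(fake.name(), fake.email(), fake.city()) for _ in range(1_000)]
df = spark.createDataFrame(rows, schema="name string, email string, city string")
df.write.mode("overwrite").parquet("/tmp/synthetic_customers")  # or json/csv/delta
```
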
Create and populate Databricks Delta tables via interactive REPL or notebook submission. Uses dbx.py for operations and PySpark for data transforms.
Score 70/100
Add and update column descriptions and table comments in Unity Catalog. Use when documenting tables, onboarding new datasets, or enforcing metadata standards.
Score 70/100
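
The underlying operations are plain Databricks SQL; a sketch against a hypothetical table:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already bound as `spark` in Databricks notebooks

# Catalog, schema, table, and column names below are placeholders.
spark.sql("COMMENT ON TABLE main.sales.orders IS 'One row per customer order.'")
spark.sql(
    "ALTER TABLE main.sales.orders "
    "ALTER COLUMN order_id COMMENT 'Primary key; immutable after creation.'"
)
```
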
Explore Databricks Unity Catalog tables -- list, describe, preview data, and profile columns. Uses dbx.py for metadata and the interactive REPL for data queries.
Score 70/100
Unity Catalog system tables for lineage, audit logs, billing, compute, jobs, and query history. Use when querying system.access.audit, system.access.table_lineage,…
Score 70/100
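
For orientation, a sketch of the kind of query these tables support (requires system schemas to be enabled for the workspace; the email filter is a placeholder):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Recent audit events for one user, newest first.
spark.sql("""
    SELECT event_time, service_name, action_name
    FROM system.access.audit
    WHERE user_identity.email = 'user@example.com'
    ORDER BY event_time DESC
    LIMIT 20
""").show(truncate=False)
```
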
Upgrade Databricks runtime versions and migrate between features. Use when upgrading DBR versions, migrating to Unity Catalog, or updating deprecated APIs and features.
Score 70/100
Configure Databricks job notifications, webhooks, and event handling. Use when setting up Slack/Teams notifications, configuring alerts, or integrating Databricks events with…
Score 70/100
Dataclass patterns including frozen dataclasses, slots, immutability, and value objects. Activated when designing data classes or value types.
Score 70/100
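
A minimal illustration of the frozen-plus-slots value-object pattern (slots=True needs Python 3.10+); the Money type is an arbitrary example:

```python
from dataclasses import dataclass

@dataclass(frozen=True, slots=True)  # immutable, no per-instance __dict__
class Money:
    amount_cents: int
    currency: str

    def add(self, other: "Money") -> "Money":
        if self.currency != other.currency:
            raise ValueError("currency mismatch")
        return Money(self.amount_cents + other.amount_cents, self.currency)

total = Money(1999, "USD").add(Money(500, "USD"))
# total.amount_cents = 0  # would raise dataclasses.FrozenInstanceError
```
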
Query logs, metrics, monitors, and dashboards from Datadog. Search logs, check alert status, and investigate incidents.
Score 70/100
Routes Datadog anomaly detection alerts to appropriate response channels using the Datadog Events API v2 and Monitors API.
Score 70/100
Monitors Datadog metric streams using the Datadog API v2 and applies ML-based anomaly detection to alert on infrastructure drift.
Score 70/100
Leverages the Datadog API v2 metrics and events endpoints to detect anomalous patterns. Uses the Datadog Monitors API to create dynamic thresholds and sends escalations via…
Score 70/100
Detects performance anomalies in Datadog APM traces using the Datadog API v2 metrics endpoint. Applies DBSCAN clustering on latency distributions to identify outlier service…
Score 70/100
Queries Datadog APM trace data via the Datadog Tracing API v2 to identify latency bottlenecks and error hotspots.
Score 70/100
Queries distributed traces from Datadog APM using the Trace Search API with faceted filtering. Analyzes p99 latency breakdowns across service spans and identifies slow database…
Score 70/100
Troubleshoot Datadog API authentication issues (401/403 errors), understand API keys vs app keys, and configure correct regions.
Score 70/100
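
A quick way to separate a bad key from a wrong region is Datadog's key-validation endpoint; a sketch using requests, assuming the conventional DD_API_KEY/DD_SITE environment variables:

```python
import os

import requests

# Region matters: us1 -> api.datadoghq.com, eu1 -> api.datadoghq.eu, etc.
site = os.environ.get("DD_SITE", "datadoghq.com")
resp = requests.get(
    f"https://api.{site}/api/v1/validate",
    headers={"DD-API-KEY": os.environ["DD_API_KEY"]},
    timeout=10,
)
# 200 {"valid": true} -> key and region are fine; 403 -> wrong key or wrong region.
print(resp.status_code, resp.json())
```
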
Automatically detects Datadog resource mentions (URLs, service queries, natural language) and intelligently fetches condensed context via datadog-analyzer subagent when needed for…
Score 70/100
Automate Datadog tasks via Rube MCP (Composio): query metrics, search logs, manage monitors/dashboards, create events and downtimes. Always search tools first for current schemas.
Score 70/100
Fetches an active Datadog incident, retrieves associated monitors and dashboards, pulls the last 30 minutes of metric data, and walks through a runbook checklist with automated…
Score 70/100
Work with Datadog Incidents API - fetch incidents with enrichment, update incident fields, understand incident data structures.
Score 70/100
Connects applications to Datadog monitoring using the Datadog API v2 for metrics submission, log forwarding, APM trace ingestion, and dashboard JSON template management.
Score 70/100
Search Datadog logs via API - query syntax, storage tiers (indexes, flex, online-archives), pagination. Use when searching logs or using the dd search-logs command.
Score 70/100
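
A minimal sketch with the datadog-api-client package (the query string is a placeholder; keys come from DD_API_KEY/DD_APP_KEY):

```python
import os

from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v2.api.logs_api import LogsApi
from datadog_api_client.v2.model.logs_list_request import LogsListRequest
from datadog_api_client.v2.model.logs_query_filter import LogsQueryFilter

config = Configuration()
config.api_key["apiKeyAuth"] = os.environ["DD_API_KEY"]
config.api_key["appKeyAuth"] = os.environ["DD_APP_KEY"]

with ApiClient(config) as api_client:
    body = LogsListRequest(
        filter=LogsQueryFilter(query="service:web status:error", _from="now-15m", to="now"),
    )
    for log in LogsApi(api_client).list_logs(body=body).data:
        print(log.attributes.timestamp, log.attributes.message)
```
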
Exports custom metrics and traces to Datadog using the DogStatsD protocol and Datadog API v2. Supports histogram aggregation, tag-based filtering, and SLO tracking.
Score 70/100
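
A minimal DogStatsD sketch with the datadog (datadogpy) package; metric names and tags are arbitrary, and the agent is assumed to be listening on the default local port:

```python
from datadog import initialize, statsd

# DogStatsD ships metrics over UDP to the local Datadog agent.
initialize(statsd_host="127.0.0.1", statsd_port=8125)

statsd.increment("checkout.attempts", tags=["env:prod", "service:checkout"])
statsd.histogram("checkout.latency_ms", 42.7, tags=["env:prod"])  # agent-side aggregation
statsd.gauge("checkout.queue_depth", 17)
```
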
Creates and manages Datadog monitors using the datadog-api-client SDK. Configures metric, log, APM trace, and composite monitors with proper threshold types and notification…
Score 70/100
Manages Datadog monitors and dashboards via the Datadog REST API v2. Creates metric, log, and APM monitors with composite conditions and configures notification routing through…
Score 70/100
Manages Datadog monitors and dashboards via the Datadog API v2. Lists triggered monitors, mutes/unmutes alert groups, and queries metric timeseries.
Score 70/100
Monitors Datadog Service Level Objectives and burn rate alerts via the Datadog API v2. Generates SLO compliance reports and triggers remediation workflows when error budgets are…
Score 70/100
Investigates broken checks with the Datadog Synthetics API, Monitors API, and Logs Search API to connect failed browser or API tests with the signals that explain them.
Score 70/100
Automates alert triage using the Datadog Monitors API v2 and Notebooks API. Correlates metrics with traces via the Datadog APM Trace Search API and generates RCA timelines from…
Score 70/100
Reviews SQL queries and DataFrame operations for optimization opportunities including predicate pushdown, partition pruning, column projection, and join ordering.
Score 70/100
Automate Datagma tasks via Rube MCP (Composio). Always search tools first for current schemas.
Score 70/100
Convert documents (PDF, EPUB, PPTX, DOCX, XLSX, HTML, images) to Markdown using Datalab cloud API. Use when user wants to use Datalab API for document conversion, or prefers…
Score 70/100
Credit risk data cleaning and variable screening pipeline for pre-loan modeling. Use when working with raw credit data that needs quality assessment, missing value analysis, or…
Score 70/100
Build professional financial services data packs from various sources including CIMs, offering memorandums, SEC filings, web search, or MCP servers.
Score 70/100
Use when designing or reviewing OmniStudio DataRaptors, especially Extract versus Turbo Extract versus Transform versus Load, field mapping strategy, performance tradeoffs, and…
Score 70/100
Use when DataRaptor Transform operations are slow, hit governor limits, or use Apex where formula fields would suffice.
Score 70/100
Automate Datarobot tasks via Rube MCP (Composio). Always search tools first for current schemas.
Score 70/100
Dataset construction and curation for ML training (cleaning, augmentation, annotation, splitting).
Score 70/100
Compare two datasets to find differences, added/removed rows, changed values. Use for data validation, ETL verification, or tracking changes.
Score 70/100
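
One common implementation of this comparison is a keyed outer merge in pandas; a self-contained sketch with toy data:

```python
import pandas as pd

old = pd.DataFrame({"id": [1, 2, 3], "amount": [10, 20, 30]})
new = pd.DataFrame({"id": [2, 3, 4], "amount": [20, 35, 40]})

m = old.merge(new, on="id", how="outer", suffixes=("_old", "_new"), indicator=True)
removed = m[m["_merge"] == "left_only"]
added = m[m["_merge"] == "right_only"]
changed = m[(m["_merge"] == "both") & (m["amount_old"] != m["amount_new"])]
print(f"{len(added)} added, {len(removed)} removed, {len(changed)} changed")
```
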
Create, clean, and optimize datasets for LLM fine-tuning. Covers formats (Alpaca, ShareGPT, ChatML), synthetic data generation, quality assessment, and augmentation.
Score 70/100
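
For orientation, one record in each of the two most common shapes (the content is made up):

```python
# Alpaca-style instruction record
alpaca = {
    "instruction": "Summarize the text.",
    "input": "Delta Lake adds ACID transactions to data lakes.",
    "output": "Delta Lake brings ACID guarantees to data lakes.",
}

# ChatML/messages-style chat record
chatml = {
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize: Delta Lake adds ACID transactions to data lakes."},
        {"role": "assistant", "content": "Delta Lake brings ACID guarantees to data lakes."},
    ]
}
```
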
Dataset Loader Creator - Auto-activating skill for ML training. Triggers on: dataset loader creator. Part of the ML Training skill category.
Score 70/100
Compare two datasets by key, isolate missing rows and field-level differences, and summarize reconciliation exceptions clearly.
Score 70/100
Extracts key specifications from component datasheet PDFs for maker projects. Use when user shares a datasheet PDF URL, asks about component specs, needs pin assignments, I2C…
Score 70/100