Builds real-time collaborative features with Liveblocks including presence, cursors, storage, comments, and notifications.
Score 70/100
Automate Livesession tasks via Rube MCP (Composio). Always search tools first for current schemas.
Score 70/100
Lists programs in the specified countries according to a defined schema that includes their local-language and Turkish names, key features, and implementation dates.
Score 70/100
Query Bohrium LKM or compatible claim/evidence-chain APIs, including public search, claim evidence lookup, reasoning-chain retrieval, raw JSON preservation, and API-error…
Score 70/100
llama.cpp is a high-performance C/C++ implementation for running LLM inference across diverse hardware.
Score 70/100
Expert guidance for fine-tuning LLMs with LLaMA-Factory: no-code WebUI, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support
Score 70/100
Comprehensive guide for building LLM applications with LlamaIndex, including data loaders, indexes, query engines, chat engines, vector stores, retrievers, agents, evaluation,…
Score 70/100
Complete llama.cpp C/C++ API reference covering model loading, inference, text generation, embeddings, chat, tokenization, sampling, batching, KV cache, LoRA adapters, and state…
Score 70/100
When setting up local LLM inference without cloud APIs. When running GGUF models locally. When needing OpenAI-compatible API from a local model.
Score 70/100
llamafile by Mozilla bundles open-source LLMs into a single portable executable that runs locally on macOS, Windows, Linux, and BSD with zero installation.
Score 70/100
Meta's 7-8B specialized moderation model for LLM input/output filtering. Six safety categories: violence/hate, sexual content, weapons, substances, self-harm, and criminal planning.
Score 70/100
Data framework for building LLM applications with RAG. Specializes in document ingestion (300+ connectors), indexing, and querying.
Score 70/100
Expert guidance for LlamaIndex development including RAG applications, vector stores, document processing, query engines, and building production AI applications.
Score 70/100
Build LLM applications with LlamaIndex. Create indexes, query engines, and data connectors. Use for RAG applications, document search, and knowledge base systems.
Score 70/100
LlamaIndex implementation patterns with templates, scripts, and examples for building RAG applications.
Score 70/100
LlamaIndex Wolfram Alpha tool for computational knowledge queries, math solving, scientific calculations, and agent integration.
Score 70/100
Large Language and Vision Assistant. Enables visual instruction tuning and image-based conversations. Combines CLIP vision encoder with Vicuna/LLaMA language models.
Score 70/100
Expert LLC operations management for ID8Labs LLC (Florida single-member LLC). 9 specialized agents providing PhD-level expertise in compliance, tax strategy, asset protection, and…
Score 70/100
Production-ready patterns for building LLM applications, inspired by [Dify](https://github.com/langgenius/dify) and industry best practices.
Score 70/100
Building applications with Large Language Models - prompt engineering, RAG patterns, and LLM integration. Use for AI-powered features, chatbots, or LLM-based automation.
Score 70/100
You are an AI assistant development expert specializing in creating intelligent conversational interfaces, chatbots, and AI-powered applications.
Score 70/100
You are an expert LangChain agent developer specializing in production-grade AI systems using LangChain 0.1+ and LangGraph.
Score 70/100
You are an expert prompt engineer specializing in crafting effective prompts for LLMs through advanced techniques including constitutional AI, chain-of-thought reasoning, and…
Score 70/100
Detects common LLM coding agent artifacts in codebases. Identifies test quality issues, dead code, over-abstraction, and verbose LLM style patterns.
Score 70/100
External LLM invocation. Triggered ONLY by @council, @probe, @crossref, @gpt, @gemini, @grok, @qwen.
Score 70/100
Process textual and multimedia files with various LLM providers using the llm CLI. Supports both non-interactive and interactive modes with model selection, config persistence,…
Score 70/100
Centralized AI-readable documentation repository with 245+ frameworks and tools. Use to find documentation, add new sources, or update existing docs.
Score 70/100
Write effective LLM prompts, commands, and agent instructions. Goal-oriented over step-prescriptive. Role + Objective + Latitude pattern.
Score 70/100
Configure RuVLLM local inference with model selection, MicroLoRA fine-tuning, and SONA adaptation
Score 70/100
Orchestrate a configurable, multi-member CLI planning council (Codex, Claude Code, Gemini, OpenCode, or custom) to produce independent implementation plans, anonymize and…
Score 70/100
Orchestrate multiple LLMs as a council, generating collective intelligence through peer review and chairman synthesis
Score 70/100
Optimize documentation for AI coding assistants and LLMs. Improves docs for Claude, Copilot, and other AI tools through c7score optimization, llms.txt generation, question-driven…
Score 70/100
Extract structured data from construction documents using LLMs. Process RFIs, submittals, contracts, specifications. Convert unstructured PDFs to structured JSON/Excel.
Score 70/100
Triggers: "LLM evaluation", "create an eval", "model testing", "prompt evaluation", "LLM quality testing". Process: analyze task → define evaluation criteria → generate test cases → write evaluation code → design report format. Output: evaluation case JSON + evaluation runner code + grading-criteria document
Score 70/100
LLM fine-tuning expert for LoRA, QLoRA, dataset preparation, and training optimization
Score 70/100
Implementing function calling (tool use) with LLMs for structured outputs and external integrations.
Score 70/100
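The function-calling pattern above can be sketched as a minimal dispatch loop: the model emits a tool call naming a registered function, and the application routes it to local code and returns a JSON result. This is a sketch, not any specific skill's implementation; the `get_weather` tool and its schema are made-up examples, and the tool-schema shape follows the common OpenAI-style "function" format.

```python
import json

# Advertised tool schema, in the common OpenAI-style "function" format.
# get_weather is a hypothetical example tool.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> dict:
    # Stub implementation standing in for a real weather lookup.
    return {"city": city, "temp_c": 21}

REGISTRY = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to a local function; return a JSON result
    string suitable for sending back to the model as the tool response."""
    fn = REGISTRY[tool_call["name"]]
    args = json.loads(tool_call["arguments"])  # model sends arguments as a JSON string
    return json.dumps(fn(**args))

# Simulated tool call, shaped as a model would emit it (no API call here).
out = dispatch({"name": "get_weather", "arguments": '{"city": "Oslo"}'})
```

The key design choice is the registry: the model only ever names a function, and the application decides what code actually runs.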
Finding and accessing AI/LLM model brand icons from lobe-icons library. Use when users need icon URLs, want to download brand logos for AI models/providers/applications (Claude,…
Score 70/100
Use when wanting to interact with any LLM - Explains available inference endpoints so the agent selects suitable models.
Score 70/100
Guide for using LLM utilities in speedy_utils, including memoized OpenAI clients and chat format transformations.
Score 70/100
Integrating LLMs into applications via API. Triggered by "OpenAI API", "Claude API", "integrate an LLM", "GPT in my app", "Ollama", "local LLM", "streaming",…
Score 70/100
Advanced LLM jailbreaking techniques, safety mechanism bypass strategies, and constraint circumvention methods
Score 70/100
Comprehensive guide to using LLMs as judges for automated evaluation including prompt patterns, calibration, bias reduction, and multi-judge ensembles
Score 70/100
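The judge prompt patterns mentioned above typically pair a fixed rubric with a machine-parseable score line. A minimal sketch, assuming an illustrative rubric and a "Score: N" footer convention (neither is a fixed standard):

```python
import re

# Illustrative judge rubric; the wording and 1-5 scale are assumptions.
JUDGE_PROMPT_TEMPLATE = (
    "You are an impartial judge. Evaluate the ANSWER for factual accuracy "
    "and helpfulness on a 1-5 scale.\n"
    "QUESTION: {question}\nANSWER: {answer}\n"
    "Explain briefly, then end with a line 'Score: N'."
)

def build_prompt(question: str, answer: str) -> str:
    return JUDGE_PROMPT_TEMPLATE.format(question=question, answer=answer)

def parse_score(judge_reply: str) -> int:
    """Pull the trailing 1-5 score; raise if the judge broke the format."""
    m = re.search(r"Score:\s*([1-5])\b", judge_reply)
    if not m:
        raise ValueError("judge reply missing 'Score: N' line")
    return int(m.group(1))

# Simulated judge reply (no API call in this sketch).
score = parse_score("The answer is correct and concise.\nScore: 5")
```

Strict parsing matters for calibration work: a judge that drifts off-format should fail loudly rather than silently score zero.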
LLM Operations: RAG, embeddings, vector databases, fine-tuning, advanced prompt engineering, LLM costs, quality evals, and production AI architectures.
Score 70/100
Optimize websites for AI assistant recommendations. ChatGPT, Gemini, Perplexity, Claude. Get cited in AI answers.
Score 70/100
Use when improving prompts for any LLM. Applies proven prompt engineering techniques to boost output quality, reduce hallucinations, and cut token usage.
Score 70/100
Comprehensive LLM model evaluation and ranking system. Use when users ask to compare language models, find the best model for a specific task, understand model capabilities, get…
Score 70/100
Restructure Web-UI / human-triggered tasks into CLI + file-output loops the LLM can iterate on alone. Opens LLM-side observability: structured logs, file dumps, addressable…
Score 70/100
Get reliable JSON, enums, and typed objects from LLMs using response_format, tool_use, and schema-constrained decoding across OpenAI, Anthropic, and Google APIs.
Score 70/100
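The `response_format` approach named above can be sketched end to end: declare a JSON Schema, attach it to the request, and validate the reply locally. This is a sketch under assumptions; the product-extraction schema and model id are illustrative, the request shape follows OpenAI's Structured Outputs format, and the reply is simulated rather than fetched.

```python
import json

# Hypothetical extraction schema for this sketch: a product name and price.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "price": {"type": "number"},
    },
    "required": ["name", "price"],
    "additionalProperties": False,
}

# Request payload in OpenAI's Structured Outputs shape; "gpt-4o-mini" is a
# placeholder model id.
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "user", "content": "Extract the product from: 'Widget, $9.99'"}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "product", "strict": True, "schema": schema},
    },
}

def parse_reply(raw: str) -> dict:
    """Parse the model's JSON reply and re-check required keys locally,
    since client-side validation is cheap insurance even with strict mode."""
    data = json.loads(raw)
    missing = [k for k in schema["required"] if k not in data]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data

# Simulated reply, since this sketch makes no network call.
result = parse_reply('{"name": "Widget", "price": 9.99}')
```

Anthropic and Google expose the same idea through different knobs (forced `tool_use` and `response_schema` respectively), so the schema definition is the portable part.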
See the main LLM Cost Optimization skill for comprehensive coverage of token economics and optimization strategies.
Score 70/100
Karpathy's LLM Wiki — build and maintain a persistent, interlinked markdown knowledge base. Ingest sources, query compiled knowledge, and lint for consistency.
Score 70/100
Automated LLM Wiki knowledge-compilation skill. When triggered, it scans a specified directory for new files and integrates them into an Obsidian knowledge base following Karpathy's LLM Wiki pipeline (Ingest -> Summarize -> Compile -> Log).
Score 70/100
Operate an LLM Wiki knowledge system in any workspace using the Karpathy pattern
Score 70/100
Interactive meal planning wizard. Use when user wants to create a meal plan, optimize their diet, set up nutritional constraints, or find foods that meet their goals.
Score 70/100
LLMs, prompt engineering, RAG systems, LangChain, and AI application development
Score 70/100
Run link checking for URLs in llms.txt and help the user fix broken or redirected links.
Score 70/100
Detect and use llms.txt files for LLM-optimized documentation. Use when checking if a site has LLM-ready docs before scraping.
Score 70/100
Check LLM Wiki health. Finds orphan pages, broken wikilinks, contradictions, stale content, missing pages, cross-reference gaps, and suggests improvements.
Score 70/100
Initialize the LLM Wiki. Creates directory structure, index, log, sets up qmd collection, and runs initial ingest of all existing content from raw/, docs/, and notes/.
Score 70/100
Optimize the LLM Wiki. Compacts verbose pages, merges near-duplicates, reorganizes misplaced content, strengthens cross-references, improves consistency, and generates missing…
Score 70/100