---
title: "Configure and interpret LaunchDarkly AI Config online evaluations with judge attachments"
description: "Attach judges to LaunchDarkly AI Config variations, create custom judges, set sampling rates, and interpret production quality signals from online evaluations."
verification: "security_reviewed"
source: "https://github.com/launchdarkly/ai-tooling/tree/main/skills/ai-configs/aiconfig-online-evals"
author: "launchdarkly"
publisher_type: "organization"
category:
  - "Monitoring & Alerts"
framework:
  - "Custom Agents"
---

# Configure and interpret LaunchDarkly AI Config online evaluations with judge attachments

Attach judges to LaunchDarkly AI Config variations, create custom judges, set sampling rates, and interpret production quality signals from online evaluations.
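Online evaluations typically run judges on only a sampled fraction of production generations. As a rough illustration of how a sampling rate could be applied (this is not the LaunchDarkly SDK API; the function and parameter names here are hypothetical), a deterministic hash-based sampler might look like:

```python
import hashlib

def should_evaluate(event_id: str, sampling_rate: float) -> bool:
    """Decide whether to run a judge on this generation.

    sampling_rate is a fraction in [0, 1]. Hashing the event ID means
    the same event always gets the same decision, which keeps sampling
    reproducible across retries and replays.
    """
    if sampling_rate >= 1.0:
        return True
    if sampling_rate <= 0.0:
        return False
    # Map the event ID onto a stable bucket in [0, 1].
    digest = hashlib.sha256(event_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return bucket < sampling_rate

# Example: judge roughly 10% of 1,000 generations.
sampled = sum(should_evaluate(f"event-{i}", 0.10) for i in range(1000))
```

Deterministic sampling is one common design choice for online evaluation pipelines because it avoids double-judging duplicate events; the actual sampling mechanism LaunchDarkly uses is configured per judge attachment, per the documentation linked below.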

## Prerequisites

- LaunchDarkly AI Configs enabled for your project
- A LaunchDarkly API token or SDK access
- An agent client that supports custom skills

## Installation

Choose whichever fits your setup:

1. Copy this skill folder into your local skills directory.
2. Clone the repo and symlink or copy the skill into your agent workspace.
3. Add the repo as a git submodule if you manage shared skills centrally.
4. Install it through your internal provisioning or packaging workflow.
5. Download the folder directly from GitHub and place it in your skills collection.
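Options 1 and 5 above can be sketched as follows. All paths here are placeholders: `SKILLS_DIR` stands in for your agent client's skills path, and the stand-in skill folder is created locally only so the copy step is self-contained.

```shell
# Placeholder locations; substitute your real skills path and the
# downloaded skill folder from the launchdarkly/ai-tooling repo.
WORK="$(mktemp -d)"
SKILLS_DIR="${WORK}/skills"                # stand-in for the agent's skills path
SKILL_SRC="${WORK}/aiconfig-online-evals"  # stand-in for the downloaded folder

mkdir -p "${SKILL_SRC}" "${SKILLS_DIR}"
printf 'name: aiconfig-online-evals\n' > "${SKILL_SRC}/skill.yaml"

# The install itself is just a recursive copy into the skills directory.
cp -R "${SKILL_SRC}" "${SKILLS_DIR}/"
```

In a real install you would replace `SKILL_SRC` with the `skills/ai-configs/aiconfig-online-evals` folder from a checkout or download of the repo linked in the frontmatter.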

Upstream instructions: copy the skill directory into the agent client’s skills path, and provide LaunchDarkly AI Config credentials before attaching judges or configuring evaluations.

## Documentation

- https://docs.launchdarkly.com/home/ai-configs/online-evaluations

## Source

- [Agent Skill Exchange](https://agentskillexchange.com/skills/configure-and-interpret-launchdarkly-ai-config-online-evaluations-with-judge-attachments/)
