---
title: "Score model outputs with reusable evaluator prompts and metrics using autoevals"
description: "Apply reusable evaluators to model outputs when you need lightweight scoring, rationale capture, or quick eval loops in code."
verification: "listed"
source: "https://github.com/braintrustdata/autoevals"
author: "Braintrust"
publisher_type: "organization"
category:
  - "Code Quality & Review"
framework:
  - "Multi-Framework"
tool_ecosystem:
  github_repo: "braintrustdata/autoevals"
  github_stars: 861
  npm_package: "autoevals"
  npm_weekly_downloads: 1807454
---

# Score model outputs with reusable evaluator prompts and metrics using autoevals

Apply reusable evaluators to model outputs when you need lightweight scoring, rationale capture, or quick eval loops in code.

## Prerequisites

Python or Node.js, plus access to an OpenAI-compatible model endpoint or the Braintrust proxy
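
The endpoint can be pointed at either target through the standard `openai` client environment variables, which autoevals picks up by default. A minimal configuration sketch follows; the proxy URL is Braintrust's documented proxy endpoint, and the exact variable names assume the standard `openai` Python client:

```python
# Endpoint configuration sketch; assumes autoevals calls through the standard
# `openai` Python client, which reads these environment variables.
import os

os.environ["OPENAI_API_KEY"] = "sk-..."  # or a Braintrust API key for the proxy
# Uncomment to route evaluator calls through the Braintrust proxy instead:
# os.environ["OPENAI_BASE_URL"] = "https://api.braintrust.dev/v1/proxy"
```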

## Installation

Choose whichever fits your setup:

1. Copy this skill folder into your local skills directory.
2. Clone the repo and symlink or copy the skill into your agent workspace.
3. Add the repo as a git submodule if you manage shared skills centrally.
4. Install it through your internal provisioning or packaging workflow.
5. Download the folder directly from GitHub and place it in your skills collection.

Install command or upstream instructions:

```
npm install autoevals
# or
pip install autoevals
```

Then configure an OpenAI-compatible endpoint and call the built-in or custom evaluators from code to score outputs and inspect rationales.
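
Once installed, scoring an output takes a few lines. Here's a minimal Python sketch of the quick eval loop, adapted from the built-in `Factuality` evaluator example in the upstream README; it assumes `OPENAI_API_KEY` is set in your environment:

```python
# Score one model output with a built-in LLM-as-a-judge evaluator.
# Assumes `pip install autoevals` and OPENAI_API_KEY in the environment.
from autoevals.llm import Factuality

evaluator = Factuality()

input = "Which country has the highest population?"
output = "People's Republic of China"
expected = "China"

result = evaluator(output, expected, input=input)

print(f"Factuality score: {result.score}")           # float in [0, 1]
print(f"Rationale: {result.metadata['rationale']}")  # judge's reasoning
```

Custom graders follow the same calling convention; the repo also documents an `LLMClassifier` for defining your own prompt template and choice-to-score mapping.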

## Documentation

- https://github.com/braintrustdata/autoevals

## Source

- [Agent Skill Exchange](https://agentskillexchange.com/skills/score-model-outputs-with-reusable-evaluator-prompts-and-metrics-using-autoevals/)
