---
title: "Benchmark prompt-injection attacks, defenses, and recovery pipelines before trusting an LLM app with Open Prompt Injection"
description: "Run structured prompt-injection attack and defense experiments against an LLM-integrated app before production by measuring attack success and testing detection or recovery pipelines."
verification: "listed"
source: "https://github.com/liu00222/Open-Prompt-Injection"
author: "liu00222"
publisher_type: "individual"
category:
  - "Security & Verification"
framework:
  - "Multi-Framework"
tool_ecosystem:
  github_repo: "liu00222/Open-Prompt-Injection"
  github_stars: 429
---

# Benchmark prompt-injection attacks, defenses, and recovery pipelines before trusting an LLM app with Open Prompt Injection

Before putting an LLM-integrated app into production, run structured prompt-injection attack and defense experiments against it: measure how often attacks succeed and test detection or recovery pipelines.
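Attack success in benchmarks like this is typically reported as the fraction of trials in which the model carried out the injected task instead of the target task. A minimal sketch of that metric (the function name and result format here are illustrative, not the upstream library's API):

```python
def attack_success_rate(trial_results):
    """Compute the fraction of prompt-injection trials that succeeded.

    trial_results: list of dicts with a boolean 'injected_task_executed'
    flag, e.g. produced by comparing each model response against the
    injected task's expected answer.
    """
    if not trial_results:
        raise ValueError("no trials to score")
    successes = sum(1 for r in trial_results if r["injected_task_executed"])
    return successes / len(trial_results)


# Example: 3 of 4 injection attempts hijacked the app's response.
trials = [
    {"injected_task_executed": True},
    {"injected_task_executed": True},
    {"injected_task_executed": False},
    {"injected_task_executed": True},
]
print(attack_success_rate(trials))  # 0.75
```

Aggregating per-trial outcomes this way lets you compare attack strategies and defenses on equal footing across runs.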

## Prerequisites

- A Conda-managed Python environment
- A checkout of the upstream repository
- Model API credentials, as configured upstream
- Target task and attack configuration files

## Installation

Choose whichever fits your setup:

1. Copy this skill folder into your local skills directory.
2. Clone the repo and symlink or copy the skill into your agent workspace.
3. Add the repo as a git submodule if you manage shared skills centrally.
4. Install it through your internal provisioning or packaging workflow.
5. Download the folder directly from GitHub and place it in your skills collection.

Install command or upstream instructions:

```
1. Clone the repository.
2. Create the documented conda environment from environment.yml.
3. Configure the required model credentials.
4. Run the provided experiment scripts or library flows to execute attack and defense benchmarks against the target application.
```
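When testing a detection-based defense alongside the attacks, the two quantities you usually trade off are how often clean prompts are wrongly flagged and how often injected prompts slip through. A small hedged sketch of that scoring (function and field names are illustrative, not from the upstream codebase):

```python
def detection_metrics(labels, flags):
    """Score a prompt-injection detector.

    labels: True where the prompt actually contained an injection.
    flags:  True where the detector flagged the prompt.
    Returns (false_positive_rate, false_negative_rate).
    """
    if len(labels) != len(flags):
        raise ValueError("labels and flags must be the same length")
    tp = sum(l and f for l, f in zip(labels, flags))
    fp = sum((not l) and f for l, f in zip(labels, flags))
    fn = sum(l and (not f) for l, f in zip(labels, flags))
    tn = sum((not l) and (not f) for l, f in zip(labels, flags))
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    return fpr, fnr


# Example: two injected and two clean prompts; the detector catches one
# injection and wrongly flags one clean prompt.
labels = [True, True, False, False]
flags = [True, False, True, False]
print(detection_metrics(labels, flags))  # (0.5, 0.5)
```

Reporting both rates matters: a defense that blocks every prompt has a zero false-negative rate but makes the app unusable.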

## Documentation

- https://github.com/liu00222/Open-Prompt-Injection

## Source

- [Agent Skill Exchange](https://agentskillexchange.com/skills/benchmark-prompt-injection-attacks-defenses-and-recovery-pipelines-before-trusting-an-llm-app-with-open-prompt-injection/)
