---
title: "Probe ML and LLM systems for regressions and vulnerabilities with Giskard"
description: "Run automated red-team and failure scans against an LLM or RAG app before users find the breakage."
verification: "listed"
source: "https://github.com/Giskard-AI/giskard-oss"
author: "Giskard AI"
publisher_type: "organization"
category:
  - "Security & Verification"
framework:
  - "Multi-Framework"
tool_ecosystem:
  github_repo: "giskard-ai/giskard-oss"
  github_stars: 5261
---

# Probe ML and LLM systems for regressions and vulnerabilities with Giskard

Run automated red-team and failure scans against an LLM or RAG app before users find the breakage.

## Prerequisites

- Python environment
- Giskard open-source package
- Access to the target model or RAG application
- Test datasets or prompts
- Model provider credentials, where required

## Installation

Choose whichever fits your setup:

1. Copy this skill folder into your local skills directory.
2. Clone the repo and symlink or copy the skill into your agent workspace.
3. Add the repo as a git submodule if you manage shared skills centrally.
4. Install it through your internal provisioning or packaging workflow.
5. Download the folder directly from GitHub and place it in your skills collection.

Install command or upstream instructions:

```
pip install "giskard[llm]"
```

Then connect Giskard to the target model or RAG workflow, run the documented scan or evaluation flows, and review the reported failures.
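A scan run can be sketched roughly as follows. This is a minimal sketch based on the patterns in the Giskard docs, not a drop-in implementation: `answer` is a hypothetical stand-in for your real model or RAG call, and the name, description, and feature names are placeholders you should replace.

```python
# Minimal sketch of a Giskard LLM scan. Assumes `pip install "giskard[llm]"`;
# `answer` is a hypothetical stand-in for your real model or RAG pipeline.
try:
    import giskard
except ImportError:
    giskard = None  # lets you read/adapt the sketch without the package installed

def answer(df):
    # Replace with a call into your LLM or RAG app. Giskard passes inputs
    # as a pandas DataFrame whose columns match `feature_names` below.
    return ["stub answer"] * len(df)

if giskard is not None:
    model = giskard.Model(
        model=answer,
        model_type="text_generation",
        name="Support bot",                       # placeholder, shown in the report
        description="Answers product questions",  # placeholder, used by detectors
        feature_names=["question"],
    )
    report = giskard.scan(model)        # runs the automated vulnerability scan
    report.to_html("scan_report.html")  # review the reported failures
```

The LLM-assisted detectors call out to a model provider, so set the relevant provider credentials (e.g. an API key) before running the scan.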

## Documentation

- https://docs.giskard.ai/

## Source

- [Agent Skill Exchange](https://agentskillexchange.com/skills/probe-ml-and-llm-systems-for-regressions-and-vulnerabilities-with-giskard/)
