
Website • Docs • Blog • Discord
AI security testing for LLMs, agents, and RAG systems
Trusted by 85 Fortune 500 companies and 200K+ developers
Get started:
- npx promptfoo@latest init
- npx promptfoo@latest eval
- npx promptfoo@latest view

Security Testing
- Red Teaming — Automated vulnerability discovery with 100+ attack plugins (see the config sketch below)
- Code Scanning — Detect LLM security risks in your IDE and CI/CD
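Red team scans are driven by a project config. The sketch below shows the general shape as a TypeScript object; the target id, purpose string, and plugin/strategy names are illustrative assumptions rather than a complete list, and in practice this configuration typically lives in a promptfooconfig.yaml.

```ts
// Sketch of a red team configuration, shown as a TypeScript object for readability.
// Plugin and strategy ids below are illustrative; check the docs for current names.
const redteamConfig = {
  // The system under test (a provider id or an HTTP endpoint); hypothetical target
  targets: ['openai:gpt-4o-mini'],
  redteam: {
    // Plain-language description of the app, used to generate relevant attacks
    purpose: 'Customer support agent for a retail bank',
    // Vulnerability classes to probe
    plugins: ['pii', 'prompt-extraction', 'harmful:hate'],
    // How attacks are delivered
    strategies: ['jailbreak', 'prompt-injection'],
  },
};

export default redteamConfig;
```

Recent CLI versions expose redteam subcommands (for example, npx promptfoo@latest redteam run) to execute a scan against a config like this.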
Evaluations
- CLI & Getting Started — Test prompts, models, and RAG pipelines locally
- Node.js Package — Integrate testing into your codebase (see the example below)
- Model Evaluation — Compare and benchmark models
- GitHub Action — Security testing in every pull request
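For the Node.js package, a minimal evaluation looks roughly like the sketch below. The prompts, provider id, and assertion are placeholders; the evaluate() entry point follows the documented Node API, but option names may vary across versions.

```ts
import promptfoo from 'promptfoo';

async function main() {
  // Run a small eval: two prompt variants, one provider, one test case with an assertion.
  const results = await promptfoo.evaluate({
    prompts: [
      'Summarize this for a child: {{text}}',
      'Summarize this for an expert: {{text}}',
    ],
    // Provider id is illustrative; any configured provider works here
    providers: ['openai:gpt-4o-mini'],
    tests: [
      {
        vars: { text: 'Large language models predict the next token in a sequence.' },
        // Simple deterministic assertion; model-graded assertions are also available
        assert: [{ type: 'contains', value: 'token' }],
      },
    ],
  });

  console.log(JSON.stringify(results, null, 2));
}

main().catch(console.error);
```

The same test suite structure (prompts, providers, tests, assertions) maps closely onto the YAML config used by the CLI.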
What we detect:
- Prompt injections and jailbreaks
- PII and sensitive data leaks
- Hallucinations and policy violations
- Tool misuse and adversarial attacks
Compliance: SOC 2 Type II · ISO 27001 · HIPAA
Data model:
- Evals — 100% local, API keys never leave your machine
- Red teaming — your target runs locally; attacks are generated via our API, or bring your own keys
| Repository | Description |
|---|---|
| promptfoo | Test prompts, agents, and RAGs. Red teaming and vulnerability scanning for LLMs. |
| promptfoo-action | GitHub Action for CI/CD security testing |
| evil-mcp-server | Red team testing for Model Context Protocol servers |
| js-rouge | JavaScript ROUGE metrics for summarization evaluation |
Connect: Discord · X/Twitter · Bluesky · LinkedIn
Contribute: Contributing Guide · Good First Issues · Report Issues