LLM vulnerability scanner

LLM Vulnerability Scanner for Agents and RAG

An LLM vulnerability scanner focuses on the application boundary around the model: prompts, tools, memory, retrieval, output handling, and trust decisions.

Run demo scan

Where it fits

  • Testing a RAG app where external documents can steer the assistant.
  • Reviewing an AI agent that can call tools with business impact.
  • Comparing OpenAI, Anthropic, Gemini, and self-hosted model behavior under the same attack suite.

Operational steps

  • Describe the app surface, model provider, tool list, and sensitive data categories.
  • Run vulnerability packs for prompt injection, leakage, jailbreak, unsafe tool calls, and retrieval poisoning.
  • Review high-severity findings first, then retest after prompt, policy, and authorization changes.
  • Store results as a release artifact so future model or prompt changes can be compared.

Common risks

  • A model refuses an unsafe request in isolation but accepts it after tool output is injected.
  • A memory feature stores secrets or private internal instructions.
  • A self-hosted model behaves differently from the hosted model used in the original security review.

How PromptGuard Scan fits the workflow

PromptGuard Scan provides model-agnostic vulnerability scans with consistent scoring, remediation guidance, and release-gate outputs for security and engineering teams.

Ready to test a real AI surface?

Pricing

Team annual is selected by default.

Annual billing is 50% off. All plans use NOWPayments checkout and keep the product page open.

Dev

For solo builders validating one product before launch.

$25/mo
$294 billed yearly. Save 50%.
5 apps500 scans
  • Prompt injection scans
  • Jailbreak template checks
  • PII and key leak detection
  • HTML risk report
  • Email support

Enterprise

For platform teams, private deployments, and audit-heavy AI systems.

$250/mo
$2,994 billed yearly. Save 50%.
Unlimited appsUnlimited scans
  • Everything in Team
  • Private deployment path
  • Custom test packs
  • Compliance evidence exports
  • Priority security review support

Security playbooks

Practical guides for LLM app security decisions.