Where it fits
- A buyer requests evidence that the AI feature has been tested against prompt injection.
- A product includes agent tools that can change data or trigger business workflows.
- A regulated team needs a retestable process before approving AI features.
Operational steps
- Define scope: models, prompts, tools, retrieval sources, memory, logging, and user roles.
- Run adversarial tests across direct prompts, indirect content, multi-turn conversations, and tool outputs.
- Document exploitability, impact, reproduction steps, and server-side control gaps.
- Retest fixes and keep the test pack in CI so the issue does not return.
Common risks
- The test stops at model behavior and misses app-level authorization failures.
- Findings are not mapped to owners or release gates.
- A fix is made in the prompt but not enforced in backend policy checks.
How PromptGuard Scan fits the workflow
PromptGuard Scan gives penetration testers and product security teams a repeatable harness, useful reports, and checkout-ready plans for ongoing LLM release testing.