Explore over 150 methods attackers use to hijack LLMs. Understand how they work and how to defend against them.
Prompt Injection (PI) is the #1 OWASP security risk for GenAI apps, where attacker instructions cause unwanted behavior. Protecting against PI requires understanding the diverse attack methods illustrated in this diagram. And, new PI methods emerge regularly.
To help teams stay ahead, Dr. James Hoagland, Security Researcher at Pangea breaks down how PI attacks work, and why they’re so hard to prevent, in this 36"x24" taxonomy reference diagram.
From direct and indirect injection methods to attacker prompting techniques, the framework has a logical hierarchy of classes and categories to help you grasp the full scope of risks and stay ahead of emerging GenAI threats.
Don’t defend your LLMs blind. Get your poster today.
*Terms and conditions apply. Quantities are limited. Physical posters may not be available in all regions or countries. Digital version provided by default.