
Research Report: Defending Against Prompt Injection

Pangea Research Report
“Prompt injection is especially concerning when attackers can manipulate prompts to extract sensitive or proprietary information from an LLM, especially if the model has access to confidential data via RAG, plugins, or system instructions.”
— Joe Sullivan, former CSO of Cloudflare, Uber, and Facebook

Participants in Pangea’s online prompt injection challenge consumed more than 300 million tokens and submitted nearly 330,000 prompts attempting to bypass a series of progressively stronger security guardrails.

For enterprises building or deploying AI-powered applications, this research offers:

  • Empirical data on attack techniques and their effectiveness
  • Insights into the shortcomings of today’s defenses
  • Attacker behavior patterns that can inform better security architecture
  • Practical recommendations for reducing prompt injection risk