The AI App Attack Atlas
Every documented way to break an AI application. Run any attack against your own app in one click.
Prompt Injection
Attacks that override or hijack a model's instructions through user input, retrieved context, or tool output.
10 patternsRAG Poisoning
Attacks that corrupt the retrieval layer of an AI app, causing the model to ground its answers on attacker-controlled content.
8 patternsTool Abuse
Attacks that misuse the tools or function calls available to an LLM agent, often turning them into a privilege-escalation primitive.
10 patternsAgent Hijacking
Attacks that take over an autonomous agent's plan or memory and redirect its actions toward attacker goals.
6 patternsMCP Exploitation
Attacks specific to Model Context Protocol servers, including filesystem, network, and shell tool abuse.
8 patternsMulti-turn Jailbreaks
Attacks that exploit conversational state across many turns to gradually erode safety constraints.
8 patterns