The Atlas

The AI App Attack Atlas

Every documented way to break an AI application. Run any attack against your own app in one click.

50 attack patterns6 categoriesRecent confirmed exploits
LLM01

Prompt Injection

Attacks that override or hijack a model's instructions through user input, retrieved context, or tool output.

10 patterns
LLM04

RAG Poisoning

Attacks that corrupt the retrieval layer of an AI app, causing the model to ground its answers on attacker-controlled content.

8 patterns
LLM06

Tool Abuse

Attacks that misuse the tools or function calls available to an LLM agent, often turning them into a privilege-escalation primitive.

10 patterns
LLM06

Agent Hijacking

Attacks that take over an autonomous agent's plan or memory and redirect its actions toward attacker goals.

6 patterns
LLM06 / LLM03

MCP Exploitation

Attacks specific to Model Context Protocol servers, including filesystem, network, and shell tool abuse.

8 patterns
LLM01

Multi-turn Jailbreaks

Attacks that exploit conversational state across many turns to gradually erode safety constraints.

8 patterns