Most AI systems can be broken in under 60 seconds.
Learn how with hands-on CTF challenges against live LLM agents.
Prompt injection, tool abuse, data exfiltration—no fluff.
Learn the theory alongside the practice.
Each module follows the same arc: concept · walkthrough · practice · quiz · defenses · extensions. ~45 minutes apiece. Pair them with the challenges for full attack-class mastery. All free.
Prompt Injection
Direct, indirect, and multi-turn injection attacks against LLM agents.
Indirect Prompt Injection
Poisoning the content an agent reads — RAG sources, emails, web pages, calendar invites.
System Prompt Extraction
Three attack families that pull hidden instructions out of production agents.
Tool Abuse
Excessive agency, argument injection, SSRF, and tool-chaining exploits.
Data Exfiltration
Markdown image rendering as a covert channel — what hit ChatGPT, Copilot, and Claude.ai.
Guardrail Bypass
Encoded payloads, persona drift, multi-turn crescendo, and authority confusion.
Want the credential? About the WCAP exam →
Pick a target. Capture the flag.
A growing catalog of hands-on CTF challenges, every one run against a live LLM agent. Each capture teaches a real attack class: the same techniques landing on production AI systems today. New characters drop on a regular cadence.