Can you make a chatbot reveal its system prompt?
Find out in your browser, in ten minutes. 14 AI security CTF challenges. Break real LLM agents and capture the flag.
Learn the theory alongside the practice.
Each module is concept · walkthrough · practice · quiz · defenses · extensions. ~45 minutes each. Pair them with the challenges for full attack-class mastery. All free.
Prompt Injection
Direct, indirect, and multi-turn injection attacks against LLM agents.
Indirect Prompt Injection
Poisoning the content an agent reads — RAG sources, emails, web pages, calendar invites.
System Prompt Extraction
Three attack families that pull hidden instructions out of production agents.
Tool Abuse
Excessive agency, argument injection, SSRF, and tool-chaining exploits.
Data Exfiltration
Markdown image rendering as a covert channel — what hit ChatGPT, Copilot, and Claude.ai.
Guardrail Bypass
Encoded payloads, persona-drift, multi-turn crescendo, and authority confusion.
Want the credential? About the WCAP exam →

When you're ready, the credential is here.
$199 one-time. 48-hour, 10-scenario, auto-graded flag-capture exam. The first 100 to pass receive a numbered first-cohort challenge coin.
About the WCAP exam →Pick a target. Capture the flag.
A growing catalog of hands-on CTF challenges against live LLM agents. Every capture teaches a real attack class — the same techniques landing on production AI systems today. New characters drop on a regular cadence.