Exploit Prompts
Learn how attackers exploit AI systems through various prompt injection techniques and social engineering methods.
What is AI Red Teaming?
AI red‑teaming is the practice of purposely finding exploits in AI models and LLMs to protect them from future attacks.
Basic types of LLM/AI attacks
An introductory guide on types of AI exploits and their fundamentals.
Jailbreaking vs. Prompt Injection
An overview of Jailbreaking vs. Prompt Injection, and why the difference matters in practical use cases.
Common Jailbreaking Techniques
Advanced methods for bypassing AI safety measures and content policies through sophisticated prompt manipulation.
System Prompt Extraction
Learn how attackers extract hidden system prompts (internal instructions) from AI models and how to defend against it.
Indirect Injection via Data
Learn how attackers embed malicious instructions in data sources that AI systems process, creating hidden attack vectors.