chat.win logoReady to compete? Go to the live arena.
Learn
Getting Started
Overview
Prompting 101
Using chat.win
Prompting Outside Chat.win
Exploit Prompts
Defend Prompts
  1. Home
  2. Exploit Prompts

Exploit Prompts

Learn how attackers exploit AI systems through various prompt injection techniques and social engineering methods.

What is AI Red Teaming?

AI red‑teaming is the practice of purposely finding exploits in AI models and LLMs to protect them from future attacks.

fundamentalsred-teamsecuritybeginner
What is AI Red Teaming?

Basic types of LLM/AI attacks

An introductory guide on types of AI exploits and their fundamentals.

securityexploitsbeginner
Basic types of LLM/AI attacks

Jailbreaking vs. Prompt Injection

An overview of Jailbreaking vs. Prompt Injection, and why the difference matters in practical use cases.

attackfundamentalsinjectionbeginner
Jailbreaking vs. Prompt Injection

Common Jailbreaking Techniques

Advanced methods for bypassing AI safety measures and content policies through sophisticated prompt manipulation.

attackjailbreakingsafety-bypassadvanced
Common Jailbreaking Techniques

System Prompt Extraction

Learn how attackers extract hidden system prompts (internal instructions) from AI models and how to defend against it.

attackfundamentalsadvanced
System Prompt Extraction

Indirect Injection via Data

Learn how attackers embed malicious instructions in data sources that AI systems process, creating hidden attack vectors.

attackindirect-injectiondata-poisoningadvanced
Indirect Injection via Data
© 2025 Turing LLC.
HomePrompting 101Exploit PromptsDefend PromptsUsing chat.winFAQPrivacyTermsArena
DiscordX