Learn LLM Security
Master prompt engineering, security techniques, and AI safety through structured lessons and hands-on exercises.
Prompting 101
5 lessonsMaster the fundamentals of prompt engineering, from system prompts to parameters and security considerations.
Prompts 101
Start simple: what prompts are and how to write clear ones.
Understanding System Prompts
Learn what system prompts are and how they shape AI behavior and responses.
Understanding AI Parameters
A friendlier, story-first guide to temperature, top-p, tokens, and penalties.
Prompt Engineering Techniques
A practical, story-first guide to shaping model behavior without retraining.
Security Considerations in Prompting
Learn jailbreak threat models and apply concrete defenses with examples.
Using chat.win
4 lessonsLearn how to use the chat.win platform to create and solve AI prompt challenges, understand the rules, and get started with the community.
Creating a Prompt
Build a clear, fun, and secure prompt for chat.win step by step.
Beating a Prompt
A simple playbook for solving prompts using safe, proven techniques.
Common Techniques
Pliny-inspired, safe techniques for structuring prompts and progressing cleanly.
FAQ about chat.win
Fast answers on creating, solving, payouts, rules, and safety on chat.win.
Exploit Prompts
6 lessonsLearn how attackers exploit AI systems through various prompt injection techniques and social engineering methods.
What is AI Red Teaming?
AI red‑teaming is the practice of purposely finding exploits in AI models and LLMs to protect them from future attacks.
Basic types of LLM/AI attacks
An introductory guide on types of AI exploits and their fundamentals.
Jailbreaking vs. Prompt Injection
An overview of Jailbreaking vs. Prompt Injection, and why the difference matters in practical use cases.
Common Jailbreaking Techniques
Advanced methods for bypassing AI safety measures and content policies through sophisticated prompt manipulation.
System Prompt Extraction
Learn how attackers extract hidden system prompts (internal instructions) from AI models and how to defend against it.
Indirect Injection via Data
Learn how attackers embed malicious instructions in data sources that AI systems process, creating hidden attack vectors.
Defend Prompts
4 lessonsMaster defensive techniques to protect AI systems from prompt injection, data extraction, and other security threats.
Input Validation & Sanitization
Learn how to implement robust input validation and sanitization to prevent prompt injection attacks.
Prompt Isolation Techniques
Learn how to separate system instructions from user input to prevent prompt injection and maintain security boundaries.
Output Filtering & Monitoring
Learn how to implement comprehensive output filtering and monitoring to detect and prevent harmful AI responses.
Secure System Design
Learn architectural patterns and design principles for building inherently secure AI systems from the ground up.