In-The-Wild Jailbreak Prompts on LLMs is an open-source research repository that provides datasets and analytical tools for studying jailbreak prompts used to bypass safety restrictions in large language models. The project is part of a research effort to understand how users attempt to circumvent alignment and safety mechanisms built into modern AI systems. The repository includes a large collection of prompts gathered from real-world platforms such as Reddit, Discord, prompt-sharing communities, and other public sources. Researchers analyze these prompts to identify patterns, attack strategies, and techniques commonly used to trick language models into producing restricted or harmful outputs. The dataset includes thousands of prompts collected across multiple platforms and represents one of the largest collections of jailbreak attempts available for research.

Features

  • Large dataset of real-world jailbreak prompts collected from multiple platforms
  • Framework for analyzing adversarial prompt strategies against LLMs
  • Measurement study of jailbreak attacks in the wild
  • Tools for evaluating model responses to adversarial prompts
  • Dataset containing thousands of prompts and jailbreak attempts
  • Research resource for improving LLM safety and alignment methods

Project Samples

Project Activity

See All Activity >

License

MIT License

Follow In-The-Wild Jailbreak Prompts on LLMs

In-The-Wild Jailbreak Prompts on LLMs Web Site

Other Useful Business Software
Gemini 3 and 200+ AI Models on One Platform Icon
Gemini 3 and 200+ AI Models on One Platform

Access Google's best plus Claude, Llama, and Gemma. Fine-tune and deploy from one console.

Build generative AI apps with Vertex AI. Switch between models without switching platforms.
Start Free
Rate This Project
Login To Rate This Project

User Reviews

Be the first to post a review of In-The-Wild Jailbreak Prompts on LLMs!

Additional Project Details

Programming Language

Python

Related Categories

Python Large Language Models (LLM)

Registered

2026-03-05