Summary
OpenAI opened applications for a GPT-5.5 Bio Bug Bounty, inviting vetted researchers to find a universal jailbreak capable of defeating a five-question biology safety challenge in Codex Desktop. The program offers a $25,000 reward for the first full break and positions external red-teaming as part of how OpenAI validates higher-risk model behavior.
What changed
OpenAI launched a biosecurity bug bounty for GPT-5.5 in Codex Desktop, with paid rewards, a defined testing window, and a vetted-access model for outside researchers.
Why it matters
This is a trust-and-safety signal with product implications: OpenAI is turning biosecurity testing into a structured external validation program tied to a specific deployment surface, rather than keeping all red-teaming internal.
Evidence excerpt
OpenAI says GPT-5.5 in Codex Desktop is the only model in scope and offers $25,000 for the first universal jailbreak that clears all five bio safety questions.