Summary
OpenAI has begun rolling out Trusted Contact, an optional ChatGPT setting that lets adults nominate a person who may be notified if trained reviewers determine that a serious self-harm risk is present. The feature extends ChatGPT’s earlier distress-alert system for teens into an adult safety workflow built on human review and limited notifications.
What changed
OpenAI added Trusted Contact to ChatGPT settings so eligible adults can designate one person to receive a safety alert if OpenAI’s automated systems and trained reviewers identify a serious self-harm concern.
Why it matters
This is a meaningful product-safety expansion: from warning messages alone toward a consent-based escalation path into the user’s real-world support network. It shows OpenAI moving beyond refusal behavior and crisis-resource surfacing into monitored intervention design, which will shape expectations for AI companion safety systems.
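OpenAI has not published implementation details, so the following is only a minimal sketch of the workflow shape the announcement describes: an automated flag, a confirming human review, and a notification gated on the user having nominated a contact. Every name here (UserSettings, automated_screen, human_review, maybe_notify) is hypothetical, and the keyword check stands in for whatever classifier a real system would use.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class ReviewOutcome(Enum):
    NO_CONCERN = auto()
    SERIOUS_CONCERN = auto()


@dataclass
class UserSettings:
    # Opt-in is the gate: if no trusted contact was nominated, no one is notified.
    trusted_contact: Optional[str] = None  # e.g. an email address


def automated_screen(message: str) -> bool:
    """Hypothetical first-pass filter that flags messages for human review.

    A real system would use a trained model; this keyword check is a stand-in.
    """
    return "self-harm" in message.lower()


def human_review(message: str) -> ReviewOutcome:
    """Stand-in for the trained-reviewer step, which makes the final call."""
    # Placeholder decision for the sketch; a reviewer could also return NO_CONCERN.
    return ReviewOutcome.SERIOUS_CONCERN


def maybe_notify(settings: UserSettings, message: str) -> Optional[str]:
    """Escalate only if automation flags, a reviewer confirms, and the user opted in."""
    if not automated_screen(message):
        return None  # nothing flagged
    if human_review(message) is not ReviewOutcome.SERIOUS_CONCERN:
        return None  # reviewer overrules the automated flag
    if settings.trusted_contact is None:
        return None  # user never nominated a contact, so no notification
    return f"safety alert sent to {settings.trusted_contact}"


if __name__ == "__main__":
    opted_in = UserSettings(trusted_contact="alex@example.com")
    opted_out = UserSettings()
    flagged = "a message flagged for self-harm review"
    print(maybe_notify(opted_in, flagged))   # alert goes to the nominated contact
    print(maybe_notify(opted_out, flagged))  # None: no consent, no escalation
```

The ordering in the sketch reflects the consent-based design the article highlights: human review sits between the automated flag and any notification, and the absence of a nominated contact short-circuits the escalation entirely.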
Evidence excerpt
OpenAI says Trusted Contact lets adults nominate someone who may be notified if automated systems and trained reviewers detect discussion of self-harm that indicates a serious safety concern.