Summary
OpenAI released Privacy Filter, an open-weight model for detecting and redacting personally identifiable information (PII) in text. The model is positioned as local, high-throughput privacy infrastructure for training, indexing, logging, and review pipelines.
What changed
OpenAI released Privacy Filter under Apache 2.0 on Hugging Face and GitHub for local PII detection and redaction workflows.
Why it matters
This is a concrete privacy-infrastructure move rather than a general-purpose model launch. It gives teams a production-oriented building block for privacy-by-design workflows without forcing unredacted text through a remote service.
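To make the workflow concrete, here is a minimal sketch of the redaction step such a pipeline would perform locally. The source does not document Privacy Filter's API, so the detector itself is not shown; the span format `(start, end, label)` and the labels `NAME` and `EMAIL` are illustrative assumptions about what a PII detector might emit.

```python
# Minimal local-redaction sketch. Assumes a PII detector (e.g. a locally
# loaded model, not shown here) has already produced character spans;
# the (start, end, label) format and labels are hypothetical.

def redact(text: str, spans: list[tuple[int, int, str]]) -> str:
    """Replace each detected span with a [LABEL] placeholder.

    Spans are (start, end, label) character offsets into `text`.
    Applied right-to-left so earlier offsets stay valid after each edit.
    """
    for start, end, label in sorted(spans, reverse=True):
        text = text[:start] + f"[{label}]" + text[end:]
    return text


# Example with hypothetical detector output:
text = "Contact Jane Doe at jane@example.com."
spans = [(8, 16, "NAME"), (20, 36, "EMAIL")]
print(redact(text, spans))  # Contact [NAME] at [EMAIL].
```

Because both detection and redaction run on local hardware, unredacted text never leaves the machine, which is the privacy-by-design property the release emphasizes.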
Evidence excerpt
OpenAI says Privacy Filter can run locally, supports up to 128,000 tokens, and is available today under the Apache 2.0 license on Hugging Face and GitHub.