OpenAI’s Sora Update Flips Copyright Control to Opt-Out as Launch Nears
Studios and talent agencies were notified that the new Sora can generate videos featuring copyrighted characters or material unless rightsholders actively opt out, while policies on public figures' likenesses are handled separately.
What’s New
OpenAI plans to ship a new version of Sora, its text-to-video generator, that will include copyrighted characters or works in generated clips unless rightsholders ask to be excluded. Notices outlining the opt-out process were sent to studios and talent agencies in the past week, with release expected in the coming days.
- Default setting: Copyright material may appear in outputs by default; removal requires a rightsholder request.
- Scope: OpenAI has struck individual agreements with some studios to block specific characters upon request.
- No blanket exclusions: The company does not plan to accept portfolio-wide opt-outs; instead, it has provided reporting links for flagging item-level violations.
Copyright vs. Likeness: OpenAI’s Stated Boundary
OpenAI says it treats copyright and likeness differently. While copyrighted characters would require rightsholders to opt out, the updated Sora would not generate recognizable public figures without permission. Company leaders have described this separation as a core policy line.
Why This Move, and Why Now
- Race for users: Model vendors are rapidly adding creative tools. Google recently connected its Veo 3 video generator to YouTube workflows.
- Past signals: OpenAI’s image tool spurred style-based memes (e.g., “in the style of” well-known studios), revealing demand for familiar aesthetics.
- Industry deals: News and media licensing is evolving; some outlets, including the publisher of the WSJ, have content agreements with OpenAI.
How Creators and Studios Are Likely to React
Creatives have pressed AI firms to seek consent and compensation for training and outputs. Legal scholars suggest the opt-out approach reflects a “permission later” posture amid intense competition. Rights owners worry about burden shifting: monitoring and reporting infringements rather than approving uses up-front.
- Monitoring costs: Agencies received reporting links to flag violations—useful but reactive and resource-intensive.
- Partial blocks: Character-specific guardrails may still leave other IP from the same catalog exposed unless individually listed.
The Legal Backdrop: Fair Use, Training, and Outputs
Recent U.S. cases have treated certain training uses of copyrighted material as fair use when models transform inputs into something meaningfully different. Separate suits, including some against image generators, continue to test the boundary between training, style emulation, and the depiction of specific characters. Meanwhile, OpenAI and Google have urged policymakers to recognize training on copyrighted works as fair use, drawing backlash from Hollywood talent.
- Training vs. output: Courts are parsing distinctions between ingesting works, reproducing characters, and producing similar “styles.”
- Policy posture: The administration has signaled support for learning from published works while opposing direct copying or plagiarism.
- Ongoing suits: Major studios have active litigation against other AI vendors over alleged misuse of protected catalogs.
Corporate Context and Timing
The update arrives as OpenAI seeks assurances from state attorneys general regarding a potential restructuring into a more traditional for-profit model; according to prior reporting, the timeline of that conversion matters to some investors. The company released the first Sora in December, enabling text-to-video generation at high definition.
What to Watch Next
- Whether major studios pursue portfolio-level exclusions despite OpenAI’s item-level approach.
- How Sora enforces character blocks and public-figure restrictions in practice.
- Any new licensing deals that convert opt-outs into affirmative permissions or paid access.
- Court rulings that refine the training–output boundary, especially for character depiction.