OpenAI’s Sora Update Shifts to Copyright Opt-Out as Launch Nears

Studios and talent agencies were notified that the new Sora can generate videos featuring copyrighted characters or material unless rightsholders actively opt out, while likeness policies for public figures are treated separately.

What’s New

OpenAI plans to ship a new version of Sora, its text-to-video generator, that will include copyrighted characters or works in generated clips unless rightsholders ask to be excluded. Notices outlining the opt-out process were sent to studios and talent agencies in the past week, with release expected in the coming days.

  • Default setting: Copyrighted material may appear in outputs by default; removal requires a rightsholder request.
  • Scope: OpenAI has struck individual agreements with some studios to block specific characters upon request.
  • No blanket exclusions: The company does not plan to accept portfolio-wide opt-outs; instead, it has provided reporting links for flagging item-level violations.

Copyright vs. Likeness: OpenAI’s Stated Boundary

OpenAI says it treats copyright and likeness differently. While copyrighted characters would require rightsholders to opt out, the updated Sora would not generate recognizable public figures without permission. Company leaders have described this separation as a core policy line.

Why This Move, and Why Now

  • Race for users: Model vendors are rapidly adding creative tools. Google recently connected its Veo 3 video generator to YouTube workflows.
  • Past signals: OpenAI’s image tool spurred style-based memes (e.g., “in the style of” well-known studios), revealing demand for familiar aesthetics.
  • Industry deals: News and media licensing is evolving; some outlets, including the publisher of the WSJ, have content agreements with OpenAI.

How Creators and Studios Are Likely to React

Creatives have pressed AI firms to seek consent and compensation for training and outputs. Legal scholars suggest the opt-out approach reflects a “permission later” posture amid intense competition. Rights owners worry about burden shifting: monitoring and reporting infringements rather than approving uses up-front.

  • Monitoring costs: Agencies received reporting links to flag violations—useful but reactive and resource-intensive.
  • Partial blocks: Character-specific guardrails may still leave other IP from the same catalog exposed unless individually listed.

The Legal Backdrop: Fair Use, Training, and Outputs

Recent U.S. cases have treated certain training uses of copyrighted material as fair use when models transform inputs into something meaningfully different. Separate suits, including against image generators, continue to test the boundary between training, style emulation, and specific character depiction. Meanwhile, OpenAI and Google have urged policymakers to recognize training on copyrighted works as fair use, drawing backlash from Hollywood talent.

  • Training vs. output: Courts are parsing distinctions between ingesting works, reproducing characters, and producing similar “styles.”
  • Policy posture: The administration has signaled support for learning from published works while opposing direct copying or plagiarism.
  • Ongoing suits: Major studios have active litigation against other AI vendors over alleged misuse of protected catalogs.

Corporate Context and Timing

The update arrives as OpenAI seeks assurances from state attorneys general regarding a potential structural conversion toward a more traditional for-profit model, a timeline that, according to prior reporting, matters to some investors. The company released the first Sora in December, enabling high-definition text-to-video generation.

What to Watch Next

  1. Whether major studios pursue portfolio-level exclusions despite OpenAI’s item-level approach.
  2. How Sora enforces character blocks and public-figure restrictions in practice.
  3. Any new licensing deals that convert opt-outs into affirmative permissions or paid access.
  4. Court rulings that refine the training–output boundary, especially for character depiction.

Methods note: This article paraphrases and synthesizes the provided text. Statements attributed to companies and sources are presented as described; no independent verification is asserted here.
Reviewed by Luke, AI Finance Editor