OpenAI’s Media Manager: A Promised Tool for Creators Remains in Limbo

  • 12/01/2025 19:52
  • Mark

Back in May, OpenAI announced plans to develop a tool called Media Manager, aimed at giving creators control over how their works are included in—or excluded from—its AI training data. However, seven months later, the tool remains unreleased, raising questions about its priority within the company and its potential impact on intellectual property (IP) disputes.

The Vision for Media Manager

OpenAI envisioned Media Manager as a groundbreaking solution to address growing criticism from creators and stave off legal challenges related to the use of copyrighted content in AI training. The tool was designed to identify and manage copyrighted text, images, audio, and video across multiple sources, allowing creators to specify their preferences for inclusion or exclusion.

OpenAI described it as a way to “set a standard across the AI industry” by using advanced machine learning to reflect creators’ rights. The initiative aimed to demonstrate OpenAI’s commitment to ethical AI development and reduce the risk of IP-related lawsuits.

A Delayed Promise

Despite the ambitious announcement, Media Manager appears to have been deprioritized internally. Sources close to OpenAI told TechCrunch that the tool was rarely discussed, let alone actively developed. One former employee remarked, “I don’t think it was a priority. To be honest, I don’t remember anyone working on it.”

Adding to the uncertainty, Fred von Lohmann, a member of OpenAI’s legal team involved with Media Manager, transitioned to a part-time consultant role in October. OpenAI has not provided any significant updates on the tool’s progress, missing its self-imposed deadline of launching it “by 2025.”

The Growing IP Challenge

The delay in delivering Media Manager comes as OpenAI faces mounting legal challenges. The company is currently battling class-action lawsuits from artists, writers, YouTubers, and news organizations who allege that their works were used in AI training without permission.

OpenAI’s AI models, including ChatGPT and Sora, are trained on vast datasets containing text, images, and videos sourced from the internet. While this approach enables impressive generative capabilities, it has also led to cases where models reproduce copyrighted material, sparking backlash from creators.

For example, Sora has generated video clips featuring TikTok’s logo and popular video game characters, while ChatGPT has been caught quoting articles verbatim, prompting legal scrutiny.

Existing Opt-Out Mechanisms and Their Limitations

To address some of these concerns, OpenAI has implemented ad hoc opt-out methods for creators. These include a submission form for flagging works for removal from future training datasets and a mechanism that lets webmasters block OpenAI’s web-crawling bots.
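For context on the second mechanism: OpenAI publicly documents a user-agent token, GPTBot, for its training crawler, and webmasters can refuse it through a site's robots.txt file using standard Robots Exclusion Protocol directives. A minimal sketch of a site-wide block might look like this:

```
# Block OpenAI's training crawler (GPTBot) from the entire site
User-agent: GPTBot
Disallow: /
```

Replacing `/` with a specific path prefix would limit the block to part of a site. Note that this only covers future crawling by compliant bots; it does not remove material already collected.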

However, these measures have been criticized as cumbersome and incomplete. For instance, the opt-out form for images requires creators to submit copies of each image along with descriptions—a time-intensive process. Moreover, there are no dedicated mechanisms for excluding written works, videos, or audio recordings.

Media Manager was intended to overhaul this fragmented approach, offering a more comprehensive and user-friendly solution. Yet, its absence leaves creators relying on inadequate tools to protect their intellectual property.

Expert Opinions on Media Manager’s Feasibility

Legal and industry experts have expressed skepticism about Media Manager’s potential effectiveness. Adrian Cyhan, an IP attorney, pointed out that even large platforms like YouTube struggle with content identification at scale. “Ensuring compliance with creator protections across jurisdictions presents significant challenges,” Cyhan said.

Others, like Ed Newton-Rex, founder of the nonprofit Fairly Trained, argue that Media Manager could unfairly shift the burden of protecting IP onto creators. “Most creators will never even hear about it, let alone use it,” Newton-Rex noted.

Additionally, opt-out systems may not address scenarios where content is transformed or hosted on third-party platforms, making it difficult for creators to fully control how their works are used.

Legal and Ethical Implications

Even if Media Manager eventually launches, it may not shield OpenAI from legal liability. Copyright law does not require creators to preemptively prevent infringement, meaning OpenAI could still face damages if found liable for unauthorized use.

Evan Everist, a copyright law expert, suggested that Media Manager might serve more as a public relations tool than a legal safeguard. “This feature may be more about positioning OpenAI as an ethical user of content,” Everist said.

What’s Next for OpenAI?

While OpenAI continues to claim fair use protections for its AI models, the absence of Media Manager underscores the complexities of balancing innovation with ethical and legal responsibilities. Courts may ultimately decide whether OpenAI’s use of copyrighted materials qualifies as transformative, following precedents like Google Books.

In the meantime, OpenAI’s delay in delivering Media Manager raises doubts about its commitment to addressing creators’ concerns. As legal battles intensify and public scrutiny grows, the company’s approach to managing IP issues will likely remain a focal point in the ongoing debate over the future of AI and intellectual property.
