Disney Deal Shows the Way for Responsible AI Development

“Arts and entertainment are not luxuries; they are part of the cultural backbone of society. When incentives to create are eroded, everyone loses—including AI companies.”

While artificial intelligence (AI) companies have long maintained that copyright law poses a significant barrier to innovation, it’s getting harder for them to make that argument with a straight face. It was one thing to claim that early text-based chatbots were magical boxes that didn’t really depend on the copyrighted works used to train them—a pretense that doesn’t hold up under scrutiny. But it’s quite another to make such claims when their systems are spitting out nearly perfect audiovisual renditions of Disney’s copyrighted characters, including Buzz Lightyear from Toy Story, Darth Vader from Star Wars, and Elsa from Frozen. That’s what Midjourney was doing when Disney sued it for infringement, and it’s also what OpenAI was doing when it struck a licensing deal with Disney.

Ideally, AI developers would seek permission up front to use copyrighted works, but sometimes the “move fast and break things” ethos of Silicon Valley proves too alluring. AI companies kicked off the recent boom with a “wild west” approach to copyright law, where they copied everything they could find and pretended it was all fair use. However, as we saw with the recent Anthropic settlement, skipping the permission part means betting the farm when copyright owners inevitably strike back in the federal courts. Indeed, there are now nearly 100 lawsuits in the United States alleging that AI companies unlawfully copied protected works without authorization.

Now that we are moving into the “find out” part of things, it’s important to keep in mind that litigation was never the point. Copyright owners are not trying to thwart AI development or put AI companies out of business—that’s a divisive view of what can and should be a cooperative effort. The goal of these lawsuits is to bring recalcitrant copyists to the bargaining table, where they should have been all along. Midjourney and OpenAI seized the creative value of Disney’s works to power their own platforms, generating revenues for themselves while bypassing the licensing structures that benefit everyone. And when called out for their transgressions, only one AI company realized that lawful alliances are stronger than piratical empires.

Disney’s wildly popular live-action series The Mandalorian introduced a wonderful expression into our cultural lexicon: “This is the way.” Repeated as a mantra throughout the series, the phrase represents a verbal commitment to walking a path defined by traditions, principles, and honor. Mandalorians understand that they are part of something greater than themselves, and they bring order to a chaotic galaxy by holding themselves—and others—to the same standard of conduct. Disney’s approach to the wild-west misappropriation of its treasured works by AI companies reflects this same code—litigation to enforce boundaries, and licensing to enable progress. Filchers who take value for themselves get legal nastygrams; friends who create value for everyone get a collaborative partner. By backing its licensing strategy with litigation, Disney shows the way for sustainable AI development.

Anthropic Strays and Prays for Forgiveness

The dangers of ignoring copyright law are aptly illustrated by the authors’ lawsuit against Anthropic. With statutory damages for willful infringement reaching up to $150,000 per work, the Copyright Act sends a clear message that you should think twice before downloading millions of copyrighted works from well-known pirate sites. But like many AI developers, Anthropic couldn’t resist. The pattern is predictable: AI companies say that they need to copy millions of copyrighted works to train their large language models (LLMs), and they claim that licensing is impossible because it’s too many works. And when copyright owners finally haul them into court, they complain that it’s somehow unfair to hold them to the potentially business-ending damages awards they willingly risked. Of course, the premise that the copying is necessary merely begs the question, and it makes no sense to argue that the incredible scope of the wrongdoing somehow excuses it.

In Bartz v. Anthropic, a group of book authors filed a class-action lawsuit alleging that Anthropic had engaged in mass copyright infringement by downloading millions of pirated books and using them to train its LLM. Anthropic, founded by former OpenAI employees in 2021, proclaims on its website a devotion to “responsible” AI development “for the long-term benefit of humanity.” But the facts of this case paint a vastly different picture. As Judge William Alsup of the Northern District of California noted in his summary judgment order on fair use, “Anthropic downloaded over seven million pirated copies of books” for which it “paid nothing” in service of its own “pocketbook and convenience.” The district court held that whether Anthropic later used the copies fairly to train its LLM was irrelevant since downloading from pirate sites is “inherently, irredeemably infringing,” and Anthropic had “infringed already, full stop.”

Judge Alsup correctly determined that, consistent with the Supreme Court’s reasoning in Warhol v. Goldsmith, each use of a copyrighted work has to be evaluated separately. Thus, the fair use question in Anthropic turned on which use was at issue: the initial downloading that acquired the copy, or the subsequent use of that copy to train an LLM. The court’s holding that Anthropic was liable for merely downloading pirated copies was very bad news for the AI company. But Judge Alsup’s order granting class certification just one month later sent Anthropic—and the entire AI industry—into a tailspin. Instead of just a few plaintiffs with a handful of claims, Anthropic was now dealing with millions of works and hundreds of billions of dollars in potential damages. Judge Alsup called it “the classic case for certification” since proving Anthropic’s wrongdoing “on a classwide basis” would be easy. Rather than holding “millions of separate lawsuits with millions of juries”—as Anthropic would have liked—the court would hold “a single proceeding before a single jury, Napster style.”

Anthropic petitioned the Ninth Circuit for permission to appeal, complaining that Judge Alsup’s order certifying the class “may sound the death knell of the litigation” because the crippling size of the class and the potential damages award “put considerable pressure on it to settle.” Trade groups filed amicus briefs in support of Anthropic’s petition, sounding the alarm about the potential for class actions to “financially ruin” the whole industry. Despite all of this noise, Anthropic quickly asked the Ninth Circuit to put its appeal on hold because it had worked out a deal with the plaintiffs. Judge Alsup granted preliminary approval of the proposed class settlement several weeks later, which included a $1.5 billion payment by Anthropic to the aggrieved authors of the books-in-suit. Importantly, while the agreement put some of the claims against it to rest, Anthropic is not out of the woods yet. Lots of other copyright owners, including music publishers, have smelled blood in the water. It’s hard to feel bad for Anthropic—live by the sword, die by the sword, and all that. But it didn’t have to be this way.

Disney Shows the Way with License-or-Litigate Strategy

Anthropic’s class-action settlement was certainly a wake-up call for the AI industry, and rightfully so. There is no license to download millions of copyrighted works just because you decided one day that you “need” them. And the reckoning hasn’t torn the whole industry down; the AI boom is still chugging along apace. Indeed, Anthropic recently secured its biggest round of funding yet, more than doubling the company’s valuation to $380 billion. The class-action lawsuit didn’t put Anthropic out of business—but it did bring the AI company to the negotiating table, where it finally did the right thing by paying the authors whose creative labors make its models hum. The litigation was the backstop to impose accountability; the goal was always licensing. If you want to use someone’s work, you’re supposed to ask them for permission. Maybe they agree; maybe they don’t. Either way, the copyright system is premised on the notion that the best way to advance our common interests is to take care of those who contribute to our success.

The Anthropic case shows that AI companies can make deals with copyright owners that are wins all around: AI companies get to utilize protected works without risking huge awards of damages, copyright owners get paid for supplying the valuable works that make it all possible, and the public gets to enjoy the amazing possibilities of generative AI technology. When an AI system generates something that looks unmistakably like Disney’s famous characters, most people instinctively recognize that it was Disney’s creative works that made it possible. But it’s also the very thing that makes the AI system more attractive—people want to see those characters because they’re fun. Proper licensing ensures that creators are compensated fairly and allows AI companies to innovate confidently without unnecessary legal and ethical risks. It also gives copyright owners a say in how those works are used—the freedom of choice secured to all property owners.

It is worth noting that Disney has never been anti-technology. In fact, Disney’s new CEO openly embraces AI as a tool for empowering human creativity—that’s why they picked him. Pixar helped pioneer computer animation decades ago, transforming filmmaking in the process, and Disney continues to develop cutting-edge tools to streamline visual effects and production workflows. These innovations are possible because Disney earns revenue from its copyrighted works. That revenue funds experimentation, new tools, and fresh forms of storytelling. Copyright enforcement does not block innovation; it enables it by making creative investments sustainable. Building a business model on other people’s creative works without compensation isn’t technological progress. It shifts costs onto creators while allowing AI companies to capture the value. Over time, that approach weakens the very creative industries that supply the content upon which AI systems rely. Arts and entertainment are not luxuries; they are part of the cultural backbone of society. When incentives to create are eroded, everyone loses—including AI companies.

Last summer, Disney sued Midjourney over its image-generating platform that conjures up Disney’s copyrighted characters on demand. The complaint accuses Midjourney of being “a bottomless pit of plagiarism” because it could have stopped the infringement at any time—by declining to train its AI platform on Disney’s works, by rejecting user prompts directed to those works, or by utilizing technological measures to prevent infringing outputs. The fact that Midjourney is using Disney’s works is not up for debate. If you prompt the system for an image of Darth Vader, that’s exactly what you get. Midjourney doesn’t stop this from happening because it knows that Disney’s copyrighted characters make the platform better. It’s what the customers want. But the problem for Midjourney is that each and every such image it produces constitutes a separate and distinct act of infringement.

At the other end of the spectrum, Disney’s licensing agreement with OpenAI illustrates what responsible AI development should look like. Instead of being sued for scraping protected content without permission, OpenAI negotiated a license to use Disney’s characters within its Sora video-generation platform. As part of the deal, Disney would use OpenAI’s technologies throughout its business, and it would even take an equity stake in the AI company. Agreements like these allow AI innovation to move forward while respecting the creative labors behind the content. OpenAI announced last month that it was shutting down Sora and pivoting away from the AI video business, thus negating its deal with Disney. But the agreement still matters because it disproves the idea that licensing is impractical or innovation-killing. It shows that AI companies can gain access to high-quality creative material legally, while creators maintain control over how their works are used. The contrast with unlicensed AI systems could not be clearer. One path leads to partnerships and legitimacy. The other leads to destabilizing piracy and lawsuits. You don’t have to be a Mandalorian to understand which one is the way.

Image Source: Deposit Photos
Image ID: 442451752
Author: crystaleyemedia 


Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com.
