Senate IP Subcommittee Mulls Federal Right of Publicity at AI and Copyright Hearing

“[R]ather than thinking of the [large language model] copying the training data like a scribe in a monastery, it makes more sense to think of it as learning from the training data like a student.” – Matthew Sag

On July 12, the U.S. Senate Judiciary Committee’s Subcommittee on Intellectual Property held its second hearing in two months on the intersection of artificial intelligence (AI) developments and intellectual property rights. This most recent hearing focused on potential violations of copyright law by generative AI platforms, the impact of those platforms on human creators, and ways in which AI companies can implement technological solutions to protect copyright owners and consumers alike.

Artistic Community Wants Generative AI Platforms to Adopt Opt-In Training Approach

The hearing featured a balanced panel, although the sympathies of most Senators in attendance appeared to favor the witnesses representing human creators. One such witness was Karla Ortiz, a San Francisco-based concept artist and illustrator who has worked on several Marvel films, including Doctor Strange. Speaking about AI’s impacts on thousands of songwriters was Jeffrey Harleston, General Counsel of Universal Music Group. Matthew Sag, Professor of Law at Emory University School of Law, provided perspective on the fair use issues at play in generative AI’s use of copyrighted content. Representing companies developing generative AI platforms built on large data sets were Dana Rao, Executive VP and General Counsel of Adobe, and Ben Brooks, Head of Public Policy at Stability AI.

Senate IP Subcommittee Chairman Chris Coons (D-DE) illustrated some of the relevant issues by playing a short audio recording of “AI, AI,” a song with lyrics developed by the generative AI platform ChatGPT based on the Frank Sinatra classic “New York, New York” and featuring vocals provided by a deepfake version of Sinatra. While this recording was made with permission from the Sinatra estate, a similar work created without authorization would raise several infringement and consumer confusion issues. The Subcommittee’s Ranking Member, Thom Tillis (R-NC), indicated that the subcommittee preferred an inclusive approach to addressing copyright issues in AI platforms, but urged interested parties to engage with legislative work groups to reach reasonable compromises.

During testimony, although Brooks acknowledged that Stability AI’s platform ingested publicly available content to train its generative AI models, he noted that his company was developing an opt-out process to remove copyrighted works from the training data. From the artists’ perspective, however, AI platforms should instead adopt opt-in processes, since opt-out measures place the burden on the artist and depend on machine unlearning technologies that do not exist today. While generative AI platforms can be a boon to creative professionals, Ortiz told the subcommittee that she and many of her colleagues in the entertainment industry have refused to use these platforms, which they see as exploiting artists and their original works.

Should Generative AI Be Considered a Nonexpressive Use?

Those representing generative AI developers discussed metadata solutions for preventing the incorporation of digital files into a platform’s training data. Rao noted that Adobe’s Photoshop uses Content Credentials metadata tagging that identifies a created file as one to exclude from generative AI training. Brooks added that further development of metadata tags would improve opt-out processes, allowing model versions trained without tagged works to be released more quickly. However, Ortiz noted that many artists do not know how to write even a simple robots.txt file, much less create their own metadata tags, reinforcing the need for an opt-in approach to data set training.
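For context, a robots.txt file is a plain-text file placed at a website’s root directory that asks web crawlers to avoid specified paths. A minimal sketch follows; the crawler name “ExampleAIBot” is hypothetical, as actual user-agent strings vary from one AI company to another:

User-agent: ExampleAIBot
Disallow: /

Even a file this simple assumes the artist operates her own website and can edit files at its root, an assumption that rarely holds for creators whose work circulates on third-party platforms, which underscores Ortiz’s point.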

The impact of copyright law’s fair use doctrine on generative AI’s use of copyrighted content in its training models was a main focus of the hearing. Whereas Brooks indicated that Stability AI believed its use of publicly available content was fair use, Rao said that Adobe’s Firefly generative AI platform only used images licensed to Adobe or in the public domain. Rao added that, because of its limited training set, Adobe had to more thoroughly develop the computer science behind its generative AI algorithms to return high-quality outputs, but that those outputs were commercially safer.

In his testimony, Sag said that under the fair use doctrine, generative AI outputs that do not resemble their inputs are generally safe from infringement liability. The generative platforms themselves would be considered a nonexpressive use in the same way that courts have found fair use of copyrighted content in the operation of search engines and plagiarism detection programs. Sag noted that generative AI does not copy original expression used in the training models, “so rather than thinking of the [large language model] copying the training data like a scribe in a monastery, it makes more sense to think of it as learning from the training data like a student.”

Federal Anti-Impersonation Statute Could Redress Harms from Deepfakes

The creation of a federal right of publicity or an anti-impersonation right was discussed as a solution to concerns that generative AI could mimic artistic styles. UMG’s Harleston told the panel that a great deal of deepfake content impersonating many of the record company’s music artists could be addressed by a federal right that provides more consistency and coverage than the current framework of state right of publicity laws. Senator Amy Klobuchar (D-MN) also discussed election integrity concerns posed by this February’s deepfake video of Senator Elizabeth Warren (D-MA) that was hosted on Twitter.

Though inconsistent, state publicity laws suggest several potential remedies for a cause of action arising under a similar federal right. The availability of injunctive relief was imperative to proponents of the idea. Hearkening back to the example involving Senator Warren, Harleston noted that deepfakes improperly ascribing beliefs or viewpoints to musicians risk confusing the public in a way that irreparably harms a musician’s career. Sag advocated for the availability of monetary damages, although he cautioned Congress against statutory damages that could encourage litigation from opportunistic parties.


