Accelerated Innovation: In Less Than a Year, We’ve Seen a Decade’s Worth of AI and IP Developments

“The stakes are high for dealing with these issues in the next couple of years, as the rules that apply to AI and copyright law should have a major impact on the future of AI use and development, as well as the creative economy.” – Dr. Ryan Abbott

The past year has provided decades’ worth of developments across law and policy in the areas of artificial intelligence (AI) and machine learning (ML) technologies. If 2022 was the breakthrough year for accessible AI, then 2023 can so far be deemed the first year, of likely many more to come, in the era of an AI inquisition.

“After years of somewhat academic discourse,” reflects Dr. Ryan Abbott, “AI and copyright law have finally burst into the public consciousness—from contributing to the writers’ strike to a wave of high-profile cases alleging copyright infringement from machine learning to global public hearings on the protectability of AI-generated works.”

Both the U.S. Copyright Office (USCO) and the U.S. Patent and Trademark Office (USPTO) are in active litigation over the eligibility of generative AI outputs for statutory protection. Additionally, both offices have held numerous webinars and listening sessions and conducted other methods of collecting feedback from the public as they work through policy considerations surrounding AI.

For the USCO, the focus is largely on issues surrounding a narrower application of ML known as generative AI, which the USCO defines as “technologies capable of producing expressive material.” For the USPTO, the focus is largely on the question of inventorship by non-humans.

Keeping the Copyright Office Busy

For the U.S. Copyright Office, 2023 has thus far been a year of clarifying and applying existing frameworks to generative AI, after kicking things off with the partial cancellation of the registration for a graphic novel by artist Kris Kashtanova. The refusal to register the generative portions of the work was finalized in February, with the USCO explaining that the registration failed to disclaim or otherwise disclose that “the images in the Work that were generated by the Midjourney technology are not the product of human authorship.”

“In one sense this is a success in that the registration is still valid and active,” Van Lindberg, Kashtanova’s counsel for the matter, explained to IPWatchdog at the time. “However, it is the most limited a copyright registration can be, and it doesn’t resolve the core questions about copyright in AI-assisted works.”

In March, the USCO officially launched an initiative focused on examining the policy issues raised by generative AI and copyright law (“AI Initiative”). The launch of the AI Initiative included the publication of a statement of policy on the registration of works that contain generative materials.

The USCO also had the opportunity to clarify the application of its updated policy guidance on statutory licenses, specifically royalties generated through the blanket licenses under section 115 of the Copyright Act, in an April letter addressed to Kris Ahrend, CEO of The Mechanical Licensing Collective (“The MLC”).

Suzanne V. Wilson, General Counsel and Associate Register of Copyrights, explains: “Where circumstances reasonably indicate to [The MLC] that a musical work registered in its database lacks the human authorship necessary to qualify for copyright protection (for example, where a songwriter claims that they created an extraordinary number of musical works in an unusually short time period or makes affirmative statements that a musical work was created by AI), it is appropriate for [The MLC] to conduct a timely investigation into the work’s copyrightability and hold any royalties that would otherwise be allocated to that work pending its investigation.”

As part of its initiative, the USCO also hosted four public listening sessions beginning with Literary Works and Software in April, followed by three sessions in May covering Visual Arts, Audiovisual Works, and Music and Sound Recordings. Participants were selected by the USCO and invited to speak about the impact of generative AI and copyright law on their respective industries.

In May, the Senate Judiciary Committee’s Subcommittee on Intellectual Property held its first hearing on safeguards for and oversight of AI developments. The hearing, titled “Oversight of A.I.: Rules for Artificial Intelligence,” featured testimony by Sam Altman, the CEO of OpenAI. The committee also hosted a closed-door, members-only event with House Republicans and Democrats the day before.

May also brought the USCO a letter from members of the House Judiciary Subcommittee on Courts, Intellectual Property, and the Internet requesting a formal briefing on the AI Initiative, as well as on past and planned USCO actions concerning the use of copyrighted works by AI platforms. The letter also requested responses to a series of five questions the Subcommittee felt the USCO’s initiative was not addressing, including what authorizations or resources the USCO needs from Congress. The USCO’s response reminded the Subcommittee that the USCO is not an enforcement agency and, therefore, did “not plan to request any change in the [USCO’s] responsibilities to include an additional enforcement function.”

The letter was followed by a hearing on May 17, the day after the Senate hearing, during which three artists, a professor, and an attorney expressed viewpoints and concerns about the impact of these technologies on their industries and careers, as covered in-depth by IPWatchdog. This hearing, titled “Artificial Intelligence and Intellectual Property: Part I – Interoperability of AI and Copyright Law,” served as the first of a two-part session stretching across the House and the Senate. The hearing is available on demand.

The USCO’s AI Initiative also featured a virtual event in June at which the USCO offered clarification for applicants registering works that contain generative materials. As covered in IPWatchdog, the three main takeaways from the event were the use of the de minimis test for human authorship as set forth in the U.S. Supreme Court’s decision in Feist v. Rural Telephone, the importance of disclosure when incorporating generative materials, and the requirement that applicants properly disclose the use of generative AI tools in registration applications filed after March 16 (when the policy statement was published).

In July, the Senate Judiciary Committee’s Subcommittee on Intellectual Property held the second of a two-part hearing, titled “Artificial Intelligence and Intellectual Property – Part II: Copyright”, which focused on the potential for generative AI platforms to violate copyright law, and how such platforms can impact human creators or implement technological solutions that protect both creators and consumers. The hearing is available on demand.

On July 26, the USCO hosted a webinar titled “International Copyright Issues and Artificial Intelligence,” which featured academic experts from around the world discussing a global perspective on authorship, training, and exceptions and limitations to copyright laws and related legislative developments. The webinar recording will be available in the coming weeks.

(Re)Defining Innovation in an AI World

In February, the USPTO held the third (and as of now most recent) meeting on AI-driven innovation as part of its AI and Emerging Technologies (ET) Partnership Series. The meeting, held in collaboration with the Dallas Bar Association (DBA) Intellectual Property (IP) section and State Bar of Texas IP section, featured speakers representing stakeholders from across academia, industry, and law firms. A recording of the three-hour meeting is available online.

Also in February, the USPTO issued a Request for Comments regarding AI and inventorship. The comment period closed in May and the USPTO received a total of 90 comments, 69 of which are publicly accessible for review. In connection with the Request for Comments, the USPTO announced two public listening sessions, one for the East Coast on April 25 and another for the West Coast on May 8. Recordings of both sessions are available on demand.

“We’re getting closer and closer to where… we want to use these tools, and society wants us to use these tools, because it makes the process faster, it makes it more effective and makes better drugs…,” Corey Salsberg of Novartis shared during the East Coast listening session. “[S]o we’re putting the U.S. at a disadvantage if we’re going to throw things out on conception when other countries don’t even ask those questions.”

“We found that 80,000 of our utility patent applications in 2020 involved artificial intelligence – 150% higher than in 2002,” Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the USPTO, shared in a blog post from April. “AI now appears in 18% of all utility patent applications we receive, and in more than 50% of all the technologies that we examine at the USPTO.”

Attorney John Cordani recently argued that “[w]e should draw a timely lesson from HIP v. Hormel…. Let’s allow the courts to consider individual cases about who owns patents and copyrights to which AI contributed.”

AI-Powered Litigation

In April, the U.S. Supreme Court denied certiorari in Dr. Stephen Thaler’s case against the USPTO, which he lost at the U.S. Court of Appeals for the Federal Circuit in 2022. The question at issue: “Does the Patent Act categorically restrict the statutory term ‘inventor’ to human beings alone?” The denial leaves in place the USPTO’s reading of the statute, under which only humans can be inventors. However, questions still remain as to how DABUS actually works as an inventor.

Thaler is also involved in active litigation against the USCO, which began in June 2022. In January, Thaler filed a motion for summary judgment arguing that the output of his AI machine should be copyrightable. Thaler argues in the motion that “no genuine issue as to any material fact exists and Plaintiff is entitled to judgment as a matter of law.” He also argues the only question at issue is: “Can someone register a copyright in a creative work made by an artificial intelligence?”

In February, the USCO filed its response to Thaler’s motion for summary judgment, along with a cross-motion for summary judgment of its own. In March, Thaler filed a response to the USCO’s motion, arguing that “this is perhaps the paradigmatic case of technological evolution requiring purposive statutory interpretation.” Most recently, in April, the USCO filed a reply in support of its cross-motion for summary judgment, arguing that “[the USCO’s decision] was a well-reasoned decision based on the text of the Constitution and the Act, as well as Supreme Court and appellate decisions that uniformly support a human authorship requirement.”

Nearly a dozen copyright or related rights lawsuits have been filed against AI platform service providers since the start of 2023.

“The significant uptick in litigation is helping us get a better sense of what the legal theories are there – most obviously with regards to training data (its uses, the relationship of the model to it, and the relationship of outputs to it),” explains Heather Whitney of Morrison Foerster and a participant in the USCO’s listening sessions. “So far, the focus has also been much more on model developers and less on users of those models.”

Below is a listing:

  • Andersen v. Stability AI Ltd., 3:23-cv-00201, (N.D. Cal.), filed on January 13, 2023.
  • Getty Images (US), Inc. v. Stability AI Ltd., IL-2023-000007 (EWHC), filed on January 16, 2023.
  • Getty Images (US), Inc. v. Stability AI, Inc., 1:23-cv-00135, (D. Del.), filed on February 3, 2023.
  • Walters v. OpenAI LLC, Ga. Super. Ct., 23-A-04860-2, filed on June 5, 2023 [Note: This is a defamation-focused lawsuit].
  • M. v. OpenAI LP, 3:23-cv-03199, (N.D. Cal.), filed on June 28, 2023 [Note: This is a privacy-related lawsuit focusing on key issues of ownership and rights over personal data].
  • Tremblay v. OpenAI, Inc., 3:23-cv-03223, (N.D. Cal.), filed on June 28, 2023.
  • Kadrey v. Meta Platforms, Inc., 3:23-cv-03417, (N.D. Cal.), filed on July 7, 2023.
  • Silverman v. OpenAI, Inc., 4:23-cv-03416, (N.D. Cal.), filed on July 7, 2023.
  • L. v. Alphabet Inc., 3:23-cv-03440, (N.D. Cal.), filed on July 11, 2023.

A key question surrounding generative AI litigation is how training materials can be legally obtained when large language models are being designed and constructed. Attorney Kieran McCarthy explores the answer to this question in light of the Doe 1 v. GitHub, Inc. litigation, filed in November 2022.

“We can infer from this opinion that treatment of Copyright Management Information (“CMI”) will be tricky for generative AI developers,” McCarthy explains while noting the other aspects of the case are largely based on novel civil procedure issues. “Also, ignoring copyright licenses is at least arguably copyright infringement, and your fair use claim probably won’t get you out of the lawsuit at the motion to dismiss stage.”

In the Silverman class action lawsuit, a trio of authors brings claims ranging from copyright infringement to unfair competition.

“Much of the material in OpenAI’s training datasets, however, comes from copyrighted works—including books written by Plaintiffs—that were copied by OpenAI without consent, without credit, and without compensation,” the lawsuit says.

Other copyright lawsuits filed prior to 2023 that remain active include Thomson Reuters Enterprise Centre GmbH v. ROSS Intelligence Inc., 1:20-cv-00613, (D. Del.), filed on May 6, 2020, and UAB “Planner5D” v. Meta Platforms, Inc., 3:19-cv-03132, (N.D. Cal.), filed on June 5, 2019.

Lastly, an interesting right of publicity lawsuit filed in April, Kyland Young v. NeoCortext, Inc., aims to explore key issues surrounding name, image, and likeness (NIL) rights against the backdrop of generative AI and deepfake content, including whether such claims are preempted by copyright law or otherwise conflict with the First Amendment.

Looking Ahead

There is a lot of work to be done surrounding the education of key stakeholders across industries and governmental organizations. The USCO plans to soon release a Notice of Inquiry through which it will “[seek] written comments on a broad array of policy questions, including whether using copyrighted material to train AI models is protected by fair use, and whether and when outputs from generative AI systems can be the basis for copyright liability.”

“The stakes are high for dealing with these issues in the next couple of years,” shares Dr. Abbott, “as the rules that apply to AI and copyright law should have a major impact on the future of AI use and development, as well as the creative economy.”

Transparency is a leading element of proposed regulations and standards, such as the EU’s AI Act and the SAFE Innovation Framework for AI policy proposed in June by Senate Majority Leader Chuck Schumer (D-NY).

“Forcing companies to reveal their IP would be harmful, it would stifle innovation, and it would empower our adversaries to use them for ill,” Senator Schumer remarked. “Fortunately the average person does not need to know the inner workings of these algorithms. But we do need to require companies to develop a system where, in simple and understandable terms, users understand why the system produced a particular answer and where that answer came from.”

Meanwhile, metaverse and web3 technologies continue to raise novel questions surrounding intellectual property ownership, especially within the context of video games.

Mark Stallion recently argued, “Why wouldn’t the AI developer be in the same position as the party who commissioned a painting or in the same position as an employer who hires an employee to invent?… The statutes on which the court decisions are based didn’t have sophisticated AI in view. I assert, as in the past, those statutes will be revised.”

The ongoing Hollywood union strikes cast an important light on the questions many are grappling with when it comes to how generative AI can and should be used in creative industries.

“I am not sure many people appreciate how ubiquitous the use of AI already is within both the entertainment industry and (more recently) software industry, and the potential economic consequences of copyright protection not extending to (at least parts of) those works,” notes Whitney. “The speed at which people are developing tools that give users the ability to better control genAI tools is also pretty astounding.”

For Microsoft, quantum computing is on the horizon and a point of focus for research and scientific discovery. During a June event, the company announced a set of new tools within its Azure Quantum offering that expand access to computing power and generative AI, with the aim of advancing scientific discovery while lowering the technological barriers to working with quantum computing. In July, the Center for Strategic and International Studies held an event, “The Future of Quantum – Developing a System Ready for Quantum”, focused on exploring the deployment of quantum-ready systems in light of the upcoming debates on the reauthorization of the National Quantum Initiative Act.

Congress and organizations continue to explore the implications of AI across numerous areas, including battlefields, geopolitics, and other national interests.

Currently, AI-powered developments offer means to “maximize a company’s efficiency in creating and maintaining its portfolio,” writes Elizabeth Manno, IP Litigation partner at Venable LLP. Company policies are being revamped to address the generation and use of AI-generated content across the areas of copyright, patent, and trade secret protection.

“Companies are increasingly trying to get away from the reactive postures they found themselves in for much of 2022 and early 2023,” adds Whitney. “We are seeing more AI governance structures being developed internally (which is great). These internal governance bodies are becoming more cognizant of not just the infringement risk but how the use of these tools by workers may require a rethinking of their IP portfolio strategy.”

Until the law and policy surrounding AI and ML technologies are settled, creators, inventors, and other innovators will need to adopt flexible and practical guidelines when working with AI systems.

Image Source: Deposit Photos
Author: phonlamai
Image ID: 248363732 

Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com.

Join the Discussion

  • Anon
    August 15, 2023 04:22 pm

    Pro Say,

Thanks for answering. The status of just who is writing an application is not material to the status of the legal fiction known as PHOSITA.

  • Pro Say
    August 15, 2023 11:01 am

    Anon — my point is, if / when AI is utilized to create / write patent apps (in whole or in part), does that necessarily modify / change just who a PHOSITA is / must be?

  • Anon
    August 15, 2023 07:55 am

    Zoom zoom zoom…

    (the larger issue of the click-wrap contract of adhesion of Terms of Service — often at discretion of vendor to change without notice)

  • Anon
    August 15, 2023 06:34 am

    Pro Say,

    I do not grasp the point that you are offering. Can you explain?

  • Pro Say
    August 14, 2023 04:48 pm

    Got AI?

    PHOSITA = FAUXSITA

  • B
    August 14, 2023 09:12 am

    Anyone dealing with quantum computing or AI patent applications is familiar with the fate of too much math in a claim.

  • Anon
    August 13, 2023 08:05 pm

    I would also add one aspect to the “looking ahead” notion — and a topic that I have long identified**:

    How will that other NON-Real person – the legal fiction – known as Person Having Ordinary Skill In The Art treat (for 103 purposes) what a non-real person is capable of generating without human invention?

    Given the cited statistic in this story of NOW 50% of applications with AI impact, will the legal fiction of PHOSITA contain its own AI engine?

    ** I did try to tell people that we should have been discussing these ramifications.

  • Anon
    August 13, 2023 07:57 pm

    The problem that I have had with the hearings so far (and especially the Copyright Office hearing), is that they have not been interactive, and the invited guests have been proselytizing what they feel the law should be — as opposed to a meaningful evaluation of the law as it is.

    What people (and typically artists) want is NOT in accord with the law OR real factual basis as it applies to the generative [BOTH input side and output side].

    I have found these to be harmfully polarizing and self-delusionally-reinforcing.