Artificial intelligence (AI) systems such as Microsoft’s Copilot and OpenAI’s ChatGPT have seen a drastic increase in consumer adoption. This rise in use, however, has brought challenges in applying traditional copyright principles to the new field. For a work to be copyrightable in the United States, it must be the product of human authorship. The U.S. Copyright Office (“the Office”) has recently taken the position that “[w]hen an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship.” The Office will therefore refuse to register material that is not the product of human authorship. That position complicates other areas of copyright law as they relate to generative AI. Critically, if an output can have no human author, the question arises: how should liability be apportioned when generative AI produces outputs that infringe copyrighted material?
Generative AI and Copyright Concerns
AI systems are built and “trained” by their creators through a process called machine learning, in which algorithms analyze large amounts of data, identify patterns, and use what they learn to make decisions. Training datasets may come from online platforms such as Google Images or from “scraping” techniques that crawl the web for recognizable content, and they likely include copyrighted images from across the internet. Consequently, when an AI system draws on that data to generate outputs in response to a user’s prompts, those outputs may closely resemble copyright-protected material. The result is an end user who expects a unique, AI-generated work but instead receives one that infringes an existing copyright. In that situation, both the user and the AI company may be exposed to liability for the infringing work, but apportioning that liability becomes more difficult.
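To make the concern concrete, consider a deliberately simplified sketch of how a scraper might sweep images into a training dataset. This is a hypothetical illustration, not any company’s actual pipeline; the placeholder URL and the absence of any licensing check are assumptions made for the example.

```python
# Hypothetical, simplified sketch of an image-scraping step for a training
# dataset. Illustrative only; not any AI company's actual pipeline.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def collect_image_urls(page_url: str) -> list[str]:
    """Return the image URLs found on a single web page."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Every <img src="..."> tag is swept in indiscriminately; nothing here
    # asks whether the image is licensed, attributed, or copyright-protected.
    return [urljoin(page_url, img["src"])
            for img in soup.find_all("img") if img.get("src")]


if __name__ == "__main__":
    # "example.com" is a placeholder for any page a crawler might visit.
    for url in collect_image_urls("https://example.com"):
        print(url)
```

Because the crawl has no notion of copyright status, whatever protected works appear on the crawled pages can flow straight into the training data.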
Microsoft’s Customer Copyright Commitment
In response to the apportionment dilemma, AI companies may be expected to develop policies to mitigate and potentially shift liability. Microsoft recently published its Responsible AI Transparency Report, in which it promises to “defend commercial customers sued by a third party for copyright infringement for using Azure OpenAI Service, [their] Copilots, or the outputs they generate, and pay any resulting adverse judgments or settlements, as long as the customer meets basic conditions such as not deliberately trying to generate infringing content and using [their] required guardrails and content filters.” In other words, Microsoft promises to absorb its users’ copyright infringement liability, subject to certain exclusions and limitations tied to the user’s intent.
A User’s Intent
Because users can infringe copyrights through their use of AI, a user’s intent may be a critical factor in apportioning liability between the AI company and the user. Infringement can be committed both intentionally and unintentionally. Microsoft has created safeguards against unintentional infringement, including guardrails and mitigations such as content filters. The company appears confident that these safeguards are sophisticated enough to prevent unintentional infringement, and it appears poised to stand behind users who innocently infringe should the safeguards fail.
If a customer tenders a claim to Microsoft for defense, the customer must demonstrate compliance with all relevant safeguards. That compliance requirement serves two functions: it gives Microsoft a basis to deny a defense and indemnity under its Customer Copyright Commitment, and it provides a framework for shifting liability from Microsoft to the customer when the customer intentionally infringes. If a user bypasses Microsoft’s safeguards and deliberately attempts to create infringing material, the apportionment of liability may begin to shift away from Microsoft and toward the user. This likely will not absolve Microsoft of liability entirely, but it gives Microsoft a basis to argue that its AI system was misused.
Intent is a challenging factor to interpret. A prompt for “Snoopy,” for example, reveals a user’s intent far more clearly than a prompt for “a white dog on a red house.” Microsoft has not provided further information on how a user’s intent will be determined. In Microsoft’s Copilot Studio, the company offers a feature for “triggering topics,” also known as intent recognition, in which the AI analyzes the user’s utterance and matches it to topics reflecting the user’s intent. Because it is unclear how the Customer Copyright Commitment will play out in practice, there is cause for concern that Microsoft, like any other company, may seek to avoid liability through its own subjective standards for determining a user’s intent.
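For illustration only, the toy sketch below shows the general flavor of trigger-phrase intent matching: an utterance that names a protected character is easy to flag, while a purely descriptive prompt is not, even though both could yield similar images. The topic lists and scoring are invented assumptions and do not reflect how Copilot Studio actually works.

```python
# Toy illustration of matching a user's utterance to topics via trigger
# phrases. The topics and phrases are invented for this example and are not
# Microsoft's actual "triggering topics" implementation.
TOPICS = {
    "protected_character_request": {"snoopy", "mickey mouse", "pikachu"},
    "generic_image_request": {"dog", "house", "cat", "landscape"},
}


def match_topics(utterance: str) -> dict[str, int]:
    """Score each topic by how many of its trigger phrases appear in the utterance."""
    text = utterance.lower()
    return {topic: sum(phrase in text for phrase in phrases)
            for topic, phrases in TOPICS.items()}


# A prompt naming "Snoopy" trips the protected-character topic directly...
print(match_topics("Draw Snoopy for me"))
# ...while a descriptive prompt matches only generic topics, even though the
# resulting image could still end up resembling the same character.
print(match_topics("Draw a white dog on a red house"))
```

The gap between those two results is the interpretive problem in miniature: a keyword-style check captures explicit intent but says little about what a user actually wanted from a descriptive prompt.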
The Need for Regulation
Future regulation appears increasingly necessary. Colorado has become the first U.S. state to pass a law protecting consumers from harms arising from the use of AI systems. Senate Bill 24-205, known as the Colorado AI Act (CAIA), takes effect on February 1, 2026, and imposes new obligations on Colorado employers regarding their use of AI systems. The CAIA focuses on the use of AI systems in consequential decisions such as employment and on preventing “algorithmic discrimination,” and it imposes a duty of reasonable care on both the creators of AI systems and those who deploy them. The duty of care and the liabilities imposed by the CAIA create opportunities for other states to enact similar laws within the employment sector and beyond.
Join the Discussion
Pat
June 23, 2024 08:28 am
Aren’t we supposed to be a democracy?
Corps and thieves all own our data and work now? Who said they could?
It shouldn’t be up to corps or thieves whatsoever what they do with uncompensated work. Though it’s quite clear an LLM, for example, is more intelligent and resourceful, it still steals work. And on the other hand, image/song generators are just dumb collagers that have no concept of how the art they copy is made in any way, shape, or form.
I think there’s a bigger issue here, and a bigger reason for people in general to revolt. What on earth is going on? These corps are well out of hand, and more needs to be done about it now that they’re starting to really override our basic human rights in so many ways.
Anon
June 20, 2024 12:59 pm
Interesting, but can the AI companies really think that their efforts (which necessarily will be contract-based) will be the deciding factor in any court case?
After all, contract terms have been routinely held invalid and non-binding if that term is unreasonable or attempts to invoke powers that neither party has.