Company Policy Issues and Examples Relating to Employee Use of AI-Generated Content

“Assuming tools like ChatGPT are here to stay, and companies need to use these tools in some capacity, companies should carefully define policies that govern where and when generative AI tools can be used to perform company work.”

Artificial Intelligence (AI) has become a crucial tool for organizations in various sectors, particularly in the generation of content and code by generative AI systems such as ChatGPT, GitHub Copilot, AlphaCode, Bard and DALL-E, among other tools. As the incorporation of these generative tools into the corporate setting is all but assured in the near term, there are a number of risks that need to be minimized as companies move forward. In particular, as AI applications grow increasingly sophisticated, they raise concerns involving several forms of intellectual property (IP), such as patents, copyrights, and trade secrets. This article discusses these issues and provides a sample company policy for using AI-generated content such as software code.

Patent Issues

Several patent issues arise from the use of AI-generated content. For example, patent eligibility may affect whether AI-created software can be patented at all. AI-based tools that generate content may themselves create novel and non-obvious inventions. However, obtaining patent protection for these types of inventions can be challenging. Some patent offices may consider software, algorithms, and AI-related inventions to be abstract ideas, which are ineligible for patent protection. To maximize the return on patent investment, companies should confirm, before filing, that their AI inventions involve specific technical features or improvements that support eligibility. At least in the United States, patents abound in software and AI technologies.

A big question in many jurisdictions is who qualifies as the inventor when content or code is AI-generated. While human inventors are routinely recognized, AI-generated inventions may lack clear inventorship. In most jurisdictions, AI systems cannot be named as inventors, which may create challenges in assigning patents and protecting innovations. In the United States, the Federal Circuit has held that only natural persons can be inventors. This creates a quandary: AI-generated content may be valuable to the organization because it allows content and code to be generated efficiently, yet the organization cannot protect that content and exclude others through patenting.

Another patent-related issue is ownership. Companies using AI-generated content should be aware of potential ownership issues in AI-generated software code and implementations. For instance, an AI tool may have been trained on code that is proprietary or otherwise covered by one or more third-party patents, and an employee may unknowingly be using code portions that incorporate patented technologies. To minimize this risk, companies should ensure that they have the necessary licenses to use underlying AI technologies, datasets, or algorithms. Companies should also clear patent infringement risk by performing patent clearances and obtaining noninfringement opinions for their implementations, and should establish clear ownership rights for patents relating to AI-generated content or code to avoid disputes among stakeholders.

Copyright Issues

AI-generated content, such as text, images, music, or code, may be eligible for copyright protection. However, copyright laws vary across jurisdictions, and some may not grant protection to AI-generated works that lack human input or authorship. At least in the United States, there may be an opportunity to obtain copyright in works that include some human input. Companies should understand the copyright eligibility laws in the jurisdictions where content is created, both to ensure compliance and to maximize the value of that content to the company.

As with patents, determining authorship and ownership of AI-generated content can be challenging. In many jurisdictions, works created by AI systems without human input may not receive copyright protection. As a result of the human authorship standard, under current U.S. law an AI-created work is probably either a public domain work immediately upon creation (with no copyright owner capable of asserting rights) or a derivative work of the materials used to train the AI tool. Because of this situation (either no owner or some other owner), companies should establish clear policies on authorship and ownership of AI-generated content to avoid potential disputes. Several recent lawsuits have been filed over copyright issues in AI-generated art and software code, and the risk to companies may be substantial and real.

There may also be licensing issues with coding tools that are trained on licensed code and libraries, as many open-source licenses include, at a minimum, attribution and notice requirements for the licensed code. If a company using such code does not comply with those requirements, even the most permissive licenses may be violated. In short, companies using AI-generated content should ensure that they have the necessary licenses for underlying technologies, algorithms, or datasets.
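As a practical starting point, some companies inventory the licenses of the third-party components already in their stack so that attribution and notice obligations are at least visible. The sketch below assumes a Python codebase and relies on self-reported package metadata; it is illustrative only and does not replace review of the actual license texts by counsel.

# Minimal sketch: list the declared licenses of installed Python packages
# as a seed for an attribution/NOTICE review. The "License" field is
# self-reported by each package and may be missing or imprecise.
from importlib import metadata

def license_inventory():
    """Return sorted (package, version, declared license) tuples for installed packages."""
    rows = []
    for dist in metadata.distributions():
        name = dist.metadata.get("Name", "unknown")
        version = dist.version
        declared = dist.metadata.get("License", "") or "UNKNOWN"
        rows.append((name, version, declared))
    return sorted(rows)

if __name__ == "__main__":
    # Print a simple tab-separated report for counsel or the open-source review team.
    for name, version, declared in license_inventory():
        print(f"{name}=={version}\t{declared}")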

Although copyrights and licenses should be reviewed and may be necessary in some instances, the generation of some content may be permitted under U.S. law. Fair use provisions in copyright law may allow limited use of copyrighted material for specific purposes, such as research or education, without permission from the copyright holder. Further, case law has been leaning toward fair use for transformative uses, as in the Supreme Court's recent fair use decision concerning the Java APIs (Google v. Oracle). Copyright infringement may also be difficult to prove against some AI systems: to establish unauthorized copying, the copyright holder must show that the copying party had access to the original work and that the two works are substantially similar. If the copyright holder cannot prove access, or the AI-generated work has little in common with the original, infringement may be difficult to establish. Nonetheless, prudent use of such generative systems should be the standard.
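Prudent use can include screening AI-generated output before it enters the codebase. The following sketch uses Python's standard difflib to flag generated snippets that closely resemble known third-party code; the 0.9 threshold and the snippet catalog are illustrative assumptions, and a real program would pair a check like this with dedicated code-scanning tools and legal review.

# Minimal sketch: flag AI-generated snippets that closely resemble code the
# company knows it must not reproduce. Threshold and catalog are assumptions.
import difflib

def similarity(a: str, b: str) -> float:
    """Rough 0..1 similarity ratio between two code snippets."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def flag_close_matches(generated: str, known_snippets: dict, threshold: float = 0.9):
    """Return (source_id, score) pairs for known snippets the generated code resembles."""
    scored = [(source_id, similarity(generated, snippet))
              for source_id, snippet in known_snippets.items()]
    hits = [(source_id, score) for source_id, score in scored if score >= threshold]
    return sorted(hits, key=lambda item: item[1], reverse=True)

A reviewer might run flag_close_matches over each AI-assisted change against a small catalog of sensitive or encumbered code and escalate anything that scores above the threshold.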

Trade Secret Issues

Another AI-related IP issue, and probably the most important, relates to preserving company trade secret information. For trade secrets to retain their status, companies must take reasonable steps to protect them from disclosure; if care is not taken, trade secret rights are easily destroyed. Generative tools such as ChatGPT have the ability to "learn" from the prompts users provide, and users may unknowingly include company trade secret information in prompts while using these tools to perform their work. Because many people have experimented with and used such tools to do real work tied to their company's business, many companies' trade secrets are at risk. Tools such as ChatGPT may reveal those trade secrets to others in a later output, or the generative tool provider may disclose the prompt information.
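One technical safeguard, in addition to training and contractual controls, is to screen prompts before they leave the company's environment. The sketch below is a minimal illustration assuming a simple list of confidentiality markers; the patterns shown are examples only and would need to be tailored to how a particular company labels its sensitive information and paired with broader data-loss-prevention tooling.

# Minimal sketch: block prompts that contain obvious confidentiality markers
# before they are sent to an external generative AI service.
import re

# Illustrative markers only; substitute the company's own classification labels.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\btrade\s+secrets?\b", re.IGNORECASE),
    re.compile(r"\binternal\s+use\s+only\b", re.IGNORECASE),
    re.compile(r"\bproprietary\b", re.IGNORECASE),
]

class BlockedPromptError(Exception):
    """Raised when a prompt appears to contain company-confidential material."""

def screen_prompt(prompt: str) -> str:
    """Return the prompt unchanged, or raise if it trips a confidentiality marker."""
    for pattern in CONFIDENTIAL_PATTERNS:
        if pattern.search(prompt):
            raise BlockedPromptError(
                f"Prompt blocked: matched pattern {pattern.pattern!r}"
            )
    return prompt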

As the use of AI grows exponentially, many companies may choose to reap the benefits of such tools despite the potential downsides, for fear of being left behind. Indeed, the time and cost savings positively affect the bottom line, and for some companies these benefits cannot be ignored. A recent Wall Street Journal article reported that nearly half of surveyed workers use ChatGPT or other AI tools at work, and nearly 70% said they do so without telling their supervisor. Although several companies have banned the use of such tools, many have chosen to investigate them further and find uses where the risks are minimized.

One Approach: Create a Company Policy that Governs the Use of Generative AI Tools and Content

Assuming tools like ChatGPT are here to stay and companies need to use them in some capacity, companies should carefully define policies that govern where and when generative AI tools can be used to perform company work. One interesting approach is to treat AI-generated content like any other third-party content that might be used by a company, its contractors, and its employees. A well-known analogue is open-source software, where the approach is generally a set of guidelines for the use of open-source code within a company to ensure compliance with legal requirements, protect intellectual property rights, and promote transparency and collaboration.

The policy may include several sections defining accepted and prohibited uses of such AI tools. In some cases, there may be preferred tools that do not share company information outside the company or that are configured so the information cannot be accessed externally. Terms may address assignment of rights, compliance with IP laws, and the approvals needed to use such tools within an employee's scope of work. In short, policies that control and track the use of these tools, and that place clear obligations on employees and contractors, should be used to lessen the risk of using them.
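To make such a policy auditable, some companies wire an automated check into their build or review process. The following sketch assumes a hypothetical "AI-Generated-By:" disclosure tag and an internal list of approved tools; neither is an established standard, and the check is meant only to show how a policy obligation can be enforced in day-to-day tooling.

# Minimal sketch: fail a build or review step when a file discloses
# AI-generated code from a tool that is not on the approved list.
import pathlib
import sys

# Assumed tag and tool names for illustration only.
TAG = "AI-Generated-By:"
APPROVED_TOOLS = {"internal-codegen", "approved-assistant"}

def check_file(path: pathlib.Path) -> list:
    """Return policy problems found in one source file."""
    try:
        text = path.read_text(encoding="utf-8", errors="ignore")
    except OSError as exc:
        return [f"{path}: unreadable ({exc})"]
    problems = []
    for line_no, line in enumerate(text.splitlines(), start=1):
        if TAG in line:
            tool = line.split(TAG, 1)[1].strip()
            if tool not in APPROVED_TOOLS:
                problems.append(f"{path}:{line_no}: tool {tool!r} is not approved")
    return problems

if __name__ == "__main__":
    # Typical use: pass the files changed in a commit or pull request.
    findings = [p for name in sys.argv[1:] for p in check_file(pathlib.Path(name))]
    for finding in findings:
        print(finding)
    sys.exit(1 if findings else 0)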

Below is an example of several terms that might be used in a company policy restricting employees' use of generative AI tools, to help reduce the company's risk exposure.

Sample Company Policy for Using AI-Generated Content

[Company Name] (the "Company") recognizes the importance of AI-generated content and code in its business operations. To ensure responsible and legally compliant use of AI-generated content, the following policy is in place:

  1. Authorship and Ownership: [Company Name] retains ownership of all AI-generated content and code created by its employees or AI systems during the course of their employment or engagement in relation to their work for the Company. Employees or contractors are required to assign ownership rights to [Company Name] for any AI-generated content or code they produce.
  2. Compliance with IP Laws: [Company Name] is committed to complying with all applicable IP laws, including patent and copyright laws, in jurisdictions where it operates. Employees and contractors must ensure that their use of AI-generated content and code does not infringe on third-party IP rights.
  3. Licensing and Permissions: [Company Name] will obtain necessary licenses for using AI technologies, datasets, or algorithms in generating content or code. Employees and contractors must seek permission from relevant stakeholders before using copyrighted materials or patented inventions in the furtherance of Company work.
  4. Company Information: Employees or contractors are restricted from sharing Company trade secrets, data, algorithms, code, or other Company information with any AI-based tools without the express written consent of the Company. Any use of externally-provided AI systems must be pre-approved by Information Services.
  5. Monitoring and Enforcement: [Company Name] will monitor the use of AI-generated content and code within the organization to ensure compliance with this policy and relevant IP laws. Employees or contractors found to be in violation of this policy may face disciplinary action, up to and including termination.

Avoid the Risk

AI-based generative tools have the potential to improve efficiency and productivity in many ways across the workplace. Companies should take great care and plan thoughtfully to avoid IP risks when using these tools.



Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com.

Join the Discussion

One comment so far.

    Anon
    May 30, 2023 05:32 pm

    AI-based tools that generate content may themselves create novel and non-obvious inventions.

    That’s going to be a point of contention (not by me, mind you). It has been several years now that I admonished several naysayers that the discussion should well be underway, and the acceleration will only continue.

    As with coding tools that are trained on…

    This very much remains an item of first impression for the courts (AND any such single court case is not likely to be dispositive to set the bar one way or another for Fair Use position on ‘use’ of ANY – no matter how large a set – for training). To even suggest “licensing to be required” is to not understand just how Generative AI actually works.

    As for Trade Secrets, I have also pointed out that not only are the AI tools on the market ‘public facing’ (with NO Non-Disclosure or other protective agreements in place), the terms of use by MOST ALL of these outfits basically constitute a form of “shrink-wrap” terms given that the entities by and large reserve the right to change the terms entirely of their own accord and without notice.