AI Prompts Do Not Compromise Attorney Confidentiality Obligations

“Given how attorneys routinely use email and cloud storage services for confidential and privileged information and communications, it is difficult to understand why attorneys think that using LLMs is somehow newly problematic.”

Many IP attorneys have expressed concerns about complying with confidentiality duties while using large language models (LLMs). For example, in a recent panel at the U.S. Patent and Trademark Office (USPTO), multiple panelists expressed the opinion that attorneys should not perform LLM queries because LLM queries are stored remotely while Internet searches are not. The goal of this article is to explore, as examples, Google’s and OpenAI’s data retention policies and the intersection of those policies with attorneys’ confidentiality duties.

How Queries Work in General

Web search engines emerged in the 1990s in response to the ever-increasing number of websites on the Internet. Web search engines have always relied on remote processing of queries from users. In particular, websites are crawled and indexed using a server (or a set of servers) operated by the search engine. Then, when a user submits a query to the server, database software operated by the search engine runs the query against the indexed websites and returns a list of results to the user.
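The crawl-index-query flow described above can be illustrated with a toy in-memory sketch (the page data and function names here are purely hypothetical; a real search engine distributes this work across fleets of remote servers, which is precisely why every query is shared with a third party):

```python
# Toy sketch of the crawl -> index -> query flow. All data and names
# are illustrative. In a real engine, build_index runs on the provider's
# crawlers and search runs on the provider's servers, so the user's
# query text necessarily leaves the user's machine.

def build_index(pages):
    """Indexing step: map each word to the set of URLs containing it."""
    index = {}
    for url, text in pages.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(url)
    return index

def search(index, query):
    """Query step: return URLs matching every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = index.get(words[0], set()).copy()
    for word in words[1:]:
        results &= index.get(word, set())
    return results

pages = {
    "example.com/a": "patent prior art search",
    "example.com/b": "trademark search basics",
}
index = build_index(pages)
print(search(index, "search"))  # both pages contain "search"
```

The key point for the confidentiality discussion is architectural, not algorithmic: because the index lives with the provider, the query string must travel to the provider before any results come back.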

Therefore, whenever attorneys utilize a web search engine, their queries are shared with a third party (that is, the server or servers operated by the search engine), at least temporarily. Whether the query is discarded immediately after returning a list of results depends on the search engine’s database software. The same process applies to using large language models (LLMs), such as OpenAI’s ChatGPT, Google’s Bard, and Meta’s LLaMa, where a prompt is shared with a third party (that is, the server or servers operated by the LLM provider), again at least temporarily.

Google’s Terms of Service

With regard to a web search engine like Google, there are many reasons for the search engine to store queries from users for future use. Google’s Privacy Policy clearly states that “[t]erms you search for” may be collected by Google and “store[d] with your Google Account” or otherwise “store[d] . . . with unique identifiers tied to the browser, application, or device you’re using.” These stored terms may be used “to make improvements to our services — for example, understanding which search terms are most frequently misspelled helps us improve spell-check features used across our services” and “to customize our services for you, including providing recommendations, personalized content, and customized search results.” Now, Google does promise to “use encryption to keep your data private while in transit.” Additionally, Google allows users to turn off Search history, YouTube history, and other activity information, as well as to delete activity information “whenever you like.”

In addition to stored terms, Google Chrome also has various settings that “[s]ync and personalize Chrome across your devices,” “[s]ends URLs of pages you visit to Google,” and tracks “[t]opics of interest . . . based on your recent browsing history” that are “used by sites to show you personalized ads.” These settings may also be turned off, and presumably this information is deleted when these settings are turned off.

OpenAI’s Terms of Service

Similar to web search engines, LLM providers like OpenAI may store prompts from users for future use. OpenAI’s Privacy Policy states that “Content you provide,” such as “the input, file uploads, or feedback that you provide,” may be used “to improve our Services, for example to train the models that power ChatGPT.” However, OpenAI provides an opt-out form so that prompts submitted to ChatGPT are not used for training. Moreover, OpenAI promises to “implement commercially reasonable technical, administrative, and organizational measures to protect Personal Information both online and offline from loss, misuse, and unauthorized access, disclosure, alteration, or destruction.”

OpenAI provides greater privacy assurances to Enterprise customers and to API customers. Enterprise customers’ prompts are not used for training, are only available to authorized OpenAI employees “for the purposes of resolving incidents, recovering end user conversations with your explicit permission, or where required by applicable law,” and are “removed from [OpenAI’s] systems within 30 days” of deletion. Similarly, API customers’ prompts are excluded from training, are only available to “(1) authorized employees that require access for engineering support, investigating potential platform abuse, and legal compliance and (2) specialized third-party contractors who are bound by confidentiality and security obligations, solely to review for abuse and misuse,” and are “securely retain[ed] . . . for up to 30 days to identify abuse.” OpenAI also provides “zero data retention (ZDR) for eligible endpoints if you have a qualifying use-case.”

Confidentiality and Email

Generally, attorneys are bound to protect privilege and confidentiality by avoiding release of client information to third parties. See, e.g., ABA Model Rule 1.6(a) (“A lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized in order to carry out the representation or the disclosure is permitted by paragraph (b)”); Restatement (Third) of the Law Governing Lawyers § 68 (“[T]he attorney-client privilege may be invoked as provided in § 86 with respect to: (1) a communication (2) made between privileged persons (3) in confidence (4) for the purpose of obtaining or providing legal assistance for the client”). However, release to “agents . . . who facilitate communications between” clients and attorneys, as well as release to “agents of the lawyer who facilitate the representation,” does not result in a violation of privilege or confidentiality. Restatement (Third) of the Law Governing Lawyers § 70. Any such release to agents must be in a communication where “the communicating person reasonably believes that no one will learn the contents of the communication except a privileged person . . . or another person with whom communications are protected under a similar privilege.” Restatement (Third) of the Law Governing Lawyers § 71.

Therefore, security measures implemented by agents of an attorney must be reasonably robust in order to preserve privilege and confidentiality. In general, cloud storage and email services implement sufficient security measures to preserve attorney-client privilege. Indeed, according to one analysis, “cloud storage services present little (if any) additional risk of disclosure to third parties than do other methods of communicating or storing information, and may present even less of a risk than other methods of communication. For example, in contrast to Dropbox’s policy of not inspecting user information, FedEx’s policy explicitly provides that they ‘may, at our sole discretion, open and inspect any shipment without notice.’”

In addition to using email to exchange confidential information and cloud storage services to store confidential information, attorneys routinely perform searches (e.g., for prior art or about relevant law) on the Internet with a reasonable belief that such searches will not be disclosed to third parties. Similarly, attorneys routinely use browser features to improve interoperability between their smartphones and their laptops (e.g., sync on Google Chrome or on Microsoft Edge) with a reasonable belief that activity information tracked by the browser will not be disclosed to third parties.

General Tips

Given how attorneys routinely use email and cloud storage services for confidential and privileged information and communications, it is difficult to understand why attorneys think that using LLMs is somehow newly problematic. Certainly, prompts for LLMs are going to be more detailed than simple queries provided to web search engines. However, if emails and cloud storage services are sufficiently secure to preserve confidentiality and privilege, then the robust security options provided by Google and OpenAI (among other examples) should be similarly sufficient to do the same.

To the extent that attorneys are worried about protecting client information, attorneys may use the options provided by Google, as described above, to reduce how much activity information is stored. Similarly, attorneys may either opt-out of training by ChatGPT or use an Enterprise or API account to improve security of stored prompts. But, just like a blanket ban on web search engines would be illogical in a world of email and cloud storage proliferation, a blanket ban on LLMs, based solely on confidentiality concerns, is an irrational response by attorneys to today’s world of technological progress.

Image Source: Deposit Photos
Image ID:52848717
Copyright:iqoncept 


Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com.

Join the Discussion

One comment so far.

  • Anon
    December 28, 2023 08:55 am

    Your assertion that “no worry” does NOT follow from any mistakes made via cloud sourcing.

    This article may even rise to the level of malpractice.