Navigating Recent Developments in Generative AI and Trade Secret Protection

“Trinidad and Heppner mark the beginning of what will likely be an extended period of judicial development at the intersection of generative AI and trade secret law.”

Two recent federal district court decisions highlight the significant risks of sharing confidential information with a generative AI platform. In Trinidad v. OpenAI, the court dismissed the plaintiff’s trade secret claims under the Defend Trade Secrets Act (DTSA) because the plaintiff had voluntarily disclosed her allegedly proprietary frameworks to OpenAI while using ChatGPT to create them. Then, Judge Rakoff in United States v. Heppner held that documents created using publicly available generative AI are not protected by the attorney-client privilege—in part because communications memorialized through an AI platform are not confidential when the platform is not contractually bound to keep them secret. Taken together, Trinidad and Heppner are among the first decisions to establish that confidential information shared with a public AI platform is not legally protected. While this result should not surprise practitioners familiar with the foundational principles of trade secret and privilege law, it underscores the urgency for trade secret owners to assess their AI-related exposure. Before turning to the specifics of these two cases, several other issues concerning generative AI and trade secret law warrant general mention.

Generative AI and ‘Readily Ascertainable’

Among the essential requirements to qualify for trade secret protection under the DTSA is that the information not be “generally known” or “readily ascertainable through proper means.” Generative AI tools—such as ChatGPT or Claude—raise a threshold concern: they may cause information that was once a trade secret to become “generally known” or “readily ascertainable” within the meaning of applicable trade secret statutes. The DTSA also requires the owner to take “reasonable measures” to keep the information secret in order to qualify for trade secret protection—the issue the Trinidad court addressed.

In general, information is readily ascertainable if it can be found in publicly available sources such as patents, trade journals, and reference books. No reported decision has directly addressed whether trade secret information that can be surfaced through generative AI queries is thereby “readily ascertainable,” but it is likely that courts will treat AI-based discovery of information the same way they treat traditional publicly available sources. If so, trade secret information that generative AI can identify or reconstruct from publicly available data could lose its protected status.

Some commentators have suggested that if generative AI becomes sufficiently capable of synthesizing available data to reconstruct a company’s trade secrets, then most trade secrets will be readily ascertainable and lose protection altogether. It is too early to know whether this concern will materialize or whether it overstates the technology’s capabilities. On the other hand, there is a plausible normative argument that this development would be beneficial: trade secret information that AI can easily ascertain from public sources arguably does not deserve protection in the first place, and such a standard would simply raise the bar for what qualifies as a protectable trade secret.

Trinidad and Heppner

Trinidad v. OpenAI: Voluntary Disclosure and Loss of Trade Secret Protection

Trinidad directly addresses the consequences of uploading potentially protected confidential information to a generative AI platform. The DTSA requires that the owner of alleged trade secret information take reasonable measures to protect its secrecy. Commentators have predicted that sharing trade secrets with a generative AI platform would result in the loss of protection, analogous to posting trade secret information on the internet. Trinidad is the first decision to directly confront this issue.

The pro se plaintiff in Trinidad asserted several claims against OpenAI, including under the DTSA, alleging that OpenAI misappropriated proprietary AI development frameworks she created while using ChatGPT. The court’s analysis focused on the foundational requirement that alleged trade secrets be subject to reasonable measures to maintain their secrecy. The court found that the plaintiff “has not alleged that she took any reasonable measures to keep these ‘protocols and frameworks’ secret.” Critically, the plaintiff admitted that she developed her frameworks using ChatGPT—which “would have required her to voluntarily share the information she now alleges is part of her ‘trade secrets’ with OpenAI.”

The court applied the principle articulated by the Supreme Court in Ruckelshaus v. Monsanto Co. that when a party “disclose[d] [her] trade secret to others who are under no obligation to protect the confidentiality of the information, . . . [her] property right is extinguished.” By accepting OpenAI’s Terms of Service and using ChatGPT to develop her frameworks, the plaintiff consented to disclosure without establishing any confidentiality protections. The court concluded that, regardless of whether the plaintiff’s consent was enforceable as a contractual matter, her voluntary disclosure without confidentiality measures precluded trade secret protection under the DTSA.

United States v. Heppner: AI Platforms and the Confidentiality Requirement

Heppner arose in a different context—criminal prosecution rather than trade secret misappropriation—and addressed whether communications memorialized through a generative AI platform are protected by the attorney-client privilege. However, the court’s reasoning on the confidentiality element speaks directly to the loss of trade secret protection for information shared with generative AI, making Heppner highly relevant to practitioners advising trade secret owners.

During the government’s criminal investigation of Heppner, the FBI seized documents from his home, including approximately thirty-one documents that memorialized his communications with Anthropic’s Claude AI. After being indicted, his counsel claimed privilege over these documents, arguing they contained information learned from counsel and were created for the purpose of obtaining legal advice.

Describing the matter as “a question of first impression nationwide,” the court held that the AI documents did not satisfy the requirements for attorney-client privilege, finding that the communications lacked “at least two, if not all three” of the privilege’s essential elements. First, Claude is not an attorney, and discussions of legal issues between two non-attorneys are not privileged. The court further noted that the same analysis would apply to any privilege because all privileges require, “among other things, a trusting human relationship with a licensed professional who owes fiduciary duties and is subject to discipline.”

Second, and more significant for trade secret analysis, the communications were not confidential. The court found this to be so not only because Heppner had communicated with a third-party AI platform, but also because Anthropic’s written privacy policy—to which users of Claude consent—provides that Anthropic collects data on users’ inputs and Claude’s outputs, uses such data to train Claude, and reserves the right to disclose such data to third parties, including governmental regulatory authorities. The court therefore declined to extend privilege protection to the AI documents.

Protecting Trade Secrets in the Age of Generative AI

The question for trade secret owners—and for counsel advising them—is what additional steps they must take to safeguard their information in the generative AI age. The rise of the internet required companies to implement enhanced protective measures, including password protection, firewalls, encryption, monitoring of employees’ internet usage, and robust contractual confidentiality provisions. Companies that were slow to adopt these measures risked losing protection for their trade secrets. Similarly, companies that are slow to adopt protective measures that address the new risks to trade secret protection posed by generative AI may expose their trade secrets.

Some companies have reacted to this threat by prohibiting employee use of generative AI altogether. This approach is both unrealistic and counterproductive. As a practical matter, employees are unlikely to forgo tools that make them substantially more productive, and blanket prohibitions invite circumvention rather than compliance. The more effective approach is to channel employee use through platforms that offer meaningful confidentiality protections.

One option is for companies to deploy an in-house generative AI platform. Under this model, information shared with the AI remains within the company’s environment, subject to the confidentiality agreements employees execute as a condition of their employment. This approach provides strong protection but requires substantial financial resources that many companies do not have. Another potential option is for companies to obtain an Enterprise License from a commercial generative AI provider. For example, for qualifying enterprise customers, Anthropic offers a license under which it does not store inputs or outputs, except as required to comply with law or to combat misuse. Anthropic also offers a Zero Data Retention (ZDR) addendum, under which it does not retain inputs beyond the abuse screening stage.

Whether a court will conclude that an Enterprise License alone constitutes a “reasonable measure” is an open question, but there is legal support for the proposition that it does. The Commercial Terms and a Data Processing Agreement create an express contractual obligation on Anthropic’s part not to use customer data for model training or to disclose it to third parties. This is functionally analogous to a non-disclosure agreement with a consultant or a vendor confidentiality clause—arrangements long recognized as reasonable measures in the trade secret context. Moreover, the security controls attendant to an enterprise arrangement are independently audited, which is directly relevant to a showing of objective reasonableness.

That said, the reasonable measures standard is fact-specific, and an Enterprise License alone is unlikely to be sufficient. Courts will likely examine whether other protective measures were in place, including which employees had access to the AI system, whether the company maintained internal policies governing what categories of information could be entered, and whether employees received training on proper use. Practitioners should also note that Anthropic retains some information, even under its ZDR arrangements, for “abuse monitoring” purposes, which an adversary could use to challenge the adequacy of the measures taken.

Consistent with the use of an Enterprise License, companies should implement and document internal AI governance policies. These policies should specify which categories of information may and may not be entered into AI tools, and employees should sign acknowledgments of their obligations. A critical risk—sometimes called “Shadow AI”—arises when employees use personal consumer-tier accounts for work purposes. Individual employees who accept consumer terms without authorization can unknowingly bind the organization to training-data consent, allowing proprietary information to enter AI training pipelines without the company’s knowledge or approval. A written policy prohibiting the use of personal AI accounts for company work, enforced through access controls and training, is essential.

Additionally, companies should apply the same need-to-know discipline to AI access that they apply to paper documents: limit which employees may enter trade secret information into any AI tool, and maintain access logs. In short, companies should audit the measures they already have in place to protect their trade secrets and assess where those measures must be supplemented to account for the risks that generative AI introduces.
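The need-to-know and audit-log controls described above can be sketched in code. The following is a minimal, hypothetical illustration only—the allowlist, the prohibited-content markers, and the `screen_prompt` function are all invented for this sketch, and a real deployment would use identity management, data-loss-prevention tooling, and tamper-evident logging rather than in-memory structures:

```python
from datetime import datetime, timezone

# Hypothetical allowlist: only employees with a need-to-know may submit
# material to the company-approved AI tool.
AUTHORIZED_USERS = {"alice@example.com", "bob@example.com"}

# Categories a governance policy might bar from any AI prompt.
PROHIBITED_MARKERS = ("CONFIDENTIAL", "TRADE SECRET", "ATTORNEY-CLIENT")

# In production this would be a tamper-evident store, not a Python list.
audit_log = []

def screen_prompt(user: str, prompt: str) -> bool:
    """Return True if the prompt may be forwarded to the AI platform.

    Every attempt is logged, so the company can later demonstrate the
    measures it took: who submitted what, and when.
    """
    allowed = user in AUTHORIZED_USERS and not any(
        marker in prompt.upper() for marker in PROHIBITED_MARKERS
    )
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "allowed": allowed,
    })
    return allowed

# An unauthorized user, or a prompt containing flagged content, is
# blocked—and every attempt, allowed or not, lands in the audit log.
assert screen_prompt("alice@example.com", "Summarize this public filing.") is True
assert screen_prompt("mallory@example.com", "Summarize this public filing.") is False
assert screen_prompt("alice@example.com", "Review this trade secret formula.") is False
assert len(audit_log) == 3
```

The point of the sketch is evidentiary as much as preventive: access restrictions plus logging produce exactly the kind of record a trade secret owner would offer to show “reasonable measures.”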

No Special Treatment for AI Disclosures

Trinidad and Heppner mark the beginning of what will likely be an extended period of judicial development at the intersection of generative AI and trade secret law. Their core lesson is straightforward: sharing confidential or proprietary information with a public generative AI platform, without appropriate contractual and structural safeguards, is legally equivalent to disclosing it to the world. Courts apply existing doctrine—whether trade secret, privilege, or otherwise—and are unlikely to fashion special exemptions for AI-related disclosures.

For trade secret owners, this means that the reasonable measures standard must now be understood to encompass the AI environment. Companies that have not yet done so should, at minimum, move users to commercial or enterprise AI tiers with appropriate contractual protections, adopt clear internal policies governing what information may be entered into AI tools, train employees on those policies, and audit their existing trade secret protection programs to identify gaps. The legal framework is already in place; what remains is for companies to apply it with the same diligence in the AI age as they were required to do in the internet age.


Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com.
