IPWatchdog LIVE: From ‘Sneaky AI’ to ‘Ontology’, What IP Attorneys Need to Know About Contracting for AI Acquisition

“If users don’t know the AI is there, they cannot govern it.”

From left: Judge Ryan Holte, Stephanie Curcio and TJ Whittle

As artificial intelligence adoption accelerates across both commercial and government sectors, traditional contracting frameworks are being stretched beyond their limits. That tension was the focus of a panel at IPWatchdog Live 2026 today, featuring Judge Ryan T. Holte of the U.S. Court of Federal Claims; Stephanie Curcio, co-founder and CEO of NLPatent; and TJ Whittle, Legal Counsel at Anduril Industries.

The discussion made clear that AI is not just another category of software. It introduces new terminology, new risk allocation problems, and new sources of ambiguity that contracts are only beginning to address.

Defining the AI Contracting Vocabulary

Curcio, speaking from the vendor side, began with the vocabulary she believes parties need before they can negotiate intelligently. One of her key distinctions was between “closed pool” and “open pool” AI systems. In a closed pool system, customer data and inputs remain contained and are not used to further train the model. In an open pool system, user inputs may be used to train the model, sometimes without the user fully appreciating that this is happening. That distinction matters immediately for confidentiality, data rights, and downstream risk.

She also introduced the concepts of “sneaky AI” and “embedded AI.” “Sneaky AI” refers to AI functionality that suddenly appears in a product that did not previously include it. “Embedded AI” describes AI woven so deeply into a product that users may not even realize they are interacting with it. Her point was straightforward: if users don’t know the AI is there, they cannot govern it.

Curcio also described what she called the “AI accountability waterfall.” Risk flows downstream, but responsibility often becomes less clear at each step, particularly when end users have little visibility into training data, model changes, or how outputs are generated. In her view, traditional software contracts often fail to account for that structure.

She identified four ways AI contracting departs from ordinary software deals. First, the product itself can change over time. Second, ownership of outputs is often unclear. Third, AI creates more complex data-rights questions. Fourth, it introduces liability gaps, especially where users cannot see whether the system was trained on problematic data or whether their own data is being reused in unexpected ways.

The GSA and the Push Toward Standardization

Whittle brought a government contracting perspective and focused on how federal procurement may shape AI contracting norms more broadly. He noted that while the government has gradually become more comfortable procuring commercial software, AI presents a different challenge because commercial practices themselves are still unsettled.

He pointed to a proposed rule from the General Services Administration (GSA) as a key development. In his telling, the proposed clause takes a strongly government-protective approach to AI data rights and could be inserted into commercial contracts with the GSA. The practical concern is that while contractors may retain rights in their own background intellectual property and datasets, data obtained from the government may not be folded back into broader model training or commercial product development.

Instead, that data may need to be siloed for government-specific use. Whittle suggested that this restriction could significantly affect the value of government AI contracts. And because federal procurement practices often influence the broader market, those restrictions could shape commercial expectations about acceptable data use.

Ontology, Definitions, and How Courts Are Responding

Judge Holte focused on how these issues are beginning to surface in disputes. He pointed to a 2024 bid protest involving Ekagra Partners and Customs and Border Protection, where the concept of “ontology” became central. The dispute turned on whether the offeror’s prior experience actually matched the solicitation’s requirement for ontology development in support of AI applications.

His point was that technical definitions matter, and they matter even more when contracting specialists and technical specialists are speaking past one another. In the Ekagra case, the protest failed because the offeror's past work, as described in its submission, did not satisfy the technical requirement the procurement actually demanded; the outcome turned on whether the parties' language matched the language of the solicitation.

Holte also discussed how courts and the judiciary are responding to AI more broadly. He noted guidance warning against inputting sealed, sensitive, or confidential information into AI systems and emphasized that judges cannot delegate core judicial functions to AI. He also pointed to the growing expectation that lawyers verify AI-generated work and protect client confidences when using these tools.

That broader legal uncertainty also surfaced in Holte’s discussion of Thomson Reuters v. Ross Intelligence, the copyright dispute over Westlaw headnotes. He used the case to illustrate that courts are only beginning to sort out the legal consequences of AI training and competing uses of copyrighted material.

Contracts as the First Line of Governance

One of the most practical moments came during audience questions, when Gene Quinn asked who actually bears the risk when AI produces incorrect outputs. Curcio noted that insurance remains difficult for many AI companies to obtain because the risks are still not well understood. That means a vendor’s promise to take on liability may have limited value if the company lacks the insurance or resources to back it up.

Taken together, the panelists offered a clear message. AI contracts now do much more than set price and performance terms. They are becoming the first line of governance for systems that evolve over time, rely on uncertain data practices, and operate in a legal environment that is still being built.


