The AI Ethics Waterfall: Disclosure, Governance, and Who’s Really Responsible

“The [AI] current is moving whether you are ready or not, and preparation is the only thing that separates those who come out intact from those who do not.”

From a trickle just a few years ago, AI use in the patent profession has become a rushing torrent. AI tools, features, and applications are now an integral and sometimes invisible part of patent practice. From invention harvesting and prior art searching to drafting, filing, opinion work, litigation, and licensing, the savvy patent practitioner almost certainly has AI embedded somewhere in their workflow.

In some contexts, AI is obvious. Generative tools used for drafting or summarization are hard to miss. In others, AI operates quietly in the background, embedded as a feature within legacy platforms or integrated into research tools with little outward indication of how results are produced. A software platform may incorporate AI features by default, often without updating contracts or giving users clear visibility. Users log in to familiar tools and find new functionality waiting. This “sneaky AI” is becoming the rule rather than the exception. Even experienced practitioners may not always know when AI is being used, when it is not, or what specific role it plays under the hood. Clients typically have even less visibility.

These overlapping and often fuzzy responsibilities give rise to what we have coined the “AI ethics waterfall” – a cascading chain of accountability that flows from managing counsel down through foreign associates, third-party vendors, and AI providers, each of whom influences and controls how patent work is performed.

The central challenge is that AI use does not stop at your own lawyers. A managing firm may not deploy AI directly, but a foreign associate might rely on AI-powered prosecution tools. A third-party vendor may use AI-driven search or analytics. Each link in the chain adds distance between the client and the technology shaping their work. In practice, disclosure tends to attenuate as responsibility diffuses, leaving clients least able to detect or control AI use precisely where oversight is weakest. That is the waterfall. And right now, almost no one is talking about it.

A Framework that Hasn’t Caught Up

Start with something as basic as the engagement letter with outside counsel – in many cases, AI is not mentioned at all. Is there a duty to disclose? In the United States, American Bar Association (ABA) Formal Opinion 512 addresses AI use through existing duties of competence and confidentiality but stops short of establishing a clear disclosure obligation. In Europe, patent attorneys are guided by the EU AI Guidance (EPI guidelines) to “establish, in advance of using generative AI in their cases, the wishes of their clients with regard to the use of generative AI,” though what that means in practice is equally unclear. Is proactive disclosure required, or informal acknowledgment if asked, or nothing at all unless a problem arises? Compounding the uncertainty, there is no settled standard for distinguishing AI as a routine tool (think spellcheck) from AI that meaningfully shapes legal judgment or work product. At what point does disclosure shift from a best practice to an ethical obligation? There is no single source of truth and, to make matters worse, the tectonic plates are shifting daily.

Governance mechanisms could theoretically address this. Many organizations (corporations and law firms alike) already issue security or compliance questionnaires to suppliers, but this generally happens only once, at the outset of an engagement. Technology does not stand still, however – as products change and evolve, so do their underlying or supported technologies. The administrative burden of revisiting these questionnaires every time a new feature is released or an update is pushed into production is significant on its own, and regulatory frameworks like the General Data Protection Regulation (GDPR) add further complexity when multiple intermediaries are involved.

The administrative burden falls unevenly, too. Large multinationals can dictate terms, requiring their law firms to adopt AI tools to reduce costs or prohibiting AI use altogether. Smaller companies rarely have that kind of leverage. They often inherit whatever choices their outside counsel or vendors make, with little ability to negotiate or meaningfully oversee the role AI plays in their own IP strategy, even if their concerns about transparency, accuracy, and risk are exactly the same. If disclosure and governance obligations exist, who bears them, and who is realistically positioned to enforce them?

Padding Your Barrel

Is it inevitable, then, that we are swept over the AI ethics waterfall, or can we actively plot a course through it? To answer this, it is worth recalling a true story of waterfall navigation.

In October 1901, Annie Edson Taylor became the first person to survive going over Niagara Falls in a barrel. She did not survive by accident. She padded her barrel with a mattress, used a bicycle pump to compress the air inside, and two days before attempting the journey herself, sent a cat over the falls to test the barrel’s strength. Annie survived with only a gash to her head. Without that preparation, she would have died.

Those of us navigating the AI waterfall need to adopt Annie’s mindset. The current is moving whether you are ready or not, and preparation is the only thing that separates those who come out intact from those who do not.

That starts with asking better questions of the vendors and counsel you work with. How is AI being used in your workflow? What happens to client data downstream? Although most providers are not volunteering this information, that does not mean you cannot ask. For managing counsel, the ethics waterfall places a particular responsibility at the top of the chain: at a minimum, review your terms of engagement, understand where liability sits, and negotiate changes where you need to. Choose service providers who are willing to have that conversation openly rather than treating AI as something obscure. The best providers will engage collaboratively on prompts, workflows, and risk profiles. They will explain not just that they use AI, but how and why. When used well, AI is more than an efficiency tool; it can genuinely transform how IP work is done and elevate the downstream benefits to clients. IP Heads should be asking how their managing counsel and service providers are adding value beyond what AI alone can deliver. In a profession where AI is reshaping the cost and quality of legal work, opacity is not a sustainable position.

Start Asking Better Questions

We recognize this article has raised a lot of questions without providing complete answers. That is intentional. Both authors are pro-AI; however, we see a genuine gap between how AI is embedded across patent workflows today and the governance frameworks meant to oversee that work, and we do not think there are simple solutions.

How should disclosure obligations evolve across the full chain of actors, not just at the firm-client level? How can smaller organizations protect their interests without the leverage of larger ones? You may not be able to control every current in the chain, but you control whether you ask, whether you disclose, and whether your engagement practices reflect the realities of how work is actually being performed today. That is how you keep your head above water.


Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com.

