AI, IP and Data Risk: Responsibly Adopting AI While Safeguarding IP | IPWatchdog Unleashed

This week on IPWatchdog Unleashed, we feature a panel discussion that took place on October 27 as part of our annual life sciences program. Initially styled as a conversation about how artificial intelligence (AI) is transforming life sciences, it quickly became apparent that the conversation would not be limited to the life sciences sector. Instead, it evolved into a robust discussion about data risk and intellectual property, focusing on what every innovative company should have front of mind when considering the adoption of AI tools.

Indeed, the discussion this week is really about what all companies must do to allow, or even encourage, the use of AI by researchers and scientists without opening the door to catastrophic IP and data security failures, risk and liability. And we say it that way because, whether you know it or not, your employees are already using AI, and if you make it difficult or impossible for them to use AI tools at work, they will simply do things at home or on their smartphones. So, against this backdrop, we address data protection, trade secret risks, employee monitoring, AI governance, the importance of using AI responsibly, reasons to provide your workforce with safe, vetted AI tools, and much more.

I moderated the panel, which included Brian Cocca, Executive Director & Assistant General Counsel for IP at Regeneron; Gary Lobel, Chief IP Counsel; Dhruv Sud, Senior Director & Assistant General Counsel at Regeneron; and Giulia Toti, co-founder of Anchor AI. Collectively, we unpack the real-world collision between AI adoption, IP protection, data governance, competitive strategy and risk.

AI: The Productivity Engine That Also Keeps You Up at Night

Brian kicked off our discussion with a simple truth: AI sparks both excitement and fear—and both emotions are warranted. Most organizations are already using AI more than they realize. Some of it is sanctioned. Some isn’t. That complexity alone creates operational and legal exposure.

Gary doubled down on the point, explaining that companies must assume their teams are already experimenting with AI tools behind the curtain—regardless of whether they have been given safe, vetted tools to use or are taking it upon themselves to use widely available tools like ChatGPT that generally have no safeguards. Gary warned that companies trying to block AI altogether are simply ceding competitive ground to those who harness it intelligently, and likely driving their employees to unauthorized AI tools that pose very real data security and IP risks, including the risk that proprietary information about innovations is shared in environments where whatever the tool ingests is used for training. This means that if leadership isn't ahead of use by internal teams, risk is already compounding and rights, such as trade secrets, are already being lost.

Giulia made it clear that AI tools introduce a new class of vulnerabilities—data leakage, privacy violations, unauthorized model training, and inadvertent disclosure of confidential or trade-secret material.

The bottom line: AI is a double-edged sword. Powerful. Fast. And dangerous if unmanaged.

Data Protection: The Non-Negotiable Foundation

The panel hammered home a critical operational truth: AI’s value depends entirely on the integrity and security of the data it touches.

Brian reminded the audience that AI systems often ingest enormous datasets—and without disciplined data controls, it’s easy for sensitive material to slip into places it shouldn’t. Once data leaks into an external model, the loss is irreversible. Dhruv and Giulia outlined how leading companies are building governance frameworks to avoid data contamination, align AI use with business priorities, and ensure compliance with internal policies and external regulations.

The message was straightforward: if your AI program isn’t anchored in a thoughtful and deliberate data-governance framework, you’re gambling with your intellectual property.

The Competitive Imperative: Use AI or Fall Behind

Giulia highlighted the strategic upside of AI adoption, explaining that when implemented responsibly, AI can compress timelines, sharpen decisions, and optimize workflows across R&D, legal, and operations.

But adoption without due diligence is reckless. Brian stressed the need to rigorously vet AI vendors—not just for technology, but for their data policies, their security posture, and their history of breaches. Equally important is instilling a culture where employees understand AI boundaries and the risks of improvising with unapproved tools.

The Path Forward: Innovate Boldly, Govern Aggressively

The panel closed with a clear message. AI is unavoidable, and the organizations that thrive will be the ones that operationalize it responsibly. That means embracing AI as a core capability, implementing governance frameworks that actually work, protecting IP and confidential data with uncompromising rigor, and shaping internal policy rather than reacting to external failures.

Companies of all sizes and in every sector are standing at an inflection point. AI can unlock extraordinary advancements for organizations disciplined enough to manage the risk while scaling the opportunity. And for those organizations that fail to thoughtfully adopt responsible AI, potentially catastrophic risk and liability are waiting around the corner.

More IPWatchdog Unleashed

You can listen to the entire podcast episode by downloading it wherever you normally access podcasts or by visiting IPWatchdog Unleashed on Buzzsprout. You can also watch IPWatchdog Unleashed conversations on the IPWatchdog YouTube channel. For more IPWatchdog Unleashed, browse our growing archive of previous episodes.
