Weighing the Risks and Rewards of Generative AI for Legal Departments

“It’s incumbent upon us as learned intermediaries to help organizations understand what [generative AI] can do and what it can’t do.” – Allison Gaul, Boston Consulting Group


From left: John Zevitas, Allison Gaul, Elina Litvak Cohen and Ray Chohan

Generative artificial intelligence (AI) platforms are already reshaping work life for many professionals, including those in the legal industry. On Day 3 of IPWatchdog LIVE 2023, a panel discussion titled “Impact of Generative Artificial Intelligence on Law and Innovation” explored ways that in-house legal teams can advance their company’s use of generative AI to improve productivity while balancing the need to protect confidential data and intellectual property.

Trend Characterization and Document Summaries Among Leading Use Cases for AI

The past year has seen tremendous growth both in the capabilities of generative AI platforms and in corporate issues caused by the use of such systems. Consumer tech giants Apple and Samsung have both restricted the use of ChatGPT on company-owned devices in response to disclosures of confidential information. During a fireside chat on Day 3 of IPWatchdog LIVE 2023, U.S. Patent and Trademark Office Director Kathi Vidal discussed an internal memo to patent examiners restricting their use of ChatGPT in drafting office actions and other documents.

While OpenAI’s flagship generative AI platform may receive the greatest amount of media attention, Ray Chohan, Co-Founder of PatSnap, told the audience that his company was focused on narrow use cases for AI that can unlock value in daily corporate tasks. John Zevitas, Vice President and Managing Legal Counsel of T. Rowe Price, saw three main use cases for generative AI for implementation by his legal department: summarization of content from large documents like Form 10-Ks; characterization of trends from large data sources; and creation of memos and other documents that demonstrate thought leadership in the field.

More popular implementations of generative AI carry the potential for creating greater problems within the corporate context. Allison Gaul, Legal Counsel at Boston Consulting Group, discussed issues with external consumer-facing implementations of generative AI such as chatbots. “Everyone’s favorite thing to do is add a chatbot,” she said, “but understandably a chatbot can go wrong in a lot of ways.” Gaul added that personally identifiable information received from consumers can cause chatbots to introduce bias based on consumer demographics, requiring corporations to exercise a higher level of care when collecting consumer datasets. Elina Litvak Cohen, General Counsel of Skeps, noted that chatbots are often the first point of corporate contact for consumers, making it necessary for legal professionals to assess the use of chatbots by marketing and other corporate teams.

Taking a proactive stance regarding the legal implications of corporate use of generative AI platforms was an important step discussed by multiple panelists. Zevitas discussed T. Rowe Price’s use of a steering committee on generative AI that not only evaluates data hallucinations and biases but also builds what he called a “playground” allowing corporate personnel to experiment with those platforms without risking IP disclosures. Gaul cautioned against legal professionals buying into the hype cycle around generative AI. “It’s incumbent upon us as learned intermediaries to help organizations understand what the tech can do and what it can’t do,” she said, adding that engagement with technology vendors to understand the information security vulnerabilities of those platforms is critical to preventing the risk of IP leakage outside the company.

Should Legal Counsel be Early Adopters or Early Educators on AI Platforms?

However, not every attendee of the panel felt that slow adoption of generative AI was the proper approach to technological development. “It’s easy for lawyers to say that there are issues with this, but there are also issues with human programming,” said Karmanya Singh Sareen, Partner at Kommit Techno-Legal LLP. “The business advantage comes from early adoption and not early education.” While disclosure risks through AI platforms aren’t ideal, Sareen noted that similar risks were at play from the corporate use of other online services like Google Search. In response, Gaul noted that while she wasn’t advocating for in-house counsel to avoid the use of generative AI entirely, it’s important for the legal industry to educate itself when adopting such platforms from a risk mitigation standpoint.

Although Gaul acknowledged that currently the development of generative AI and large language models is in its “wild west” phase, AI vendors are starting to become more cognizant of the legal issues at play. Adobe Firefly was mentioned as one such platform that was diligent in ensuring the use of licensed and public domain images in training generative AI models. Zevitas said that, while some companies like OpenAI strongly defend their use of pirated and copyrighted data sources available on the Internet to train their platforms, some vendors creating enterprise-level platforms were beginning to offer indemnification for IP infringement issues created by platform outputs. He added that the United States had more restrictions around data scraping than countries like Japan, which in 2018 amended its copyright laws to expand opportunities for data scraping to train AI platforms.

Whereas other countries are changing their legal systems to better embrace generative AI, a great deal of dismay was expressed over a recent AI bill introduced into the U.S. Senate by Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT). If enacted as drafted, the bill would remove immunity for generative AI platforms under Section 230 of the Communications Decency Act in an effort to dissuade the use of such platforms to develop deepfakes. Both Chohan and Gaul commented on the challenging nature of the proposed legal framework, with Chohan in particular noting that it would be incredibly difficult to implement the provisions of that draft legislation. Gaul noted that the rate of technological change will only increase in the years to come, meaning that the rate of regulatory response to new technologies will increasingly lag behind the cycle of technology adoption.
