AI Masters Panelists on State of the AI Landscape: Time for Companies to Slow Down and for Policymakers to Speed Up

“We’re witnessing a big gold rush with these companies wanting to release these systems before they’re ready for prime time. Companies need to hit the brakes.” – Martijn Rasser, Datenna at IPWatchdog’s AI Masters

AI Masters

Martijn Rasser (left) and Judge Paul Michel

Panelists on day one of IPWatchdog’s Artificial Intelligence Masters 2024 program painted a sometimes-grim picture of the current state of generative AI (GAI) tools and the ways in which they are being deployed in the United States, but seemed convinced overall that the kinks would be worked out once lawmakers and courts catch up, as they have done with past disruptive technologies.

Last year, IPWatchdog held its first AI Masters program and panelists there were chiefly concerned about how IP offices would adapt to allowing copyright and patent protection for creations made using GAI tools. Since then, the U.S. Patent and Trademark Office (USPTO) and the Copyright Office have clarified their rules around AI in the face of various lawsuits and administrative appeals. But the last year has seen a host of new lawsuits by copyright owners against OpenAI and others for the way these companies train their AI systems.

AI Embarrassments

During the first session of the day, panelists discussed the intersection of law and technology, recounting some of the public blunders GAI systems have had and their legal implications. Jason Alan Snyder, Global Chief Technology Officer at Momentum Worldwide, said his role has changed recently as AI technologies evolve. “I now spend a lot less of my time as a technologist and futurist and more with attorneys,” Snyder said, as the privacy and ethical implications of GAI have become paramount. But—harking back to his comment last year that it would be about 15 years before AI becomes fully sentient—Snyder said we still have a long way to go before we need to be really afraid. “[AI] certainly doesn’t have agency and it’s certainly not going to take over the world tomorrow,” he said.

This didn’t make IPWatchdog Founder and CEO Gene Quinn feel any better, however, because “that kind of implies it could take over the world eventually,” Quinn said, to which Snyder replied, “no doubt.”

But GAI and the large language models (LLM) on which they’re based still have a lot to learn if they’re going to become the superior race. Examples like the phenomenon of AI “hallucinations” and the ability to trick GAI systems into revealing confidential information via “divergence attacks,” demonstrate that these tools are still very much in their infancy.

Recent instances of such gaffes include:

  • Recent reports that Google’s Chatbot, Gemini, when asked to produce an image of George Washington, returned a Black version of the first U.S. president;
  • Gemini also recently claimed that it is “impossible to say” whether Elon Musk has been worse for the world than Adolf Hitler.
  • AI Masters panelist Malek Ben Salem, Managing Director at the Office of the CTO for Accenture Security, noted that, in a study done by Stanford University that looked at all of the existing LLMs and how many hallucinations they generate within the legal sector, 75% of the output is pure hallucinations;
  • Creighton Frommer, Chief Counsel, Intellectual Property, Technology & Procurement at RELX, said that ChatGPT has been the victim of so-called “divergence attacks” in which a user asks the system to repeat the same word over and over until it reveals confidential information. In one recent case, the OpenAI chatbot eventually churned out confidential training data.
  • Microsoft’s Copilot has recently been shown to taunt users who ask whether they should end their lives.

The solutions to these problems are not simple, but the panelists said one way forward may be to start networking different AI engines with different strengths together to improve their results. Ben Salem said companies also need to do the hard work. She explained that GAI models have a foundational layer where they are only learning from simple language patterns they have been exposed to. But in the next layer, additional instructions and “safety guardrails” must be added, such as “do not generate content that inflicts harm,” for example. “That’s where the work comes in,” Ben Salem said. Snyder also said that quantum computing will play a big role in improving GAI technology as we move forward and that he expects that to be the next big topic in the space.

But in terms of biggest concerns, Ben Salem said that it is the small number of entities controlling these technologies and the resultant “immense power that certain entities will get over the rest of us” that really scares her. Snyder agreed, and said that “you have to remember that most of the world doesn’t have reliable access to electricity or clean water. The 1% has this computational power that can enhance intelligence and automate processes, and that is not trivial.”

Another major concern identified by the panelists was the complete inability to understand what content the GAI systems are being trained on, which essentially seems to consist of everything that can be found on the internet by data mining companies. This “garbage in, garbage out” approach has major reputational implications for copyright owners and exacerbates the tendency for “hallucinations,” among other issues. Quinn said it seems like, at some point, some of these tools will just have to be thrown out and started over from scratch because of the complexities around erasing the bad information.

Lawyers: It’s Your Job to Educate

In the second panel of the day, Judge Paul Michel pleaded with the legal profession to educate law and policymakers to ensure the United States catches up to Europe and China in time when it comes to regulating and investing in AI.

Jennifer Kuhn, Assistant General Counsel at Tricentis, provided a short overview of the EU AI Act, which is likely to be published in April and become effective shortly thereafter. The Act does not align with U.S. approaches to regulation and companies will need to tailor their practices and policies to meet EU standards if they plan to be in the EU at all, which means Europe will essentially be setting the bar when it comes to AI policy. For certain types of products, the EU will require companies to have “meaningful human oversight” of AI tools, particularly when it comes to areas like education. Other technologies, like remote biometric identification and AI social scoring systems, will be completely forbidden.

Judge Michel said there’s an immediate need to strike the proper balance between over-regulation, on the one hand, and where the United States currently is, on the other:

“The competition is between the rule of law grounded in a political structure of a country versus a free for all by whoever has the power or money to do whatever they decide is in their own best interests. The question is whether the rule of law system that connects to political structures can learn fast enough to be constructive in providing guideposts, limits, ways of assessing behavior so in the end that mostly controls versus the free for all. And that, in turn, will all depend on how fast people in this room and your peers elsewhere who have the knowledge can teach policymakers and those who influence them well enough and fast enough to get ahead of this problem.”

Michel added that, left to their own devices, legislators and executive branch officials “are woefully unprepared to do what seems absolutely vital,” and will need as much help as they can get.

Martijn Rasser, CRO and Managing Director at Datenna, said that the Department of Defense has been successful in deploying GAI tools that are reliable and that “it’s very much possible with the right guidance to have AI systems you can trust.” However, he added, “much of the development is in the private sector, where there are no guardrails at all right now.” He said it’s a matter of slowing down to curb some of the problems that have arisen. “We’re witnessing a big gold rush with these companies wanting to release these systems before they’re ready for prime time,” Rasser said. “Companies need to hit the brakes because once it’s out in the open, you can’t un-invent these models.”

Two other panels on Monday explored fair use and the future of copyright law as it pertains to AI and assessing AI risks. Tomorrow will include a full day of six panels exploring AI implications for trade secrets, antitrust, IP legal practices, and much more. Register here to attend.

Share

Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com.

Join the Discussion

No comments yet. Add my comment.

Add Comment

Your email address will not be published. Required fields are marked *