AI-Driven Innovation: Can & Should it Be Regulated?

In a recent post on Global Futurist, author Matthew Griffin details a breakthrough by Google’s DeepMind engineers, who have developed artificial intelligence with a human-like memory system. Typical artificial intelligence (AI) systems suffer from an amnesia-like flaw: they fail to retain knowledge acquired through previous tasks. With each new task assigned to an AI system, previous lessons must be learned all over again.

By overcoming this issue of “catastrophic forgetting,” as some call it, this new AI can effectively build and retain memories. With machine- and deep-learning systems becoming increasingly well-suited for complex tasks, the range of applications for such technology grows by the day.

One area experiencing explosive growth in recent years is the use of AI in the computer generation of inventions. Whether because of the need to analyze massive data sets or the sheer complexity of the problems involved, AI facilitates innovation that could not previously be accomplished through human ingenuity alone.

The Spectrum of AI-Driven Innovation

At one end of the spectrum of AI-driven innovation are the present-day applications of the technology. Futurist Maurice Conti described such processes as intelligence augmentation in a recent article in the Autodesk publication Redshift.

“One means of augmentation is generative design, a computational system that allows engineers and computers to co-create things they couldn’t have accomplished separately. Engineers start by creating a problem statement and inputting goals and constraints. Then they use machine-learning algorithms and cloud-computing power to churn through tens of thousands of options, yielding solutions that no human alone could have designed.”
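
The generative-design process Conti describes can be sketched in miniature: the engineer supplies a goal and constraints, and the computer churns through thousands of candidate designs. The toy model below (a minimal sketch; the beam dimensions, strength formula, and numbers are illustrative assumptions, not real engineering) uses simple random search where production systems use far more sophisticated algorithms.

```python
import random

def weight(width, height):
    """Goal supplied by the engineer: minimize material used (toy proxy)."""
    return width * height

def satisfies_constraints(width, height):
    """Constraint supplied by the engineer: the part must carry the load
    (a made-up strength model for illustration)."""
    return width * height ** 2 >= 500

def generate_design(rng):
    """The computer proposes a candidate design."""
    return rng.uniform(1, 20), rng.uniform(1, 20)

def search(iterations=10_000, seed=0):
    """Churn through thousands of options, keeping the best feasible one."""
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        w, h = generate_design(rng)
        if satisfies_constraints(w, h) and (
            best is None or weight(w, h) < weight(*best)
        ):
            best = (w, h)
    return best

print(search())
```

The division of labor mirrors the quote: humans state the problem (goals and constraints), and the machine explores a design space far larger than a person could evaluate by hand.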

In the near future, Conti sees humans and industrial robots regularly working side-by-side on production lines to co-fabricate goods. By leveraging the different, yet complementary skill sets of humans and robots, industry will become smarter and more efficient. Beyond industry, other business leaders see artificial intelligence playing an instrumental role in traditional office jobs. Peter Schwartz of Salesforce recently proclaimed AI to represent the future of sales, as AI-powered personal assistants will soon assist with tasks ranging from lead generation to pipeline management and more.

Farther down the road, Conti envisions a world where augmentation enables products and structures to be grown and harvested rather than fabricated or constructed. By using biomimicry to “grow” goods rather than building them, AI would increase human capabilities exponentially. Along the same lines, but requiring less imagination, graduate student Erica Fraser recently wrote for Script-ed on the possibilities of AI-driven innovation. In her paper, she stated, “At the far end of the spectrum, a computer could autonomously generate outputs that would be patentable inventions if otherwise created by a human.”

AI & New Paradigms in the IP Landscape

Historically, legal decisions have dismissed the notion of a machine or autonomous program being the inventor. In fact, as a recent contribution to Lexology states, “Congress has stated that the Patent Act is intended to ‘include anything under the sun that is made by man.’”

However, with the increasing use of AI to augment the pace and scope of innovation, questions arise as to who owns the invention and subsequent IP rights in cases where AI is involved. Consider the hypothetical scenario raised by the Lexology post mentioned above:

“Company A develops an AI program or machine, which it sells to Company B. Company B operates that AI on resources owned by Company C, such as servers in a cloud computing environment. Company B also obtains data from Company D that is used to train the AI. After training, the AI produces an invention – so who is the inventor?”

Another issue raised in the same post is that of AI infringing protected IP. Through the process of machine or deep learning, it’s entirely possible that an autonomous system might infringe on a protected process or system. How does one determine which party is the infringer in an example such as the one laid out above?

Similarly, how can one determine who induced the infringement, given that current law suggests an alleged inducer “must have knowingly aided another’s direct infringement of a patent”?

AI-Related Legislation

With growing adoption of artificial intelligence in products, machinery, and systems, there is concern about who bears responsibility for injury or financial loss caused by AI. As AI systems learn new behaviors with the intent of improving efficiency or other pre-defined goals, who is liable for unintended consequences? A recent TechCrunch article discussed this very issue, stating:

“How will the legal system treat reinforcement learning? What if the AI-controlled traffic signal learns that it’s most efficient to change the light one second earlier than previously done, but that causes more drivers to run the light and causes more accidents?”

A 2007 case in the state of New York involving a robotic gantry loading system that injured a worker appears to set some precedent. In that case, because the manufacturer had followed all applicable regulations, it was not found liable. In most cases, liability attaches only if it is proven that the manufacturer was negligent or could have foreseen the harm.

However, as a new and emerging technology, artificial intelligence is a hot topic of discussion and calls for greater oversight and regulation of the technology are growing louder. Going beyond liability and public safety concerns, some pundits suggest that if legislators don’t act quickly, AI could lead to the most dramatic erosion of IP that the modern era has seen thus far.

Again, the issue comes down to recourse: you can’t charge a machine or algorithm with patent infringement or other offenses and haul it into court. Culpability must be placed with the party responsible for the use of the AI, and as suggested earlier, that responsibility is often shared among several entities in complex ways.

Ultimately, artificial intelligence has the capacity to increase the pace and scope of innovation to a degree never before seen in human history. However, without guidelines and legislation to establish culpability for actions performed in conjunction with AI, this technology may be opening Pandora’s Box to infringement on existing intellectual property.

Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com.

Join the Discussion

10 comments so far.

  • Benny
    May 5, 2017 01:02 pm

    Angry,
    No worries. Your dissatisfaction with the US patent office is well known around here and needs no reiteration.

  • angry dude
    May 5, 2017 12:51 pm

    Benny,

    I never ever defended USPTO’s quality of work which is the quality of patent application examination in the first place and nothing else

  • Benny
    May 5, 2017 12:37 pm

    Angry,
    Code generators exist, and they don’t work in the manner you described. They can easily reproduce results (by combination of library routines) which are patent protected because the USPTO is still granting patents which cover non-inventive disclosures.

  • angry dude
    May 5, 2017 10:48 am

    Andrew @4

    I hope you understand that this is just a BS to make waves

    All present-day “AI” is just a bunch of clever programming by *humans*
    The faster the machine, the more clever it seems to the uneducated

    The true AI, if possible at all, will probably eat your lunch (or you) before you know it 🙂

  • angry dude
    May 5, 2017 10:40 am

    “If I run a code-generation program whose output infringes a patent”

    Benny, your lack of education shows

    In theory every monkey typing on keyboard can eventually type e.g. “War and Peace”

    In practice, however, this is impossible

    If your computer monkey (aka code-generation program) can infringe on a patent AND produce a working useful code then that patent has serious validity problems

    I challenge you or your monkey “AI” computer to infringe on mine which boils down to just one(1!) math formula – about 15 characters in total

    The fact that that formula can be eventually typed by pressing random keys does not change the fact that you will never be able to choose a single working line of code from zillions of non-working lines or zillions upon zillions of garbage character sequences

  • Benny
    May 5, 2017 08:32 am

    I think the question is not, who is at fault, but who is NOT at fault. If I run a code-generation program whose output infringes a patent, can I point to the computer, whose performance I neither control nor can anticipate, as the guilty party? Wilful infringement? Who? Me?

  • Andrew
    May 5, 2017 01:31 am

    Hey angry dude – Here’s a great TED Talk on what AI-driven innovation is, and how computers are now capable not just of processing data, but of designing structures and things: https://www.ted.com/talks/maurice_conti_the_incredible_inventions_of_intuitive_ai

  • Night Writer
    May 4, 2017 02:30 pm

    Tort law is already there, as are other laws covering machines that do harm. The person who builds the machine and unleashes it is liable.

  • angry dude
    May 4, 2017 10:50 am

    I would be wary of any AI-driven innovation (whatever is meant by that) developed by Google.
    Manipulated web search results are already more than enough AI innovation coming from Google.

  • Anon
    May 4, 2017 08:02 am

    Is this not a “law of the horse” question?

    In other words, would not more general laws, not specifically tied to AI, suffice?