Artificial Intelligence Can’t Patent Inventions: So What?

“If you are developing an artificial intelligence that is making more and more judgment calls, and you want to seek patent protection for any resulting inventions, ensure there’s still a human involved in interpreting the final results.”

The USPTO’s recent landmark decision (16/524,350) concluding artificial intelligence (AI) cannot be a named patent inventor perhaps sparked fears of super-robots inventing critical technologies that, alas, receive no patent protection. If an AI identifies new, more efficient battery chemicals, will that new battery be unpatentable? If an AI builds chemical compounds that become the next wonder drug, will that drug-maker receive no patent? Are innovators who increasingly rely on AI to analyze data and generate solutions atop a slippery slope?

This article explains why the USPTO’s decision doesn’t preclude patentability of AI-assisted inventions and identifies questions IOT innovators must consider to best position themselves as AI evolves.

USPTO: Patent Inventors Must Be Human

In April 2020, the USPTO issued a decision concluding that an AI named DABUS could not be the named inventor on a patent application covering a fractal light signal with pulses designed to match human brain waves, making beacons that use the signal easier to see. Dr. Stephen Thaler, who created DABUS, is the application’s assignee, but DABUS was the sole named inventor.

The Artificial Inventor Project, which lists this patent application as part of a global push for AI inventorship, is led in part by UK law professor Ryan Abbott, who has written for years about the patentability of AI inventions. Interestingly, and somewhat ironically, DABUS itself is patented. The issue before the USPTO was whether something DABUS “invented” was also patentable.

In declining to issue a patent to DABUS, the USPTO reasoned that a patent may only issue to a natural person, citing patent statute language and Federal Circuit decisions.

Even If AI Can’t Be a Named Inventor, There’s a Human Involved in the Process Who Can Be

Though at first glance, the decision may appear to eliminate patent protection for AI innovations, its ultimate impact is much more modest. To understand why, consider three things.

First, AI used in the IOT field typically isn’t used to invent patentable systems or machines.

Pharmaceutical researchers are actively using AI to generate novel molecules that may become new, patented drugs. In contrast, the IOT industry generally uses AI to assist with, reduce, or eliminate human involvement in existing processes developed by a human inventor. Most uses of AI will not result in the AI itself generating novel, patentable inventions.

For example, an AI that predicts when smart factory equipment needs maintenance has performed useful work. But the AI’s output—a notification with a date/time prediction—is not “a new and useful process, machine, manufacture, or composition of matter.”  The AI itself is, of course, a “useful machine” that has performed a “useful process,” and may be patented, if novel. But what the AI produced here is not.
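To make the distinction concrete, here is a minimal sketch of that kind of predictive-maintenance check. The sensor readings, threshold, and scheduling heuristic are hypothetical choices invented for illustration, not any particular product’s method:

```python
from datetime import date, timedelta

def predict_maintenance(vibration_readings, threshold=0.8):
    """Estimate a maintenance date from recent vibration levels.

    A crude stand-in for a trained model: the faster readings trend
    toward the failure threshold, the sooner maintenance is scheduled.
    """
    latest = vibration_readings[-1]
    trend = latest - vibration_readings[0]
    if trend <= 0:  # readings stable or improving: routine check-in
        days_out = 90
    else:
        headroom = max(threshold - latest, 0.0)
        days_out = max(int(headroom / trend * 30), 1)
    return date.today() + timedelta(days=days_out)

# The AI's entire output is a date: a useful prediction, but not itself
# a "process, machine, manufacture, or composition of matter."
print(predict_maintenance([0.2, 0.3, 0.5]))
```

The point of the sketch is that even a genuinely useful AI system may emit nothing more than a notification, which is not by itself patentable subject matter.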

Second, even if the AI is designed to produce a patentable output, there will generally still be a human inventor.

Patent inventorship turns on who makes the “inventive leap” that produces a novel, non-obvious innovation. In the AI context, this usually means the inventor is whoever interprets the results. Patents are awarded to “[w]hoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof.” (emphasis added). The word “whoever” points to the person who interprets the results, not to the technology that harvests, organizes, and analyzes the underlying data.

For instance, if an AI analyzing data from smart mobility devices identifies and ranks five possible new processes for routing traffic through a major interchange, a human being will still review those rankings and select and implement the methods she finds most helpful based on training and experience. The human being who reviewed the ranked processes and selected one worth using would be the named inventor.

Third, the most common type of AI today is likely to have a human interpreting the results.

Most AI in use today is machine learning. Machine learning involves feeding an AI data, having the AI score or rank that data, then presenting the AI with new data to analyze based on its learned responses to the prior datasets. With machine learning, a human being almost always interprets whatever new analysis the AI’s neural network generates. This is true for most fields using AI, including IOT. Because that human will be the named inventor on any patentable invention, the USPTO’s decision that an AI cannot be the “natural person” named as inventor will not preclude patenting inventions conceived by a human based on data processed by a machine learning device.
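That feed-score-rank loop can be sketched in a few lines. The nearest-centroid scoring, labels, and data below are toy choices for illustration, not any particular IOT system:

```python
def fit_centroids(labeled_data):
    """'Train': average the feature vectors seen for each label."""
    sums, counts = {}, {}
    for features, label in labeled_data:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def score(centroids, features):
    """'Apply': rank labels for new data by distance to each centroid."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return sorted(centroids, key=dist)

# Prior dataset: (features, label) pairs the model learns from.
training = [([1.0, 1.0], "normal"), ([0.9, 1.1], "normal"),
            ([5.0, 5.0], "anomaly"), ([5.2, 4.8], "anomaly")]
model = fit_centroids(training)

# New data: the model produces a ranking, but a person reads it
# and decides what, if anything, to do about it.
print(score(model, [4.8, 5.1]))
```

The model only ranks; any inventive interpretation of the ranking still happens in a human mind, which is the article’s point.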

AI development remains a long way from producing a machine capable of mimicking the many judgments a human makes when originating a truly novel invention. As AI advances, however, innovators should consider the following.

Questions for IOT Innovators to Consider as AI Evolves

Are You Building a Neuro-Symbolic Hybrid?

In symbolic or classical AI, human programmers attempt to create artificial intelligence by expressing human knowledge as a series of facts and rules that a machine can follow. (Symbolic AI dominated the field into the 1980s; today, machine learning does.) In theory, a machine expressly designed to mimic human thinking could conceive something novel and non-obvious, the fundamental patentability requirements. Symbolic AIs are not yet sophisticated enough to do so, but some envision a future neuro-symbolic hybrid that might be.
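A minimal illustration of the symbolic style, with human knowledge written as explicit facts and rules that the machine chains together mechanically (the facts and rules here are invented for illustration):

```python
# Facts and rules are hand-authored by a human programmer;
# the machine only applies the rules until nothing new follows.
facts = {"device_is_battery_powered", "device_transmits_wirelessly"}
rules = [
    ({"device_is_battery_powered"}, "power_budget_is_limited"),
    ({"device_transmits_wirelessly", "power_budget_is_limited"},
     "should_use_low_power_radio"),
]

def forward_chain(facts, rules):
    """Derive new facts by repeatedly firing any rule whose premises hold."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
```

Every conclusion such a system can reach is already implicit in the human-authored rules, which is why symbolic AI alone has not produced anything like an independent inventive leap.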

If you are developing an AI that is making more and more judgment calls, and you want to seek patent protection for any resulting inventions, ensure there’s still a human involved in interpreting the final results who can be the named patent inventor.

Is Trade Secret Protection Viable?

Trade secret protection is broad, covering not just secret formulae, but any information that has economic value because it is not generally known to others who could use it (e.g., valuable business and financial information; unique processes, plans, or schematics; Google’s search algorithm). Unlike patent protection, trade secret protection requires no government process and lasts as long as the information retains value and secrecy. These advantages mean new, ever-changing advanced algorithms rooted in AI, and inventions “conceived” by AI, may soon be protected as trade secrets rather than patents.

For Software: What About Copyright?

AI helps write software, and software can receive copyright protection. But the U.S. Copyright Office only registers copyrights with a human author. Copyright only protects works from “the creative powers of the mind.”  Courts have said corporations, formed by humans, can own a copyright, but an animal (as an actual example) cannot. Having a human being review code and make any final analytical decisions should ensure a human author can be named and the copyright enforced.

IP Rights for AI Inventors?

If AI progresses to the point where a machine is the only mind involved at every stage of an invention, patent protection may theoretically no longer be available. We would then have to consider alternative forms of IP to capture the value of AI-generated inventions, whether companies will be forced to keep AI inventions as trade secrets, and whether we want a world that denies inventors of a different stripe credit for their inventions.

 


Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com.

Join the Discussion

6 comments so far.

  • Anon
    July 14, 2020 10:31 am

    I am not sure that the following will make it through, as posts with a relatively high number of hyperlinks are often caught by filters.

    But other conversation points that may be of interest:

    January 7, 2020: https://ipwatchdog.com/2020/01/07/epo-ukipo-refuse-ai-invented-patent-applications/id=117648/

    January 28, 2020: https://ipwatchdog.com/2020/01/28/epo-provides-reasoning-rejecting-patent-applications-citing-ai-inventor/id=118280/

    May 4, 2020: https://ipwatchdog.com/2020/05/04/uspto-shoots-dabus-bid-inventorship/id=121284/

    May 21, 2020: https://ipwatchdog.com/2020/05/21/dear-uspto-patents-inventions-ai-must-allowed/id=121784/

    By the way, did anyone else catch the WIPO three day event last week?

  • Anon
    July 14, 2020 10:26 am

    Thank you Mr. Lewis for the new viewpoint along the lines of “discovery” which has been attempted to be ‘read out’ of the statutory law by none other than the Supreme Court. See the writings of Sherry Knowles.

    I would be interested in your views of my prior provided ‘black box’ experiment which rains a bit (even) on your ‘discovery’ notion.

    See https://ipwatchdog.com/2020/05/04/uspto-shoots-dabus-bid-inventorship/id=121284/ as but one of several threads in which the discussion has been taking shape.

  • David Lewis
    July 13, 2020 04:26 pm

    I think that there are some points related to this article that deserve being articulated more clearly.

    On a practical level, an AI does not generally decide what problem to solve; a human operator does (by way of analogy, even when the solution to a problem is obvious once the problem is identified, being the first to identify the problem may make the solution “unobvious”). Additionally, it is quite likely that the human operator had significant input in limiting the parameters within which to search for a solution to the problem. Thus, I think that there is more than just interpreting the results that can be relied upon in identifying the human inventor(s).

    By way of analogy, again, if I have the inventive concept, and I give the idea to a “routineer” to develop and build, as long as the “routineer” is only taking routine steps to put the idea into practice, I am still the inventor. Admittedly, neither of these analogies is perfect, since the AI may be doing something that would not be just routine were it performed by a human. However, the AI’s contribution is totally predetermined by its programming and/or neural circuitry (or other circuitry), and could be computed and predicted (given enough time to do so); the AI is just implementing that which is predetermined by its programming and/or neural (or other) circuitry, which would seem reasonably analogous to a routineer.

    Also, when all else fails, 35 USC 101 states, “[w]hoever invents or discovers any new and useful…” invention – I would like to emphasize the word “discovers.” So, I don’t have to actually invent the invention (and then be the first to file) to be the inventor; I can merely “discover” the invention (e.g., one invented by my AI, before anyone else files). Further, regarding “discovering” an invention, no one would dispute that if I throw a bunch of chemicals into a garbage pail and accidentally create a cold fusion process, thereby creating a cold fusion reactor (and then document how to reproduce the process in a patent application), I am the inventor of that cold fusion process (and the same clearly applies for any other process/device/compound/manufacture). So, it logically follows that if I throw a problem and solution parameters at an AI, and it spits out a solution that I thereby “discover,” I am an inventor of the invention embodied by the solution.

  • Pat
    July 13, 2020 01:55 pm

    I respectfully disagree with David Stein. DABUS does not function according to either scenario. It has intentionality and truly conceptualizes, rather than manipulating its environment or exploiting the results of lab accidents. You need to drill down a little more.

  • Anon
    July 13, 2020 01:30 pm

    This article is a FAIL given the past conversations on the topic (including my own “black box” experiments that show that mere ‘evaluation’ cannot suffice to supply ‘Devisor’ status to a human who has NOT done the actual ‘devising’).

    Many of the “pre-conditions” (caveats) have also been discussed and distinguished.

  • David Stein
    July 13, 2020 11:42 am

    These scenarios about “AI-created inventions” are resolved pretty easily by analogy to more familiar scenarios.

    Example #1: A researcher is looking for a pharmaceutical that acts as an antibiotic for a certain microorganism. They create a range of samples of different chemicals and chemical solutions, and arbitrarily add them to petri dishes in different quantities. One of those samples demonstrates antibiotic properties.

    Example #2: A researcher is working with a certain microorganism and happens to notice that a particular chemical that was accidentally introduced to a petri dish acts as an antibiotic. (Think: penicillin.)

    In both of those examples, there is no question that the researcher is the inventor of the use of the chemical as an antibiotic (presuming novelty, etc.). In Example #2, the researcher didn’t even develop a theory or initiate an experiment – it was a serendipitous discovery.

    All of the “AI-as-inventor” scenarios posited to date are directly analogous to one of these two examples. In most, the machine learning component is a complex framework for running experimental simulations at the direction of the researchers – much like Example #1. In some (allegedly such as DABUS), the machine learning component is arbitrarily choosing solutions on its own and producing results – such as Example #2.

    In general, two key ingredients are missing from these “AI-created” inventions (including DABUS):

    (1) Initiative. The motivation to initiate a search for solutions to a particular problem, including an identification of the requirements and conditions of candidate solutions, and the overall criteria for considering whether a solution is “good.”

    (2) Perspective. A broader recognition that not only is a candidate solution likely to be “good” for solving a problem according to the (limited and artificial) conditions of the experiments, but that it has practical value in the real world, taking into consideration factors such as usability, cost, ethical considerations, public acceptance, and suitability for the field of technology.

    (Counterpoint: Cyc, a famous classical artificial intelligence algorithm, once suggested that one way to get to space is to “build a very tall building.” Perhaps theoretically true, but… lacking in perspective.)

    The problem of “AI inventors” will simply not arise until those two features can be incorporated into machine learning. Given that machine learning researchers have sought those features *throughout the entire history of machine learning* and appear to be no closer today, this is probably not a problem of critical urgency.