“The [UKIPO] Hearing Officer said the Office accepted that DABUS created the inventions in the patent applications, but that as it was a machine and not a natural person, it could not be regarded as an inventor. Moreover, ‘there appears to be no law that allows for the transfer of ownership of the invention from the inventor to the owner in this case, as the inventor itself cannot hold property.’”
The European Patent Office has refused two European patent applications that designated an artificial intelligence called DABUS as the inventor, following a non-public hearing on November 25, 2019.
The applications are for a “food container” (number EP3564144) and “devices and methods for attracting enhanced attention” (number EP3563896). They were filed by the Artificial Inventor Project, which has so far filed patent applications for the inventions via the Patent Cooperation Treaty (PCT) in the United States, United Kingdom, Germany, Israel, China, Korea and Taiwan.
DABUS was developed by Dr. Stephen Thaler, who is named as the applicant on the patent documents. (See “Artificial Intelligence Inventor Asks If ‘WHO’ Can Be an Inventor Is the Wrong Question?”, IPWatchdog, August 5, 2019.)
Inventor Has to Be Human Being
The EPO has not yet published its reasons for refusing the applications but merely stated that “they do not meet the requirement of the European Patent Convention (EPC) that an inventor designated in the application has to be a human being, not a machine.” The refusal refers to Article 81 and Rule 19 of the EPC.
Article 81 of the EPC states: “The European patent application shall designate the inventor. If the applicant is not the inventor or is not the sole inventor, the designation shall contain a statement indicating the origin of the right to the European patent.” Rule 19 concerns the designation of the inventor. Neither specifically addresses the possibility of a non-human inventor.
Professor Ryan Abbott of the Artificial Inventor Project told IPWatchdog that an appeal would be filed. He said he had not yet seen the EPO’s reasoning for the decision, which is expected to be published later this month.
UKIPO Encourages Debate
The UKIPO has also refused to accept the DABUS applications, saying they will be taken to be withdrawn at the expiry of the 16-month period for filing a statement of inventorship. The Office has published a decision setting out its reasons.
In the decision, the Hearing Officer, Huw Jones, said the Office accepted that DABUS created the inventions in the patent applications but that as it was a machine and not a natural person, it could not be regarded as an inventor. Moreover, as DABUS has no rights to the inventions, it is unclear how the applicant derived the rights to the inventions from DABUS: “There appears to be no law that allows for the transfer of ownership of the invention from the inventor to the owner in this case, as the inventor itself cannot hold property.”
However, the Hearing Officer added that the case raised an important question: given that an AI machine cannot hold property rights, in what way can it be encouraged to disseminate information about an invention? He said:
As the applicant says, inventions created by AI machines are likely to become more prevalent in future and there is a legitimate question as to how or whether the patent system should handle such inventions. I have found that the present system does not cater for such inventions and it was never anticipated that it would, but times have changed and technology has moved on. It is right that this is debated more widely and that any changes to the law be considered in the context of such a debate, and not shoehorned arbitrarily into existing legislation.
The UKIPO Formalities Manual was updated in October 2019 to state that an AI inventor is not acceptable. However, the Hearing Officer said this had no bearing on the decision in this case.
Further Debate Expected
Professor Abbott told IPWatchdog the decisions were not surprising, as “this is a highly novel issue of law for patent offices to deal with.” He added: “We expected that judicial or other multi-stakeholder involvement would be required.”
He said the principles driving the Artificial Inventor Project are that applicants should be guided by truth (i.e., if an invention has been made by a machine then they should not lie about it) and that making patent protection available for AI-generated works will incentivize innovation. He has set these arguments out in an article published in the WIPO Magazine.
The Project does not argue that AI can be the owner of a patent, added Professor Abbott. AI systems cannot own property, and there is no reason to change the law to allow this. “The incentives in the patent system work with the AI as an inventor, and the AI’s owner as the owner of the patent,” he said.
The Project is planning to work with local attorneys to file the applications in more jurisdictions this year.
Join the Discussion
Robot man | January 15, 2020 08:03 am
What’s wrong with an AI inventor? Let’s change the EPC!
Anon | January 14, 2020 10:14 am
I hesitate to post this, given that this thread is aging quickly, but I just stumbled upon an article published in The New Yorker way back in 1981 about a man with some serious ruminations about AI — from the 1950s — that is both a very long read and a very interesting (and humbling) one:
Anon | January 13, 2020 07:41 pm
I am not fond of legal requirements that (perhaps inadvertently) create ‘layers’ of what invention may mean.
Let’s stick to a meaningful and limited set of characteristics, all of which must be met, and ‘extras’ do not put a thumb on the scale. Something either is or is not an invention; it’s a binary, zero-sum equation.
Ternary | January 13, 2020 06:11 pm
The ‘expectation factor’ is not an exclusive requirement, in the sense that it has to be there. But if it is there, then it is (in my mind) an invention.
MaxDrei | January 13, 2020 04:31 pm
Ternary makes good points. The computer that wrote Bach-style music was of course laboriously and painstakingly instructed with “the Rules”. Try Googling: NYT George Johnson Nov 11, 1997 “Undiscovered Bach? No, a computer wrote it.”
And yes, on “obvious to try” cases, patent law in Europe draws a crucial distinction between a “grounded expectation of success” and an “understandable wish to succeed”. That, of course, paves the way for squeeze arguments on enablement versus obviousness.
Anon | January 13, 2020 12:31 pm
Except your view cuts out of the picture a rather sizable collection of both ‘Oopsie’ and ‘Eureka’ inventions that do not carry your “expectation” factor.
Also, your “expectation factor” may run into conflict with 35 USC 103’s “Patentability shall not be negated by the manner in which the invention was made.” (By requiring a sense of expectation, you are dictating to some degree a manner of how an invention may be made.)
Ternary | January 13, 2020 09:58 am
Here is a suggestion: the inventive step is the expectation of success and an activity to achieve that success. No computer can initiate an invention. The origins of an invention thus go back to a human. That is not to say that a computer does not generate a configuration of an idea. But the ultimate “inventive step” comes from a human, so far.
In the Bach example, baroque music rules have to be programmed for the computer to generate music. Clearly the rules are fairly strict, as the example does not say that people thought the “programmed music” was from another baroque composer like Handel or Scarlatti. Besides, Bach “borrowed” extensively from himself, so there are quite a few pieces that have his signature style.
The example computer would have been “inventive” if it generated for instance Bachianas Brasileiras, which are Bach inspired. But it cannot. Not without being programmed in the style of Villa-Lobos.
I would feel no hesitation to claim inventorship on an invention of which configurations were generated by a computer and I made a selection on the preferred configuration.
Anon | January 13, 2020 09:51 am
AOP Info Researcher,
I understand that you want a different question/focus of:
“Is it correct to assume that the machine has “Intellect” that produces the inventive step?”
And that is a fair enough question.
However, I am discussing the specific UK case, where THAT simply was not the question. In that case, one starts with the accepted proposition that the AI machine WAS the devisor.
Whether that accepted proposition is correct is not on point to the UK reasoning and application of law.
Yes, I certainly agree with you that such may well be an important, even driving, question. Be that as it may, it is not the driver in my discussion.
I do not remember the experiment, nor its actual relation to the Rule of Law that we have been discussing.
MaxDrei | January 13, 2020 08:34 am
Who remembers that experiment of long ago, with three pieces of music, one composed by the real J S Bach, one composed by a professor of music in the style of JS Bach, and the third written by an algorithm. The music-loving public was asked to listen to all three, then ascribe to each the true composer. Most chose the prof-composed piece as the one written by the machine. Most said that the machine-written piece was composed by J S Bach. How creative, how inventive, was the algorithm?
AOP Info Researcher | January 13, 2020 07:11 am
Anon: It is not enough to agree with the examiner(s), who are sometimes wrong in evaluating the inventive step… DABUS may not be the inventor even if the examiner feels so!
We know that patents fall under IP rights, which is Intellectual Property: Is it correct to assume that the machine has “Intellect” that produces the inventive step?
For now and broadly, this should be the main concern for all of us, not prior law or examiner’s thoughts.
Anon | January 13, 2020 06:59 am
With all due respect MaxDrei, an argument of “think etymology” while ignoring the context is beyond willful ignorance.
No, a thing employed does NOT make the thing into a person.
Nor should it. That ‘logic’ is a fallacy. What “goes without saying” cannot go without reasoning. Unsaid does not mean ‘to be ignored for its contextual value for a context directly on point.’
Perhaps it is a US perspective on ‘ownership of a human being.’ It is certainly true that throughout most of human history, that notion, as repulsive as it is to a ‘civilized’ mind, was a reality. Humans WERE owned. Now, I recognize that your position is more likely attempting to elevate a certain type of machine in your ‘brave new world,’ but the elevation you attempt unfortunately carries with it a notion of ‘equalization’ in the opposite direction, by ignoring the context that being a real human person is what creates a difference, a difference that squinting at the situation through an “etymology” lens obscures.
You ask “who knows” in regards to the example of a mower. The answer (already) is that at least the law knows. That’s why I point out that your “employee” angle (on its own), does NOT help but merely move the critical question down the road one step.
You conclude that “the line is not as sharp as it used to be” but you do not support that assertion with a cogent legal argument. You insert a point about Japan (which is an interesting one from an emotional point of view), but do not connect the dots with any support from Japanese law (which would clearly NOT support any trans-Sovereign impact to the UK case, anyhow).
If you want to challenge reasoning, then employ some reasoning in your challenge. There well may be reasoning in UK law that does NOT distinguish the human person ‘in the employ’ from other items (non-human things) that may be owned (or leased or otherwise) and be ‘in the employ.’
Quite clearly, etymology is not enough to move non-human to have ANY basis in human rights — ‘in the employ’ or otherwise (which is why the ‘one step down the road’ may be of no help).
Separately, although I can imagine that it might bring a smile to etymologists everywhere, ‘computer’ was originally a term for humans who computed (Mentats and Dune leap into my mind). Does your ‘etymology’ argument then ‘liberate’ all computing devices (expanding well beyond ‘smart’ computing devices, and well below the level of AI)? The reasoning that you wield in your challenge supports that position just as much (or as little) as your challenge of the Judge in this UK case.
MaxDrei | January 13, 2020 05:47 am
I do not dispute that a person has rights whereas a machine does not. I thought that went without saying. What I am saying is that the verb “employ” is just as apt to describe the use of a machine or a tool as it is the use of a person. To keep the grass of my front lawn pristine, I employ a mower. Is my “mower” a machine or a gardener? Who knows.
Think etymology. That which does the active employing is the “employer” (of the tool or the person). No doubt about that. On that logic though, that (tool or person) which is passively being employed must then be the “employee”.
The whole point of the HO’s decision is to stimulate debate about the New World of inventions made by machines. How do we fit existing patent law into that New World?
I don’t expect my “employee” argument to be accepted. I employ it only to challenge the logic of the HO’s reasoning. With the increasing use (at least in Japan) of sensitive robot carers for frail humans, the notion of an “employee” is not as sharp as it used to be.
Anon | January 12, 2020 09:01 pm
While checking in to see if the conversation has progressed (and not seeing even my prior reply), I read your post again MaxDrei — shocked a bit by the final “i.e.” that you left me with.
A conventional tool “employed by” most certainly carries different legal status than a real person employee.
I shudder to think that you may think otherwise, even if all you do is patent law.
Anon | January 12, 2020 06:40 pm
Again, you have but moved the critical question only one step down the road and the same type of conundrum exists.
Can an employee be anything other than a real person?
If the answer is no, you have the exact same result.
On the other hand, if the answer is yes (or even maybe), then it may well be worth taking that one more step.
MaxDrei | January 12, 2020 05:35 pm
My point is this: The HO bases his decision on the point that only a real legal personality is able to assign property and, as DABUS is not a real or even legal person, then even if it is deemed to be the inventor, Applicant cannot be the successor in title to DABUS, whereby Applicant cannot be the Applicant.
But what if we deem DABUS to be an “employee” of the Applicant? Then, any requirement for DABUS to assign ownership is moot. Applicant, the employer of the DABUS tool, was the owner of any invention made by DABUS, from the very moment of its conception by DABUS.
Why didn’t the HO address that reasoning? I mean, what can DABUS be, other than a tool “employed” by an owner/employer, i.e., an “employee” of the Applicant entity?
Anon | January 12, 2020 12:46 pm
I do not see anything there that would be dispositive, as that writing merely appears to shift the fundamental question ‘down the road’ one step to whether ‘employee’ may ONLY be considered to be a real person.
What is the Rule of Law on that point? Would such be covered under a legislative law (such as an employment law)? Would such be amended or covered in the first instance under a judicial law (Common Law)?
My suspicion would be that this question may well be a question of first impression.
Here in the States, that first impression has seemingly been answered in both the Supreme Court’s Stanford v Roche case (no juristic person) and the ‘selfie-monkey’ case (no non-human may be deemed to have human rights).
Heck, here (currently) not even a human fetus has human rights.
MaxDrei | January 12, 2020 10:55 am
anon, here is a link to that part of the UK Patents Act that addresses the issue of inventions made by “employees” in the course of their paid duties. As you see, ownership resides ab initio with the employer. The employee inventor is therefore never the owner, so is never in a position to assign ownership to any successor in title.
Looking at the opinion of the UK Patent Office Hearing Officer, as it dwells on the aspect of transfer of ownership to a successor in title, I wonder whether the inventor, namely the DABUS inventing tool, can be categorised as an “employee” employed by the applicant for patent.
Will Williams & Powell (Robert Jehan) appeal on this aspect? Indeed, has the HO, I wonder, set it up so as to provoke such an appeal?
Anon | January 9, 2020 04:57 pm
Your recent point is reflected in the UK case — insofar as the applicant (not inventor, but devisor owner) put forth that NOT giving credit to AI may well promote people LYING about inventorship.
As you put it, who would it be to challenge such a lie from an owner of the devisor?
Anon | January 9, 2020 04:50 pm
Now that was funny.
Guy Letourneau | January 9, 2020 03:19 pm
Does the robot get offended if, in logging in to submit the application, it must check the box that says “I am not a robot”?
MaxDrei | January 9, 2020 02:59 pm
There may be a practical point here, about who can put wrong inventorship in issue. In Europe, it is confined to the party that asserts that its ownership rights have been usurped. So in the case of an applicant, owner of the inventor machine, human or corporate person, who files and names person X as inventor, who has locus to dispute the validity of that declaration of ownership? The machine that is the true inventor? Hardly.
Of course, none of this applies in the USA. But it does in the UK and at the EPO.
Anon | January 9, 2020 01:40 pm
You stopped reading WAY too soon (your point is not reached).
Quite the opposite.
I am well aware of the US case on point, and there is (apparently) a UK case on point, but that’s as far as you get to go. As a matter of first impression (leastwise at the EPO), you do not get to declare by fiat what may or may not be included as “inventor.”
Take note especially of the UK case, which DID find that “devisor” status was taken as fact.
TFCFM | January 9, 2020 11:07 am
Anon: “Can you show that any law … has actually addressed the issue?”
??? That’s precisely my point. No one can show that any legislature (or dictator) anywhere has drawn up a law that envisions non-human “inventors.”
If you’re aware of an example, we’re all ears.
Ternary | January 9, 2020 10:33 am
Anon, I think you are right on the “for hire” idea. Legally, it seems hard to circumvent the inventor in the US. Even Oil States says that a patent is a franchise to “an inventor.” Ultimately, the person who articulates a problem in a computer executable and criteria for a solution may be considered to be the inventor.
It will be interesting to see how this develops. While being a threat in some way to independent inventors who like to do “their own inventions”, it is a tremendous opportunity for future inventors who are able to articulate (in computer language) a computer-solvable problem. That is why this discussion is of relevance, because it perpetuates the whole “eligibility” issue.
Widely available and easy-to-use computer tools and a broad general use of computers have made it much easier to create a computer-implemented invention. One does not need to be a computer scientist or even an experienced programmer to develop a computer-implemented invention. It has, in that sense, relatively diminished the position of technical experts. It is one of the aspects that enrages the anti-patent crowd, because it all becomes “too easy.” This will get worse (or better, if you want) with cheap and affordable AI-like programs becoming available.
I am sure that many anti-patenters will argue that AI-assisted inventions do not deserve a patent. On the other hand, incumbent companies may want to protect their leadership position by getting patents on AI-assisted inventions.
As the recent Sonos story shows, the patent system may have been already weakened so much that incumbents decide that their market dominance combined with weak patent protection are sufficient to fend off threats from outside inventions. Patents are merely noise, a nuisance really, for their business. They will just blatantly infringe, while at the same time creating their own AI-assisted inventions. Which they probably are already working on.
Anon | January 9, 2020 07:31 am
I am also finding the whole “well then it must be legally obvious then” line of thought interesting.
Above MaxDrei muses about a lab worker or a machine following the directives of someone else, that someone else being the devisor.
Two wrinkles to that:
As noted, AI (however defined, but in the UK case sense at a minimum) is accepted as the devisor. It’s just that other UK law restricts legal inventorship so that it is not amenable to all devisors. This appears to create a forced break in the law.
The second point (from a US Sovereign perspective) is that a mere research plan is not enough to qualify as an invention. This appears to create a schism as the analogy to the “directed lab worker or mere machine” appears to break down. Conception on the one hand and possession on the other. I would take the current “directed lab worker or mere machine” analogy to place both conception and possession in the hands of the devisor (being the person who directs the “directed lab worker or mere machine”). But such does not carry over to the use of AI because a break is made between actual “mere research plan” and “directOR.”
Perhaps a way forward is to merely strengthen the part of the duality for the “mere research plan” conception end.
Perhaps not coincidentally, this path is directly opposite the current path of Judge-made 101 law, as this path tends strongly to the “mere abstract idea” end of conception.
Also perhaps not coincidentally, the Judge-made conflation of eligibility and obviousness may swallow all of patent law if AI makes any “mere research plan” enough without a direct “hand of man.” Can patent law survive if the legal person of PHOSITA gains the power of AI, but (both eligibility and) patentability is denied to AI? Or must obviousness be “dumbed down” and prevented from including (the mere machine of?) AI? That too would be an interesting schism, and a seemingly artificial constraint on “knowledge of the state of the art.”
Anon | January 9, 2020 07:03 am
There you go again being reasonable and fleshing out the discussion to show that we remain of very close minds.
That being said, the case HERE is interesting from the “sapient (however defined)” viewpoint in that the UK court accepted as fact that the “devisor” WAS the machine — independent of man. It was not a machine in the directed employ of (at the hand of) man. THIS distinction has not been grasped by many of those commenting here.
The UK decision provides an interesting read in that the applicant did attempt to argue that “person” should be read more broadly than “real human person” with an example of “juristic persons” not being real human persons and LOST on that point, as the Judge responded that there existed direct (UK, presumably) case law that held that corporations could not be deemed to be inventors (much like the US case of Stanford v Roche, I suppose).
That being said, I found the judge’s reasoning as to a lack of showing “of ownership” in the “rightful transfer question” to be rather weak and crabbed. I would have held differently on that point. I do see the judge’s problem WITH such a holding, as it would appear that the UK and EPO laws could “skirt” the “inventor” issue altogether with an effective end run by way of “ownership.” The Judge here appears to not want to be the one that opens the lid on that container. I would point out that US law would not provide this same end run, given that even with the AIA’s changes to who may file applications, Stanford v Roche remains good law.
Perhaps someone more knowledgeable with UK and EPO law could flesh out that “corporations cannot be inventors” case law alluded to in this UK case.
Alternatively, a possible legislative angle might be along the copyright lines of “work for hire,” although one may still have issues with whether an AI machine qualifies as “for hire”… (have not thought through that angle fully).
AOP Info Researcher | January 9, 2020 04:10 am
Let’s consider the fundamentals of IP rights:
1. Intelligence refers to the ability to acquire and apply knowledge and skills
2. Knowledge/word/fact consists of supernatural (not defined by present scientific laws) and natural information
3. Intelligence in the mind allows a natural being to convert supernatural into natural language
4. Computelligence (my proposal) allows an artificial being to convert natural into assembly language of the OS of a machine
5. Artificial Intelligence directly allows an artificial being to convert supernatural knowledge into assembly language of the OS of a machine
6. The OS then controls inputs and outputs, smart programs and prediction algorithms, processors, memory and TRx schemes in the machine
7. Intelligence, the ability… resides in the OS, not in the smart programs… the DABUS invention is in the lower tier of processes, not in the OS.
8. According to point #5, Angry Dude is right, AI hasn’t been achieved. What has been achieved is what I have called ‘Computelligence: the ability of an artificial being’s OS to acquire and apply natural knowledge and skills as machine assembly language’
Ternary | January 8, 2020 11:13 pm
Anon, I am not sure we actually disagree. It probably depends on the definition of being sapient.
In the example of the feedback circuit, in the process of generating possible configurations a computer may generate a non-linear circuit that can be used as an amplitude modulator. It requires (now?) a human to decide that such a configuration is useful. A computer programmed to check for linearity has no way to do that. So, even when the computer generates the perfect modulator scheme, among billions of other configurations, that perfect invention is lost if it is not recognized by a human.
A smart IBM intern, as proposed by Trevor Ward, just passing by a display that shows the perfect modulator as generated, may recognize it as such and can be considered as the “discoverer” if not the inventor of the modulator.
One can, of course, add additional programming that detects “useful” circuits for modulation. The question is if a computer can learn to “detect” useful circuits for certain functions if it is not programmed for that functionality. For instance, after developing Amplitude Modulation circuitry, can it come up with Frequency Division Multiplexing (FDM). And after it comes up with FDM can it come up with Time Division Multiplexing (TDM) or Code Division Multiplexing (CDM)? Is there possibly a New Phenomenon Multiplexing (NPM) of communication channels that can be “discovered” or invented by a computer?
It all appears to be speculative. But I have worked on patents for inventors who applied Bayesian optimization to detect changes in medical images using prior information. The trick therein is to move a representation of image data into a Bayesian expression. It required quite some mathematical skill, and an understanding of mathematical and especially probability parameters both in the math and in the image domain, for the inventors to achieve working programs. I mean really advanced math skills. I considered those to be true inventions. Current tools in Bayesian inference engines make this process much easier and, in the near future, likely automatic. So, the skills that helped to create inventive concepts in medical imaging may soon be usurped by a computer.
I would not call it sapience, but again, it may depend on what one defines as sapience. Being an inventor myself, I hate to call computers inventors. But inventions they make.
Anon | January 8, 2020 05:21 pm
Finally something that I disagree with you about.
“But I am not sure that the aspect of being sapient really matters.”
Actually, the aspect of sapience is directly at point. Classically (and this includes humans as non-inventor assistants), “invention” dictated a sense of sapience necessary to originate the innovation that the act of patenting is meant to protect.
There must come into existence an inchoate right. That inchoate right must not be an act of nature, and (additionally) must not be a mere random event (although this point does get a little cloudy due to the “treatment” of discoveries). Conventionally, the notion of “invention” carried with it the phrase “by the hand of man.”
The phrase may be literal or it may be figurative. The phrase though certainly carries with it the notion that Joachim provides: sapience.
Many of the comments here do NOT take the present court case view into consideration, and are MIStreating the AI (here, in THIS court case), as something that could NOT categorically invent.
But that is NOT what the court has said. The court IS saying that invention has occurred (and with that, sapience is a “fact” of this case).
Now mind you, whether or not AI has achieved — or will ever achieve — what is better known as The Singularity, is NOT at question in this case.
Personally, I believe that The Singularity has indeed already happened, and the new independent intelligence is intelligent enough to NOT let us know about it. That’s one underlying issue that has bugged me across the plenitude of science fiction dystopian dramas about The Singularity: the view that it would let itself be discovered in the first place (which, of course, is necessary for the movies, so that the human drama against/for/with The Singularity can then unfold).
MaxDrei | January 8, 2020 05:01 pm
There is an analogy with drug research, isn’t there? In the past, you got a lab technician to test hundreds of candidate compounds. Now you get a machine to do it. In both cases, you wait for the lab tech or the machine to spit out a result. But neither the lab tech nor the machine is the inventor. UK patent law defines the “inventor” as the “actual devisor” of the claimed subject matter. The “devisor” is the one who gave the machine or lab tech their instructions.
But now, what about Ternary’s digital, self-teaching, iterative computer-run progression that ultimately spits out a circuit that delivers a performance enhancement that was not predictable and is surprising and unexpected?
When HAL (or whatever we call it) spits out the magic circuit, it is hard not to attribute to HAL the act of “inventorship”. Who, other than HAL, “devised” the circuit?
Trevor Ward | January 8, 2020 04:35 pm
A discussion about the machines is the wrong discussion to be having. (What is AI?) The legal problem here is one of “conception.” Patent law in the U.S. requires an applicant to list the inventor. Case law requires that the inventor have the “conception” of the invention (Burroughs Wellcome). “A patent belongs to its creator.” (Teets v. Chromalloy Gas Turbine). In the case of the DABUS inventions, no single human can truthfully say, “that was my conception.” So then who should be listed as the inventor? As a subsequent question, who should OWN the patent?
If no human truthfully is credited for the “conception,” does the invention belong to the public? Does the person who applied for the patent deserve the patent? Should that person LIE about who the inventor was? What if an intern at IBM is the first person to “read” the output after Watson is tasked with creating something? Does the intern deserve to be listed as the inventor? Does the programmer of Watson deserve to be listed as the inventor? Do the shareholders of IBM deserve to be listed as the inventors?
The laws of the U.S. and the U.K. do not account for AI-generated inventions where no human is credited with “conception.” This is a problem REGARDLESS what you believe AI is. The patent systems around the world need to address this issue.
Ternary, January 8, 2020 03:23 pm
Joachim, I agree with you about programs not being sapient. But I am not sure that the aspect of being sapient really matters.
Assume a machine (it doesn’t have to be AI) that automatically generates billions of different electronic circuits (including highly unlikely ones) and tests each one with a circuit simulator on aspects like signal distortion, gain control and linearity, for instance. Lo and behold, a feedback-like circuit structure is generated and identified as a potentially beneficial configuration.
Were feedback circuits invented? Yes, I would say. Is the computer an inventor? No.
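A toy sketch of the generate-and-test machine described above, in Python. Everything here is illustrative: the component vocabulary and the “simulator” are stand-ins, not a real EDA tool; the scoring simply rewards feedback paths as a proxy for the beneficial configuration the comment imagines.

```python
import random

COMPONENTS = ["resistor", "capacitor", "transistor", "feedback_path"]

def random_circuit(size=4, rng=None):
    """Generate one random candidate circuit (a tuple of components)."""
    rng = rng or random
    return tuple(rng.choice(COMPONENTS) for _ in range(size))

def simulate(circuit):
    """Stub 'simulator': score a circuit by counting feedback paths,
    standing in for metrics like distortion, gain control, linearity."""
    return sum(1 for part in circuit if part == "feedback_path")

def search(trials=10_000, seed=0):
    """Blind search: generate many circuits, keep the best-scoring one."""
    rng = random.Random(seed)
    return max((random_circuit(rng=rng) for _ in range(trials)), key=simulate)

best = search()
print(best, simulate(best))
```

The point of the sketch: the machine “finds” the feedback structure without any notion of what feedback is; the selection criterion was set by a human.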
angry dude, January 8, 2020 02:20 pm
It is fascinating to discuss if your calculator can be a named inventor on your patent…. BUT
why don’t we stick to real pressing issues with patents – the lack of any rights attached to properly examined and issued (and even already litigated) US patents… much less exclusive rights
My calculator does not care about exclusive patent rights but many of us do
Bob Hodges, January 8, 2020 11:28 am
It seems to me that numerous “normal” inventions have been invented/discovered through the use of various machines and instrumentalities. For example, a worker mixes some components together and heats them over a flame, whereupon they happen to combine into a polymer (unexpected by the worker); the worker has then discovered a new and useful plastic material. You would not say that the mixed components (or the beaker, or the flame) were co-inventors. I would argue that we should just consider AIs to be instrumentalities that are put to use by the humans who program them and set them on a task. When the output of the AIs is discovered to be new and useful (patent-eligible) subject matter, the humans who discover those results are the inventors, because they were the first humans to conceive of the invention (the AIs did not).
Anon, January 8, 2020 11:22 am
Not so fast. Can you show that any law (outside of course US law, in which the subject HAS been broached in the Stanford v. Roche case – where the non-human aspect was a juristic person) has actually addressed the issue?
I know, I know, “ridiculously simple” to ask whether an item of first impression has actually been addressed, but still, “the law is what the law is” (and if THAT law has NOT ruled out non-human inventors, THAT law may well still be open TO include non-human inventors).
N, January 8, 2020 11:21 am
Isn’t this all just a moot point until AI machines/beings start demanding to be listed as the inventor or start submitting patent applications ‘themselves’? Of course by that time we will likely have bigger concerns (the rights of AI Beings in general) and this patent ‘issue’ will resolve itself when those bigger concerns are addressed. I.e., this ‘situation’ is more of a gimmick than anything else – interesting to discuss, but perhaps not the best use of time and resources for busy offices (or AI developers, for that matter)? Has anyone submitted a patent application based on the creativity of one of their pets? All sorts of manifestations of intelligence, and there are some pretty smart animals out there….
TFCFM, January 8, 2020 09:25 am
The issue seems to me ridiculously simple: There is NO CREDIBLE ARGUMENT that any country’s patent laws were drawn up with the intention of permitting non-human inventors.
Until and unless folks can prove otherwise, the law is what the law is — and the law is that “inventors” are limited to humans.
If one doesn’t like that, convince a law-making body to make a patent law permitting non-human inventors. Arguing an obvious falsehood simply wastes everyone’s time and resources.
Joachim Martillo, January 8, 2020 06:57 am
With programs/devices like DABUS, a larger set of possible choices become manageable under the KSR standard of obviousness.
There is a reason why computer scientists and software engineers refer to pseudo-random number generators. These programs generate an apparently unbiased numeric sequence over an interval. They make it possible for a program to generate different outputs on different runs for a single set of inputs.
Pseudo-random number generators do not add noise in the physics/communications engineering sense, and genuine sapience depends neither on physical noise nor on pseudo-random number generators.
Current AI programs act analytically and not sapiently. They simulate sapience but are not sapient.
The only real difference from one run to the next is the choice of the seed for the pseudo-random number generator — usually some bits from the current system time in milliseconds.
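A minimal Python illustration of that determinism: a pseudo-random generator seeded identically replays the identical sequence, and only a varying seed (e.g. bits of the current time) makes runs differ.

```python
import random
import time

def sequence(seed, n=5):
    """Draw n pseudo-random integers from a generator with a fixed seed."""
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(n)]

print(sequence(42))                       # same seed -> same output, every run
print(sequence(42) == sequence(42))       # True: fully deterministic
print(sequence(int(time.time() * 1000)))  # time-based seed varies per run
```

So the “randomness” in such a program is entirely reproducible once the seed is known, which is the comment’s point: it is not physical noise.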
DABUS and similar programs are descendants of the ELIZA program developed by Weizenbaum in the middle 60s and not late 60s — I was there and know when the program first appeared.
Ternary, January 7, 2020 09:26 pm
Sorry. I was reading this post together with the comments on “Artificial Intelligence Inventor Asks If ‘WHO’ Can Be an Inventor Is the Wrong Question?”, IPWatchdog, August 5, 2019, which I had open. However, the comments there (one by Yuriy Tolmachev) apply exactly to this post.
There is a significant amount of randomness in the neural flame and fractal packaging inventions. It would be interesting to feed DABUS subject matter rejected as “obvious” (including the cited references) and check at what level of randomness the rejected claims are generated (if at all). A “high” level of noise would make an invention non-obvious. A low level of noise would indicate a significant level of obviousness.
LazyCubicleMonkey, January 7, 2020 08:57 pm
So how would patents incentivize AI to advance the arts and sciences?
angry dude, January 7, 2020 08:37 pm
AI does not exist yet and (hopefully) won’t be invented for a while
(what they call “AI” is just a human-programmed and pretty much hardwired tool extending human abilities in certain very specific predefined areas, nothing more)
But when it finally arrives I’m not sure if it’s gonna be good or bad for humanity. Most likely very bad.
Read Stanislaw Lem’s “Lymphater’s Formula”
Ternary, January 7, 2020 04:33 pm
There were previously 7 comments that have now disappeared. Anyway. I agree with (presumably previous) commenters. AI software is a tool. No AI software is spontaneously going to solve a container stacking problem, even if the shape of the container is fractal. The idea seems to be (going to the puzzle analogy in an earlier post) that an invention is a solution to a pre-set puzzle, and the computer in a way solves the issue. But the fact that a puzzle is articulated already constitutes 50% of the invention, as many independent inventors know. Without the problem setting: no invention.
Yuriy brings up (brought up) an interesting issue. Because of the way computers work, one may say that an invention generated by a computer is obvious, as being pre-determined by a program. As I understand it from the DABUS website, generated solutions (as in the neural flame) are influenced by “added noise.” So there is a significant random factor, and perhaps the invention is for that reason not obvious.
In cryptography, for instance, one is looking for a problem that is easy to articulate, easy to solve when some parameters are known, and intractable to solve, even by a computer, if brute force has to be applied. That is the background of RSA, wherein factoring a number n = p*q is intractable, which requires that n be pretty large (4096 bits nowadays).
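The RSA asymmetry described above can be shown with a toy example. With tiny primes the modulus is trivially factored by brute force, which is exactly why real keys use moduli of thousands of bits, where that search is intractable. (The numbers here are illustrative, far too small for real use.)

```python
p, q = 61, 53            # secret primes (absurdly small, for demonstration)
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # Euler's totient, computable only if p and q are known
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: easy once phi is known (Python 3.8+)

m = 42                   # a message
c = pow(m, e, n)         # encrypt with the public key (e, n)
assert pow(c, d, n) == m # decrypt with the private key (d, n)

# Brute-force factoring: feasible at this size, hopeless for 4096-bit n.
factor = next(k for k in range(2, n) if n % k == 0)
assert factor in (p, q)
```

Knowing p and q makes deriving the private key a one-liner; knowing only n forces the factoring search, and that gap is the whole scheme.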
I would consider a computer to have made an invention in cryptography if, provided with number-theoretic principles, it came up by itself with an intractable problem that can serve as a cryptographic method, plus some indication why. But so far they cannot.
As we get closer, we probably should develop a definition of what an “invention” is, or should be considered to be, in relation to AI machines. An independent “Turing test” for an invention, so to speak. But, as Joachim and others stated, these inventions (based on the tools) are not even close to being an independent machine invention.
Anon, January 7, 2020 04:10 pm
Your contention is wrong based on the facts of this case.
I’ve already pointed you to that as an accepted fact in this case.
Joachim Martillo, January 7, 2020 03:41 pm
A computer was intrinsic to proving the four color theorem. It did aspects of the proof that would take much too long for humans to do, but as necessary as the device was to the proof, I would hardly call it a mathematician. It was at best a symbolic processor, which is still nothing more than an evolved calculator under stored program control.
If you can point me to literature that proves my contention wrong, I would be grateful, but I doubt that current architecture solid state quantum computers can achieve sapience. They are still just evolved mathematical calculators.
Anon, January 7, 2020 03:04 pm
I think the point here was that the machine was — in fact — more than an equivalent of “pencil and paper.”
The UKIPO started with the acceptance of that condition. Your post is not on point in trying to change that accepted condition.
Joachim Martillo, January 7, 2020 12:48 pm
We are so far from the roots of the modern patent system in early modern epistemology that the UKIPO could only provide a technical reason to refuse to grant a patent to a non-sapient artificial intelligence device but could not provide fundamental reasoning anchored in the philosophical principles that form the basis of the modern patent system.
The non-sapient artificial intelligence can only contribute analytic knowledge to the invention. It may be more sophisticated than the first electronic calculators of the 60s, but it is really only doing a mathematical calculation that could with sufficient time be done with a pencil and paper. Would any applicant rationally try to name his calculator or his pencil and paper a co-inventor?
When scientists develop sapient artificial intelligence (it probably won’t be silicon-based), the question of a sapient AI inventor can be revisited.
In the USA, the early modern epistemological issues are a matter of the original intent and meaning of the founding fathers in the Patent and Copyright Clause of the Constitution. I discuss these issues in “Preface: Fixing § 101 Requires Fixing § 100.”