‘AISITAs’ and Written Description Requirements: Considerations and Guidance for AI Patent Applications

“As AI becomes more prevalent in research and development, and as AI systems become more autonomous, some commentators believe that the POSITA for an AI-related invention should instead be an “artificial intelligence skilled in the art” (AISITA).”

Artificial intelligence (AI) is everywhere, touching nearly every aspect of our daily lives, including how we work, communicate, shop, travel and more. The term “AI” is generally understood to encompass computerized systems that perform tasks ordinarily perceived as requiring some form of human intelligence. Many AI-based systems are able to recognize trends, patterns and connections, test hypotheses using available data sets, and continuously improve decision trees based on user input. As such, AI has been shown to have near endless applications, driving a surge of inventions and related patent application filings.


Under 35 U.S.C. § 103, an invention is not patentable if the “differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious.” Notably, obviousness is determined from the perspective of a hypothetical person of ordinary skill in the art (POSITA), as inventions have historically been conceived by people, not machines. In KSR Int’l Co. v. Teleflex Inc., the Supreme Court also stressed that a POSITA is “a person of ordinary creativity, not an automaton.”

With the growth of AI, however, machines significantly contribute to inventive activity, and are capable of storing and analyzing vast amounts of information at speeds unachievable by humans. The use of AI technology in research and development raises the question of whether a POSITA’s perspective is appropriate for assessing the patentability of inventions in a field in which AI is used. Just as the design of a pharmaceutical might not be obvious to a layperson but obvious to an experienced research chemist, certain AI-generated inventions might not be obvious to a human POSITA but obvious to an AI system.

In considering whether a POSITA’s perspective is appropriate for assessing patentability, it has been suggested that the POSITA for an AI-related invention should be a person who specifically understands and works with AI. However, as AI becomes more prevalent in research and development, and as AI systems become more autonomous, some commentators believe that the POSITA for an AI-related invention should instead be an “artificial intelligence skilled in the art” (AISITA).


If applied to all inventions, an AISITA perspective would have dire consequences for patentability. For example, under the current legal standard articulated in KSR Int’l Co., an invention may be obvious if the problem to be solved has “a finite number of identified, predictable solutions” that can be tested by a POSITA. That number is somewhat limited for a human POSITA, but an AISITA could rapidly test an enormous number of solutions, thereby drastically expanding the scope of what is obvious and potentially invalidating many previously patented inventions.

Nevertheless, an AISITA perspective might be used in a more limited fashion to assess obviousness of an AI-related invention where an AI system was employed during development and plays a role in the operation of the invention. In this situation, the scope of what is obvious could be more limited because, for example, an AI-related invention developed with built-in features of randomness may achieve a result that would not have been predictable to an AISITA and could thus be considered non-obvious.

To facilitate application of the most appropriate standard for assessing obviousness (POSITA or AISITA), the U.S. Patent and Trademark Office (USPTO) could require applicants to disclose the extent to which an AI played a role in the inventive process. Such a disclosure requirement could mirror the current requirement for identifying inventors and introduce a more tailored approach in assessing obviousness.

AI technologies can also impact the scope of prior art considered when determining obviousness. This is relevant because the more prior art that is available for an examiner’s consideration, the harder it will be for an applicant to overcome the obviousness hurdle.

In general, relevant prior art includes “the art to which the claimed invention pertains” (i.e., arts from the same field as the invention) as well as analogous arts (arts from a different field that solve the same problem as the invention or are directed to the same purpose). Unlike flesh-and-bone inventors, AI systems are not influenced by human preconceptions as to the specific field of the invention. Further, due to their immense computing power, AI systems are capable of analyzing vast amounts of information from a variety of different fields. As a result, AI systems can more easily identify patterns when comparing prior art from completely unrelated fields, finding inventive inspiration in non-analogous art that human inventors might not have considered. As such, for inventions that have been primarily produced by an AI system rather than by a human, the distinction between analogous and non-analogous prior art may be irrelevant.

For the time being, the POSITA perspective remains the standard for assessing obviousness of patent claims directed to AI-related inventions. Yet, to the extent an AI technology incorporates human input (e.g., initial selection or processing of information, identification of the problem to be solved, selection of the final design from several choices, etc.), it is helpful to highlight such features in a patent application in an effort to strengthen the applicant’s position with respect to non-obviousness.

Enablement and Written Description

Under 35 U.S.C. § 112(a), a patent “specification shall contain a written description of the invention … in such full, clear, concise, and exact terms as to enable any person skilled in the art … to make and use the [invention].”

In traditional software programming, an inventor devises a mathematical function that converts an input into a desired output. In contrast, an AI system generates a “learned function” that allows the system to produce a desired output based on a given input. For example, an inventor might develop a basic AI model that is provided with input and output data derived from a “training set.” Through multiple rounds of iteration, the AI system identifies the complex rules that map input data to output data. The system achieves this by adjusting basic parameters underlying the machine learning algorithm to improve the accuracy of outcome predictions. Additionally, the system may identify suitable statistical weighting values that reflect the relative importance of each input feature for the prediction of a correct output. This iterative process of testing, assessing error, and re-adjusting underlying parameters is referred to as “training.” The resulting “learning model” can then predict output values for a real-world dataset not previously seen by the system.
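The training loop described above can be sketched in a few lines of code. The following is a minimal, illustrative example only: the data, learning rate, and single-weight linear model are hypothetical simplifications (a real system would learn many parameters), but the loop follows the same cycle of testing, assessing error, and re-adjusting an underlying parameter.

```python
# Hypothetical training set: inputs and their desired outputs (here, y = 3x).
training_inputs = [1.0, 2.0, 3.0, 4.0]
training_outputs = [3.0, 6.0, 9.0, 12.0]

weight = 0.0          # the learned parameter, initially untrained
learning_rate = 0.01  # how strongly each error adjusts the parameter

# Multiple rounds of iteration over the training set.
for _ in range(1000):
    for x, target in zip(training_inputs, training_outputs):
        prediction = weight * x               # test the current model
        error = prediction - target           # assess the error
        weight -= learning_rate * error * x   # re-adjust the parameter

# The resulting "learning model" can now predict outputs for
# input data not previously seen by the system.
print(weight)        # converges to approximately 3.0
print(weight * 5.0)  # prediction for an unseen input
```

Note that, as the article observes, the final value of `weight` depends on the training data and settings chosen; different training sets or learning rates would yield different learned parameters, which is precisely why a bare description of the untrained model may not satisfy the written description requirement.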

In the context of written description, it is important to note that a POSITA might not readily be able to implement the “learning model” simply based on the description of the basic AI model. Different training data and model settings can lead to different learning models that achieve different results. As such, to meet the written description requirement for AI-related inventions, patent practitioners should include detailed information, such as a description of the component configuration and type of initial AI model, how the AI model was trained (e.g., by using mathematical formulas, flow charts and/or pseudocode), what types of data are used for training, whether the AI transforms the input data into a form more suitable for downstream processing, any learned coefficients and weights that the learned model uses for providing the desired output, and/or how the data is eventually output. If the training data is not proprietary, it can also be helpful to describe the actual training data set. In general, patent practitioners should provide enough detail for a POSITA to reproduce an inventive training technique when using other types of data.

Because it is not always clear how an AI model arrives at a given output based on certain input data, some have argued that AI-related inventions should be subjected to more stringent written description requirements. While the USPTO has not yet implemented any such enhanced requirement, patent practitioners are well advised to provide a thorough description of the claimed AI system in an effort to meet the written description requirement.

Watch This Space

As AI continues to play an increasing role in innovation and patent filings, the legal landscape is likely to evolve, with courts and patent offices around the world addressing some or all of the foregoing issues, among others. It is therefore prudent to consider these issues when drafting patent applications, so that the disclosed invention is presented as favorably as possible as the law develops.

Fox Rothschild Partners Gunjan Agarwal and Brienne S. Terril also contributed to this article.





9 comments so far.

  • Anon
    July 30, 2021 09:52 am

    lol ipguy – you remind me of my own offered link on the intersection of patents AND (human) slaves…

    (with a dystopian twist – but you miss the twist of the current dystopia of Corporatocracy; we NEED to emasculate the legal fiction of personhood for corporations, as they wield outsized power and influence)

  • ipguy
    July 29, 2021 05:58 pm

    Corporations are legal persons under the law. Let’s get AIs recognized as legal persons under the law. But then we’d have to deal with the whole slavery issue, subsequent Cylon Rebellion, and the next thing you know, Skynet has initiated a nuclear strike to wipe out humanity.

  • Anon
    July 28, 2021 05:27 pm

    FYI: https://www.globallegalpost.com/news/south-africa-issues-worlds-first-patent-listing-ai-as-inventor-161068982

    South Africa grants patent with inventor being AI (DABUS)

  • Max Drei
    July 27, 2021 01:53 pm

    I’m intrigued by references in this piece to “the inventive process” because I don’t know what that is. Does it matter?

    Being in Europe, I’m at home with a TSM enquiry into obviousness, which (I guess) renders it unnecessary to enquire into the nature of “the inventive process” or for that matter the manner in which the invention was made.

    I can imagine an AI with a capability to interrogate a mass of data and divine from it a useful pin point solution to a technical problem, a solution for which there is no hint or suggestion evident to a human skilled person in the relevant art. So we are increasingly going to see an embodiment of a patentable invention served up on a plate by one or other AI. But will the AI draft the claim to the concept, I wonder.

    The question of ownership of the patentable invention is important, but easy to solve. The question who to name as inventor is NOT easy to solve but (at least outside the USA) need not be addressed.

    And as to the attributes of the skilled addressee, that are relevant to a contribution by an AI to the art, over the next ten years we shall see emerging from the thousands of decisions of the thirty or so Technical Boards of EPO Appeal a consensus what they are. Presumably, the enquiry will be about the capability of that PHOSITA, so defined, to enable the subject matter of the claim.

  • Mark Nowotarski
    July 26, 2021 07:38 pm

    This article raises some very important questions about effective patenting of inventions in the field of AI.
    The links the authors have provided are very helpful. The first link is to an article by Ryan Abbott, “Everything is Obvious”, UCLA Law Review 66, Rev. 2 (2019). The second is an article by Susan Y. Tull and Paula E. Miller, “Patenting Artificial Intelligence: Issues of Obviousness, Inventorship, and Patent Eligibility”, The Journal of Robotics, Artificial Intelligence & Law, Vol 1 No 5, October 2018.
    I’m particularly keen on issues of eligibility. As I look at court cases related to AI patents (i.e., CPC code G06N), those that don’t settle quickly are getting attacked for failure to recite statutory subject matter. Those attacks are succeeding where the claims recite little more than “measure some stuff and train an AI”. As the authors recommend, practitioners need to be quite thorough in their disclosure of (and the corresponding claiming of) “a description of the component configuration and type of initial AI model, how the AI model was trained (e.g., by using mathematical formulas, flow charts and/or pseudocode), what types of data are used for training, whether the AI transforms the input data into a form more suitable for downstream processing, any learned coefficients and weights that the learned model uses for providing the desired output, and/or how the data is eventually output. If the training data is not proprietary, it can also be helpful to describe the actual training data set.”

  • Trekker 4ever
    July 26, 2021 02:14 pm

    What if the computer is run by Spock’s Brain instead of an AI?

  • Gene
    July 26, 2021 09:06 am

    It is difficult to find human algorithmic invention activity in AI applications. AI itself avoids algorithms by using brute force number crunching, covering a huge percentage of statistical possibilities, which are mostly not useful. Training a neural network does not result in fixed coefficients or weights. They will vary with each problem presented to the system. The training only teaches the system the criteria for evaluating a result as acceptable or not. The system will then self evaluate its results until it finds one or more (sometimes it can’t) that satisfy the criteria. These aspects are “already invented” as they are pre-built into the AI systems. There is not much inventiveness in humans commanding the AI system: here is what we want, here are the inputs to use for getting what we want; then press the start button.

  • Anon
    July 26, 2021 06:52 am

    … hit ‘submit comment’ too quickly…

    I did want to say that I appreciate the authors highlighting other patent aspects of how AI may impact patent law (the usual — and first — aspect being inventorship remains murky, especially for the notion of co-inventorship), but certainly inventorship is not the only concern.

    Without the Graham touchstone, I am not sure that AI’s reach into obviousness could be constrained as the authors here would intimate. The DABUS case for example “used” AI for some very UN-AI areas of innovation after all.

  • Anon
    July 26, 2021 06:47 am

    No disrespect intended, but this article is a bit “light” on its legal heft.

    The attempt to constrain the reach of obviousness effects misses its Supreme Court decisional base of Graham v. Deere (383 US 1, 1966, the Graham Factors).

    The “black box” problem is known in AI (and affects not just patent applications). Solutions and innovations are being sought, but such is not necessarily easy to capture, nor may it necessarily be necessary to do so (if properly referenced).

    I have not yet jumped to the links for “it has been suggested” and “some commentators,” but judging from the contents of this article, I may not bother to do so.