IP Practice Vlogs: Responding to the USPTO’s Request for Public Comments

“What the USPTO should really do is try to implement examination procedure to better standardize the practice of patent claiming and examination in order to induce consistency in the patent system.”

The United States Patent and Trademark Office (USPTO) would like public comments on how to update the 2019 Subject Matter Eligibility Guidance. The agency is also seeking comments on how to improve the robustness of the patent system overall. This article/video is in (unofficial) response to both of these requests for comments.

The current mess surrounding subject matter eligibility in the United States is an offspring of a much deeper problem in patent law, which is that there is practically no standardization in patent practice. In medicine, U.S. doctors are trained by standardized practices through rotations and residency programs such that when they begin practicing, a doctor graduating in Florida will not practice medicine vastly differently from a doctor graduating from medical school in California, for instance. Instead, the idea is that all graduates will approach medical treatment in a standardized way so that the public has much more faith in the medical community.

Law is simply not like that. Different attorneys can claim the same invention in a myriad of different ways. Different examiners can search and apply the prior art to the same claims in a myriad of ways. So, as a result, courts can interpret cases with roughly the same issues in vastly different ways. One of the questions we keep asking ourselves is, “Why does the Federal Circuit constantly issue inconsistent rulings?” I think the answer may be that it’s just so easy. The factfinder can actually assess the same facts in quite a few different ways because of the lack of standardized practices in patent law.

This “problem” actually occurs in all types of law practice (lawyering is not exactly a standardized profession). But the problem with patent law is that the public, rightfully and understandably so, assumes that a patent issued from the USPTO would be of a consistent quality and standard. Unfortunately, they have not been getting that.

What should the USPTO do in terms of updating the Subject Matter Eligibility Guidance? Well, I’m sure you remember what happened with the 2019 Guidance: it is not binding outside of the agency, so, unfortunately, the principles of the Guidance were not followed.

COMMENT 1 – Do not attempt to fix 101 by actually trying to fix 101. Instead, standardize.

It won’t go anywhere – you are bound by too much judicial precedent at this point. What you should really do is try to implement examination procedure to better standardize the practice of patent claiming and examination in order to induce consistency in the patent system.

Standardizing Claim Construction

The agency should standardize the practice of how to determine what the invention is “directed to” (i.e., Step 2A of the Alice test) because people are all over the place with this issue. The question of what the invention is directed to should be based primarily on the preamble of the claim. But right now, there is way too much latitude when it comes to this test. The USPTO specifying clearer standards that are grounded in actual claim language (rather than left purely to examiners’ discretion) will not only induce clearer examination, but also better patent claiming by applicants.

Back in 2019, the Federal Circuit found that a garage door opener was an abstract idea in Chamberlain Group v. Techtronic Industries. The Chamberlain decision seemingly followed the Berkheimer test by assessing “directed to” in Step 2A and then looking for a technical improvement in Step 2B. (If there is a technical improvement to what the invention is directed to, then we have more than an abstract idea.)

The invention at issue in this case was really directed to a garage door opener. More specifically, it was directed to a garage door opener equipped with a wireless communicator. That’s actually what the claims said – the applicant had an apparatus claim and a method claim, and the apparatus claim recited a “movable barrier operator.”

Instead, the Federal Circuit found in Step 2A that the claims were directed to just the wireless communicator! And since the patentee only disclosed a conventional wireless communicator, there was no technical improvement.

One thing about the patent at issue in Chamberlain that made it susceptible to a Section 101 ruling is that the patentee did not show any physical structure or article of manufacture equipped with the wireless communicator. Instead, they only showed generic boxes marked “movable barrier” and the like. They really should have shown a garage door, or at least one of those remote controls, equipped with the wireless communicator. But I don’t want to be too hard on this applicant. These comments are offered with a lot of hindsight bias, and the application was filed in 2003; Alice did not come out until 2014. There is no way the applicant foresaw at the time of filing that their claims would be determined to be directed to an abstract idea! But now, knowing the state of the law, you should always show the physical article equipped with the software because you want to avoid that impression of abstractness. Also, you can’t claim what you don’t show in the figures, and black boxes are just not good enough; if you want to cover a garage door operator, you should show the operator.

But, back to the USPTO, they should really come up with some rules as to how “directed to” is determined and those rules need to be grounded in claim language and based on what is positively claimed.

Standardizing Software Claim Examination

“Hey will our functional language get patentable weight?”

“I don’t know, depends on the examiner we get.”

USPTO, can you please tell your examiners that if an applicant recites a computer structure that is “programmed” to execute a series of steps, those steps get patentable weight? Stop making it such a guessing game; it creates inconsistent examination and therefore inconsistent patent quality.

Standardizing Mechanical Claim Examination

Means-plus-function interpretation (based on 35 USC §112(f)) is invoked when the applicant claims a function in combination with the word “means” or a nonce word. Examples of nonce words (words that don’t mean anything, kind of like space fillers) include “unit”, “module”, “member”, “structure”, etc. In this case, the claim term is limited to the corresponding structure disclosed in the application and the structural equivalents thereof. For example, the applicant claims a “fastening means” and discloses and describes a nail. Under a means-plus-function interpretation, “fastening means” will probably be interpreted to also cover a screw or a bolt as structural equivalents. “Fastening means” will not be interpreted to cover tape or glue because those structures cannot be considered equivalent to a nail.

The primary reason an applicant might want a means-plus-function interpretation is to cover a series of disclosed embodiments of a “fastening means” with a single generic claim. But showing all the different types of equivalent structures in the disclosure is necessary.

If you get a means-plus-function interpretation and the examiner finds that you have not shown sufficient structure for the means, your claim is indefinite. A means-plus-function interpretation greatly enhances the chances of an indefiniteness rejection. The irritating thing is that means-plus-function can be invoked post-grant/post-issuance to render an issued patent invalid for indefiniteness even when it was not deemed indefinite at any point during prosecution.

Here’s my comment: unless the claim explicitly recites “means,” means-plus-function should not even be a consideration. This should just be codified.

Diebold v. ITC is a 2018 Federal Circuit decision about an ATM in which the court reversed an International Trade Commission (ITC) determination that Section 337 of the Tariff Act had been violated, finding instead that the asserted patent was invalid for indefiniteness. In particular, the court found that the claim term “cheque standby unit” was a means-plus-function term and that the specification failed to provide sufficient supporting structure. “Unit” is the nonce word here and “cheque standby” is a function, so the combination invoked a means-plus-function interpretation on appeal, where the claim was deemed indefinite. The court found that the “cheque standby unit” was depicted only by a box in the figures and the patentee didn’t show any structure for it.

I found the merits of this decision to be deeply flawed because the point of finding a patent invalid for indefiniteness is that it fails to put competitors on fair notice of the scope of the patent. But the defendant’s ATM had this cheque standby unit, whatever it was: a place to hold checks temporarily. They had this. On top of it all, this was a case of direct infringement in which the products were essentially identical. Why is the claim indefinite now?

The agency should codify that means-plus-function gets invoked only when the applicant specifically uses “means” and is asking for the structural equivalents thereof.

Standardizing Process Claims Examination

Product-by-process claims occur when the applicant invents a new process, but they don’t only want to cover the process by itself because in that case they can only sue the company that makes the product for direct infringement. You want to also cover the product because you want to sue the sellers of the product that is being made by the new process. (You gotta be inclusive about who you sue, ok—don’t leave anybody out).

The problem with examination of product-by-process claims is that the determination of patentability is based on the inventiveness of the product, not the claimed process – the process does not get patentable weight. If the product is the same as the prior art, the claim lacks novelty or is obvious. This is judge-made law, and it’s stupid. If the applicant claimed a process, then examine the process.

Also, examiners are all over the place on what they consider to be the same product. They consider a two-piece product the same as a one-piece product; they consider products that are molded together to be the same as products that are welded together; products that are collapsible to be the same as rigid products if they look the same.

These are not the same products! The agency should set some clear limits as to what is considered a same product.

The agency should also direct its examiners to examine the patentability of product-by-process claims based on the patentability of the product and the process, not just the product by itself. Examination based on the novelty of the product alone endangers innovations to manufacturing techniques.

COMMENT 2 – Institute a special examination procedure for AI. 

The combination of a first-to-file system and the need to deploy machine learning modules in a public use or commercialization setting has created a race to the Patent Office. The Office is going to be deluged with applications that are susceptible to unresolved law, which will be examined in all sorts of ways, and then the public is going to freak out that the USPTO is issuing patents that are too broad or unenforceable. The Office needs to create an art unit just for the examination of AI so that examination will be consistent for this field of technology. (Have they already?? Please comment.)

When it comes to updating the 2019 Guidance with AI in mind, we need to ask whether the AI is foundational (or core) AI versus just applied AI, which is AI as applied to a particular system or process. Foundational AI is directed to a model trained, or being trained, on a vast quantity of unlabeled data at scale, such that the resulting model can be adapted to a wide range of downstream tasks. The patentability of foundational AI is much more instrumental in inducing capital investment in a particular industry and is an enabling technology, whereas AI simply applied to a particular process without further training may unduly restrict competition. (Speaking generally here, by the way; it’s not always the case.)

The problem is that a lot of applications actually disclose both – they disclose a method of training a large data processing model and they also disclose the application of that model in a particular setting because these components are oftentimes developed together. As a result, the problem with AI applications is both intrinsic and extrinsic in nature with respect to the current patent law.

The intrinsic problem is that foundational AI, being directed to big data labeling and processing, will unfortunately sound a bit like mental processes, as if the applicant is claiming a super brain. So, foundational AI is much more susceptible to a Section 101 abstract idea rejection. Applied AI will more likely involve an actual machine or apparatus, which places applied AI claims in a better position to circumvent the 101 issue.

The extrinsic problem stems from applicants’ own incentives. The applicant is incentivized to claim the application of the AI rather than the building of the AI itself, because you claim what you make, use and sell. The applicant is more likely to capture parties who use or implement the AI versus parties actually training a whole new AI based on the same principles disclosed in their application. The claim to the applied AI is probably a broader claim (again, speaking generally). Therefore, the problem with U.S. law is that the patent system does not ensure protection of AI, and to the degree that it does, the system promotes the protection of applied AI, but not so much the protection of core AI, which is actually where more inventiveness occurs.

So, what should we do about this? It’s a tough one. The agency should not disrupt the ability of applicants to claim in their best interests – that would disrupt the foundations of claiming practice and the entire system. But to start with, examiners should probably look to the integration of the large data processes with the application of the model in the claim. The claim should have both and should be examined based on both aspects. The current 101 Guidance actually provides a lot of concrete examples as to how to assess the integration of data processes with its application when it comes to examination. Maybe the best thing for the agency to do is to just follow through and enforce the Guidance internally and consistently, despite whatever the Federal Circuit says.

These comments will be submitted for the “Request for Comments on USPTO Initiatives to Ensure the Robustness and Reliability of Patent Rights.”

To hear more, check out the latest episode of IP Practice Vlogs.


Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com.

Join the Discussion

2 comments so far.

  • PA Crier
    October 14, 2022 03:46 pm

    Standardization is a fresh idea injected into the patent reform debate.

    The very first request made in the Robustness RFC concerns prior art. The biggest beehive of inconsistency in the entire process could use some sort of guiding principle.

  • Model 101
    October 14, 2022 09:21 am

    It’s a crooked system.

    Get rid of 101 and all the crooks.