The Inspector General’s Report Alleging PTO Examiner Time and Attendance Abuse Has No Merit

On August 31, 2016, the Office of the Inspector General (OIG) of the U.S. Department of Commerce (DOC) released a report titled Investigative Report on Analysis of Patent Examiners’ Time and Attendance (the Report). While substantial improvements in examination quality, procedures, and conduct are necessary, the Report provides no useful or relevant information to further that goal. The Report lacks a sound statistical basis for its conclusions and recommendations. It should be withdrawn and its errors corrected before further dissemination. Otherwise, the OIG’s credibility will be irreparably damaged, and it will deserve not to be taken seriously by the public, the U.S. Patent & Trademark Office (PTO), or other DOC operating units.

The Report alleges that substantial examiner time and attendance abuse is occurring and that the PTO “is paying production bonuses to examiners who are possibly defrauding the agency.” Report at 22. The OIG alleges that examiners were paid for 288,479 hours that they did not work, equating “to over $18.3 million in potential waste.” Id. at 3. The OIG alleges widespread abuse—that 5,185 out of 8,067 examiners claimed to have worked some unsupported time. Report at 9.

These and other unjustified assertions of examiner time and attendance fraud are the subject of my detailed petition for correction submitted to the OIG under the Information Quality Act (IQA). Beyond the unjustified harm to the morale and perceived integrity of the examination corps, the Report, if left uncorrected, could be used by detractors of the patent system to attenuate the presumption of patent validity, on the theory that widespread examiner fraud means patents issue after less examination time than expected and are therefore of inferior quality.


OIG’s “findings” are not supported by the OIG’s own analysis

The OIG did not monitor or audit actual examiner activities and compare them to the time examiners claimed on the PTO’s electronic time and attendance tracking system (WebTA). Instead, the OIG obtained digital “proxies” for work activities from various systems examiners use when entering or exiting their office building or while connected to the PTO’s computer and network systems. The OIG’s analysis was limited to examiner-related computer activities covered by 66 activity codes. This tracked activity amounted to an average of 6 hours per examiner workday, leaving an average of 2 hours per day untracked by the study. Yet the total OIG-derived unsupported hours averaged only 6 minutes per examiner workday.
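
As a rough cross-check of these headline numbers (my arithmetic, not the Report’s), the implied per-day discrepancy can be computed directly. The Report does not spell out how many workdays fall in its study window, so the sketch below treats that count as an assumption:

```python
# Back-of-envelope check of the headline totals (Report at 3, 9).
# The workday counts tried below are illustrative assumptions only.
total_unsupported_hours = 288_479
examiners = 8_067

hours_per_examiner = total_unsupported_hours / examiners  # ~35.8 hours total
for workdays in (190, 250, 310):
    minutes_per_day = hours_per_examiner / workdays * 60
    print(f"{workdays} workdays -> {minutes_per_day:.1f} min/day")
# At any plausible workday base, the implied discrepancy is on the
# order of minutes per workday, not hours.
```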

This minimal discrepancy is well within rounding and misclassification errors across activity codes. Unlike similar previous OIG reports, this Report does not show that the OIG collected or analyzed corroborating evidence for its findings. Nor is there evidence that the OIG ascertained these measures’ reliability through interviews with (i) examiners found to have claimed a substantial number of “unsupported hours,” (ii) their supervisors, or (iii) the PTO Employee Relations Office, which records such discrepancies regularly and thus has experience with how reliable such “digital footprints” are as proxies for hours worked.

The reliability of the “unsupported hours” estimates is further undermined because the OIG relied on low-quality indicia of work activity that the OIG admits to having manipulated, ostensibly so as to interpret “the data in the light most favorable to the employees.” Report at 5. But the OIG’s methodology for counting “supportable” hours appears arbitrary and result-oriented. The OIG considered (and could have used) a different badge-out methodology that would have trebled the total number of unsupported hours (Id.), or could perhaps have reduced this number using a third method. Each counting method would have been equally arbitrary, and the Report gives no rational basis for choosing one over the others. The IQA requires that such critical statistics be objective, not settled by mere whim.

Even if the OIG’s method for deriving “unsupportable hours” were objective, it produces estimates having substantial but unacknowledged statistical variance. Several factors contribute to this inherent variability: for example, examiners’ patterns of computer system use, and the degree to which they over-report the share of the workday dedicated to any of the 66 activity codes and consequently under-report the remainder under other activity codes. There is also inherent variability due to PTO system errors, which have been known to drop IT and workstation data. In short, “unsupportable hours” is a random variable resulting from the interplay of multiple stochastic processes, yet the Report discloses no statistical variance. If the variance is unknown, there is no way to infer that the OIG’s “findings” are statistically significant.

[Figure: probability density plot of the unsupported-hours percentage over the full examiner ensemble, showing Group A and Group B in opposite tails (discussed below)]


Faced with this key missing information, I reverse-engineered from the Report the mean and the standard deviation of the ratio between unsupported work hours and reported work hours over the entire examiner ensemble. I demonstrate that the difference between the mean “unsupported hours” percentage of 1.6% and zero is statistically insignificant. This is because the estimated standard deviation for this term is 4.66%, almost three times the reported percentage of unsupported hours.

My analysis shows that the probability p of observing a mean discrepancy at least this large, were the null hypothesis (that examiners do not engage in time and attendance abuse) true, is about 0.36. Traditional practice in hypothesis testing calls for rejecting the null hypothesis only when there is strong evidence against it—e.g., when p is less than 0.05. OIG’s allegation that examiners engage in time and attendance fraud is therefore not supported by OIG’s own data and analysis.
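
The figure is straightforward to reproduce. Below is a minimal sketch assuming a plain one-sided z-test of the reverse-engineered statistics against a zero mean; the petition’s exact computation may differ in detail:

```python
# Sketch: is a measured discrepancy of 1.6% distinguishable from zero,
# given per-examiner measurement noise with a 4.66% standard deviation?
from scipy.stats import norm

mean_pct = 1.6  # reverse-engineered mean unsupported-hours percentage
sd_pct = 4.66   # reverse-engineered ensemble standard deviation

z = mean_pct / sd_pct  # ~0.34 standard deviations above zero
p = norm.sf(z)         # one-sided tail probability under the null
print(f"z = {z:.2f}, p = {p:.2f}")  # p ~ 0.37, consistent with ~0.36 above
```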

The OIG reports “quarterly peaks” and “fluctuation by day of the week” in “unsupported hours” (Report, at 18-20), but conspicuously does not report the variance of these estimates, which would be needed to determine their statistical significance. Because monthly or daily tallies rest on far fewer observations, their relative variance must necessarily be higher than that for the entire study period: the standard error of an estimate built on a fraction 1/k of the observations grows roughly as √k. For some of its results, the OIG even declares “statistical significance” (Report, at 14, 16), but conceals both the variance of those results and the significance level it used in making such declarations. These “findings” have no credibility in view of this lack of disclosure and the large unacknowledged variance in OIG’s measures.

The Report creates the unproven impression that the percentages of unsupported hours it derived are examiner-specific; in other words, that the spread in these percentages accurately reflects an actual spread in examiners’ unfounded claims of hours worked. Commentators were thus tempted to conclude that it does not matter that the spread includes some examiners on the positive side (having worked more hours than claimed), because those on the negative side should be stopped regardless of any offsetting contributions from the positive side.

The problem with this view is that it presumes noiseless estimation. Nowhere does the Report show that its estimates are true individual attributes of the examiners in question, rather than particular realizations of a measure subject to the random perturbations inherent in the OIG’s “noisy” measure of the discrepancy in hours, a measure that, taken over a different period, would produce different results for the same examiners. Stated differently, the OIG did not establish that the bulk of the 5,185 examiners who had at least some unsupported time during the 9-month period (Report at 9) would also make up this category during a different, non-overlapping period, say, the preceding 6 months. Absent such proof, examiner realizations on the negative side cannot be isolated, and the unbiased statistics across the full distribution must be applied.
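
How much of the observed pattern can noise alone generate? Assuming, purely for illustration, that each examiner’s measured percentage is normally distributed with the reverse-engineered ensemble statistics, the expected number of examiners showing at least some “unsupported time” comes out strikingly close to the Report’s count of 5,185:

```python
# How many of 8,067 examiners would show *some* unsupported time if the
# measured percentage were pure noise? Normality is my assumption; the
# mean and sd are the reverse-engineered ensemble statistics.
from scipy.stats import norm

n_examiners = 8_067
p_positive = norm.sf(0.0, loc=1.6, scale=4.66)  # P(measured % > 0)
print(f"expected with some unsupported time: {p_positive * n_examiners:.0f}")
# ~5,100 -- close to the 5,185 the Report treats as evidence of
# widespread abuse.
```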


The alleged extreme time and attendance abusers apparently are also the most productive

In my petition for correction I show that the Report is structurally biased. The Report says that for 296 examiners, 10% or more of their claimed hours were unsupported over the 9-month sampled period, and that these examiners were paid more than $4.8 million in compensation and bonuses for those unsupported hours. Report, at 10. The activity of these examiners is depicted as Group A in the accompanying probability density plot for the full examiner ensemble. But the Report ignores an equal-sized group of examiners for whom more hours were supported than claimed. These examiners’ activities are depicted as Group B; by the OIG’s own criterion, their unpaid activity enriched the PTO.
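
Under the same illustrative assumptions (normality with the reverse-engineered statistics, plus independence across sampling periods), the size of Group A is about what noise alone would produce, and almost none of its members would reappear if the measurement were repeated over a different period:

```python
# Expected size of the >=10% tail under pure noise, and its stability
# across two independently sampled periods. Normality and across-period
# independence are my assumptions.
import numpy as np
from scipy.stats import norm

n_examiners = 8_067
p_tail = norm.sf(10.0, loc=1.6, scale=4.66)  # P(measured % >= 10)
print(f"expected in >=10% tail: {p_tail * n_examiners:.0f}")  # ~288 vs. 296

rng = np.random.default_rng(0)
period1 = rng.normal(1.6, 4.66, n_examiners)  # measured %, period 1
period2 = rng.normal(1.6, 4.66, n_examiners)  # measured %, period 2
both = ((period1 >= 10) & (period2 >= 10)).sum()
print(f"flagged >=10% in both periods: {both}")  # ~10 of ~290
```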

Examiners in Group A, whom the OIG deems the worst abusers because they claimed 10% or more unsupported hours, are also examiners who received above-average, “commendable,” or “exceptional” performance ratings. Report, at 22. Yet it seems highly implausible that the PTO’s top-performing examiners are also the ones committing the highest levels of apparent time and attendance fraud on the PTO.

Even if the findings were shown to be examiner-specific, the Report provides no analysis that could rule out more plausible explanations for what the OIG found in Group A. For example, the Report does not disclose whether or how the OIG corrected its “supportable hours” estimates upwards to account for time experienced senior examiners spend discussing and assisting junior examiners with their cases. This is an unambiguously beneficial activity that would likely not be captured in either examiner’s “digital footprint” but for which examiners would rightfully claim time. Because experienced senior examiners assist multiple junior examiners, the resulting discrepancies in their “supportable hours” could be disproportionately large.

Instead of resolving this conundrum, the OIG hedges its bets. It stops short of squarely accusing these top performers of fraud, instead characterizing them as guilty of “potential abuse” (Report, at 22). But which is it: actual abuse, or potential abuse? The OIG is supposed to investigate, not speculate. Nevertheless, the OIG apparently was not about to let a serious alleged crisis go to waste, whether or not it could be substantiated: the OIG’s top recommendation is predicated on this speculative “finding,” as described below.


The OIG says the entire examination corps should perform as if they were all in the top 4%

The Report alleges that the 296 super-performers met, or even exceeded, their performance goals by completing their work assignments in less time than allotted by their production goals. Id. From this factoid the OIG concludes that all examiners have more time than they need, so “production goals need revision upwards.” Id.

This makes no sense. First, as noted earlier, the statistical complement of the examiner group with a significant amount of unsupported hours (Group A in the plot) is a group of examiners with a significant amount of unclaimed but worked hours (Group B in the plot). The OIG ignored this latter group; in contrast to Group A, examiners in Group B appear to spend more time than allotted to meet their production goals.

Second, and most important, the OIG recommendation is based on the “above average” and “outstanding” performance of 296 examiners—less than 4% of the examining corps. Production goals, however, apply to the full examining corps, a fact the Report ignores. Even if the Report were correct, nothing in it suggests that the other 96% of examiners should (or could) be held to the same performance standard. All examiners cannot be above average, much less perform as well as the top 4%.

To sum up, the OIG Report casts terrible aspersions on thousands of diligent and hardworking patent examiners, and it does so based on flawed data, biased models, and inferences that cannot be supported even by those flawed data and biased models. The errors and omissions of key statistical facts are so blatant that it is hard not to conclude that the OIG crafted its conclusions first and backfilled the analysis to get there.

For more information, read my detailed IQA petition for correction of the OIG Report. Under the IQA, OIG should withdraw and correct the Report within 60 days or give a reasoned basis in writing for its refusal to do so.

Related stories and other views

Commerce IG Report: Patent examiners defrauded government of millions for unworked time

Inspector General’s Hyperbolic Report Distracts from Improving Patent Quality

Hearing on Examiner Fraud a Big, Fat Nothing Burger


Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com.

Join the Discussion

4 comments so far.

  • Prizzi’s Glory
    September 26, 2016 07:28 am

    I wish I had written this blog post. When I read the OIG report, I certainly had the impression I was looking at a red herring. (Or was I smelling it?)

    I have found what are almost certainly open-and-shut cases in which APJs filed false documents in decisions on appeals and on requests for rehearing.

    Such was certainly the case of the 07/773,161 patent application, in which Joseph L. Dixon, James R. Hughes, and Eric S. Frahm asserted in their decision that they read the case documents when they clearly did not and almost certainly committed felony document falsification. Because these APJs worked together, there is almost certainly an open and shut case of felony conspiracy.

    When given the chance to admit their lie, the APJs blamed Examiner Steven HD Nguyen by alleging a lack of clarity on the Examiner’s part. While the Examiner had problems with English, he was crystal clear about withdrawing rejections.

    It is interesting that the case documents were improperly withheld from the PTAB reading room. (Can we say “cover up”?)

    It is also interesting that SAWS was alleged to have been terminated within days of the filing of the decision on the appeal.

    This case seems somehow connected with merry-go-round reopening of examination in TC 3600 and shows signs of involvement of senior officials from the Director’s Office.

    So what does it mean if an APJ is committing felonies? Documents produced by APJs are inputs into CAFC and District Court hearings. Should these hearings be contaminated by documents tainted by apparently felonious actions of APJs?

    How many cases are actually being tainted?

    I ran the following final decision search, which only covers cases findable in the reading room.

    https://e-foia.uspto.gov/Foia/DispatchBPAIServlet?Objtype=ser&SearchId=&SearchRng=decDt&txtInput_StartDate=&txtInput_EndDate=&docTextSearch=Joseph+L.+Dixon&page=60

    Dixon, who is a senior 101 expert, has rendered decisions in at least 2,770 cases since the beginning of 1999. These are hardly the only BPAI/PTAB cases probably tainted by felonious actions of an APJ — apparently within the context of secret “quality assurance” programs like SAWS.

    It seems like an issue that should be addressed by the OIG (or better by a special prosecutor because if the corruption extends to the Director’s office, it may go higher).

    Without a doubt the USPTO must come clean about its secret “quality assurance” programs and any ultra vires policymaking, especially when it contravenes the APA.

  • Mr Xaminer_Truthteller
    September 24, 2016 03:17 am

    There is A LOT of time/work abuse going on at the USPTO, of various degrees, particularly by Primary Examiners, who are given free rein and are not really held accountable. Many Primary Examiners will spend just a short amount of time actually working on an application, cobble together a bogus low-quality office action, then claim all hours worked or even overtime/production bonuses (but again, quantity, not quality). Primary Examiners MAY get a few cases pulled for quality review, and if an error is found, it is no big deal: so many actions are sent out that it doesn’t affect their rating much. It’s really no secret; upper management is turning a blind eye and a deaf ear to the situation, and the OIG is simply not investigating things in the right way to understand what is REALLY going on or what recommendations to make.

  • Shannon
    September 22, 2016 10:12 pm

    Nice article!

    I totally agree the methodology the IG used to account for examination time is flawed.

    But one quibble. Under the count system, the most productive examiners are not the ones who actually produce the most work; they are the ones who “close” more cases or do more high-count work than low-count work. Consider 2 examiners, each having a BD of 20 hours. The first examiner first-action allows 4 cases. The first examiner is fully successful in meeting production goals because 20 hours per case x 4 cases = 80 hours. The second examiner writes 4 first actions and 4 final rejections. The second examiner has completed only 60 hours’ worth of credit, because 4 x (12.5 hours per first action + 2.5 hours per final rejection) = 60 hours, even though it may have taken the second examiner 80 hours to deal with the 8 cases. This is why the report, surprisingly accurately in my opinion, recognizes that it is the most “productive” examiners who commit the most time fraud. Under my hypothetical, it is likely that the first examiner will “produce” something else and claim the extra counts as overtime. I believe this is reflected in Figure 7 of the report, which shows a spike in unsupported work hours over the weekend. And under my hypothetical, the poor second examiner is probably working all kinds of voluntary overtime to acquire another 20 hours of work “credit” they need to keep their job.

  • Anon
    September 22, 2016 10:43 am

    A truly excellent analysis.

    As one who has been especially critical (and who has noted the view captured here of different players at each of the two “tails” of the normal distribution), I can fully buy into Ron’s patient and detailed analysis, and echo his call for the Report’s authors to make the appropriate corrections.

    Nice job Ron.