just_n_examiner (just_n_examiner) wrote,

Patent Quality Metrics

Last December, the Office posted a Federal Register Notice titled Request for Comments on Enhancement in the Quality of Patents. The Office, in cooperation with PPAC, is undertaking "[an] effort to improve the quality of the overall patent examination and prosecution process, to reduce patent application pendency, and to ensure that granted patents are valid and provide clear notice... The USPTO is seeking to improve the quality of the examination of patent applications and patents resulting from that examination." To be sure, these are laudable goals.

Through the notice, the Office was "seeking public comment directed to this focus with respect to methods that may be employed by applicants and the USPTO to enhance the quality of issued patents, to identify appropriate indicia of quality, and to establish metrics for the measurement of the indicia." The effort is "directed to the shared responsibility of the USPTO and the public for improving quality and reducing pendency within the existing statutory and regulatory framework."

They were looking for comments on what quality measures should be used, on the stages of examination at which quality should be measured, on whether it is feasible to improve quality while also reducing pendency, on the effect of recent pilot programs implemented by the Office, on customer surveys regarding patent quality, and on what tools are available to aid in either measuring or improving patent quality.


This all sounds good to me.


One of the things that caught my eye when I read the notice was the provision for Quality Index Ranking (QIR). Apparently the Office is using internal statistical measures to "identify outliers and other anomalies" (translation: patterns which may be indicative of poor examination) in processing and examination.

What statistical measures are being used? Items such as the following (see the sketch after the list for one way counts like these might be turned into an outlier flag):

  • multiple non-final actions

  • restriction requirements (made after the first action; multiple or sequential restrictions; restrictions late in prosecution)

  • reopening of prosecution after the filing of an appeal brief

  • reopening of prosecution after a final rejection

  • first action allowances

  • multiple requests for continued examination (RCE) made in a single application

  • allowances after RCE filing without any substantive amendment
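
To make the outlier idea concrete, here's a minimal sketch of how per-examiner counts like these might be flagged. The data, the examiner IDs, and the two-standard-deviation threshold are all my own invented assumptions, not anything the Office has described:

    # Hypothetical per-examiner rates for one event type, e.g. "reopened
    # after appeal brief" per disposed application. IDs are made up.
    from statistics import mean, stdev

    reopen_rates = {
        "ex_1001": 0.01, "ex_1002": 0.02, "ex_1003": 0.02, "ex_1004": 0.01,
        "ex_1005": 0.03, "ex_1006": 0.02, "ex_1007": 0.15, "ex_1008": 0.02,
    }

    mu = mean(reopen_rates.values())
    sigma = stdev(reopen_rates.values())

    # Flag anyone more than two standard deviations above the corps average.
    outliers = {ex: rate for ex, rate in reopen_rates.items()
                if sigma > 0 and (rate - mu) / sigma > 2}
    print(outliers)  # -> {'ex_1007': 0.15}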


On their face, these seem like reasonable things to measure when looking for problems in the conduct of examination. Still, we're talking about statistics, and one would do well to bear in mind what Mark Twain, amongst others, had to say about statistics.


Sure, some of these are pretty much irrefutable as indicators of poor-quality examination.

For instance, allowing an application after an RCE without substantive amendment would seem to indicate that the application could have been allowed before the RCE filing, unless there were new arguments or maybe affidavits filed with the RCE.

Reopening prosecution after the filing of an Appeal Brief is also a pretty good indicator; if you're not willing to send it to the Board, you shouldn't send it out as a Final Rejection. Of course, sometimes (but not often) you do see persuasive arguments in the Brief that you hadn't seen during prosecution.


Other measures are open to interpretation.

Certainly, reopening after final and multiple non-final actions are to be avoided when possible. If the rejection was done correctly in the first place, these things would never be necessary, right?

On the other hand, who is the good examiner and who is the bad one in this scenario: the examiner who sends out erroneous rejections 5% of the time, and therefore reopens or sends a second non-final 5% of the time, or the one who sends out erroneous rejections 10% of the time, but refuses to seriously consider the attorney's arguments and thus never reopens or sends a second non-final?
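
Run the numbers and the perversity is plain. A toy calculation (the rates come from the scenario above; the examiner labels are mine):

    # Which examiner looks "better" on a reopen/second-non-final metric?
    examiners = {
        "careful":  {"error_rate": 0.05, "reopen_rate": 0.05},  # fixes every mistake
        "stubborn": {"error_rate": 0.10, "reopen_rate": 0.00},  # never admits one
    }

    # Rank by the only number the metric actually sees...
    ranked = sorted(examiners, key=lambda e: examiners[e]["reopen_rate"])
    print(ranked)  # -> ['stubborn', 'careful']: the worse examiner ranks first

The metric rewards exactly the behavior you'd want to discourage.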


And really, first-action allowances? Multiple RCE filings? Those statistics are going to be benchmarks for poor examination? Well, I suppose, if there is a pattern of repeated occurrence (which is, I guess, how they'll be used: to identify outliers), but these two don't strike me as the best indicators of poor examination. I believe that the rates of first-action allowances and RCE filings vary quite a bit between different areas of art.


If you're looking for an objective measure of examination quality, my suggestion would be to look at the number of times an examiner changes their primary reference over the course of prosecution. If the search was done well from the start (say, by searching the disclosed invention and not just the claimed one), there should rarely be a need to change the primary reference, barring a major reworking of the claims. Certainly, new art does occasionally need to be applied, but if it's happening all the time, that's a pretty good sign that there's a problem.
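
Counting those changes would be straightforward given per-action citation data. A rough sketch; the record format and every identifier in it are hypothetical:

    # Count primary-reference changes per examiner across office actions.
    from collections import defaultdict

    # Each record: (examiner ID, application serial, primary reference cited).
    actions = [
        ("ex_1001", "12/345,678", "US 5,000,001"),
        ("ex_1001", "12/345,678", "US 5,000,001"),  # same primary: no change
        ("ex_1001", "12/999,999", "US 6,111,111"),
        ("ex_1001", "12/999,999", "US 7,222,222"),  # swapped primary: one change
    ]

    changes = defaultdict(int)
    last_primary = {}
    for examiner, app, primary in actions:
        if app in last_primary and last_primary[app] != primary:
            changes[examiner] += 1
        last_primary[app] = primary

    print(dict(changes))  # -> {'ex_1001': 1}

You'd still want to normalize by the number of actions and discount cases where the claims were substantially rewritten, but that's the shape of it.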


Anyway, lots of comments were received in response to the Notice (unfortunately I haven't had the chance to read them all).


This week, the Office put out another Federal Register Notice, announcing a couple of roundtables "to obtain public input from organizations and individuals on actions that can improve patent quality and the metrics the USPTO should use to measure progress." They're also soliciting more comments on the quality enhancement initiative and metrics, or on any issue raised at the roundtables. Comments will be accepted through June 18th.

The roundtables are scheduled for May 10th in Los Angeles and May 18th at PTO Headquarters in Alexandria, and are both open to the public. The Alexandria roundtable will be webcast as well.