Tools Examiners Apparently Don’t Have – But Should

In a previous blog post, I marveled at a response we received from the USPTO Patent Trial and Appeal Board (previously the Board of Patent Appeals and Interferences), because that response appears to state that software is categorically non-statutory subject matter (meaning it cannot be patented). The relevant quote from their reply to our Appeal Brief is:

"The instant claims [are directed to a] software system, which is per-se non-statutory. For a system claim to be found statutory, it has to have a physical device or processor."

As I noted in that post, this is not accurate. It flies in the face of the Supreme Court’s Bilski decision, which makes clear that one cannot presume software to be unpatentable without further inquiry. And while one part of the eligibility analysis involves tying the software to hardware components (their “physical device or processor”), that is not the only test, and perhaps not even the most important one: courts have also found that software clearly tied to hardware can still be unpatentable if other criteria are not met. In other words, this is, at best, a horribly imprecise way of explaining their objection. At worst, it is simply wrong, contradicting the decision rendered in one of the most famous patent cases in recent history (and therefore one every examiner must know).

This error, misunderstanding, horrific lack of clarity – call it what you will – should have been impossible. And it would be impossible if USPTO examiners had the most basic automated tools at their disposal. It is fairly obvious, for example, that the Board’s reply was written by hand; had the language come from a database of standard replies vetted time and time again by the most senior examiners and attorneys, it would be far more precise. Why? Why is an organization that performs repetitive tasks day in and day out, with over 6,000 examiners, doing things manually that could be automated, with faster turnaround and fewer errors?

To be clear, I’m not saying examiner expertise isn’t needed. I’m not saying everything can be automated. Far from it. But some parts can be, and should be. For example, an Office Action (or Appeal Brief response) generator wouldn’t be that complex. A database of vetted text snippets, the laws and situations to which each applies, and a templating system would enable an examiner to compose complete, informative, legally correct responses in a few minutes.
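To make that a little more concrete, here is a minimal sketch, in Python, of what such a vetted-snippet database might look like. Everything in it is hypothetical – the basis, the reason codes, and the wording are made up for illustration; the real text would be whatever the Office’s senior examiners and attorneys approve.

```python
# Hypothetical sketch of a vetted-snippet database for one basis of objection.
# All codes and wording here are invented for illustration only.
VETTED_SNIPPETS = {
    "35 USC 101": {
        "intro": "Claims {claims} are objected to as being non-statutory "
                 "subject matter under 35 USC 101.",
        "reasons": {
            "no_hw_in_claims": "No hardware is recited within those claims.",
            "no_hw_in_spec": "No hardware is described in the specification.",
            "no_data_transformation": "No substantial data transformation takes place.",
            "no_physical_outcome": "The claims are not tied to any physical outcome.",
        },
        "conclusion": "In light of this, Claims {claims} fail the "
                      "machine-or-transformation test as described in "
                      "Bilski v. Kappos.",
    },
}
```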

Picture this: the examiner knows he wants to object to one or more claims on the basis of 35 USC 101. He specifies the applicable claims and chooses that basis for objection in a software widget, which then presents him with a choice of reasons. He chooses “software fails machine-or-transformation test.” The software then prompts him to specify why the machine-or-transformation test is failed, offering options such as “No hardware components recited in claim(s),” “No hardware components recited in specification” (implying that even the exercise of claim construction would show a lack of hardware), “No substantive data transformation takes place,” and “No tie to physical outcome,” among others. He selects all that apply.
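In software terms, everything the examiner just clicked through reduces to a small structured record. Again, a hypothetical sketch, with made-up field names matching the snippet database above:

```python
# Hypothetical record of the examiner's selections in the widget, using the
# invented reason codes from the snippet database sketched earlier.
examiner_selection = {
    "claims": "1-10",
    "basis": "35 USC 101",
    "reasons": [
        "no_hw_in_claims",
        "no_hw_in_spec",
        "no_data_transformation",
    ],
}
```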

This whole process takes less than a minute – he already knows the answers to these questions, else he wouldn’t be writing an Office Action denying the claims. Now it’s just a question of setting it down on paper quickly and informatively, so that the inventor or his agent/attorney clearly understands what must be done to modify the claims to make them allowable (or, if that is impossible, why the claims should be abandoned). Given the examiner’s answers to those prompts, the software then generates something like this:

“Claims 1-10 are objected to as being non-statutory subject matter under 35 USC 101. Claims 1-10 are directed to a software process. No hardware is recited within those claims, nor is any hardware described in the specification. Additionally, no substantial data transformation takes place. In light of this, Claims 1-10 fail both prongs of the machine-or-transformation test as described in Bilski v. Kappos (http://www.supremecourt.gov/opinions/09pdf/08-964.pdf).”

Of course, I just made that paragraph up. It can say anything the Powers-That-Be want it to say. Generating something like that is trivial once the examiner has answered the necessary questions, even in more complex situations where, for example, there are multiple bases for objections. And it’s fast, complete, and accurate. It’s also easy to maintain. When new precedential decisions are published by the CAFC or Supreme Court (which isn’t all that often compared to the daily workload of 6,000+ examiners), the affected paragraphs are revised, and the updates take effect instantly, with 100% accuracy, across all examiners. And sure, the examiner can be given the ability to override the system or edit the final text when need be – no automated system will account for 100% of situations. But think of the time and work saved if the system captured even 80% of all possible scenarios (certainly achievable, especially because a handful of common objections probably cover most cases).
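To show just how trivial the generation step is, here is a toy sketch of the assembly function, reusing the hypothetical VETTED_SNIPPETS and examiner_selection from the earlier sketches. Because every examiner’s output would come from the same central snippet database, revising one paragraph after a new precedential decision updates everyone at once.

```python
def generate_response(selection, snippets=VETTED_SNIPPETS):
    """Assemble a draft response from centrally maintained, vetted snippets.

    Toy sketch only: a real system would also handle multiple bases for
    objection, claim-by-claim grouping, and an editing pass by the examiner.
    """
    basis = snippets[selection["basis"]]
    parts = [basis["intro"].format(claims=selection["claims"])]
    parts += [basis["reasons"][code] for code in selection["reasons"]]
    parts.append(basis["conclusion"].format(claims=selection["claims"]))
    return " ".join(parts)


# Produces a paragraph along the lines of the example quoted above.
print(generate_response(examiner_selection))
```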

Such a system has some other interesting, less obvious benefits. For example, standardized replies could provide machine-readable codes for the examiner objections, opening up the possibility of automation for the inventor/attorney as well. Metrics become available across the enterprise: Which objections are being used most frequently? Why? How can we use that knowledge to make the prosecution process more efficient? What does this knowledge tell us about continuing education needs? Are there particular points of law that might benefit from a new Examination Guidance document, since they seem to be frequently misunderstood? (And this is not limited to examiners – many of these questions would be worth asking about industry as well.)
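Continuing the same hypothetical sketch: the machine-readable side could be as simple as publishing the structured selection record alongside the prose, and the enterprise metrics as simple as counting reason codes across past actions.

```python
import json
from collections import Counter

# Publish the structured selections alongside the generated prose so the
# applicant's software (and the Office's own analytics) can read them.
machine_readable = json.dumps(examiner_selection)

# Toy metric: which reasons for objection are used most often? In reality,
# past_selections would be drawn from a database of issued Office Actions.
past_selections = [examiner_selection]
reason_counts = Counter(code for sel in past_selections for code in sel["reasons"])
print(reason_counts.most_common(3))
```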

Maybe some of this is already being done. But it doesn’t seem like most of it is, and what I’ve written here just scratches the surface. The development of such tools would make prosecution faster, cheaper (for both the USPTO and private industry, which would benefit from speedy turnaround and more informative replies), and more accurate. No one should have to scratch their head wondering what on Earth it means to be told, after 6 1/2 years of prosecution, that “software… is per-se non-statutory.” Not when the problems that lead to such occurrences are so obviously solvable.