A patent application can be granted if the claimed device is sufficiently different from previously known devices. Some applications can be rejected outright because examination reveals that exactly the same idea has been published before. Others can be quickly granted because the difference over the closest prior art is crystal clear. But most cases fall somewhere in between.
To decide borderline cases, European patent examiners are typically instructed to ask whether a fictional “person with ordinary skill in the art” would have solved the so-called “objective technical problem” in the same manner as the device defined in the patent claim. In principle, a uniformly applicable marker set by the “person of ordinary skill” thereby divides all applications into those that can be granted and those that cannot. All applications are then judged by equal criteria, and the requirements for obtaining a patent can be adjusted across the board by instructing examiners to raise or lower the skill level needed for a grant, that is, by moving the “ordinary skill” marker, as illustrated in Figure 1.
This looks easy in a drawing, but in practice it is difficult to apply the “ordinary skill” marker uniformly across thousands of applications. Despite formal guidance, not all patent examiners will assess the capabilities of the fictional skilled person in the same manner. Borderline cases will inevitably come down to subjective opinions about what “ordinary skill” entails. Instead of a precisely adjustable marker, the scale will then have a grey zone where the decision to grant or not to grant a patent can (with reasonable justification) go either way, as illustrated in Figure 2.
One way to reduce subjectivity and convert the scale depicted in Figure 2 closer to the ideal depicted in Figure 1 might be to replace the fictional construct of a person of ordinary skill with a real-life computer which all examiners would have at their disposal: an artificially intelligent system which I will call the computer skilled in the art.
Let’s imagine the following examination procedure:
An applicant submits an application which comprises a patent claim containing technical features F1-F4. A patent examiner performs a search to locate the prior art document D1 which corresponds most closely to the claim, and formulates the objective technical problem. The examiner may or may not receive help from an artificially intelligent system in finding the closest prior art; this makes no difference for the present argument. The device presented in document D1 contains features F1-F3, but not F4.
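In set terms, the distinguishing feature of the claim over D1 is simply a set difference. A minimal sketch, using the hypothetical feature labels from the example above:

```python
# Hypothetical feature labels from the example: the claim recites
# F1-F4, while the closest prior art document D1 discloses F1-F3.
claim_features = {"F1", "F2", "F3", "F4"}
d1_features = {"F1", "F2", "F3"}

# The distinguishing feature(s) of the claim over D1 -- the part
# the inventive step assessment actually turns on:
distinguishing = claim_features - d1_features
print(distinguishing)  # {'F4'}
```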
Instead of pondering whether or not a fictional person of ordinary skill might have solved the objective technical problem by adding feature F4 to the device known from D1, the examiner submits a task to the computer skilled in the art. The AI system is given:
- the closest prior art document D1,
- information which pinpoints the parts of D1 which correspond to the known features F1-F3 of the patent claim, and
- the objective technical problem.
The examiner does not submit feature F4 to the artificially intelligent system. Instead, the system is asked to solve the objective technical problem based on the information it has been given. If the artificially intelligent system suggests feature F4 as a solution to the objective technical problem, the application can be rejected because the claimed invention was obvious to the computer skilled in the art.
However, the key question would not necessarily have to be whether or not the computer is able (in the allocated time) to find the same solution as the applicant had claimed (F4). After all, the computer might quickly produce thousands of different solutions, many of which might not make any sense in practice. It would be harsh to rule out inventive step just because one of these solutions corresponds to feature F4. A better approach might be to instruct the computer to rank the solutions it produces. The claimed solution could then, for example, be considered obvious to the computer if it is ranked in the top ten.
There would be at least two benefits to using an artificially intelligent system as a measuring stick for inventive step in this manner. Firstly, the artificially intelligent system would operate uniformly, unlike a fictional person, whose skill different examiners can imagine very differently. Secondly, the requirements for obtaining a patent could easily be raised or lowered across the board, for example by increasing or decreasing the time allocated to the computer for finding a solution to the objective technical problem, or by changing the rank threshold below which a solution counts as obvious, as illustrated in Figure 3.
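The decision rule sketched above can be written down compactly. Everything in the following sketch is hypothetical: the ranked output, the feature labels, and the default top-ten threshold are illustrative assumptions, not a description of any real examination tool.

```python
def is_obvious(ranked_solutions: list[str],
               claimed_feature: str,
               rank_threshold: int = 10) -> bool:
    """Return True if the claimed solution (e.g. F4) appears among
    the top-ranked solutions proposed by the computer skilled in
    the art. The claimed feature is compared only AFTER the system
    has produced its ranking, so the system never sees F4."""
    return claimed_feature in ranked_solutions[:rank_threshold]

# Hypothetical ranked output of the AI system, from most to least
# plausible solution to the objective technical problem:
ranked = ["F9", "F4", "F7", "F12", "F5"]

# Under the top-ten rule, F4 is ranked second, so the claim would
# be considered obvious:
print(is_obvious(ranked, "F4"))                    # True

# Tightening the threshold to top-one raises the bar for rejection;
# the same claim is then no longer considered obvious:
print(is_obvious(ranked, "F4", rank_threshold=1))  # False
```

The two adjustment knobs mentioned above map directly onto this sketch: the time allocated to the solver bounds how long `ranked` may become, and `rank_threshold` sets the lowest rank that still defeats inventive step.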
Testing what the computer skilled in the art can come up with would not be an attempt to re-create something that real inventors might do. After all, the abilities of the “person of ordinary skill” do not exactly correspond to those of any real person, either. The computer skilled in the art and the person skilled in the art are just procedural tools for applying patent law fairly.
However, two problems would still have to be solved before artificially intelligent systems could be used as stand-in inventors for assessing inventive step.
Problem 1 is that the artificially intelligent system might not be up to the task of processing written information into new practical solutions and presenting those solutions in written form. The fictional person of ordinary skill is assumed to be capable of routine experimentation, but replacing that person with a computer might require real-life practical skills from the computer if the solutions proposed by the AI system are to be good enough to function as a measuring stick for inventive step. It may be that artificially intelligent systems still have some way to go before they can easily solve routine practical problems.
An AI system could presumably obtain quasi-practical skills through a training process where it is programmed to suggest routine improvements to existing technical devices, and a technically competent person provides feedback to the system about these suggestions. But the computer skilled in the art might also have to learn to reason like a human. It is a basic tenet of inventive step reasoning that the skilled person would take very seriously any guidance that the closest prior art document(s) provides for solving the objective technical problem. It is considered unlikely that a person of ordinary skill would pursue a certain solution, if the closest prior art indicates that the solution is disadvantageous.
The computer skilled in the art might not be inclined to show a comparable sensitivity to such “pointers” provided by the prior art. It has no need to economize its workload based on earlier human experience. But it could be argued that some form of workload-reduction-seeking behavior would have to be programmed into the computer if patent applications are to be judged fairly by the computer skilled in the art.
But let’s assume that problem 1 can be solved. This still leaves us with problem 2, which may not be solvable at all: it would be almost impossible for the applicant to argue against a negative opinion from the patent office stating that the patent claim cannot be granted because the computer skilled in the art discovered the same solution when it solved the objective technical problem. What would a possible counter-argument look like?
Artificially intelligent systems are opaque – it is usually not possible to analyze afterwards how they arrived at a particular solution. And even if that were possible, and the applicant were provided with a record of how the AI system arrived at its solution, scrutinizing the work of the AI system would surely place an unduly large burden on the applicant. So the same objectivity which enables the computer skilled in the art to establish a clear marker in Figure 3 actually prevents any further discussion between the applicant and the examiner. This would clearly violate the applicant’s basic right to be heard and to defend their claims.
Taking all this into account, perhaps the best solution would be to use the computer skilled in the art only as a complement to the person skilled in the art, so that the output of the AI system wouldn’t be decisive. The grey zone shown in Figure 2 might be a nuisance, if it’s extremely wide, but it is also the space where the examiner and the applicant can conduct a reasoned debate about the merits of the patent claim. Patent offices should, therefore, not aim to reduce the width of that grey zone to zero.