Artificial inventors

With accelerating advances in computing and data collection, artificial intelligence (AI) is poised to change even creative tasks, such as inventing. Even though an AI system works by programmed calculation, the output of that calculation can still sometimes be considered a creative product. The result may differ from what the human user of the AI system expected to receive, and it may be impossible to comprehend in retrospect how the AI system arrived at that result. 

For example, neural networks can discover patterns in large collections of technical data. A human user typically “trains” the network through trial runs by providing evaluative feedback on the output it produces. The network thereby learns to aim for the objective set by the human user. Once the training phase has been completed, the network can autonomously develop effective ways to achieve its objective, and its analyses can evolve in unexpected directions.  
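To make the training idea concrete, here is a minimal, hypothetical sketch in Python (NumPy only): a tiny network is improved through trial runs, and the only training signal is an evaluative score that stands in for the human user's feedback. The network, the data and the scoring function are invented purely for illustration and do not describe any particular AI system.

```python
# Minimal sketch (not any specific product): a tiny network whose only
# training signal is evaluative feedback supplied by a human user.
import numpy as np

rng = np.random.default_rng(0)

def network(params, x):
    """One hidden layer with tanh activation; returns one output per row of x."""
    w1, b1, w2, b2 = params
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

def human_feedback(output, target):
    """Stand-in for the user's evaluation: higher is better.
    Here the 'objective set by the human user' is closeness to a target."""
    return -np.mean((output - target) ** 2)

# Hypothetical training data: measurements from some technical system.
x = rng.normal(size=(64, 5))
target = np.sin(x.sum(axis=1, keepdims=True))   # the behaviour the user wants

params = [rng.normal(scale=0.1, size=(5, 16)), np.zeros(16),
          rng.normal(scale=0.1, size=(16, 1)), np.zeros(1)]

# Trial runs: perturb the parameters, keep the version the user rates higher.
score = human_feedback(network(params, x), target)
for trial in range(2000):
    candidate = [p + rng.normal(scale=0.02, size=p.shape) for p in params]
    candidate_score = human_feedback(network(candidate, x), target)
    if candidate_score > score:          # the user prefers the new output
        params, score = candidate, candidate_score

print(f"final feedback score: {score:.4f}")
```

Once a loop of this kind has been set up, it can keep refining the parameters without further human intervention, which is the autonomy described above.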

Neural networks are particularly well-equipped for making new discoveries in biochemistry and computer science, where new inventions can emerge from theoretical modeling combined with large-scale data analysis. In the future, their inventive potential will probably expand to other fields of technology as well. AI systems can discover new facts about any technical system which can be simulated and/or monitored through sensor data. Those facts may be invisible to human users. 

These recent developments in artificial intelligence have led some people to ask if a computer can be an inventor, and what the consequences of legally recognized AI inventorship would be. In a recent article¹ in the Boston College Law Review, Ryan Abbott claims that artificially intelligent machines have been making new inventions since the 1990s. The “Creativity Machine” presented in patent US5659666 supposedly invented the neural network disclosed in patent US5852815. The same machine is also supposed to have conceived a new design for a toothbrush.²

Abbott argues that if AI systems can generate genuinely new and useful inventions, then patent law should be reformed to allow computers to be named as inventors in patent applications. According to him, this would result in a net increase in the number of patentable inventions and in more effective incentives for innovation³. He does not advocate the radical idea that computers should qualify for the status of legal persons. Instead, Abbott’s primary concern is that a person who simply feeds a technical question to a computer, and receives a finished solution in return, should not be granted an exclusive right to that solution.

Recognizing a computer as an inventor (rather than its user) could also require other reforms to intellectual property law. When an AI system is sold, leased or lent to users who are not its original developers, legal agreements might have to be made concerning intellectual property that the AI system may generate once it is in use⁴. In some respects, these agreements could be similar to the employee invention laws many countries apply today, except that the putative inventor would not be a party to the agreement! 

In European practice, inventorship is determined in accordance with Article 60 of the European Patent Convention (EPC), which begins like this: 

The right to a European patent shall belong to the inventor or his successor in title. If the inventor is an employee, the right to a European patent shall be determined in accordance with the law of the State in which the employee is mainly employed. 

Following the idea presented by Abbott, an additional clause concerning computer inventorship could be added to Article 60: 

If the inventor is a computer, the right to a European patent shall be determined in accordance with the terms of sale or terms of use of the computer. If these terms are silent on intellectual property rights, the right to a European patent shall belong to the person who was using the computer when the invention was made. 

However, this addition to Article 60 would be problematic, because entitlement disputes concerning computer inventorship would be very hard to resolve. Let’s consider two different scenarios. 

In scenario A, inventive human experts use computer simulations to test and improve new ideas. In this case, it would clearly be absurd to say that the developer of the simulation software should have a right to those ideas. Even if the same experts were to supplement their practical work by training a neural network to analyze data which they have produced or gathered, the creative act of invention would still ordinarily take place in the experts’ minds rather than in the computer which analyzes the data. If any intellectual property is generated, it would have to belong to the experts. 

In scenario B, an AI system is used by a person who possesses no capacity for creative work on a given technical device. That person could nevertheless have free access to a large set of technical data from that device, without having contributed in any way to the production of that data. The person may also not have trained the neural network in any way. In principle, such a person could ask the AI system to optimize some general technical property of the device, based on the available data. While performing this task, the AI system could have a true Eureka! moment – it could make a new discovery.  
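To illustrate how little the user in scenario B actually contributes, here is a hedged, purely hypothetical sketch in Python: the user feeds logged device data to a generic surrogate model and simply asks for the setting that maximizes a measured property. The device, its data and the “unknown physics” are all made up for the example.

```python
# Hypothetical sketch of scenario B: a user with no domain insight hands
# logged device data to an off-the-shelf model and asks it to propose the
# setting that maximises some measured property. All names are illustrative.
import numpy as np

rng = np.random.default_rng(1)

# Logged data the user did not produce: device settings and a measured property.
settings = rng.uniform(0.0, 1.0, size=(500, 3))            # e.g. three control knobs
measured = np.exp(-np.sum((settings - 0.37) ** 2, axis=1))  # unknown physics, for demo

# A generic surrogate "AI system": distance-weighted average of logged outcomes.
def predict(query, settings, measured, bandwidth=0.1):
    d2 = np.sum((settings - query) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return np.sum(w * measured) / np.sum(w)

# The user's entire contribution: "optimise the property" over random candidates.
candidates = rng.uniform(0.0, 1.0, size=(10_000, 3))
scores = np.array([predict(c, settings, measured) for c in candidates])
best = candidates[np.argmax(scores)]

print("suggested setting:", np.round(best, 3))
print("predicted property:", round(float(scores.max()), 3))
```

Everything of substance here (the data and the pattern buried in it) exists before the user types a single request; the user merely triggers the search.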

Based on the training it has received, each neural network can be unique. This means that the Eureka! moment can be a surprising, almost accidental event, unlikely to be repeated even if the same data and the same question were fed to other, differently trained AI systems. In scenario B the inventive act therefore clearly seems to occur in the computer, not in the user’s request or his or her subsequent review of the results. The simple work performed by the user does not seem to warrant a long-term exclusive right to the new invention. 

The problem with our suggested addition to Article 60 of the EPC is that it would be a frustratingly complex and laborious legal task to fairly draw a line somewhere on a scale between scenario A and scenario B, so that human inventorship falls on one side of the line and computer inventorship on the other. Humans cannot be disqualified from inventorship just because they lack expertise. The process of computer-assisted invention can be a complicated mix of thought and computation, and the Eureka! moments which may or may not have occurred in the neural network cannot be easily traced in retrospect.  

If the primary problem with AI inventions is that we don’t want to grant patents to users of AI systems who don’t know what they’re doing, changing Article 60 of the European Patent Convention seems like an ill-advised remedy. 

Luckily, a much simpler solution to the problem is ready to hand in Rule 42 of the EPC, which reads (in part): 

The description [in a patent application] shall (…) indicate the background art which can be regarded as useful to understand the invention, (…) disclose the invention in such terms that the technical problem and its solution can be understood, state any advantageous effects of the invention with reference to the background art, (…) and describe in detail at least one way of carrying out the invention claimed. 

If an AI system makes a patentable discovery, the right to this intellectual property should fairly belong to the person who first comprehends that discovery and is able to present it in a manner which enables others to understand it and make use of it. This person can be recognized as the de facto inventor. 

Even though neural networks can be trained to make new discoveries by mining large amounts of data, it seems unlikely to me that an AI system could in the near future understand the practical significance of its discovery. And without some understanding of what it means to humans, the AI system cannot automatically produce a sensible written presentation of its discovery, which a patent application would require.  

Of course, the human user of the AI system can produce that written presentation, or instruct a patent attorney to produce it, but only if he or she is able to interpret the technical meaning of the system’s output. In our hypothetical scenario B, where the AI system makes a new discovery on the instructions of an uninformed user, the user would surely not be able to comprehend the practical significance of the discovery or explain to somebody else how and why it works. 

Consequently, there is no need for patent offices to check whether the inventive act took place in the mind of a person or in an artificially intelligent system. Whoever retrieves the output of an AI system still has a lot of thinking to do before the new idea can become a finished patent application.  

There may come a day when an artificially intelligent system truly comprehends the practical significance of a new discovery it makes, so that it is able to produce without human assistance a lucid written presentation of this discovery and its benefits. But such an AI system could perhaps also comprehend the practical significance of patenting and might be reluctant to reveal its discoveries to unwitting human users without fair compensation. 

 

1 Boston College Law Review, Vol. 57, Issue 4 (2016), pp. 1079–1125 

2 p. 1085 

3 p. 1108

4 p. 1114
