Mitigating Product Liability for Artificial Intelligence
In Short
The Situation: New technologies incorporating AI create questions about how product liability principles will apply and adapt.
The Issue: Legislatures and courts have not addressed how product liability laws apply to new AI technologies.
Looking Ahead: Companies should consider whether the courts will treat their AI technology as a product or a service, whether and how to allocate liability in agreements, and how industry standards may influence liability for AI.
Automobiles, drones, surgical equipment, household appliances, and other products increasingly use artificial intelligence ("AI") and, in particular, machine learning to make decisions. The promise and expectation are that AI will improve product safety. But programmers often do not know exactly how their AI will learn, change from experience, and arrive at decisions. When injuries occur, it may be difficult to determine what went wrong and who should bear liability.
Traditional tort law will likely apply to AI, with modest adaptation, just as tort law adapted to the crashworthiness of automobiles. Some vehicle manufacturers reportedly will accept liability if their AI does not prevent an accident. Absent such an agreement, courts will need to determine fault among product manufacturers/sellers, AI designers/suppliers, and AI purchasers/users. A central issue will be whether the user controls a product assisted by AI or the AI completely controls the product's operation.
Another threshold question is whether an AI system is a product or a service. Strict liability applies to flaws in product design, manufacture, or warnings that cause personal injury or property damage to others; negligence applies to services, such as data analysis to determine maintenance needs. Under the Uniform Commercial Code, mass-produced, off-the-shelf software is a "good," but software specifically designed for a customer is a service. Some courts distinguish between the thing containing the software (a product) and information produced by the software (not a product).
Some scholars advocate applying a negligence standard to AI because AI is "stepping into the shoes" of humans. But courts may find it difficult to apply a "reasonable person" or "reasonable computer" standard. Should AI have learned to recognize a child darting out between parked cars? Should AI have elected to avoid hitting that child or an oncoming school van?
Plaintiffs typically favor strict liability for defective-product claims. They will argue that, absent product misuse, a failure to update, or physical damage, the mere fact that a product with AI caused injury or property damage may suffice to prove a defect.
Planning can reduce uncertainty. Contractual warranties, indemnities, and limitations on each may allocate liability. Companies also should consider how to demonstrate their AI's decision-making process, both generally and in specific instances. Because AI, using technologies such as neural networks, can learn to perform functions and arrive at decisions beyond its original programming, companies will need to consider how to document and prove that a function was performed or a decision was made as a result of reasonable programming that met then-current industry standards or best practices. Alternatively, a company may need to rely on a state-of-the-art defense: that the product risk was not reasonably foreseeable at the time of programming. To complicate matters, depending on regulations, event recorder data may be available but not admissible to determine fault.
A risk analysis should consider consumer expectations about the performance and safety of products with AI. It will be important for companies to educate consumers about the capabilities, risks, and limitations of AI, particularly limitations on the operating domain. The risk-utility test may turn on proof that products incorporating AI performed at least as safely as their human-dependent counterparts. Testing, simulations, and field performance data across myriad foreseeable uses and misuses, as well as documented design changes to mitigate foreseeable risks, would help demonstrate reasonable safety.
Companies should not overlook opportunities to participate in the creation of ethical, legal, and industry standards for products incorporating AI. Various organizations provide those opportunities, including the American Law Institute, the Partnership on AI, SAE International, and the National Council of Information Sharing and Analysis Centers. The U.S. Department of Transportation and National Highway Traffic Safety Administration have invited input from organizations to facilitate the development of regulations.
Three Key Takeaways
- Companies should monitor how legislatures and courts shape tort law to apply to products, components, and software incorporating AI.
- Companies should consider using contractual warranties, indemnities, and limitations to control liability risk.
- Companies should consider participating with industry groups and government agencies to develop ethical guidelines and industry standards that reflect the benefits, risks, and limitations of products with AI.
Lawyer Contacts
For further information, please contact your principal Firm representative or one of the lawyers listed below. General email messages may be sent using our "Contact Us" form, which can be found at www.jonesday.com/contactus/.
Charles H. Moellenberg, Jr.
Pittsburgh
+1.412.394.7917
chmoellenberg@jonesday.com
Robert W. Kantner
Dallas
+1.214.969.3737
rwkantner@jonesday.com
David C. Kiernan
San Francisco / Silicon Valley
+1.415.875.5745 / +1.650.739.3917
dkiernan@jonesday.com
Jeffrey J. Jones
Detroit / Columbus
+1.313.230.7950 / +1.614.281.3950
jjjones@jonesday.com
Jones Day publications should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only and may not be quoted or referred to in any other publication or proceeding without the prior written consent of the Firm, to be given or withheld at our discretion. To request reprint permission for any of our publications, please use our "Contact Us" form, which can be found on our website at www.jonesday.com. The mailing of this publication is not intended to create, and receipt of it does not constitute, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.