Article (Scientific journals)
The Explanations One Needs for the Explanations One Gives. The Necessity of Explainable AI (XAI) for Causal Explanations of AI-related harm - Deconstructing the ‘Refuge of Ignorance’ in the EU’s AI Liability Regulation
Grozdanovski, Ljupcho
2024. In International Journal of Law, Ethics and Technology, (2), pp. 155-262
Peer reviewed
 

Files


Full Text: IntJLEthicsTech_y_ExplanationsOneNeeds.pdf (author postprint, 1.25 MB)

Details



Keywords :
Artificial Intelligence; causation; explainability; procedural fairness; equality of arms; effective participation; product liability; AI Act; AI Liability Directive; Product Liability Directive
Abstract :
[en] This paper examines how explanations related to the adverse outcomes of Artificial Intelligence (AI) contribute to the development of causal evidentiary explanations in disputes surrounding AI liability. The study employs a dual approach: first, it analyzes the emerging global caselaw in the field of AI liability, seeking to discern prevailing trends regarding the evidence and explanations considered essential for the fair resolution of disputes. Against the backdrop of those trends, the paper evaluates the upcoming legislation in the European Union (EU) concerning AI liability, namely the AI Liability Directive (AILD) and the Revised Product Liability Directive (R-PLD). The objective is to ascertain whether the systems of evidence and procedural rights outlined in this legislation, particularly the right to request the disclosure of evidence, enable litigants to adequately understand the causality underlying AI-related harms. Moreover, the paper seeks to determine whether litigants can effectively express their views before dispute-resolution authorities based on that understanding. An examination of the AILD and R-PLD reveals that their evidence systems primarily support ad hoc explanations, allowing litigants and courts to assess the extent of the defendants' compliance with the standards enshrined in regulatory instruments such as the AI Act. However, the paper contends that, beyond ad hoc explanations, achieving fair resolution in AI liability disputes necessitates post-hoc explanations. These should be directed at unveiling the functionalities of AI systems and the rationale behind harmful automated decisions. The paper thus suggests that ‘full’ explainable AI (XAI), that is, both ad hoc and post hoc, is necessary so that the constitutional requirements associated with the right to a fair trial (access to courts, equality of arms, contradictory debate) can be effectively met.
Disciplines :
European & international law
Author, co-author :
Grozdanovski, Ljupcho; Université de Liège - ULiège > Cité
Language :
English
Publication date :
May 2024
Journal title :
International Journal of Law, Ethics and Technology
Issue :
2
Pages :
155-262
Available on ORBi :
since 04 July 2024