Working paper (E-prints, working papers and research blog)
The Explanations One Needs for the Explanations One Gives. The Necessity of Explainable AI (XAI) for Causal Explanations of AI-related harm - Deconstructing the ‘Refuge of Ignorance’ in the EU’s AI liability Regulation
Grozdanovski, Ljupcho
2024
Full Text: just_ai_jeanmonnetresearchpaper_1_2024_grozdanovski1.pdf (author postprint, 1.56 MB)
Details



Keywords :
AI; causation; explainability; fair trial; procedural fairness; equality of arms; effective judicial protection; AI liability; Product Liability; AI Act; AI Liability Directive; Product Liability Directive
Abstract :
[en] This paper examines how explanations related to the adverse outcomes of Artificial Intelligence (AI) contribute to the development of causal evidentiary explanations in disputes surrounding AI liability. The study employs a dual approach: first, it analyzes the emerging global case law in the field of AI liability, seeking to discern prevailing trends regarding the evidence and explanations considered essential for the fair resolution of disputes. Against the backdrop of those trends, the paper evaluates the upcoming legislation in the European Union (EU) concerning AI liability, namely the AI Liability Directive (AILD) and the Revised Product Liability Directive (R-PLD). The objective is to ascertain whether the systems of evidence and procedural rights outlined in this legislation, particularly the right to request the disclosure of evidence, enable litigants to adequately understand the causality underlying AI-related harms. Moreover, the paper seeks to determine whether litigants can effectively express their views before dispute-resolution authorities based on that understanding. An examination of the AILD and R-PLD reveals that their evidence systems primarily support ad hoc explanations, allowing litigants and courts to assess the extent of the defendants' compliance with the standards enshrined in regulatory instruments such as the AI Act. However, the paper contends that, beyond ad hoc explanations, achieving fair resolution in AI liability disputes necessitates post hoc explanations. These should be directed at unveiling the functionalities of AI systems and the rationale behind harmful automated decisions. The paper thus suggests that ‘full’ explainable AI (XAI), that is, explainability that is both ad hoc and post hoc, is necessary so that the constitutional requirements associated with the right to a fair trial (access to courts, equality of arms, contradictory debate) can be effectively met.
Research Center/Unit :
European Legal Studies Research Center (ELSC)
Disciplines :
European & international law
Judicial law
Civil law
Author, co-author :
Grozdanovski, Ljupcho ;  Université de Liège - ULiège > Cité
Language :
English
Publication date :
26 January 2024
Name of the research project :
JUST-AI Jean Monnet Center of Excellence
Funders :
EACEA - Agence exécutive européenne pour l’Education et la Culture (European Education and Culture Executive Agency) [BE]
Funding number :
101127357
Available on ORBi :
since 29 January 2024