Scientific conference in universities or research centers
The Explanations One Needs for the Explanations One Gives: Thoughts on the Epistemic Link between Explainable AI and Causal (Evidentiary) Explanations under the EU’s AI Liability Regulation
Grozdanovski, Ljupcho
2023
 

Full Text :
No document available.

Details



Keywords :
Artificial Intelligence, causality, liability, AI Act, AI Liability Directive, Product Liability Directive
Abstract :
The aim of this presentation is to bridge the concept of explainability in connection with AI and (legal) evidence. We specifically explore the epistemology of, and the interrelationship between, explanations concerning the (in)accuracy of AI output and causal explanations concerning the link between that output and a harm suffered. With explanatory accuracy as the guiding thread, our analytical framework rests on two main theoretical touchstones (and corresponding methodologies). The first strand consists of general knowledge-construction theory and the epistemology of legal evidence; it informs us of the conditions that must be met for explanations of AI output, and for explanations of causality in connection with that output, to be considered accurate (or at least plausible). The second touchstone is the theory of justice and procedural fairness. This choice is justified by the fact that trials are the privileged epistemic contexts in which accurate explanations of disputed facts are sought. To meet the required standards of accuracy, litigants in AI liability disputes should have procedural entitlements allowing them to give evidence and explain causation in conditions of procedural parity. In light of this (procedural) equality requirement, which should frame the pursuit of factual accuracy in adjudicatory contexts, the key analytical referent for this study is the theory of procedural abilities, i.e. the entitlements that litigants should enjoy in order to effectively make their views known before a court.

Against this backdrop, and with a focus on the European Union's (EU) regulation of AI, we seek to answer two questions: 1. In cases of harm occasioned by the use of an AI system, does the accuracy of causal explanations in AI-related disputes depend on the accuracy of the explanations given of a system's functionalities? 2. If so, should the applicable systems of evidence in the EU include the procedural ability for litigants to request and/or give evidence and explanation on how a given system caused harm? To answer these questions, we critically examine the systems of evidence in the EU's upcoming procedural regulation of AI, namely the AI Liability Directive (AILD) and the revised Product Liability Directive (R-PLD). Both instruments grant victims of AI-related harm the right to request evidence (and explanation), though not for the purpose of uncovering how an AI system actually caused harm (post hoc explainability), but for the purpose of determining whether a human agent (programmer or user) complied with technical standardization legislation such as the AI Act (ad hoc explainability).

An analysis of the available (mostly North American) caselaw on AI liability suggests that the EU's systems of evidence are open to criticism. First, that caselaw reveals a trend of litigants consistently seeking evidence on how a given system actually caused harm, for which they naturally require post hoc explanations. The examined caselaw also shows that 'opening the black box' is not always feasible, pushing courts to request expert evidence capable of supporting arguments on the causal link between an AI system and a harm suffered. By limiting the evidence (and the corresponding explanations) to ad hoc explainability (i.e. compliance with the technical standards in the AI Act), the AILD and the R-PLD do not seem to leave much room for litigants to request additional evidence, such as reverse engineering or expertise, that could provide the explanations they need to effectively argue causation. Second, neither the AILD nor the R-PLD mentions proof of reliance on (harmful) automated decisions. This is the missing explanatory piece in the instruments considered: as the examined national caselaw shows, the explanation that victims highlight as necessary is, again, not whether a human agent complied with applicable technical standards; they seek explanations of the reasons why that agent believed they should rely on a given decision (the noteworthy point being that those reasons may or may not be rooted in the agent's compliance with an instrument like the AI Act). Perhaps, once the AILD and the R-PLD become binding, court practice will interpret their provisions in a way that enhances litigants' procedural abilities to request the evidence they need to better explain and debate causation. Until national and EU courts begin applying these instruments, however, we remain in a wait-and-see position and can only speculate on how they ought to be applied so that the basic requirements of fairness (such as the equality of arms) are fully observed.
Disciplines :
European & international law
Author, co-author :
Grozdanovski, Ljupcho ;  Université de Liège - ULiège > Cité
Language :
English
Title :
The Explanations One Needs for the Explanations One Gives: Thoughts on the Epistemic Link between Explainable AI and Causal (Evidentiary) Explanations under the EU’s AI Liability Regulation
Publication date :
29 November 2023
Event name :
Seminar
Event organizer :
Inria Université de Lille
Event place :
Lille, France
Event date :
29 November 2023
Available on ORBi :
since 15 December 2023
