Keywords :
Evidence; Artificial Intelligence; AI Act; risk; risk regulation; normativity
Abstract :
[en] In the spirit of the European Commission's (EC) risk-based approach to AI, the AI Act (COM(2021) 206 final) contains a four-level taxonomy of AI-related risks, ranging from non-high to unacceptable. For so-called high-risk AI, it sets out a priori technical standards, the observance of which is meant to prevent the occurrence of various types of harm. However, based on a quantitative and qualitative analysis of the results of two public consultations conducted by the EC, this study shows that the views gathered by the EC are not reflected in the AI Act's provisions. Although in 'standard' EU risk regulation the objective of attaining a desired level of protection can justify a regulatory response, evidence remains required in order to avoid risk misrepresentations. Bearing in mind the requirement for evidence-based policy expressed in the 2015 Better Regulation Agenda, this study argues that the AI Act, as it currently stands, is not based on the evidence gathered and analysed by the EC; rather, a preexisting strategy on AI seems to form, primarily if not exclusively, the grounds on which the EC designed the regulatory framework that took shape in the AI Act.
De Cooman, Jérôme; Université de Liège - ULiège > Département de droit > Droit matériel européen ; Université de Liège - ULiège > Cité ; Université de Liège - ULiège > Département de droit > Droit européen de la concurrence
Language :
English
Title :
Of Hypotheses and Facts: The Curious Origins of the EU's Regulation of High-Risk AI