Article (Scientific journals)
Humpty Dumpty and High-Risk AI Systems: The Ratione Materiae Dimension of the Proposal for an EU Artificial Intelligence Act
De Cooman, Jérôme
2022, in Market and Competition Law Review, VI (1), pp. 49-88
 

Files
Full Text: De_Cooman_Jerome_Humpty_Dumpty_The_Ratione_Materiae_Dimension_of_the_Proposal_for_an_EU_Artificial_Intelligence_Act.pdf (Author postprint, 36.78 MB)
Details



Keywords :
Artificial Intelligence Act; AI Act; risk; risk regulation; product safety; recommender systems; EU Competition law
Abstract :
[en] On 21 April 2021, the European Commission proposed a Regulation laying down harmonised rules on artificial intelligence (hereafter "AI"). This so-called Artificial Intelligence Act (hereafter the "Proposal") is based on European values and fundamental rights. Far from appearing ex nihilo, it relies heavily on the Ethics Guidelines proposed by the independent High-Level Expert Group on AI set up by the European Commission. In addition, the European Commission's White Paper on AI had already called for a regulatory framework concentrating on a risk-based approach to AI regulation that minimises potential material and immaterial harms. The Proposal, which is internal-market oriented, sets up a risk-based approach to AI regulation that distinguishes between unacceptable, high, specific and non-high risks. First, AI systems that produce unacceptable risk are prohibited by default. Second, those that generate high risk are subject to compliance with mandatory requirements such as transparency and human oversight. Third, AI systems interacting with natural persons have to respect transparency obligations. Fourth, developers and users of non-high-risk AI systems are encouraged to voluntarily endorse the requirements applicable to high-risk AI systems. Exploring the origins of the AI Act and the ratione materiae dimension of the Proposal, this article argues that the very choice of what counts as a high-risk AI system, and the astounding complexity of that definition, are open to criticism. Rather than a full-extent analysis of the Proposal's requirements, the article focuses on the definition of what constitutes an unacceptable, high, specific and non-high-risk AI system. With its strong emphasis on high-risk AI systems, the Commission comes dangerously close to the pitfall this article humorously labels the Humpty Dumpty fallacy, in tribute to the nineteenth-century English author Lewis Carroll.
Just because the Commission exhaustively enumerates high-risk AI systems does not mean the residual category is non-high-risk. To support this argument, this article introduces two examples: recommender systems for consumers and for competition law enforcement authorities. Neither example falls within the AI Act's scope of application. Yet the issues they raise might well be qualified as high-risk in a different context, and the AI Act, although inapplicable, could have provided a solution.
Disciplines :
European & international law
Author, co-author :
De Cooman, Jérôme; Université de Liège - ULiège > Département de droit > Droit matériel européen
Language :
English
Title :
Humpty Dumpty and High-Risk AI Systems: The Ratione Materiae Dimension of the Proposal for an EU Artificial Intelligence Act
Publication date :
25 May 2022
Journal title :
Market and Competition Law Review
ISSN :
2184-0008
Publisher :
Universidade Católica Editora, Porto, Portugal
Volume :
VI
Issue :
1
Pages :
49-88
Peer reviewed :
Peer Reviewed verified by ORBi
Available on ORBi :
since 31 May 2022
