This paper assesses the relevance of following a product safety approach in AI regulation. It explores the concept of risk to fundamental rights, an original notion within traditional risk regulation scholarship, focusing on how the various technical standards applied to so-called high-risk AI systems contribute to enhancing fundamental rights protection by eliminating or minimising the risk that those rights will be violated.
Against this backdrop, the paper critically examines the soundness of the normative choice to prioritise technical standards (typically associated with product safety) over subjective rights as a means of preventing fundamental rights violations. This inquiry is particularly relevant given that technical standardisation under the AI Act is intended to mitigate the risk of fundamental rights violations without granting end users subjective rights, with the exception of the right to lodge a complaint and a (residual) right to an explanation.
The paper argues that compliance with technical standards designed to safeguard fundamental rights represents a unique interplay between risks and rights. While the concept of risk in EU regulation traditionally applies to measurable and assessable threats, the emerging notion of risk to fundamental rights entails unquantifiable probabilities of harm. Despite this, the EU legislature has chosen to adopt a product safety logic for the AI Act, relying on the arguably naïve assumption that compliance with technical standards alone will be sufficient to protect against fundamental rights violations. On this basis, the paper critically evaluates what this choice reveals about the EU legislature's approach in enacting the AI Act, and what its potential effects on the Act's implementation may be.