Abstract:
Artificial Intelligence (hereafter, ‘AI’) systems are widely adopted by public administrations, and competition law enforcement is no exception. This is unsurprising. First, AI systems promise to address well-documented flaws in human decision-making, e.g., arbitrariness or bias. Second, AI systems carry the potential to solve the scissor effect of competition law enforcement identified by the European Court of Auditors (hereafter, ‘ECA’) in 2020, i.e., a decrease in market surveillance capacity on the one hand, and an increase in the complexity of cases on the other. The ECA therefore called on the European Commission (hereafter, ‘EC’) to put more effort into proactively detecting anticompetitive behaviours. In this regard, it has been suggested that AI systems could revitalize ex officio investigations by helping the EC open the ‘right’ investigation. One way to do so is AI-driven cartel screening.
AI-driven cartel screening flags indicators of collusion that then trigger the need for further investigation. This paper chooses to focus on dawn raids. In this regard, it should be borne in mind that the duty to state reasons applies to dawn raids – at least to some extent. Leaving aside the duty to state specific reasons pursuant to Article 20(4) of Regulation 1/2003, this paper focuses on the condition laid down by the European Court of Justice: for a dawn raid to be legal, the EC must be in possession of information and evidence providing reasonable grounds for suspecting an infringement of competition law by the undertaking concerned. The question is, therefore, whether the conclusions of AI-driven cartel screening constitute such information and evidence providing reasonable grounds.
This is debatable. Although this paper does not discard the benefits of AI-driven cartel screening, it argues that it is not a silver bullet against collusive behaviour. This algorithmic shift in the fight against cartels faces (at least) three major challenges. The first is a data challenge. AI-driven cartel screening is a data-dependent solution and is therefore impacted by problems in the availability and quality of the data it relies on. As a result, the error rate (both type I and type II) is expected to be non-negligible. The second is an algorithmic challenge. The opacity of some AI systems infringes the principle of good administration, as limited explicability – given unknown parameter weights – prevents public servants from properly stating the reasons for their decisions. The third is a human challenge. Human cognitive biases likewise challenge the duty to state reasons, as explaining an administrative decision also means explaining how the public servant weighed the algorithmic recommendation in the decision-making process.
None of these drawbacks constitutes a dead end. This paper draws inspiration from the Proposal for a Regulation laying down harmonised rules on AI (hereafter, ‘AI Act’) and suggests technical (based on semi-supervised learning) and non-technical (four-eyes principle) solutions to ensure that AI serves to legally justify dawn raids rather than becoming a source of fishing expeditions.