Abstract:
[en] The use of artificial intelligence (AI) in military armament brings a great number of unparalleled opportunities, but also a series of challenges, whether technical, ethical, political, or legal in nature. While the legal challenges will be at the heart of this report, one cannot disregard the other challenges, as they are intimately intertwined with, and highly relevant to, the legal questions.
This holds particularly true for the technical capabilities of AI-driven weapons and their level of autonomy, especially regarding the control that would be left to humans over targeting decisions. Due to the progressive integration of autonomy into weapons, the role of the human operating such a weapon appears to shift from that of an ‘active controller’ to that of a ‘passive supervisor’. While it is generally agreed that ‘meaningful human control’ should be maintained at all times to ensure lawful use and appropriate accountability, this control is, in practice, less obvious and increasingly difficult to uphold due to the speed of execution, the number of tasks to be accomplished, and the complexity of those tasks. Moreover, the unpredictability resulting from the nature and design of these autonomous systems calls into question the role and impact of human control. For instance, in the case of weapons with self-learning capabilities, the role of human actors is further reduced, as the system learns by itself and develops new solutions to complex problems without human intervention.
In this report, we will begin by presenting the Belgian political debate and legal approach to these complex issues, at both the national and international levels (I). We will then describe how this approach fits into the more general, domestic legal framework on the jus ad bellum (II) and the jus in bello (III). Lastly, we will analyse whether and how the general principles of criminal liability can be applied (IV).