Unpublished conference/Abstract (Scientific congresses and symposiums)
Selection of Source Images Heavily Influences the Effectiveness of Adversarial Attacks
Ozbulak, Utku; Anzaku, Esla Timothy; De Neve, Wesley et al.
2021, The 32nd British Machine Vision Conference (BMVC 2021)
Peer reviewed
 

Files

Full Text : BMVC-2021.pdf, Author postprint (2.04 MB)


Details



Keywords :
adversarial; adversarial attacks; adversarial examples; adversarial perturbation; adversarial transferability; adversarial vulnerability; model-to-model transferability; robustness; source images; image suitability; imagenet
Abstract :
[en] Although the adoption rate of deep neural networks (DNNs) has tremendously increased in recent years, a solution for their vulnerability against adversarial examples has not yet been found. As a result, substantial research efforts are dedicated to fixing this weakness, with many studies typically using a subset of source images to generate adversarial examples, treating every image in this subset as equal. We demonstrate that, in fact, not every source image is equally suited for this kind of assessment. To do so, we devise a large-scale model-to-model transferability scenario for which we meticulously analyze the properties of adversarial examples, generated from every suitable source image in ImageNet by making use of three of the most frequently deployed attacks. In this transferability scenario, which involves seven distinct DNN models, including the recently proposed vision transformers, we reveal that it is possible to have a difference of up to $12.5\%$ in model-to-model transferability success, $1.01$ in average $L_2$ perturbation, and $0.03$ ($8/255$) in average $L_{\infty}$ perturbation when $1,000$ source images are sampled randomly among all suitable candidates. We then take one of the first steps in evaluating the robustness of images used to create adversarial examples, proposing a number of simple but effective methods to identify unsuitable source images, thus making it possible to mitigate extreme cases in experimentation and support high-quality benchmarking.
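As an illustration of the quantities reported in the abstract, the sketch below shows how model-to-model transferability success and the average $L_2$ and $L_{\infty}$ perturbations over a random sample of $1,000$ source images could be computed once adversarial examples are available. It is not code from the paper; the array names x_clean, x_adv, target_preds, and true_labels are assumed placeholders, and pixel values are assumed to lie in [0, 1].

    import numpy as np

    def transferability_success(target_preds, true_labels):
        # Fraction of adversarial examples misclassified by the (transfer) target model.
        return float(np.mean(target_preds != true_labels))

    def average_perturbations(x_clean, x_adv):
        # Per-image perturbation, flattened to vectors; pixel values assumed in [0, 1].
        delta = (x_adv - x_clean).reshape(len(x_clean), -1)
        avg_l2 = float(np.linalg.norm(delta, ord=2, axis=1).mean())
        avg_linf = float(np.abs(delta).max(axis=1).mean())
        return avg_l2, avg_linf

    def sampled_subset_metrics(x_clean, x_adv, target_preds, true_labels, n=1000, seed=0):
        # Randomly sample n source images and report the three quantities above;
        # repeating this with different seeds exposes the sampling-induced variation.
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(x_clean), size=n, replace=False)
        success = transferability_success(target_preds[idx], true_labels[idx])
        avg_l2, avg_linf = average_perturbations(x_clean[idx], x_adv[idx])
        return success, avg_l2, avg_linf

    # Minimal synthetic usage (placeholder data, not ImageNet):
    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        x_clean = rng.random((2000, 3, 32, 32), dtype=np.float32)
        noise = rng.uniform(-8 / 255, 8 / 255, x_clean.shape).astype(np.float32)
        x_adv = np.clip(x_clean + noise, 0.0, 1.0)
        true_labels = rng.integers(0, 1000, size=2000)
        target_preds = rng.integers(0, 1000, size=2000)  # stand-in for target-model predictions
        print(sampled_subset_metrics(x_clean, x_adv, target_preds, true_labels))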
Disciplines :
Computer science
Author, co-author :
Ozbulak, Utku
Anzaku, Esla Timothy
De Neve, Wesley
Van Messem, Arnout ; Université de Liège - ULiège > Département de mathématique > Statistique appliquée aux sciences
Language :
English
Title :
Selection of Source Images Heavily Influences the Effectiveness of Adversarial Attacks
Publication date :
22 November 2021
Event name :
The 32nd British Machine Vision Conference (BMVC 2021)
Event date :
November 22-25, 2021
Audience :
International
Peer reviewed :
Peer reviewed
Available on ORBi :
since 12 May 2022
