Paper published on a website (Scientific congresses and symposiums)
Selection of Source Images Heavily Influences the Effectiveness of Adversarial Attacks
Ozbulak, Utku; Anzaku, Esla Timothy; De Neve, Wesley et al.
2021, The 32nd British Machine Vision Conference (BMVC 2021)
Peer reviewed
 

Files

Full Text: 0783.pdf, publisher postprint (473.85 kB)
Annexes: 0783supp.pdf (6 MB)

Details



Abstract :
[en] Although the adoption rate of deep neural networks (DNNs) has tremendously increased in recent years, a solution for their vulnerability against adversarial examples has not yet been found. As a result, substantial research efforts are dedicated to fixing this weakness, with many studies typically using a subset of source images to generate adversarial examples, treating every image in this subset as equal. We demonstrate that, in fact, not every source image is equally suited for this kind of assessment. To do so, we devise a large-scale model-to-model transferability scenario for which we meticulously analyze the properties of adversarial examples, generated from every suitable source image in ImageNet by making use of three of the most frequently deployed attacks. In this transferability scenario, which involves seven distinct DNN models, including the recently proposed vision transformers, we reveal that it is possible to have a difference of up to 12.5% in model-to-model transferability success, 1.01 in average L2 perturbation, and 0.03 (8/255) in average L∞ perturbation when 1,000 source images are sampled randomly among all suitable candidates. We then take one of the first steps in evaluating the robustness of images used to create adversarial examples, proposing a number of simple but effective methods to identify unsuitable source images, thus making it possible to mitigate extreme cases in experimentation and support high-quality benchmarking. In support of future research efforts, we make our code, the statistics for all evaluated source images, and the list of identified fragile source images publicly available at https://github.com/utkuozbulak/imagenet-adversarial-image-evaluation.
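The abstract reports average L2 and L∞ perturbation sizes for adversarial examples. As a point of reference, the sketch below (not from the paper; function and variable names are illustrative) shows how these two norms are typically computed over the pixel-wise difference between an original image and its adversarial counterpart:

```python
import numpy as np

def perturbation_norms(original, adversarial):
    """Return the L2 and L-infinity norms of the perturbation.

    Both inputs are float arrays of identical shape with pixel
    values in [0, 1]; the perturbation is their difference.
    """
    delta = adversarial.astype(np.float64) - original.astype(np.float64)
    l2 = float(np.sqrt(np.sum(delta ** 2)))     # Euclidean norm of the flattened difference
    linf = float(np.max(np.abs(delta)))         # largest absolute per-pixel change
    return l2, linf

# Toy example: a uniform 8/255 perturbation on a 4x4 single-channel image,
# matching the 8/255 L-infinity budget mentioned in the abstract.
orig = np.zeros((4, 4, 1))
adv = orig + 8 / 255
l2, linf = perturbation_norms(orig, adv)
```

For this uniform perturbation, the L∞ norm equals the per-pixel budget (8/255 ≈ 0.031), while the L2 norm grows with the number of perturbed pixels.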
Disciplines :
Computer science
Author, co-author :
Ozbulak, Utku
Anzaku, Esla Timothy;  Ghent University Global Campus
De Neve, Wesley;  Ghent University Global Campus
Van Messem, Arnout;  Université de Liège - ULiège > Département de mathématique > Statistique appliquée aux sciences
Language :
English
Title :
Selection of Source Images Heavily Influences the Effectiveness of Adversarial Attacks
Publication date :
2021
Event name :
The 32nd British Machine Vision Conference (BMVC 2021)
Event date :
November 22–25, 2021
Audience :
International
Peer reviewed :
Peer reviewed
Available on ORBi :
since 06 April 2022
