K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778.
J. Redmon, S. K. Divvala, R. B. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," CoRR, vol. abs/1506.02640, 2015.
O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015. Springer International Publishing, 2015, pp. 234-241.
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," CoRR, vol. abs/1312.6199, 2013.
A. Nguyen, J. Yosinski, and J. Clune, "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 427-436.
S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, "Deepfool: A simple and accurate method to fool deep neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2574-2582.
I. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," CoRR, vol. abs/1412.6572, 2014.
N. Papernot, P. D. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, "The limitations of deep learning in adversarial settings," CoRR, vol. abs/1511.07528, 2015.
N. Carlini and D. A. Wagner, "Towards evaluating the robustness of neural networks," CoRR, vol. abs/1608.04644, 2016.
A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial machine learning at scale," CoRR, vol. abs/1611.01236, 2016.
O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, "ImageNet Large Scale Visual Recognition Challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211-252, 2015.
N. Papernot, P. D. McDaniel, X. Wu, S. Jha, and A. Swami, "Distillation as a defense to adversarial perturbations against deep neural networks," CoRR, vol. abs/1511.04508, 2015.
J. Lu, T. Issaranon, and D. A. Forsyth, "Safetynet: Detecting and rejecting adversarial examples robustly," CoRR, vol. abs/1704.00103, 2017.
N. Carlini and D. A. Wagner, "Adversarial examples are not easily detected: Bypassing ten detection methods," CoRR, vol. abs/1705.07263, 2017.
A. Athalye, N. Carlini, and D. Wagner, "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples," arXiv preprint arXiv:1802.00420, 2018.
A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial examples in the physical world," CoRR, vol. abs/1607.02533, 2016.
C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
J. S. Bridle, "Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition," in Neurocomputing. Springer, 1990, pp. 227-236.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
U. Ozbulak, W. De Neve, and A. Van Messem, "How the softmax output is misleading for evaluating the strength of adversarial examples," NeuRIPS 2018, Workshop on Security in Machine Learning, arXiv:1811.08577, 2018.
K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
M. Frigge, D. C. Hoaglin, and B. Iglewicz, "Some implementations of the boxplot," The American Statistician, vol. 43, no. 1, pp. 50-54, 1989.
F. R. Hampel, "The breakdown points of the mean combined with some rejection rules," Technometrics, vol. 27, no. 2, pp. 95-107, 1985.
J. W. Tukey, Exploratory Data Analysis. Reading, Mass., 1977, vol. 2.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1-9.
C.-J. Simon-Gabriel, Y. Ollivier, B. Schölkopf, L. Bottou, and D. Lopez-Paz, "Adversarial vulnerability of neural networks increases with input dimension," arXiv preprint arXiv:1802.01421, 2018.