Carlini, N., Wagner, D.A., Towards evaluating the robustness of neural networks. CoRR, 2016 arXiv:1608.04644.
Carlini, N., Wagner, D.A., Adversarial examples are not easily detected: bypassing ten detection methods. CoRR, 2017 arXiv:1705.07263.
Gong, Z., Wang, W., Ku, W.-S., Adversarial and clean data are not twins. CoRR, 2017 arXiv:1704.04960.
Goodfellow, I., McDaniel, P., Papernot, N., Making machine learning robust against adversarial inputs. Commun. ACM 61:7 (2018), 56–66.
Goodfellow, I., Shlens, J., Szegedy, C., Explaining and harnessing adversarial examples. CoRR, 2014 arXiv:1412.6572.
Grosse, K., Manoharan, P., Papernot, N., Backes, M., McDaniel, P., On the (statistical) detection of adversarial examples. CoRR, 2017 arXiv:1702.06280.
He, K., Zhang, X., Ren, S., Sun, J., Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, 770–778.
Krizhevsky, A., Hinton, G., Learning multiple layers of features from tiny images. Technical Report, 2009.
Krizhevsky, A., Sutskever, I., Hinton, G.E., ImageNet classification with deep convolutional neural networks. Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q., (eds.) Advances in Neural Information Processing Systems 25, 2012, 1097–1105.
Kurakin, A., Goodfellow, I., Bengio, S., Adversarial examples in the physical world. CoRR, 2016 arXiv:1607.02533.
LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., Gradient-based learning applied to document recognition. Proc. IEEE 86:11 (1998), 2278–2324.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A., Towards deep learning models resistant to adversarial attacks. CoRR, 2017 arXiv:1706.06083.
Metzen, J.H., Genewein, T., Fischer, V., Bischoff, B., On detecting adversarial perturbations. CoRR, 2017 arXiv:1702.04267.
Moosavi-Dezfooli, S.-M., Fawzi, A., Frossard, P., DeepFool: a simple and accurate method to fool deep neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
Papernot, N., McDaniel, P.D., Goodfellow, I., Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. CoRR, 2016 arXiv:1605.07277.
Papernot, N., McDaniel, P.D., Jha, S., Fredrikson, M., Celik, Z.B., Swami, A., The limitations of deep learning in adversarial settings. CoRR, 2015 arXiv:1511.07528.
Redmon, J., Divvala, S.K., Girshick, R.B., Farhadi, A., You only look once: unified, real-time object detection. CoRR, 2015 arXiv:1506.02640.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L., ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115:3 (2015), 211–252.
Simonyan, K., Zisserman, A., Very deep convolutional networks for large-scale image recognition. CoRR, 2014 arXiv:1409.1556.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R., Intriguing properties of neural networks. CoRR, 2013 arXiv:1312.6199.