
A decade of adversarial examples: a survey on the nature and understanding of neural network non-robustness
A.V. Trusov 1,2,3, E.E. Limonova 1,2, V.V. Arlazarov 1,2

1 Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, 119333, Russia, Moscow, Vavilova 44, bldg. 2;
2 Smart Engines Service LLC, 117312, Russia, Moscow, pr. 60-letiya Oktyabrya 9;
3 Moscow Institute of Physics and Technology, 141701, Russia, Dolgoprudny, Institutskiy per. 9


DOI: 10.18287/2412-6179-CO-1494

Pages: 222-252.

Full text of article: English.

Abstract:
Adversarial examples, in the context of computer vision, are inputs deliberately crafted to deceive or mislead artificial neural networks. These examples exploit vulnerabilities in neural networks: minimal alterations to the original input that are imperceptible to humans can significantly change the network’s output. In this paper, we present a thorough survey of research on adversarial examples, with a primary focus on their impact on neural network classifiers. We closely examine the theoretical capabilities and limitations of artificial neural networks. After that, we explore the discovery and evolution of adversarial examples, starting from basic gradient-based techniques and progressing toward the recent trend of employing generative neural networks for this purpose. We discuss the limited effectiveness of existing countermeasures against adversarial examples. Furthermore, we emphasize that adversarial examples originate from the misalignment between human and neural network decision-making processes, which can be attributed to the current methodology for training neural networks. We also argue that the commonly used term “attack on neural networks” is misleading when discussing adversarial deep learning. Through this paper, our objective is to provide a comprehensive overview of adversarial examples and to inspire researchers to develop more robust neural networks. Such networks will align better with human decision-making processes and enhance the security and reliability of computer vision systems in practical applications.
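As a minimal illustration of the basic gradient-based techniques mentioned in the abstract, the sketch below implements the fast gradient sign method (FGSM) of Goodfellow et al. (reference 121) in PyTorch. This is an illustrative sketch, not code from the paper; the model, the batched image tensor, the label, and the perturbation budget epsilon are hypothetical placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, image, label, epsilon=0.03):
        # Fast gradient sign method: take one step of size epsilon in the
        # direction of the sign of the loss gradient w.r.t. the input,
        # then clamp back to the valid pixel range [0, 1].
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

With epsilon of only a few gray levels (e.g., 8/255), such a perturbation is typically imperceptible to humans yet often flips the prediction of an undefended classifier.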

Keywords:
adversarial examples, adversarial deep learning, neural networks, neural network security.

Citation:
Trusov AV, Limonova EE, Arlazarov VV. A decade of adversarial examples: a survey on the nature and understanding of neural network non-robustness. Computer Optics 2025; 49(2): 222-252. DOI: 10.18287/2412-6179-CO-1494.

References:

  1. Lynchenko A, Sheshkus A, Arlazarov VL. Document image recognition algorithm based on similarity metric robust to projective distortions for mobile devices. Proc SPIE 2019; 11041: 110411K. DOI: 10.1117/12.2523152.
  2. Demidova YA, Sheshkus AV, Arlazarov VL. Method for training a compact discrete neural network descriptor. Proc SPIE 2022; 12084: 1208411. DOI: 10.1117/12.2623166.
  3. Bezmaternykh PV, Ilin DA, Nikolaev DP. U-Net-bin: hacking the document image binarization contest. Computer Optics 2019; 43(5): 825-832. DOI: 10.18287/2412-6179-2019-43-5-825-832.
  4. Shariff W, Farooq MA, Lemley J, Corcoran P. Event-based YOLO object detection: Proof of concept for forward perception system. Proc SPIE 2023; 12701: 127010A. DOI: 10.1117/12.2679341.
  5. Ilin DA. Fast words boundaries localization in text fields for low quality document images. Proceedings of the Institute for Systems Analysis Russian Academy of Sciences (ISA RAS) 2018; 68(S1): 192-198. DOI: 10.14357/20790279180522.
  6. Ye M, Shen J, Lin G, Xiang T, Shao L, Hoi SCH. Deep learning for person re-identification: A survey and outlook. IEEE Trans Pattern Anal Mach Intell 2022; 44(6): 2872-2893. DOI: 10.1109/TPAMI.2021.3054775.
  7. Andreeva EI, Arlazarov VV, Gayer AV, Dorokhov EP, Sheshkus AV, Slavin OA. Document recognition method based on convolutional neural network invariant to 180 degree rotation angle. Journal of Information Technologies and Computing Systems 2019; 4: 87-93. DOI: 10.14357/20718632190408.
  8. Arlazarov VV, Andreeva EI, Bulatov KB, Nikolaev DP, Petrova OO, Savelev BI, Slavin OA. Document image analysis and recognition: a survey. Computer Optics 2022; 46(4): 567-589. DOI: 10.18287/2412-6179-CO-1020.
  9. Yang B, Cao X, Xiong K, Yuen C, Guan YL, Leng S, Qian L, Han Z. Edge intelligence for autonomous driving in 6G wireless system: Design challenges and solutions. IEEE Wirel Commun 2021; 28(2): 40-47. DOI: 10.1109/MWC.001.2000292.
  10. Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R. Intriguing properties of neural networks. arXiv Preprint. 2013. Source: <https://arxiv.org/abs/1312.6199>. DOI: 10.48550/arXiv.1312.6199.
  11. Brown TB, Mané D, Roy A, Abadi M, Gilmer J. Adversarial patch. arXiv Preprint. 2017. Source: <https://arxiv.org/abs/1712.09665>. DOI: 10.48550/arXiv.1712.09665.
  12. Lin C-S, Hsu C-Y, Chen P-Y, Yu C-M. Real-world adversarial examples via makeup. 2022 IEEE Int Conf on Acoustics, Speech and Signal Processing (ICASSP) 2022: 2854-2858. DOI: 10.1109/ICASSP43922.2022.9747469.
  13. Hu S, Liu X, Zhang Y, Li M, Zhang LY, Jin H, Wu L. Protecting facial privacy: Generating adversarial identity masks via style-robust makeup transfer. IEEE/CVF Conf on Computer Vision and Pattern Recognition (CVPR) 2022: 15014-15023. DOI: 10.1109/CVPR52688.2022.01459.
  14. Zolfi A, Avidan S, Elovici Y, Shabtai A. Adversarial mask: Real-world universal adversarial attack on face recognition models. In Book: Amini MR, Canu S, Fischer A, Guns T, Kralj NP, Tsoumakas G, eds. Machine learning and knowledge discovery in databases. European Conference, ECML PKDD 2022, Grenoble, France, September 19-23, 2022, Proceedings, Part III. Cham, Switzerland: Springer Nature Switzerland AG; 2023, pp. 304-320. DOI: 10.1007/978-3-031-26409-2_19.
  15. Zhou Z, Tang D, Wang X, Han W, Liu X, Zhang K. Invisible mask: Practical attacks on face recognition with infrared. arXiv Preprint. 2018. Source: <https://arxiv.org/abs/1803.04683>. DOI: 10.48550/arXiv.1803.04683.
  16. Wu Z, Lim S-N, Davis LS, Goldstein T. Making an invisibility cloak: Real world adversarial attacks on object detectors. In Book: Vedaldi A, Bischof H, Brox T, Frahm JM, eds. Computer vision – ECCV 2020. 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IV. Cham, Switzerland: Springer Nature Switzerland AG; 2020: 1-17. DOI: 10.1007/978-3-030-58548-8_1.
  17. Thys S, Van Ranst W, Goedemé T. Fooling automated surveillance cameras: Adversarial patches to attack person detection. 2019 IEEE/CVF Conf on Computer Vision and Pattern Recognition Workshops (CVPRW) 2019: 49-55. DOI: 10.1109/CVPRW.2019.00012.
  18. Zhong Y, Liu X, Zhai D, Jiang J, Ji X. Shadows can be dangerous: Stealthy and effective physical-world adversarial attack by natural phenomenon. 2022 IEEE/CVF Conf on Computer Vision and Pattern Recognition (CVPR) 2022: 15345-15354. DOI: 10.1109/CVPR52688.2022.01491.
  19. Hong S, Davinroy M, Kaya Y, Locke SN, Rackow I, Kulda K, Dachman-Soled D, Dumitraş T. Security analysis of deep neural networks operating in the presence of cache side-channel attacks. arXiv Preprint. 2018. Source: <https://arxiv.org/abs/1810.03487>. DOI: 10.48550/arXiv.1810.03487.
  20. Oh SJ, Schiele B, Fritz M. Towards reverse-engineering black-box neural networks. In Book: Samek W, Montavon G, Vedaldi A, Hansen L, Müller KR, eds. Explainable AI: Interpreting, explaining and visualizing deep learning. Cham, Switzerland: Springer Nature Switzerland AG; 2019: 121-144. DOI: 10.1007/978-3-030-28954-6_7.
  21. Chmielewski L, Weissbart L. On reverse engineering neural network implementation on GPU. In Book: Zhou J, et al, eds. Applied cryptography and network security workshops. ACNS 2021 Satellite Workshops, AIBlock, AIHWS, AIoTS, CIMSS, Cloud S&P, SCI, SecMT, and SiMLA, Kamakura, Japan, June 21-24, 2021, Proceedings. Cham, Switzerland: Springer Nature Switzerland AG; 2021: 96-113. DOI: 10.1007/978-3-030-81645-2_7.
  22. Goldblum M, Tsipras D, Xie C, Chen X, Schwarzschild A, Song D, Madry A, Li B, Goldstein T. Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses. IEEE Trans Pattern Anal Mach Intell 2023; 45(2): 1563-1580. DOI: 10.1109/TPAMI.2022.3162397.
  23. Gu T, Dolan-Gavitt B, Garg S. BadNets: Identifying vulnerabilities in the machine learning model supply chain. arXiv Preprint. 2017. Source: <https://arxiv.org/abs/1708.06733>. DOI: 10.48550/arXiv.1708.06733.
  24. Shafahi A, Huang WR, Najibi M, Suciu O, Studer C, Dumitras T, Goldstein T. Poison frogs! Targeted clean-label poisoning attacks on neural networks. NIPS'18: Proc 32nd Int Conf on Neural Information Processing Systems 2018: 6106-6116.
  25. Fredrikson M, Jha S, Ristenpart T. Model inversion attacks that exploit confidence information and basic countermeasures. CCS '15: Proc 22nd ACM SIGSAC Conf on Computer and Communications Security 2015: 1322-1333. DOI: 10.1145/2810103.2813677.
  26. Wang Y, Deng J, Guo D, Wang C, Meng X, Liu H, Ding C, Rajasekaran S. Sapag: A self-adaptive privacy attack from gradients. arXiv Preprint. 2020. Source: <https://arxiv.org/abs/2009.06228>. DOI: 10.48550/arXiv.2009.06228.
  27. Akhtar N, Mian A. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access 2018; 6: 14410-14430. DOI: 10.1109/ACCESS.2018.2807385.
  28. Machado GR, Silva E, Goldschmidt RR. Adversarial machine learning in image classification: A survey toward the defender’s perspective. ACM Comput Surv (CSUR) 2021; 55(1): 8. DOI: 10.1145/3485133.
  29. Long T, Gao Q, Xu L, Zhou Z. A survey on adversarial attacks in computer vision: Taxonomy, visualization and future directions. Computers & Security 2022; 121: 102847. DOI: 10.1016/j.cose.2022.102847.
  30. Ren K, Zheng T, Qin Z, Liu X. Adversarial attacks and defenses in deep learning. Engineering 2020; 6(3): 346-360. DOI: 10.1016/j.eng.2019.12.012.
  31. Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A. Towards deep learning models resistant to adversarial attacks. arXiv Preprint. 2017. Source: <https://arxiv.org/abs/1706.06083>. DOI: 10.48550/arXiv.1706.06083.
  32. Liu X, Cheng M, Zhang H, Hsieh C-J. Towards robust neural networks via random self-ensemble. In Book: Ferrari V, Hebert M, Sminchisescu C, Weiss Y, eds. Computer Vision – ECCV 2018. 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part VII. Cham, Switzerland: Springer Nature Switzerland AG; 2018: 381-397. DOI: 10.1007/978-3-030-01234-2_23.
  33. Athalye A, Carlini N, Wagner D. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. Int Conf on Machine Learning (PMLR) 2018: 274-283.
  34. Wong E, Kolter Z. Provable defenses against adversarial examples via the convex outer adversarial polytope. Int Conf on Machine Learning (PMLR) 2018: 5286-5295.
  35. Zhang X, Zhang X, Sun M, Zou X, Chen K, Yu N. Imperceptible black-box waveform-level adversarial attack towards automatic speaker recognition. Complex & Intelligent Systems 2023; 9: 65-79. DOI: 10.1007/s40747-022-00782-x.
  36. Kwon H, Lee S. Ensemble transfer attack targeting text classification systems. Computers & Security 2022; 117: 102695. DOI: 10.1016/j.cose.2022.102695.
  37. Mo K, Tang W, Li J, Yuan X. Attacking deep reinforcement learning with decoupled adversarial policy. IEEE Trans Dependable Secure Comput 2023; 20(1): 758-768. DOI: 10.1109/TDSC.2022.3143566.
  38. Zhou X, Liang W, Li W, Yan K, Shimizu S, Wang KI-K. Hierarchical adversarial attacks against graph-neural-network-based iot network intrusion detection system. IEEE Internet Things J 2022; 9(12): 9310-9319. DOI: 10.1109/JIOT.2021.3130434.
  39. Hornik K, Stinchcombe M, White H. Multilayer feedforward networks are universal approximators. Neural Netw 1989; 2(5): 359-366. DOI: 10.1016/0893-6080(89)90020-8.
  40. Akhtar N, Mian A, Kardan N, Shah M. Advances in adversarial attacks and defenses in computer vision: A survey. IEEE Access 2021; 9: 155161-155196. DOI: 10.1109/ACCESS.2021.3127960.
  41. Mi J-X, Wang X-D, Zhou L-F, Cheng K. Adversarial examples based on object detection tasks: A survey. Neurocomputing 2023; 519: 114-126. DOI: 10.1016/j.neucom.2022.10.046.
  42. Serban A, Poll E, Visser J. Adversarial examples on object recognition: A comprehensive survey. ACM Comput Surv (CSUR) 2020; 53(3): 66. DOI: 10.1145/3398394.
  43. McCulloch WS, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys 1943; 5: 115-133. DOI: 10.1007/BF02478259.
  44. Rosenblatt F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol Rev 1958; 65(6): 386. DOI: 10.1037/H0042519.
  45. Gholami A, Kim S, Dong Z, Yao Z, Mahoney MW, Keutzer K. A survey of quantization methods for efficient neural network inference. arXiv Preprint. 2021. Source:<https://arxiv.org/abs/2103.13630>. DOI: 10.48550/arXiv.2103.13630.
  46. Chen H, Wang Y, Xu C, Shi B, Xu C, Tian Q, Xu C. AdderNet: Do we really need multiplications in deep learning? 2020 IEEE/CVF Conf on Computer Vision and Pattern Recognition (CVPR) 2020: 1465-1477. DOI: 10.1109/CVPR42600.2020.00154.
  47. Limonova EE, Alfonso DM, Nikolaev DP, Arlazarov VV. Bipolar morphological neural networks: Gate-efficient architecture for computer vision. IEEE Access 2021; 9: 97569-97581. DOI: 10.1109/ACCESS.2021.3094484.
  48. Paszke A, Gross S, Massa F, et al. PyTorch: An imperative style, high-performance deep learning library. Proc 33rd Int Conf on Neural Information Processing Systems 2019: 8026-8037.
  49. Keras. Simple. Flexible. Powerful. 2024. Source: <https://keras.io>.
  50. Li Y, Yuan Y. Convergence analysis of two-layer neural networks with ReLU activation. NIPS'17: Proc 31st Int Conf on Neural Information Processing Systems 2017: 597-607.
  51. Zou D, Cao Y, Zhou D, Gu Q. Gradient descent optimizes over-parameterized deep ReLU networks. Mach Learn 2020; 109: 467-492. DOI: 10.1007/s10994-019-05839-6.
  52. Bahri Y, Dyer E, Kaplan J, Lee J, Sharma U. Explaining neural scaling laws. arXiv Preprint. 2021. Source: <https://arxiv.org/abs/2102.06701>. DOI: 10.48550/arXiv.2102.06701.
  53. Jacot A, Gabriel F, Hongler C. Neural tangent kernel: Convergence and generalization in neural networks. NIPS'18: Proc 32nd Int Conf on Neural Information Processing Systems 2018: 8580-8589.
  54. Bachmann G, Anagnostidis S, Hofmann T. Scaling mlps: A tale of inductive bias. arXiv Preprint. 2023. Source: <https://arxiv.org/abs/2306.13575>. DOI: 10.48550/arXiv.2306.13575.
  55. Ding S, Li H, Su C, Yu J, Jin F. Evolutionary artificial neural networks: A review. Artif Intell Rev 2013; 39(3): 251-260. DOI: 10.1007/s10462-011-9270-6.
  56. Zhou H, Lan J, Liu R, Yosinski J. Deconstructing lottery tickets: Zeros, signs, and the supermask. Proc 33rd Int Conf on Neural Information Processing Systems 2019: 3597-3607.
  57. Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature 1986; 323(6088): 533-536. DOI: 10.1038/323533a0.
  58. Duchi J, Hazan E, Singer Y. Adaptive subgradient methods for online learning and stochastic optimization. J Mach Learn Res 2011; 12(7): 2121-2159. DOI: 10.5555/1953048.2021068.
  59. Tieleman T, Hinton G, et al. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning 2012; 4(2): 26-31.
  60. Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv Preprint. 2014. Source: <https://arxiv.org/abs/1412.6980>. DOI: 10.48550/arXiv.1412.6980.
  61. Cybenko G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 1989; 2(4): 303-314. DOI: 10.1007/BF02551274.
  62. Kolmogorov AN. On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition. Doklady Akademii Nauk SSSR 1957; 114(5): 953-956.
  63. Hornik K. Approximation capabilities of multilayer feedforward networks. Neural Netw 1991; 4(2): 251-257. DOI: 10.1016/0893-6080(91)90009-T.
  64. Tabuada P, Gharesifard B. Universal approximation power of deep residual neural networks through the lens of control. IEEE Trans Autom Control 2023; 68(5): 2715-2728. DOI: 10.1109/TAC.2022.3190051.
  65. Cai Y. Achieve the minimum width of neural networks for universal approximation. arXiv Preprint. 2022. Source: <https://arxiv.org/abs/2209.11395>. DOI: 10.48550/arXiv.2209.11395.
  66. Hoffmann J, Borgeaud S, Mensch A, et al. Training compute-optimal large language models. arXiv Preprint. 2022. Source: <https://arxiv.org/abs/2203.15556>. DOI: 10.48550/arXiv.2203.15556.
  67. Alam M, Samad MD, Vidyaratne L, Glandon A, Iftekharuddin KM. Survey on deep neural networks in speech and vision systems. Neurocomputing 2020; 417: 302-321. DOI: 10.1016/j.neucom.2020.07.053.
  68. Santos CFGD, Papa JP. Avoiding overfitting: A survey on regularization methods for convolutional neural networks. ACM Comput Surv (CSUR) 2022; 54(10s): 213. DOI: 10.1145/3510413.
  69. Xu Y, Goodacre R. On splitting training and validation set: A comparative study of cross-validation, bootstrap and systematic sampling for estimating the generalization performance of supervised learning. J Anal Test 2018; 2(3): 249-262. DOI: 10.1007/s41664-018-0068-2.
  70. Bejani MM, Ghatee M. A systematic review on overfitting control in shallow and deep neural networks. Artificial Intelligence Review 2021; 54(8): 6391-6438. DOI: 10.1007/s10462-021-09975-1.
  71. Blalock D, Gonzalez Ortiz JJ, Frankle J, Guttag J. What is the state of neural network pruning? Proceedings of Machine Learning and Systems 2020; 2: 129-146.
  72. Ren P, Xiao Y, Chang X, Huang P-Y, Li Z, Chen X, Wang X. A comprehensive survey of neural architecture search: Challenges and solutions. ACM Comput Surv (CSUR) 2021; 54(4): 76. DOI: 10.1145/3447582.
  73. Zoph B, Le QV. Neural architecture search with reinforcement learning. arXiv Preprint. 2016. Source: <https://arxiv.org/abs/1611.01578>. DOI: 10.48550/arXiv.1611.01578.
  74. Real E, Moore S, Selle A, Saxena S, Suematsu YL, Tan J, Le QV, Kurakin A. Large-scale evolution of image classifiers. ICML'17: Proc 34th Int Conf on Machine Learning 2017; 70: 2902-2911.
  75. Zoph B, Vasudevan V, Shlens J, Le QV. Learning transferable architectures for scalable image recognition. 2018 IEEE/CVF Conf on Computer Vision and Pattern Recognition (CVPR) 2018: 8697-8710. DOI: 10.1109/CVPR.2018.00907.
  76. Liu H, Simonyan K, Yang Y. DARTS: Differentiable architecture search. arXiv Preprint. 2018. Source: <https://arxiv.org/abs/1806.09055>. DOI: 10.48550/arXiv.1806.09055.
  77. Yun S, Han D, Oh SJ, Chun S, Choe J, Yoo Y. CutMix: Regularization strategy to train strong classifiers with localizable features. 2019 IEEE/CVF Int Conf on Computer Vision (ICCV) 2019: 6022-6031. DOI: 10.1109/ICCV.2019.00612.
  78. Hendrycks D, Mu N, Cubuk ED, Zoph B, Gilmer J, Lakshminarayanan B. AugMix: A simple data processing method to improve robustness and uncertainty. arXiv Preprint. 2019. Source: <https://arxiv.org/abs/1912.02781>. DOI: 10.48550/arXiv.1912.02781.
  79. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res 2014; 15(1): 1929-1958. DOI: 10.5555/2627435.2670313.
  80. Ghiasi G, Lin T-Y, Le QV. DropBlock: A regularization method for convolutional networks. NIPS'18: Proc 32nd Int Conf on Neural Information Processing Systems 2018: 10750-10760.
  81. Lu Z, Xu C, Du B, Ishida T, Zhang L, Sugiyama M. LocalDrop: A hybrid regularization for deep neural networks. IEEE Trans Pattern Anal Mach Intell 2021; 44(7): 3590-3601. DOI: 10.1109/TPAMI.2021.3061463.
  82. Verma V, Lamb A, Beckham C, Najafi A, Mitliagkas I, Lopez-Paz D, Bengio Y. Manifold mixup: Better representations by interpolating hidden states. International Conference on Machine Learning (ICML) 2019: 6438-6447.
  83. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. 2016 IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2016: 2818-2826. DOI: 10.1109/CVPR.2016.308.
  84. Li W, Dasarathy G, Berisha V. Regularization via structural label smoothing. 23rd Int Conf on Artificial Intelligence and Statistics 2020: 1453-1463.
  85. Moradi R, Berangi R, Minaei B. A survey of regularization strategies for deep models. Artif Intell Rev 2020; 53(6): 3947-3986. DOI: 10.1007/s10462-019-09784-7.
  86. Zollanvari A, James AP, Sameni R. A theoretical analysis of the peaking phenomenon in classification. J Classif 2020; 37: 421-434. DOI: 10.1007/s00357-019-09327-3.
  87. Poggio T, Mhaskar H, Rosasco L, Miranda B, Liao Q. Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. Int J Autom Comput 2017; 14(5): 503-519. DOI: 10.1007/s11633-017-1054-2.
  88. Beyer K, Goldstein J, Ramakrishnan R, Shaft U. When is “nearest neighbor” meaningful? In Book: Beeri C, Buneman P, eds. Database Theory – ICDT'99. 7th International Conference, Jerusalem, Israel, January 10-12, 1999, Proceedings. Berlin, Heidelberg, New York: Springer-Verlag; 1999: 217-235. DOI: 10.1007/3-540-49257-7_15.
  89. Nicolau M, McDermott J, et al. Learning neural representations for network anomaly detection. IEEE Trans Cybern 2018; 49(8): 3074-3087. DOI: 10.1109/TCYB.2018.2838668.
  90. Chattopadhyay N, Chattopadhyay A, Gupta SS, Kasper M. Curse of dimensionality in adversarial examples. 2019 Int Joint Conf on Neural Networks (IJCNN) 2019: 1-8. DOI: 10.1109/IJCNN.2019.8851795.
  91. Basodi S, Ji C, Zhang H, Pan Y. Gradient amplification: An efficient way to train deep neural networks. Big Data Mining and Analytics 2020; 3(3): 196-207. DOI: 10.26599/BDMA.2020.9020004.
  92. He K, Zhang X, Ren S, Sun J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. IEEE Int Conf on Computer Vision 2015: 1026-1034. DOI: 10.1109/ICCV.2015.123.
  93. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. NIPS'12: Proc 25th Int Conf on Neural Information Processing Systems 2012; 1: 1097-1105.
  94. Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML'15: Proc 32nd Int Conf on International Conference on Machine Learning 2015; 37: 448-456.
  95. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. 2016 IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2016: 770-778. DOI: 10.1109/CVPR.2016.90.
  96. Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017: 4700-4708. DOI: 10.1109/CVPR.2017.243.
  97. Hanin B. Which neural net architectures give rise to exploding and vanishing gradients? NIPS'18: Proc 32nd Int Conf on Neural Information Processing Systems 2018: 580-589.
  98. Arpit D, Campos V, Bengio Y. How to initialize your network? Robust initialization for WeightNorm & ResNets. Proc 33rd Int Conf on Neural Information Processing Systems 2019: 10902-10911.
  99. Loshchilov I, Hutter F. Decoupled weight decay regularization. arXiv Preprint. 2017. Source: <https://arxiv.org/abs/1711.05101>. DOI: 10.48550/arXiv.1711.05101.
  100. Desai CG. Comparative analysis of optimizers in deep neural networks. International Journal of Innovative Science and Research Technology 2020; 5(10): 959-962.
  101. Nakamura K, Derbel B, Won K-J, Hong B-W. Learning-rate annealing methods for deep neural networks. Electronics 2021; 10(16): 2029. DOI: 10.3390/electronics10162029.
  102. Loshchilov I, Hutter F. SGDR: Stochastic gradient descent with warm restarts. arXiv Preprint. 2016. Source: <https://arxiv.org/abs/1608.03983>. DOI: 10.48550/arXiv.1608.03983.
  103. Yang J, Shen X, Xing J, Tian X, Li H, Deng B, Huang J, Hua X-s. Quantization networks. 2019 IEEE/CVF Conf on Computer Vision and Pattern Recognition (CVPR) 2019: 7308-7316. DOI: 10.1109/CVPR.2019.00748.
  104. Mitchell TM. The need for biases in learning generalizations. Rutgers CS Tech Report CBM-TR-117. New Brunswick, NJ: Rutgers University; 1980. Source: <http://dml.cs.byu.edu/~cgc/docs/mldm_tools/Reading/Need%20for%20Bias.pdf>.
  105. Gordon DF, Desjardins M. Evaluation and selection of biases in machine learning. Machine Learning 1995; 20: 5-22. DOI: 10.1023/A:1022630017346.
  106. Lecun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE 1998; 86(11): 2278-2324. DOI: 10.1109/5.726791.
  107. Geirhos R, Rubisch P, Michaelis C, Bethge M, Wichmann FA, Brendel W. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. Int Conf on Learning Representations 2019. Source: <https://openreview.net/forum?id=Bygh9j09KX>.
  108. Rao Y, Zhao W, Zhu Z, Lu J, Zhou J. Global filter networks for image classification. 35th Conf on Neural Information Processing Systems (NeurIPS 2021) 2021: 980-993.
  109. Sheshkus A, Ingacheva A, Arlazarov V, Nikolaev D. HoughNet: Neural network architecture for vanishing points detection. 2019 Int Conf on Document Analysis and Recognition (ICDAR) 2020: 844-849. DOI: 10.1109/ICDAR.2019.00140.
  110. Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J, Houlsby N. An image is worth 16x16 words: Transformers for image recognition at scale. Int Conf on Learning Representations 2021. Source: <https://openreview.net/forum?id=YicbFdNTTy>.
  111. Tolstikhin IO, Houlsby N, Kolesnikov A, et al. MLP-mixer: An all-MLP architecture for vision. 35th Conf on Neural Information Processing Systems (NeurIPS 2021) 2021: 24261-24272.
  112. Liu Y, Sangineto E, Bi W, Sebe N, Lepri B, Nadai M. Efficient training of visual transformers with small datasets. NIPS'21: Proc 35th Int Conf on Neural Information Processing Systems 2021: 23818-23830. Source: <https://proceedings.neurips.cc/paper_files/paper/2021/file/c81e155d85dae5430a8cee6f2242e82c-Paper.pdf>.
  113. Morrison K, Gilby B, Lipchak C, Mattioli A, Kovashka A. Exploring corruption robustness: Inductive biases in vision transformers and MLP-mixers. arXiv Preprint. 2021. Source: <https://arxiv.org/abs/2106.13122>. DOI: 10.48550/arXiv.2106.13122.
  114. Feng P, Tang Z. A survey of visual neural networks: Current trends, challenges and opportunities. Multimed Syst 2023; 29(2): 693-724. DOI: 10.1007/s00530-022-01003-8.
  115. Zhang J, Sang J, Zhao X, Huang X, Sun Y, Hu Y. Adversarial privacy-preserving filter. MM '20: Proc 28th ACM Int Conf on Multimedia 2020: 1423-1431. DOI: 10.1145/3394171.3413906.
  116. Chen X, Gao X, Zhao J, Ye K, Xu C-Z. AdvDiffuser: Natural adversarial example synthesis with diffusion models. 2023 IEEE/CVF Int Conf on Computer Vision (ICCV) 2023: 4562-4572. DOI: 10.1109/ICCV51070.2023.00421.
  117. Metzen JH, Kumar MC, Brox T, Fischer V. Universal adversarial perturbations against semantic image segmentation. 2017 IEEE Int Conf on Computer Vision (ICCV) 2017: 2755-2764. DOI: 10.1109/ICCV.2017.300.
  118. Tan H, Wang L, Zhang H, Zhang J, Shafiq M, Gu Z. Adversarial attack and defense strategies of speaker recognition systems: A survey. Electronics 2022; 11(14): 2183. DOI: 10.3390/electronics11142183.
  119. Zhang WE, Sheng QZ, Alhazmi A, Li C. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Transactions on Intelligent Systems and Technology (TIST) 2020; 11(3): 24. DOI: 10.1145/3374217.
  120. Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L. ImageNet: A large-scale hierarchical image database. 2009 IEEE Conf on Computer Vision and Pattern Recognition 2009: 248-255. DOI: 10.1109/CVPR.2009.5206848.
  121. Goodfellow IJ, Shlens J, Szegedy C. Explaining and harnessing adversarial examples. arXiv Preprint. 2014. Source: <https://arxiv.org/abs/1412.6572>. DOI: 10.48550/arXiv.1412.6572.
  122. Nicolae M-I, Sinn M, Tran MN, et al. Adversarial robustness toolbox v1.0.0. arXiv Preprint. 2018. Source: <https://arxiv.org/abs/1807.01069>. DOI: 10.48550/arXiv.1807.01069.
  123. Kurakin A, Goodfellow I, Bengio S. Adversarial machine learning at scale. arXiv Preprint. 2016. Source: <https://arxiv.org/abs/1611.01236>. DOI: 10.48550/arXiv.1611.01236.
  124. Croce F, Hein M. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. ICML'20: Proc 37th Int Conf on Machine Learning 2020: 2206-2216.
  125. Yamamura K, Sato H, Tateiwa N, Hata N, Mitsutake T, Oe I, Ishikura H, Fujisawa K. Diversified adversarial attacks based on conjugate gradient method. Proc 39th Int Conf on Machine Learning (ICML 2022) 2022: 24872-24894.
  126. Papernot N, McDaniel P, Jha S, Fredrikson M, Celik ZB, Swami A. The limitations of deep learning in adversarial settings. 2016 IEEE European Symposium on Security and Privacy (EuroS&P) 2016: 372-387. DOI: 10.1109/EuroSP.2016.36.
  127. Carlini N, Wagner D. Towards evaluating the robustness of neural networks. 2017 IEEE Symposium on Security and Privacy (SP) 2017: 39-57. DOI: 10.1109/SP.2017.49.
  128. Su J, Vargas DV, Sakurai K. One pixel attack for fooling deep neural networks. IEEE Trans Evol Comput 2019; 23(5): 828-841. DOI: 10.1109/TEVC.2019.2890858.
  129. Storn R, Price K. Differential evolution – A simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 1997; 11: 341-359. DOI: 10.1023/A:1008202821328.
  130. Kotyan S, Vargas DV. Adversarial robustness assessment: Why in evaluation both L0 and L∞ attacks are necessary. PloS One 2022; 17(4): e0265723. DOI: 10.1371/journal.pone.0265723.
  131. Moosavi-Dezfooli S-M, Fawzi A, Frossard P. DeepFool: a simple and accurate method to fool deep neural networks. 2016 IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2016: 2574-2582. DOI: 10.1109/CVPR.2016.282.
  132. Jang U, Wu X, Jha S. Objective metrics and gradient descent algorithms for adversarial examples in machine learning. ACSAC '17: Proc 33rd Annual Computer Security Applications Conf 2017: 262-277. DOI: 10.1145/3134600.3134635.
  133. Papernot N, McDaniel P, Wu X, Jha S, Swami A. Distillation as a defense to adversarial perturbations against deep neural networks. 2016 IEEE Symposium on Security and Privacy (SP) 2016: 582-597. DOI: 10.1109/SP.2016.41.
  134. Ghiasi A, Shafahi A, Goldstein T. Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates. arXiv Preprint. 2020. Source: <https://arxiv.org/abs/2003.08937>. DOI: 10.48550/arXiv.2003.08937.
  135. Lecuyer M, Atlidakis V, Geambasu R, Hsu D, Jana S. Certified robustness to adversarial examples with differential privacy. 2019 IEEE Symposium on Security and Privacy (SP) 2019: 656-672. DOI: 10.1109/SP.2019.00044.
  136. Croce F, Hein M. Minimally distorted adversarial examples with a fast adaptive boundary attack. ICML'20: Proc 37th Int Conf on Machine Learning 2020: 2196-2205.
  137. Andriushchenko M, Croce F, Flammarion N, Hein M. Square attack: a query-efficient black-box adversarial attack via random search. In Book: Vedaldi A, Bischof H, Brox T, Frahm J-M, eds. Computer Vision – ECCV 2020. 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIII. Cham, Switzerland: Springer Nature Switzerland AG; 2020: 484-501. DOI: 10.1007/978-3-030-58592-1_29.
  138. How to train state-of-the-art models using TorchVision’s latest primitives. 2023. Source: <https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives>.
  139. Moosavi-Dezfooli S-M, Fawzi A, Fawzi O, Frossard P. Universal adversarial perturbations. 2017 IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2017: 1765-1773. DOI: 10.1109/CVPR.2017.17.
  140. Athalye A, Engstrom L, Ilyas A, Kwok K. Synthesizing robust adversarial examples. Int Conf on Machine Learning 2018: 284-293.
  141. Brendel W, Rauber J, Bethge M. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. arXiv Preprint. 2017. Source: <https://arxiv.org/abs/1712.04248>. DOI: 10.48550/arXiv.1712.04248.
  142. Feng R, Mangaokar N, Chen J, Fernandes E, Jha S, Prakash A. GRAPHITE: Generating automatic physical examples for machine-learning attacks on computer vision systems. 2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P) 2022: 664-683. DOI: 10.1109/EuroSP53844.2022.00047.
  143. Xie C, Zhang Z, Zhou Y, Bai S, Wang J, Ren Z, Yuille AL. Improving transferability of adversarial examples with input diversity. 2019 IEEE/CVF Conf on Computer Vision and Pattern Recognition (CVPR) 2019: 2730-2739. DOI: 10.1109/CVPR.2019.00284.
  144. Wang X, He X, Wang J, He K. Admix: Enhancing the transferability of adversarial attacks. 2021 IEEE/CVF Int Conf on Computer Vision (ICCV) 2021: 16158-16167. DOI: 10.1109/ICCV48922.2021.01585.
  145. Zhang H, Cisse M, Dauphin YN, Lopez-Paz D. mixup: Beyond empirical risk minimization. Int Conf on Learning Representations 2018. Source: <https://openreview.net/forum?id=r1Ddp1-Rb>.
  146. Wu W, Su Y, Chen X, Zhao S, King I, Lyu MR, Tai Y-W. Boosting the transferability of adversarial samples via attention. 2020 IEEE/CVF Conf on Computer Vision and Pattern Recognition (CVPR) 2020: 1161-1170. DOI: 10.1109/CVPR42600.2020.00124.
  147. Zhao Z, Dua D, Singh S. Generating natural adversarial examples. Int Conf on Learning Representations 2018. Source: <https://openreview.net/forum?id=H1BLjgZCb>.
  148. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y. Generative adversarial nets. NIPS'14: Proc 27th Int Conf on Neural Information Processing Systems 2014; 2: 2672-2680.
  149. Xiao C, Li B, Zhu J-Y, He W, Liu M, Song D. Generating adversarial examples with adversarial networks. arXiv Preprint. 2018. Source: <https://arxiv.org/abs/1801.02610>. DOI: 10.48550/arXiv.1801.02610.
  150. Ho J, Jain A, Abbeel P. Denoising diffusion probabilistic models. NIPS'20: Proc 34th Int Conf on Neural Information Processing Systems 2020: 6840-6851.
  151. Wang J, Lyu Z, Lin D, Dai B, Fu H. Guided diffusion model for adversarial purification. arXiv Preprint. 2022. Source: <https://arxiv.org/abs/2205.14969>. DOI: 10.48550/arXiv.2205.14969.
  152. Chen J, Chen H, Chen K, Zhang Y, Zou Z, Shi Z. Diffusion models for imperceptible and transferable adversarial attack. arXiv Preprint. 2023. Source: <https://arxiv.org/abs/2305.08192>. DOI: 10.48550/arXiv.2305.08192.
  153. Li X, Li F. Adversarial examples detection in deep networks with convolutional filter statistics. 2017 IEEE Int Conf on Computer Vision (ICCV) 2017: 5764-5772. DOI: 10.1109/ICCV.2017.615.
  154. Zantedeschi V, Nicolae M-I, Rawat A. Efficient defenses against adversarial attacks. AISec '17: Proc 10th ACM Workshop on Artificial Intelligence and Security 2017: 39-49. DOI: 10.1145/3128572.3140449.
  155. Guo C, Rana M, Cisse M, Van Der Maaten L. Countering adversarial images using input transformations. arXiv Preprint. 2017. Source: <https://arxiv.org/abs/1711.00117>. DOI: 10.48550/arXiv.1711.00117.
  156. Raff E, Sylvester J, Forsyth S, McLean M. Barrage of random transforms for adversarially robust defense. 2019 IEEE/CVF Conf on Computer Vision and Pattern Recognition (CVPR) 2019: 6528-6537. DOI: 10.1109/CVPR.2019.00669.
  157. Carlini N, Wagner D. Adversarial examples are not easily detected: Bypassing ten detection methods. AISec '17: Proc ACM Workshop on Artificial Intelligence and Security 2017: 3-14. DOI: 10.1145/3128572.3140444.
  158. Dong Y, Liao F, Pang T, Su H, Zhu J, Hu X, Li J. Boosting adversarial attacks with momentum. 2018 IEEE/CVF Conf on Computer Vision and Pattern Recognition 2018: 9185-9193. DOI: 10.1109/CVPR.2018.00957.
  159. Mosbach M, Andriushchenko M, Trost T, Hein M, Klakow D. Logit pairing methods can fool gradient-based attacks. arXiv Preprint. 2018. Source: <https://arxiv.org/abs/1810.12042>. DOI: 10.48550/arXiv.1810.12042.
  160. Xie C, Wu Y, Maaten Lvd, Yuille AL, He K. Feature denoising for improving adversarial robustness. 2019 IEEE/CVF Conf on Computer Vision and Pattern Recognition (CVPR) 2019: 501-509. DOI: 10.1109/CVPR.2019.00059.
  161. Bai T, Luo J, Zhao J, Wen B, Wang Q. Recent advances in adversarial training for adversarial robustness. arXiv Preprint. 2021. Source: <https://arxiv.org/abs/2102.01356>. DOI: 10.48550/arXiv.2102.01356.
  162. Schott L, Rauber J, Bethge M, Brendel W. Towards the first adversarially robust neural network model on MNIST. arXiv Preprint. 2018. Source: <https://arxiv.org/abs/1805.09190>. DOI: 10.48550/arXiv.1805.09190.
  163. Rebuffi S-A, Gowal S, Calian DA, Stimberg F, Wiles O, Mann TA. Data augmentation can improve robustness. NIPS'21: Proc 35th Int Conf on Neural Information Processing Systems 2021: 29935-29948.
  164. Katz G, Barrett C, Dill DL, Julian K, Kochenderfer MJ. Reluplex: An efficient SMT solver for verifying deep neural networks. In Book: Majumdar R, Kunčak V, eds. Computer aided verification. 29th Int Conf, CAV 2017, Heidelberg, Germany, July 24-28, 2017, Proceedings, Part I. Cham, Switzerland: Springer International Publishing AG; 2017: 97-117. DOI: 10.1007/978-3-319-63387-9_5.
  165. Anderson BG, Gautam T, Sojoudi S. An overview and prospective outlook on robust training and certification of machine learning models. arXiv Preprint. 2022. Source: <https://arxiv.org/abs/2208.07464>. DOI: 10.48550/arXiv.2208.07464.
  166. Raghunathan A, Steinhardt J, Liang P. Certified defenses against adversarial examples. arXiv Preprint. 2018. Source: <https://arxiv.org/abs/1801.09344>. DOI: 10.48550/arXiv.1801.09344.
  167. Hein M, Andriushchenko M. Formal guarantees on the robustness of a classifier against adversarial manipulation. NIPS'17: Proc 31st Int Conf on Neural Information Processing Systems 2017: 2263-2273.
  168. Raghunathan A, Steinhardt J, Liang PS. Semidefinite relaxations for certifying robustness to adversarial examples. NIPS'18: Proc 32nd Int Conf on Neural Information Processing Systems 2018: 10900-10910.
  169. Huang X, Kwiatkowska M, Wang S, Wu M. Safety verification of deep neural networks. In Book: Majumdar R, Kunčak V, eds. Computer aided verification. 29th International Conference, CAV 2017, Heidelberg, Germany, July 24-28, 2017, Proceedings, Part I. Cham, Switzerland: Springer International Publishing AG; 2017: 3-29. DOI: 10.1007/978-3-319-63387-9_1.
  170. Dvijotham K, Stanforth R, Gowal S, Mann TA, Kohli P. A dual approach to scalable verification of deep networks. Conf on Uncertainty in Artificial Intelligence 2018; 1: 550-559.
  171. Mirman M, Gehr T, Vechev M. Differentiable abstract interpretation for provably robust neural networks. Int Conf on Machine Learning 2018: 3578-3586.
  172. Sinha A, Namkoong H, Volpi R, Duchi J. Certifying some distributional robustness with principled adversarial training. arXiv Preprint. 2017. Source: <https://arxiv.org/abs/1710.10571>. DOI: 10.48550/arXiv.1710.10571.
  173. Gowal S, Dvijotham K, Stanforth R, Bunel R, Qin C, Uesato J, Arandjelovic R, Mann T, Kohli P. On the effectiveness of interval bound propagation for training verifiably robust models. arXiv Preprint. 2018. Source: <https://arxiv.org/abs/1810.12715>. DOI: 10.48550/arXiv.1810.12715.
  174. Shi Z, Wang Y, Zhang H, Yi J, Hsieh C-J. Fast certified robust training with short warmup. NIPS'21: Proc 35th Int Conf on Neural Information Processing Systems 2021: 18335-18349.
  175. Zhang H, Chen H, Xiao C, Gowal S, Stanforth R, Li B, Boning D, Hsieh C-J. Towards stable and efficient training of verifiably robust neural networks. arXiv Preprint. 2019. Source: <https://arxiv.org/abs/1906.06316>. DOI: 10.48550/arXiv.1906.06316.
  176. Mirman M, Baader M, Vechev M. The fundamental limits of interval arithmetic for neural networks. arXiv Preprint. 2021. Source: <https://arxiv.org/abs/2112.05235>. DOI: 10.48550/arXiv.2112.05235.
  177. Tjeng V, Xiao K, Tedrake R. Evaluating robustness of neural networks with mixed integer programming. arXiv Preprint. 2017. Source: <https://arxiv.org/abs/1711.07356>. DOI: 10.48550/arXiv.1711.07356.
  178. Xiang W, Johnson TT. Reachability analysis and safety verification for neural network control systems. arXiv Preprint. 2018. Source: <https://arxiv.org/abs/1805.09944>. DOI: 10.48550/arXiv.1805.09944.
  179. Jovanović N, Balunović M, Baader M, Vechev M. On the paradox of certified training. arXiv Preprint. 2021. Source: <https://arxiv.org/abs/2102.06700>. DOI: 10.48550/arXiv.2102.06700.
  180. Zhang B, Jiang D, He D, Wang L. Rethinking lipschitz neural networks and certified robustness: A boolean function perspective. NIPS'22: Proc 36th Int Conf on Neural Information Processing Systems 2022: 19398-19413.
  181. Cisse M, Bojanowski P, Grave E, Dauphin Y, Usunier N. Parseval networks: Improving robustness to adversarial examples. ICML'17: Proc 34th Int Conf on Machine Learning 2017: 854-863.
  182. Tsuzuku Y, Sato I, Sugiyama M. Lipschitz-margin training: Scalable certification of perturbation invariance for deep neural networks. NIPS'18: Proc 32nd Int Conf on Neural Information Processing Systems 2018: 6542-6551.
  183. Huster T, Chiang C-YJ, Chadha R. Limitations of the lipschitz constant as a defense against adversarial examples. In Book: Alzate C, Monreale A, Assem H, et al., eds. ECML PKDD 2018 Workshops. Nemesis 2018, UrbReas 2018, SoGood 2018, IWAISe 2018, and Green Data Mining 2018, Dublin, Ireland, September 10-14, 2018, Proceedings. Cham, Switzerland: Springer Nature Switzerland AG; 2019: 16-29. DOI: 10.1007/978-3-030-13453-2_2.
  184. Anil C, Lucas J, Grosse R. Sorting out Lipschitz function approximation. Int Conf on Machine Learning 2019: 291-301.
  185. Zhang B, Jiang D, He D, Wang L. Boosting the certified robustness of l-infinity distance nets. arXiv Preprint. 2021. Source: <https://arxiv.org/abs/2110.06850>. DOI: 10.48550/arXiv.2110.06850.
  186. Li B, Chen C, Wang W, Carin L. Certified adversarial robustness with additive noise. Proc 33rd Int Conf on Neural Information Processing Systems 2019: 9464-9474.
  187. Erdemir E, Bickford J, Melis L, Aydore S. Adversarial robustness with non-uniform perturbations. NIPS'21: Proc 35th Int Conf on Neural Information Processing Systems 2021: 19147-19159.
  188. Wang L, Zhai R, He D, Wang L, Jian L. Pretrain-to-finetune adversarial training via sample-wise randomized smoothing. 2021. Source: <https://openreview.net/forum?id=Te1aZ2myPIu>.
  189. Yang G, Duan T, Hu JE, Salman H, Razenshteyn I, Li J. Randomized smoothing of all shapes and sizes. ICML'20: Proc 37th Int Conf on Machine Learning 2020: 10693-10705.
  190. Gilmer J, Metz L, Faghri F, Schoenholz SS, Raghu M, Wattenberg M, Goodfellow I. Adversarial spheres. arXiv Preprint. 2018. Source: <https://arxiv.org/abs/1801.02774>. DOI: 10.48550/arXiv.1801.02774.
  191. Schmidt L, Santurkar S, Tsipras D, Talwar K, Madry A. Adversarially robust generalization requires more data. NIPS'18: Proc 32nd Int Conf on Neural Information Processing Systems 2018: 5019-5031.
  192. Quinonero-Candela J, Sugiyama M, Schwaighofer A, Lawrence ND. Dataset shift in machine learning. MIT Press; 2008.
  193. Papernot N, McDaniel P. Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning. arXiv Preprint. 2018. Source: <https://arxiv.org/abs/1803.04765>. DOI: 10.48550/arXiv.1803.04765.
  194. Apruzzese G, Anderson HS, Dambra S, Freeman D, Pierazzi F, Roundy K. “Real attackers don’t compute gradients”: Bridging the gap between adversarial ml research and practice. 2023 IEEE Conf on Secure and Trustworthy Machine Learning (SaTML) 2023: 339-364. DOI: 10.1109/SaTML54575.2023.00031.
  195. Nassi B, Mirsky Y, Nassi D, Ben-Netanel R, Drokin O, Elovici Y. Phantom of the ADAS: Securing advanced driver-assistance systems from split-second phantom attacks. CCS '20: Proc 2020 ACM SIGSAC Conf on Computer and Communications Security 2020: 293-308. DOI: 10.1145/3372297.3423359.
  196. Xiao Q, Chen Y, Shen C, Chen Y, Li K. Seeing is not believing: Camouflage attacks on image scaling algorithms. SEC'19: Proc 28th USENIX Conf on Security Symposium 2019: 443-460.
  197. Miller BP, Fredriksen L, So B. An empirical study of the reliability of UNIX utilities. Commun ACM 1990; 33(12): 32-44. DOI: 10.1145/96267.96279.
  198. Hamlet R. Random testing. Encyclopedia of Software Engineering. New York: Wiley; 1994.
  199. Wagemans J, Elder JH, Kubovy M, Palmer SE, Peterson MA, Singh M, Von der Heydt R. A century of gestalt psychology in visual perception: I. perceptual grouping and figure-ground organization. Psychol Bull 2012; 138(6): 1172-1217. DOI: 10.1037/a0029333.
  200. Ponzo M. Intorno ad alcune illusioni nel campo delle sensazioni tattili, sull'illusione di Aristotele e fenomeni analoghi [On some illusions in the field of tactile sensations, on Aristotle's illusion, and analogous phenomena; in Italian]. Wilhelm Engelmann; 1910.
  201. Wimmer MC, Doherty MJ, Collins WA. The development of ambiguous figure perception. Monogr Soc Res Child Dev 2011; 76(1): vii, 1-130. DOI: 10.1111/j.1540-5834.2011.00589.x.
