Abstract
Owing to their impressive performance, machine learning systems, including deep neural networks, are widely deployed in many application domains, among them sensitive areas such as security, autonomous vehicles, medicine, and weapon manufacturing, which demand high robustness and safety. However, these systems are vulnerable to adversarial attacks. Researchers have investigated this vulnerability in order to improve the robustness of machine learning models and thereby increase confidence in their use; to this end, a variety of attacks against these systems, together with corresponding defenses, have been proposed and classified. In this work, our objective is to provide a comprehensive review of the field of adversarial machine learning. In particular, we detail the Fast Gradient Sign Method (FGSM) and its variants, survey several adversarial attacks and the adversarial examples they generate, and explain the similarity constraint and the metrics that measure it, along with other relevant notions such as the adversary's 3D model.
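The Fast Gradient Sign Method mentioned above perturbs an input in the direction of the sign of the loss gradient: x_adv = x + ε · sign(∇_x J(x, y)). The sketch below is a minimal, hypothetical illustration using a hand-written logistic-regression gradient rather than any specific framework or the chapter's own experimental setup; the model, weights, and loss are assumptions chosen so the gradient can be computed in closed form.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM step: move x by epsilon in the sign direction of the loss gradient."""
    return x + epsilon * np.sign(grad)

def logistic_loss_grad(x, w, b, y):
    """Gradient dJ/dx of J = -log(sigmoid(y * (w.x + b))) for a label y in {-1, +1}.

    This toy logistic-regression model stands in for the victim classifier;
    in practice the gradient would come from the network via backpropagation.
    """
    z = y * (np.dot(w, x) + b)
    s = 1.0 / (1.0 + np.exp(-z))          # sigmoid of the margin
    return -(1.0 - s) * y * w             # closed-form input gradient

# Hypothetical example: a correctly classified point pushed across the boundary.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])                  # clean input, true label y = +1
y = 1

grad = logistic_loss_grad(x, w, b, y)
x_adv = fgsm_perturb(x, grad, epsilon=0.4)

print(np.dot(w, x) + b)                   # positive: clean input classified as +1
print(np.dot(w, x_adv) + b)               # negative: adversarial input misclassified
```

Each coordinate of x moves by at most ε (here 0.4), so the perturbation is small in the L∞ sense, yet it is enough to flip the toy model's decision.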
Notes
1. Andrew Ng, Coursera, AI for Everyone, Lecture 4.4: Adversarial Attacks on AI.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Laatyaoui, S., Saber, M. (2022). Adversarial Attacks on Machine Learning Systems. In: Motahhir, S., Bossoufi, B. (eds) Digital Technologies and Applications. ICDTA 2022. Lecture Notes in Networks and Systems, vol 454. Springer, Cham. https://doi.org/10.1007/978-3-031-01942-5_20
Print ISBN: 978-3-031-01941-8
Online ISBN: 978-3-031-01942-5
eBook Packages: Intelligent Technologies and Robotics