
Adversarial Attacks on Machine Learning Systems

  • Conference paper
  • Digital Technologies and Applications (ICDTA 2022)

Abstract

Due to their impressive performance, Machine Learning systems, including Deep Learning networks, are widely used in many application domains, among them sensitive areas such as security, autonomous vehicles, medicine, and weapon manufacturing, which demand high robustness and safety. However, these systems are vulnerable to adversarial attacks. Researchers have investigated this vulnerability in order to improve the robustness of Machine Learning models and, consequently, to increase confidence in their use. To this end, various attacks against these systems, and defenses for them, have appeared and been classified. In this work, our objective is to provide a comprehensive review of the field of Adversarial Machine Learning. In particular, we detail the Fast Gradient Sign Method and its variants, survey several adversarial attacks and the adversarial examples they generate, and explain the similarity constraint and the metrics that measure it, along with other notable notions such as the adversary's 3D model.
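
The Fast Gradient Sign Method named in the abstract computes an adversarial example in a single step, x_adv = x + ε · sign(∇x J(θ, x, y)), where J is the model's training loss and ε bounds the size of the perturbation. The following minimal sketch illustrates the idea, assuming a differentiable PyTorch classifier; model, x, y, and epsilon are hypothetical placeholders, not code from the paper:

    # Minimal FGSM sketch (after Goodfellow et al., 2015); assumes a
    # differentiable PyTorch classifier with inputs normalized to [0, 1].
    # `model`, `x`, and `y` are hypothetical placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Return x_adv = x + epsilon * sign(grad_x J(theta, x, y))."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)  # classification loss J(theta, x, y)
        loss.backward()                      # gradient of the loss w.r.t. the input
        x_adv = x + epsilon * x.grad.sign()  # single signed-gradient step
        return torch.clamp(x_adv, 0.0, 1.0).detach()  # keep pixels in valid range

Iterating this step with a small step size, projecting back into the ε-ball after each iteration, yields the iterative variants the paper reviews (e.g., the Basic Iterative Method); the ε bound is one instance of the similarity constraint mentioned above, here measured in the L∞ metric.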

Notes

  1. Andrew Ng, Coursera, AI for Everyone, Lecture 4.4: Adversarial attacks on AI.

Author information

Corresponding author

Correspondence to Safae Laatyaoui.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Laatyaoui, S., Saber, M. (2022). Adversarial Attacks on Machine Learning Systems. In: Motahhir, S., Bossoufi, B. (eds) Digital Technologies and Applications. ICDTA 2022. Lecture Notes in Networks and Systems, vol 454. Springer, Cham. https://doi.org/10.1007/978-3-031-01942-5_20
