
Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 776)


Abstract

Because a neural network is a data-driven black-box model, its decision basis cannot be inspected directly, and adversarial samples can drive it to wrong conclusions with high confidence. Many researchers therefore focus on the robustness of neural networks. This paper studies neural network defense based on random noise injection. In theory, injecting exponential-family noise into any layer of a neural network guarantees a degree of robustness, but experiments show that the resistance to perturbations varies greatly with the noise distribution. We investigate the robustness of neural networks under injection of exponential and Gaussian noise, and give upper bounds on the Renyi divergence for these two types of noise. Experimentally, we use the CIFAR-10 dataset to evaluate a variety of neural network architectures. We find that random noise injection effectively weakens the attack effect of adversarial samples and makes the network more robust; however, when the noise is too strong, the classification accuracy of the network itself declines. This paper proposes adding Gaussian noise with small variance to the image subject and Gaussian noise with large variance to the background, so as to achieve a better defense effect.

Supported by Saint-Petersburg State University, project ID: 94062114.
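
To make the idea concrete, the following is a minimal sketch, not the authors' code, of the two mechanisms the abstract describes: injecting Gaussian noise into a hidden layer of a classifier, and adding spatially varying Gaussian noise to an input image, with a small standard deviation on the subject region and a larger one on the background. All names (GaussianNoise, NoisyClassifier, spatially_varying_noise, subject_mask) and the specific sigma values are illustrative assumptions; the sketch uses PyTorch.

```python
# Illustrative sketch only -- not the paper's implementation.
import torch
import torch.nn as nn


class GaussianNoise(nn.Module):
    """Adds zero-mean Gaussian noise with a fixed standard deviation."""
    def __init__(self, sigma: float):
        super().__init__()
        self.sigma = sigma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.sigma * torch.randn_like(x)


class NoisyClassifier(nn.Module):
    """A small CIFAR-10-style CNN with noise injected after the first conv block."""
    def __init__(self, sigma: float = 0.1, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            GaussianNoise(sigma),                # noise injected into a hidden layer
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


def spatially_varying_noise(image: torch.Tensor,
                            subject_mask: torch.Tensor,
                            sigma_subject: float = 0.05,
                            sigma_background: float = 0.25) -> torch.Tensor:
    """Add Gaussian noise: small std on the subject, large std on the background.

    image:        (C, H, W) tensor with values in [0, 1]
    subject_mask: (H, W) tensor, 1 where the subject is, 0 for background
    """
    sigma_map = torch.where(subject_mask.bool(),
                            torch.full_like(subject_mask, sigma_subject),
                            torch.full_like(subject_mask, sigma_background))
    noisy = image + sigma_map.unsqueeze(0) * torch.randn_like(image)
    return noisy.clamp(0.0, 1.0)


if __name__ == "__main__":
    img = torch.rand(3, 32, 32)                        # a CIFAR-10-sized image
    mask = torch.zeros(32, 32); mask[8:24, 8:24] = 1   # toy "subject" region
    noisy_img = spatially_varying_noise(img, mask)
    logits = NoisyClassifier(sigma=0.1)(noisy_img.unsqueeze(0))
    print(logits.shape)  # torch.Size([1, 10])
```

In practice the subject mask would come from a segmentation or saliency estimate; the fixed rectangle above is used only to keep the example self-contained.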



Author information

Correspondence to Enzhe Zhao.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Kang, J., Zhao, E., Guo, Z., Wang, S., Su, W., Zhang, X. (2023). Research on Neural Network Defense Problem Based on Random Noise Injection. In: Kovalev, S., Kotenko, I., Sukhanov, A. (eds) Proceedings of the Seventh International Scientific Conference “Intelligent Information Technologies for Industry” (IITI’23). IITI 2023. Lecture Notes in Networks and Systems, vol 776. Springer, Cham. https://doi.org/10.1007/978-3-031-43789-2_37
