Abstract
Because a neural network is a data-driven black-box model, its decision basis cannot be inspected directly, and when it is presented with adversarial samples it can produce wrong conclusions with high confidence. Many researchers therefore focus on the robustness of neural networks. This paper studies neural network defense based on random noise injection. In theory, injecting exponential-family noise into any layer of a neural network guarantees a degree of robustness, but experiments show that the defense effect varies greatly with the noise distribution. We investigate the robustness of neural networks under injected exponential and Gaussian noise and derive upper bounds on the Rényi divergence for these two noise types. Experimentally, we use the CIFAR-10 dataset to evaluate a variety of neural network architectures. We find that random noise injection effectively weakens adversarial attacks and makes the network more robust; however, when the noise is too strong, the clean classification accuracy of the network itself declines. We therefore propose adding Gaussian noise with small variance to the image subject and Gaussian noise with large variance to the background, which achieves a better trade-off between defense effect and accuracy.
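For the Gaussian case, a standard closed form illustrates the kind of bound involved (the paper's layer-wise bounds may differ in constants and conditions): for two isotropic Gaussians that differ only in their means, the Rényi divergence of order α is

\[
D_{\alpha}\!\left(\mathcal{N}(\mu_{1},\sigma^{2}I)\,\middle\|\,\mathcal{N}(\mu_{2},\sigma^{2}I)\right)
= \frac{\alpha\,\lVert \mu_{1}-\mu_{2}\rVert_{2}^{2}}{2\sigma^{2}},
\]

so an adversarial perturbation of bounded norm can shift the noisy layer's output distribution by only a bounded amount, which is the mechanism randomized defenses of this type rely on.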
Supported by Saint-Petersburg State University, project ID: 94062114.
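As an illustration of the proposed subject/background scheme, the following minimal NumPy sketch adds low-variance Gaussian noise over a subject mask and high-variance noise elsewhere. All names and variance values here are our assumptions; the paper does not specify an implementation.

import numpy as np

def subject_aware_gaussian_noise(image, subject_mask,
                                 sigma_subject=0.05, sigma_background=0.3,
                                 rng=None):
    # image: float array in [0, 1], shape (H, W, C)
    # subject_mask: bool array, shape (H, W); True on the image subject
    # sigma values are illustrative, not taken from the paper
    rng = np.random.default_rng() if rng is None else rng
    # per-pixel standard deviation: small on the subject, large on the background
    sigma_map = np.where(subject_mask, sigma_subject, sigma_background)
    noise = rng.standard_normal(image.shape) * sigma_map[..., None]
    return np.clip(image + noise, 0.0, 1.0)

In practice the subject mask could come from a segmentation or saliency model, and the defense strength is then tuned through the two variances.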