Abstract
Deep neural networks (DNNs) have been shown to lack robustness: they are vulnerable to small perturbations of their inputs. This has raised safety concerns about applying DNNs in safety-critical domains. Several verification approaches based on constraint solving have been developed to automatically prove or disprove safety properties of DNNs. However, these approaches suffer from a scalability problem, i.e., only small DNNs can be handled. To address this, abstraction-based approaches have been proposed, but they unfortunately face a precision problem, i.e., the obtained bounds are often loose. In this paper, we focus on a variety of local robustness properties and a \((\delta,\varepsilon)\)-global robustness property of DNNs, and investigate novel strategies for combining constraint-solving and abstraction-based approaches to work with these properties:
- We propose a method to verify local robustness that improves a recent proposal for analyzing DNNs via the classic abstract interpretation technique with a novel symbolic propagation technique. Specifically, the values of neurons are represented symbolically and propagated from the input layer to the output layer, on top of the underlying abstract domains. This achieves significantly higher precision and can thus prove more properties.
- We propose a Lipschitz constant based verification framework. By utilising Lipschitz constants computed via semidefinite programming, we can prove global robustness of DNNs. We show how the Lipschitz constant can be tightened when it is restricted to a small region; a tightened Lipschitz constant is helpful for proving local robustness properties. Furthermore, a global Lipschitz constant can be used to accelerate batch local robustness verification, and thus supports the verification of global robustness.
- We show how the proposed abstract interpretation and Lipschitz constant based approaches can benefit from each other to obtain more precise results. Moreover, they can also be exploited and combined to improve constraint-solving-based approaches.
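To make the symbolic propagation idea concrete, the following is a minimal, hypothetical sketch on top of the interval domain: each neuron is kept as an affine expression over the network inputs plus an interval error term, affine layers are propagated exactly, and the expression is only concretized at unstable ReLUs. The toy network, the representation, and all function names are illustrative assumptions, not the paper's implementation.

```python
# Sketch: symbolic propagation vs plain interval propagation on a toy
# ReLU network y = h1 - h2 with h1 = h2 = relu(x).  Intervals forget the
# correlation between h1 and h2; symbolic expressions keep it.

def interval_affine(bounds, W, b):
    """Interval bounds of y = W x + b, given per-input intervals."""
    out = []
    for row, bi in zip(W, b):
        lo = bi + sum(w * (l if w >= 0 else h) for w, (l, h) in zip(row, bounds))
        hi = bi + sum(w * (h if w >= 0 else l) for w, (l, h) in zip(row, bounds))
        out.append((lo, hi))
    return out

def interval_relu(bounds):
    return [(max(l, 0.0), max(h, 0.0)) for l, h in bounds]

# A symbolic value is (coeffs over the inputs, const, err_lo, err_hi):
# the neuron's value lies in  coeffs . x + const + [err_lo, err_hi].

def sym_inputs(n):
    return [([1.0 if j == i else 0.0 for j in range(n)], 0.0, 0.0, 0.0)
            for i in range(n)]

def sym_bounds(v, in_bounds):
    coeffs, c, elo, ehi = v
    lo = c + elo + sum(w * (l if w >= 0 else h) for w, (l, h) in zip(coeffs, in_bounds))
    hi = c + ehi + sum(w * (h if w >= 0 else l) for w, (l, h) in zip(coeffs, in_bounds))
    return lo, hi

def sym_affine(vals, W, b):
    n = len(vals[0][0])
    out = []
    for row, bi in zip(W, b):
        coeffs = [sum(w * v[0][k] for w, v in zip(row, vals)) for k in range(n)]
        const = bi + sum(w * v[1] for w, v in zip(row, vals))
        elo = sum(w * (v[2] if w >= 0 else v[3]) for w, v in zip(row, vals))
        ehi = sum(w * (v[3] if w >= 0 else v[2]) for w, v in zip(row, vals))
        out.append((coeffs, const, elo, ehi))
    return out

def sym_relu(vals, in_bounds):
    out = []
    for v in vals:
        lo, hi = sym_bounds(v, in_bounds)
        if lo >= 0.0:    # provably active: keep the symbolic expression
            out.append(v)
        elif hi <= 0.0:  # provably inactive: exactly zero
            out.append(([0.0] * len(v[0]), 0.0, 0.0, 0.0))
        else:            # unstable: concretize soundly to [0, hi]
            out.append(([0.0] * len(v[0]), 0.0, 0.0, hi))
    return out

# Toy network: x in [0, 1]; layer 1 duplicates x, layer 2 computes h1 - h2.
in_bounds = [(0.0, 1.0)]
W1, b1 = [[1.0], [1.0]], [0.0, 0.0]
W2, b2 = [[1.0, -1.0]], [0.0]

iv = interval_affine(interval_relu(interval_affine(in_bounds, W1, b1)), W2, b2)
sv = sym_affine(sym_relu(sym_affine(sym_inputs(1), W1, b1), in_bounds), W2, b2)

print(iv[0])                          # intervals lose the correlation: (-1.0, 1.0)
print(sym_bounds(sv[0], in_bounds))   # symbolic propagation proves y = 0: (0.0, 0.0)
```

The output bound tightens from \([-1,1]\) to \([0,0]\) because the symbolic expressions cancel the shared dependency on \(x\), which is exactly the kind of precision gain the abstract describes.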
We implement our methods in the tool PRODeep and report detailed experimental results on several benchmarks.
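The Lipschitz-constant-based check underlying the framework above can be sketched as follows: if the classification margin at an input exceeds what a perturbation of size \(\varepsilon\) can change (bounded via a Lipschitz constant \(L\)), the label cannot flip inside the \(\varepsilon\)-ball. This is a hedged illustration under simplifying assumptions: the tiny network and the crude product-of-Frobenius-norms bound are illustrative stand-ins, whereas the paper obtains tighter (and locally restricted) constants via semidefinite programming.

```python
import math

def relu(v):
    return [max(x, 0.0) for x in v]

def affine(x, W, b):
    return [bi + sum(w * xi for w, xi in zip(row, x)) for row, bi in zip(W, b)]

def forward(x, layers):
    # ReLU after every layer except the output layer
    for i, (W, b) in enumerate(layers):
        x = affine(x, W, b)
        if i < len(layers) - 1:
            x = relu(x)
    return x

def layer_norm_bound(W):
    # Crude per-layer bound: l2 operator norm <= Frobenius norm.
    return math.sqrt(sum(w * w for row in W for w in row))

def global_lipschitz(layers):
    # Product of per-layer bounds (ReLU is 1-Lipschitz); usually loose.
    L = 1.0
    for W, _ in layers:
        L *= layer_norm_bound(W)
    return L

def certify_local(x, eps, layers):
    """Sound (but incomplete) robustness check in the l2 eps-ball around x."""
    out = forward(x, layers)
    c = max(range(len(out)), key=out.__getitem__)
    margin = min(out[c] - o for i, o in enumerate(out) if i != c)
    L = global_lipschitz(layers)
    # Each logit moves at most L*eps, so any gap shrinks by at most 2*L*eps.
    return margin > 2.0 * L * eps

# Hypothetical 2-layer network and input, purely for illustration.
layers = [([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]),
          ([[2.0, 0.0], [0.0, 1.0]], [0.0, 0.0])]
print(certify_local([3.0, 1.0], 0.05, layers))  # small eps: certified robust
print(certify_local([3.0, 1.0], 2.0, layers))   # large eps: cannot certify
```

A tighter \(L\), restricted to the \(\varepsilon\)-ball as the paper proposes, would let the same test certify larger perturbations; conversely, a single global \(L\) can be reused across many inputs, which is what makes batch local verification (and hence global robustness verification) cheap.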
Cite this article
Yang, P., Li, J., Liu, J. et al. Enhancing Robustness Verification for Deep Neural Networks via Symbolic Propagation. Form Asp Comp 33, 407–435 (2021). https://doi.org/10.1007/s00165-021-00548-1