Abstract
Deep learning has emerged as an effective approach for creating modern software systems, with neural networks often surpassing hand-crafted systems. Unfortunately, neural networks are known to suffer from various safety and security issues. Formal verification is a promising avenue for tackling this difficulty, by formally certifying that networks are correct. We propose an SMT-based technique for verifying binarized neural networks — a popular kind of neural network in which some of the weights are binarized, rendering the network more memory- and energy-efficient and faster to evaluate. One novelty of our technique is that it allows the verification of neural networks that include both binarized and non-binarized components. Neural network verification is computationally difficult, and so we propose various optimizations, integrated into our SMT procedure as deduction steps, as well as an approach for parallelizing verification queries. We implement our technique as an extension to the Marabou framework, and use it to evaluate the approach on popular binarized neural network architectures.
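For readers unfamiliar with the setting, the following NumPy sketch illustrates the kind of network and query involved. It is a made-up toy example, not the paper's encoding and not Marabou's API: a hidden layer with binarized (±1) weights and a sign activation feeds an ordinary real-valued output layer, and a local-robustness query asks whether the output stays below a threshold for every input in an L∞ ball around a fixed point. All weights, bounds, and the threshold are hypothetical values chosen for illustration.

```python
# Minimal, illustrative sketch of a network with binarized and non-binarized
# components, and of a local-robustness query over it.  Not the paper's method.
import itertools
import numpy as np

def sign(x):
    # BNN activation: every entry becomes +1 or -1 (here, sign(0) is taken as +1).
    return np.where(x >= 0, 1.0, -1.0)

# Hypothetical weights: the hidden layer is binarized (entries are +1 or -1),
# the output layer is an ordinary real-valued linear layer.
W1 = np.array([[ 1.0, -1.0],
               [-1.0,  1.0],
               [ 1.0,  1.0]])          # binarized weights, shape (3, 2)
b1 = np.array([0.5, -0.5, 0.0])
W2 = np.array([[0.3, -0.7, 0.2]])      # real-valued weights, shape (1, 3)
b2 = np.array([0.1])

def network(x):
    h = sign(W1 @ x + b1)              # binarized component
    return W2 @ h + b2                 # non-binarized component

# A local-robustness query: is network(x) < threshold for every x in the
# L-infinity ball of radius eps around x0?  A verifier answers this question
# exhaustively; the loop below merely samples the corners of the ball.
x0, eps, threshold = np.array([0.2, -0.1]), 0.05, 0.5
violations = []
for corner in itertools.product([-eps, eps], repeat=len(x0)):
    x = x0 + np.array(corner)
    if network(x)[0] >= threshold:
        violations.append(x)
print("sampled counterexamples:", violations if violations else "none (not a proof)")
```

In the paper's approach, such queries are dispatched to an SMT procedure (implemented as an extension of Marabou) that decides them exactly, rather than being checked by sampling or enumeration as in this sketch.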
Acknowledgements
We thank Nina Narodytska, Kyle Julian, Kai Jia, Leon Overweel and the Plumerai research team for their contributions to this project. The project was partially supported by the Israel Science Foundation (grant number 683/18), the Binational Science Foundation (grant number 2017662), the National Science Foundation (grant number 1814369), and the Center for Interdisciplinary Data Science Research at The Hebrew University of Jerusalem.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2021 The Author(s)
About this paper
Cite this paper
Amir, G., Wu, H., Barrett, C., Katz, G. (2021). An SMT-Based Approach for Verifying Binarized Neural Networks. In: Groote, J.F., Larsen, K.G. (eds) Tools and Algorithms for the Construction and Analysis of Systems. TACAS 2021. Lecture Notes in Computer Science, vol 12652. Springer, Cham. https://doi.org/10.1007/978-3-030-72013-1_11
DOI: https://doi.org/10.1007/978-3-030-72013-1_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-72012-4
Online ISBN: 978-3-030-72013-1
eBook Packages: Computer Science, Computer Science (R0)