Abstract
Deep convolutional networks have achieved remarkable results on a variety of visual tasks thanks to their strong ability to learn diverse features. Because redundant filters generate many overlapping features, a well-trained deep convolutional network can be compressed to 20%–40% of its original size by removing the filters that contribute little. Model compression reduces the number of unnecessary filters, but it does not exploit this redundancy during training, since the training phase itself is unaffected. Modern architectures with residual connections, dense connections, and inception blocks are thought to mitigate the overlap among convolutional filters, yet they do not fully resolve the issue. To address it, we propose a new training strategy, weight asynchronous update, which significantly increases the diversity of filters and enhances the representational ability of the network. The proposed method can be applied to a wide range of convolutional networks without changing the network topology. Our experiments show that updating a stochastic subset of filters in each iteration significantly reduces filter overlap in convolutional networks, and extensive experiments show that our method yields noteworthy improvements in network performance.
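The abstract describes updating a stochastic subset of filters in each iteration while leaving the rest frozen. The paper's exact selection schedule and optimizer integration are not reproduced here; the following is a minimal NumPy sketch of the core idea only, in which `async_filter_update`, `update_frac`, and `lr` are illustrative names and filters are assumed to be chosen uniformly at random per iteration.

```python
import numpy as np

def async_filter_update(weights, grads, update_frac=0.5, lr=0.1, rng=None):
    """Apply a gradient step to only a random subset of filters.

    weights, grads : arrays of shape (out_channels, in_channels, kh, kw)
    update_frac    : fraction of filters (output channels) updated this iteration
    Returns the new weight tensor and the boolean mask of updated filters.
    """
    rng = rng or np.random.default_rng()
    n_filters = weights.shape[0]
    n_update = max(1, int(round(update_frac * n_filters)))
    chosen = rng.choice(n_filters, size=n_update, replace=False)
    mask = np.zeros(n_filters, dtype=bool)
    mask[chosen] = True
    new_weights = weights.copy()
    new_weights[mask] -= lr * grads[mask]  # frozen filters keep their weights
    return new_weights, mask

# One simulated training iteration on a toy conv layer with 8 filters
w = np.ones((8, 3, 3, 3))
g = np.ones_like(w)
new_w, mask = async_filter_update(w, g, update_frac=0.5, lr=0.1,
                                  rng=np.random.default_rng(0))
```

Because different filters see gradient steps in different iterations, their update trajectories desynchronize, which is the mechanism the paper credits for increased filter diversity.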
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China under Grant No. 61702350.
Author information
Dejun Zhang received his Ph.D. degree from the Department of Computer Science, Wuhan University, China, in 2015. He is currently an associate professor of the School of Geography and Information Engineering, China University of Geosciences. Since 2015, he has served as a senior member of the China Society for Industrial and Applied Mathematics (CSIAM) and is a member of the geometric design & computing committee of CSIAM. Since 2020, he has been serving as a China Computer Federation (CCF) Senior Member. He was a technical program chair for the 5th Asian Conference on Pattern Recognition (ACPR 2019). His research areas include computer vision, computer graphics, image and video processing, and deep learning. He has published more than 20 refereed articles in journals and conference proceedings.
Linchao He is currently a senior student in the College of Information and Engineering, Sichuan Agricultural University (SICAU) in Yaan, China. He is a member of the CCF. His research interests include image classification, object detection, action recognition, and deep learning.
Mengting Luo is currently a senior student in the College of Information and Engineering, Sichuan Agricultural University (SICAU) in Yaan, China. She is a member of the CCF. Her research interests include image classification, object detection, and action recognition.
Zhanya Xu received his Ph.D. degree from China University of Geosciences in 2010. He is currently a lecturer in the School of Geography and Information Engineering, China University of Geosciences. He is a member of the CCF. His research areas include spatial information services, big data processing, and intelligent computing. He has published more than 20 papers in journals and conferences.
Fazhi He received his Ph.D. degree from Wuhan University of Technology. He was a postdoctoral researcher in the State Key Laboratory of CAD & CG at Zhejiang University, a visiting researcher at the Korea Advanced Institute of Science and Technology, and a visiting faculty member at the University of North Carolina at Chapel Hill. He is currently a professor in the School of Computer Science, Wuhan University. He has served as a senior member of CSIAM and a member of the geometric design & computing committee of CSIAM, and is a member of the editorial board of the Journal of Computer-Aided Design & Computer Graphics. His research interests are computer graphics, computer-aided design, and computer-supported cooperative work.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Zhang, D., He, L., Luo, M. et al. Weight asynchronous update: Improving the diversity of filters in a deep convolutional network. Comp. Visual Media 6, 455–466 (2020). https://doi.org/10.1007/s41095-020-0185-5