
Comparing Neural Architectures to Find the Best Model Suited for Edge Devices

  • Conference paper
In: Proceedings of International Conference on Intelligent Vision and Computing (ICIVC 2022)

Part of the book series: Proceedings in Adaptation, Learning and Optimization (PALO, volume 17)

Abstract

Training large-scale neural network models is computationally expensive and demands a great deal of resources, making efficient training and deployment an important area of study for the future of the AI industry. In recent years, computer hardware has become significantly more powerful and deep learning has seen new breakthroughs; with these innovations, the computational cost of training large neural network models has fallen at least tenfold on high- and average-performance machines. In this research, we explore NAS, AutoML, and other frameworks to determine which model is best suited for edge devices. The largest improvements over reference models can be obtained when the NAS algorithm is co-designed with the corresponding inference engine.
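
The abstract's closing claim, that the biggest gains come from co-designing the NAS algorithm with the inference engine, can be illustrated with a toy sketch. The Python below is a minimal, hypothetical example of hardware-aware NAS as random search under a latency budget; it is not the paper's method. The search space, the budget, and both estimator functions are illustrative assumptions: a real pipeline would train candidates (or sample a weight-sharing supernet) and measure latency on the target device's inference engine rather than use analytic proxies.

import random

# Tiny illustrative search space; a real NAS space is far larger.
SEARCH_SPACE = {
    "depth": [8, 12, 16],    # number of blocks
    "width": [16, 32, 64],   # channels per block
    "kernel": [3, 5],        # convolution kernel size
}
LATENCY_BUDGET_MS = 40.0     # assumed per-inference budget on the device

def estimated_latency_ms(cfg):
    # Analytic stand-in: cost grows with depth, width^2, kernel^2.
    # A co-designed system would measure this on the real inference engine.
    return 0.0002 * cfg["depth"] * cfg["width"] ** 2 * cfg["kernel"] ** 2

def estimated_accuracy(cfg):
    # Stand-in with diminishing returns; in practice, train and validate.
    return 1.0 - 1.0 / (1.0 + 0.01 * cfg["depth"] * cfg["width"])

def random_candidate():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

best_cfg, best_acc = None, -1.0
for _ in range(200):
    cfg = random_candidate()
    if estimated_latency_ms(cfg) > LATENCY_BUDGET_MS:
        continue  # reject architectures the device cannot serve in time
    acc = estimated_accuracy(cfg)
    if acc > best_acc:
        best_cfg, best_acc = cfg, acc

print(f"Best config under {LATENCY_BUDGET_MS} ms: {best_cfg} "
      f"(proxy accuracy {best_acc:.3f})")

Because infeasible architectures are rejected before accuracy is ever considered, the winning configuration depends directly on the device's latency profile, which is the intuition behind co-designing the search with the inference engine.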

Author information

Corresponding author

Correspondence to Jai Mansukhani.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Singh, B., Mansukhani, J. (2023). Comparing Neural Architectures to Find the Best Model Suited for Edge Devices. In: Sharma, H., Saha, A.K., Prasad, M. (eds) Proceedings of International Conference on Intelligent Vision and Computing (ICIVC 2022). ICIVC 2022. Proceedings in Adaptation, Learning and Optimization, vol 17. Springer, Cham. https://doi.org/10.1007/978-3-031-31164-2_16
