Abstract
Neural networks are widely used and continue to be optimized to meet the demands of future applications. The softmax function performs multi-class classification (multinomial logistic regression) on the features produced by the preceding convolutional or fully connected layers. It relies on exponentiation and division, both of which are hardware-intensive operations. In recent years, the gap between highly optimized, hardware-efficient neural network layers and softmax implementations has widened, turning softmax into a bottleneck. A hardware-efficient implementation of the function is therefore required for networks such as CNNs and DNNs. We propose a hardware-efficient softmax architecture that supports varying numbers of classes and implement it on FPGAs using suitable approximation techniques.
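The abstract does not state which approximation the architecture uses, so the sketch below is only a minimal illustration of the general idea rather than the proposed design: the exact softmax, softmax(x_i) = e^{x_i} / Σ_j e^{x_j}, compared with a base-2, fixed-point variant of the kind commonly used to avoid a floating-point exponential unit in hardware. The function names and the frac_bits parameter are illustrative assumptions.

```python
import numpy as np

def softmax_reference(x):
    """Exact softmax: exponentiation and division make it costly in hardware."""
    e = np.exp(x - np.max(x))              # subtract the max for numerical stability
    return e / np.sum(e)

def softmax_base2_approx(x, frac_bits=8):
    """Hardware-oriented approximation (illustrative only, not the paper's method).

    e^x is rewritten as 2^(x * log2(e)); the quantized base-2 exponent splits into
    an integer part (a barrel shift) and a small fractional part (a lookup table),
    a common way FPGA implementations avoid a true exponential unit.
    """
    y = (x - np.max(x)) * np.log2(np.e)                     # range-reduce, move to base 2
    y = np.round(y * (1 << frac_bits)) / (1 << frac_bits)   # fixed-point quantization
    e = np.power(2.0, y)                                    # shift + LUT in hardware
    return e / np.sum(e)                                    # division is often replaced
                                                            # by a reciprocal LUT

logits = np.array([2.0, 1.0, 0.1, -1.5])
print(softmax_reference(logits))
print(softmax_base2_approx(logits))
```

Subtracting the maximum logit bounds the exponent to non-positive values, which keeps the fractional lookup table small and prevents overflow in fixed-point arithmetic.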
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Gokula Kannan, R., Hari Raghavan, V., Guruviah, V. (2024). FPGA Implementation of Efficient Softmax Architecture for Deep Neural Networks. In: Gabbouj, M., Pandey, S.S., Garg, H.K., Hazra, R. (eds) Emerging Electronics and Automation. E2A 2022. Lecture Notes in Electrical Engineering, vol 1088. Springer, Singapore. https://doi.org/10.1007/978-981-99-6855-8_47