Abstract
Automatic image captioning is a task spanning two prominent areas of deep learning research: image processing and language generation. Over the years, deep learning architectures have achieved considerable success in generating syntactically and semantically meaningful descriptions. Recent studies have incorporated an attention mechanism that lets the model attend to different regions of the image at each time step while generating the caption. In this paper, we present a Transformer architecture that generates captions using the attention mechanism alone. To understand the effect of the attention mechanism on model performance, we separately train two LSTM-based image captioning models for a comparative study against our architecture. The models are trained on the Flickr8k dataset using the cross-entropy loss function. To evaluate the models, we compute CIDEr-R, BLEU, METEOR, and ROUGE-L scores for the captions generated on the test split. The results of our comparative study suggest that the Transformer architecture is a better approach to image captioning, and that meaningful descriptions can be generated even without traditional recurrent neural networks as decoders.
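As a minimal illustration of one of the evaluation metrics named in the abstract, the sketch below computes a simplified, unsmoothed sentence-level BLEU score from clipped n-gram precisions and a brevity penalty. This is an illustrative reimplementation, not the evaluation code used in the paper, which would rely on standard tooling.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Unsmoothed sentence-level BLEU for one candidate caption.

    candidate: list of tokens; references: list of token lists.
    Uses clipped n-gram precision and a brevity penalty against the
    closest reference length. Returns a score in [0, 1].
    """
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        if not cand_counts:
            precisions.append(0.0)
            continue
        # Clip each n-gram count by its maximum count in any reference.
        max_ref = Counter()
        for ref in references:
            for gram, cnt in Counter(ngrams(ref, n)).items():
                max_ref[gram] = max(max_ref[gram], cnt)
        clipped = sum(min(cnt, max_ref[gram]) for gram, cnt in cand_counts.items())
        precisions.append(clipped / sum(cand_counts.values()))
    # Without smoothing, any zero n-gram precision zeroes the score.
    if min(precisions) == 0:
        return 0.0
    ref_len = min((abs(len(r) - len(candidate)), len(r)) for r in references)[1]
    bp = 1.0 if len(candidate) > ref_len else math.exp(1 - ref_len / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A caption identical to a reference scores 1.0; a caption sharing no higher-order n-grams with any reference scores 0.0 under this unsmoothed variant.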
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Chordia, S., Pawar, Y., Kulkarni, S., Toradmal, U., Suratkar, S. (2022). Attention Is All You Need to Tell: Transformer-Based Image Captioning. In: Rout, R.R., Ghosh, S.K., Jana, P.K., Tripathy, A.K., Sahoo, J.P., Li, KC. (eds) Advances in Distributed Computing and Machine Learning. Lecture Notes in Networks and Systems, vol 427. Springer, Singapore. https://doi.org/10.1007/978-981-19-1018-0_52
DOI: https://doi.org/10.1007/978-981-19-1018-0_52
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-1017-3
Online ISBN: 978-981-19-1018-0
eBook Packages: Intelligent Technologies and Robotics (R0)