Abstract
The amount of text data available online is increasing rapidly, making text summarization essential. Modern recommender and text classification systems must process enormous amounts of data, and manually producing precise, fluent summaries of lengthy articles is a tiresome and time-consuming task. Generating summaries automatically and using them to train machine learning models therefore makes these models more space- and time-efficient. Extractive and abstractive summarization are two distinct approaches: the extractive technique identifies the relevant sentences in the original document and extracts only those from the text, whereas abstractive techniques generate the summary after interpreting the original text, which makes them more complex. In this paper, we present a comprehensive comparison of several transformer-based pretrained models for text summarization. For analysis and comparison, we use the BBC news dataset, which pairs articles suitable for summarization with human-written summaries for evaluating and comparing the summaries generated by the machine learning models.
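As a rough illustration of the workflow the abstract describes (not the paper's actual experimental code), the sketch below generates summaries with a few pretrained transformer checkpoints via the HuggingFace transformers pipeline and scores each against a human-written reference with ROUGE. The model checkpoints, the toy article/reference pair, and the choice of the rouge_score package are assumptions made for this example.

```python
# A minimal sketch of the comparison workflow described in the abstract,
# not the paper's actual experimental code. The checkpoints, the toy
# article/reference pair, and the rouge_score package are illustrative
# assumptions.
from rouge_score import rouge_scorer
from transformers import pipeline

# Toy stand-in for one BBC news dataset entry: an article paired with a
# human-written reference summary.
article = (
    "The central bank raised interest rates by a quarter point on Thursday, "
    "citing persistent inflation in housing and energy. Officials signalled "
    "that further increases remain possible if price growth does not slow "
    "over the coming months."
)
reference = "The central bank raised rates and signalled more hikes if inflation persists."

# ROUGE measures n-gram overlap between generated and reference summaries.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

# Hypothetical selection of pretrained abstractive summarizers to compare.
for checkpoint in ["facebook/bart-large-cnn", "t5-base", "google/pegasus-xsum"]:
    summarizer = pipeline("summarization", model=checkpoint)
    candidate = summarizer(article, max_length=40, min_length=10, do_sample=False)[0]["summary_text"]
    scores = scorer.score(reference, candidate)
    print(checkpoint, {name: round(s.fmeasure, 3) for name, s in scores.items()})
```

In a setup like this, each model is applied to the same articles and the resulting ROUGE F-scores give a like-for-like comparison against the human-generated summaries.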
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Gupta, A., Chugh, D., Anjum, Katarya, R. (2022). Automated News Summarization Using Transformers. In: Aurelia, S., Hiremath, S.S., Subramanian, K., Biswas, S.K. (eds) Sustainable Advanced Computing. Lecture Notes in Electrical Engineering, vol 840. Springer, Singapore. https://doi.org/10.1007/978-981-16-9012-9_21