Abstract
Deep learning models have achieved remarkable progress on a wide range of natural language processing (NLP) tasks. A growing body of research, however, demonstrates the vulnerability of deep neural networks (DNNs) to adversarial examples: inputs modified by introducing small perturbations deliberately crafted to fool a target model into producing incorrect outputs. This vulnerability has become one of the main obstacles blocking the deployment of neural networks in security-critical environments. This paper discusses contemporary uses of adversarial examples to attack DNNs and presents a comprehensive review of their use to improve the robustness of DNNs in NLP applications.
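The idea of an adversarial perturbation can be illustrated with a deliberately minimal sketch. The "model" below is a toy keyword lexicon rather than a DNN, and the perturbation is a single character substitution; both are illustrative assumptions, not methods from the paper. The point is only that an edit a human reader barely notices can flip the model's output.

```python
# Toy illustration of a character-level adversarial perturbation in NLP.
# The "classifier" is a trivial keyword lexicon (an assumption for this
# sketch, not a real DNN); the principle, however, is the same: a tiny
# edit that preserves human readability changes the model's prediction.

POSITIVE = {"great", "excellent", "good"}
NEGATIVE = {"bad", "terrible", "awful"}

def classify(text: str) -> str:
    """Count lexicon hits; default to 'negative' on a tie or no hits."""
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return "positive" if pos > neg else "negative"

def perturb(text: str, target_word: str) -> str:
    """Swap one character of target_word for a look-alike ('a' -> '@')."""
    return text.replace(target_word, target_word.replace("a", "@", 1))

clean = "a great movie"
adversarial = perturb(clean, "great")  # "a gre@t movie"
print(classify(clean))        # -> positive
print(classify(adversarial))  # -> negative: the keyword no longer matches
```

Real attacks against DNNs search for such perturbations (character swaps, synonym substitutions, paraphrases) under a similarity constraint, and defenses surveyed in papers like this one aim to make models robust to them.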
© 2021 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Dinesh, P.M., Sujitha, V., Salma, C., Srijayapriya, B. (2021). A Review on Natural Language Processing: Back to Basics. In: Raj, J.S., Iliyasu, A.M., Bestak, R., Baig, Z.A. (eds) Innovative Data Communication Technologies and Application. Lecture Notes on Data Engineering and Communications Technologies, vol 59. Springer, Singapore. https://doi.org/10.1007/978-981-15-9651-3_54
DOI: https://doi.org/10.1007/978-981-15-9651-3_54
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-9650-6
Online ISBN: 978-981-15-9651-3
eBook Packages: Intelligent Technologies and Robotics (R0)