BERT Based Language Identification in Code-Mixed English-Assamese Social Media Text

  • Conference paper
  • In: Machine Intelligence and Data Science Applications (MIDAS 2022)

Part of the book series: Algorithms for Intelligent Systems (AIS)

Abstract

Language identification in code-mixed language pairs has steadily gained research interest in recent years. With the widespread use of social media, identifying the languages present in code-mixed text has become necessary for tasks such as detecting hate speech, misinformation, and disinformation. Recent transformer models such as BERT have achieved strong results on many NLP tasks, including language identification. This work takes a transfer learning approach, applying a BERT model to word-level language identification in the code-mixed Assamese-English language pair. Experiments on an available data set show that BERT outperforms approaches based on word-level features or semantic word embeddings, achieving an accuracy of 94%.
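
In outline, the approach treats word-level language identification as token classification on top of a pretrained BERT encoder: each word is split into BERT subwords, a classification head tags the subwords, and the tags are mapped back to whole words. The snippet below is a minimal sketch of that setup, not the authors' released code: the Hugging Face transformers library, the multilingual checkpoint, the three-way tag set (en / as / other), and the example sentence are illustrative assumptions, and the classification head would need fine-tuning on the annotated code-mixed corpus before its predictions are meaningful.

    import torch
    from transformers import AutoTokenizer, AutoModelForTokenClassification

    LABELS = ["en", "as", "other"]               # assumed tag set, not the paper's exact labels
    CHECKPOINT = "bert-base-multilingual-cased"  # assumed pretrained checkpoint

    tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
    model = AutoModelForTokenClassification.from_pretrained(CHECKPOINT, num_labels=len(LABELS))

    def label_words(words):
        """Assign one language tag per whitespace token by classifying BERT
        subwords and keeping the prediction of each word's first subword."""
        enc = tokenizer(words, is_split_into_words=True, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**enc).logits[0]      # shape: (num_subwords, num_labels)
        preds = logits.argmax(dim=-1).tolist()
        tags, seen = [], set()
        for pos, wid in enumerate(enc.word_ids()):
            if wid is None or wid in seen:       # skip [CLS]/[SEP] and non-initial subwords
                continue
            seen.add(wid)
            tags.append(LABELS[preds[pos]])
        return list(zip(words, tags))

    # hypothetical code-mixed sentence; the head is untrained here, so fine-tune first
    print(label_words("moi office jabo tomorrow morning".split()))

Taking the first subword's prediction for each word is one common way to recover word-level labels from subword outputs; pooling all subword logits per word is an equally reasonable alternative.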

Notes

  1. https://scikit-learn.org/.
  2. http://nlp.stanford.edu/data/glove.6B.zip.
  3. https://tfhub.dev/.
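
Notes 1 and 2 point at the tooling typically used for the non-BERT baselines mentioned in the abstract (word-level features and GloVe embeddings). As a point of comparison, the sketch below shows one plausible shape for such a baseline, a character n-gram classifier built with scikit-learn; the toy word/label pairs and hyperparameters are invented for illustration and are not the paper's setup.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # toy word/label pairs; a real run would use the annotated code-mixed corpus
    train_words  = ["tomorrow", "morning", "office", "going", "moi", "jabo", "bhal", "lagise"]
    train_labels = ["en", "en", "en", "en", "as", "as", "as", "as"]

    baseline = make_pipeline(
        CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)),  # character 1-3 grams per word
        LogisticRegression(max_iter=1000),
    )
    baseline.fit(train_words, train_labels)
    print(baseline.predict(["school", "ahise"]))                  # word-level language tags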

Author information

Corresponding author

Correspondence to Nayan Jyoti Kalita.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Kalita, N.J., Deka, P., Chennareddy, V., Sarma, S.K. (2023). BERT Based Language Identification in Code-Mixed English-Assamese Social Media Text. In: Ramdane-Cherif, A., Singh, T.P., Tomar, R., Choudhury, T., Um, JS. (eds) Machine Intelligence and Data Science Applications. MIDAS 2022. Algorithms for Intelligent Systems. Springer, Singapore. https://doi.org/10.1007/978-981-99-1620-7_14
