
Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 350)

Abstract

Making human–computer interaction more natural and personalized requires advances in human emotion recognition. Humans perceive emotions from multiple cues, such as facial expressions, voice tonality, and the context of the information being conveyed. Although considerable research has addressed unimodal and multimodal emotion recognition in videos using acoustic and visual features, few papers have explored the potential of the textual information available in video utterances. Since humans experience emotion through their audio-visual as well as linguistic senses, it is essential to take the linguistic channel into account. This paper presents two algorithms for recognizing multimodal emotional expressions in online videos. Alongside acoustic (speech) and visual (facial) features, we extract textual features from the utterances using BERT, and we employ bidirectional LSTMs to capture the context between utterances. To obtain richer sequential information, we also apply a multi-head self-attention mechanism. Our evaluation uses the benchmark CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) dataset, the largest dataset for sentiment analysis and emotion recognition to date, and our experiments yield improved F1 scores compared with the baseline models.
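To make the described pipeline concrete, the following is a minimal PyTorch sketch of the architecture outlined above: pre-extracted utterance-level text (BERT), acoustic, and visual features are concatenated, a bidirectional LSTM models the context between utterances in a video, a multi-head self-attention layer refines the sequence, and a linear layer scores the emotions per utterance. All module names, dimensions, and the simple concatenation fusion are illustrative assumptions, not the authors' released implementation.

    import torch
    import torch.nn as nn

    class ContextAwareEmotionModel(nn.Module):
        """Illustrative sketch: BiLSTM over utterance features + multi-head self-attention.

        Assumes each video is a sequence of utterances, each represented by
        pre-extracted BERT text, acoustic, and visual feature vectors.
        Layer sizes are placeholders, not the paper's settings.
        """

        def __init__(self, text_dim=768, audio_dim=74, visual_dim=35,
                     hidden_dim=128, num_heads=4, num_emotions=6):
            super().__init__()
            fused_dim = text_dim + audio_dim + visual_dim
            # Bidirectional LSTM captures context between utterances in a video.
            self.bilstm = nn.LSTM(fused_dim, hidden_dim, batch_first=True,
                                  bidirectional=True)
            # Multi-head self-attention over the BiLSTM outputs provides
            # richer sequential information.
            self.attn = nn.MultiheadAttention(embed_dim=2 * hidden_dim,
                                              num_heads=num_heads,
                                              batch_first=True)
            # Per-utterance emotion scores (CMU-MOSEI annotates six emotions).
            self.classifier = nn.Linear(2 * hidden_dim, num_emotions)

        def forward(self, text_feats, audio_feats, visual_feats):
            # Each input: (batch, num_utterances, feature_dim)
            fused = torch.cat([text_feats, audio_feats, visual_feats], dim=-1)
            context, _ = self.bilstm(fused)                  # (B, U, 2*hidden)
            attended, _ = self.attn(context, context, context)
            return self.classifier(attended)                 # (B, U, num_emotions)

    # Tiny usage example with random tensors standing in for real features.
    model = ContextAwareEmotionModel()
    logits = model(torch.randn(2, 10, 768),   # BERT utterance embeddings
                   torch.randn(2, 10, 74),    # acoustic features (placeholder size)
                   torch.randn(2, 10, 35))    # visual features (placeholder size)
    print(logits.shape)  # torch.Size([2, 10, 6])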


Notes

  1. https://github.com/A2Zadeh/CMU-MultimodalSDK
  2. https://mccormickml.com/2019/07/22/BERT-fine-tuning/
  3. https://github.com/Ighina/MultiModalSA
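Note 2 above points to a BERT fine-tuning tutorial. As a hedged illustration of how utterance-level text features can be obtained, the sketch below mean-pools BERT token embeddings with the Hugging Face transformers library; the model name and pooling strategy are assumptions and may differ from what the paper actually used.

    import torch
    from transformers import BertModel, BertTokenizer

    # Assumption: bert-base-uncased with mean pooling over tokens; the paper
    # may instead fine-tune BERT or take the [CLS] vector.
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    bert = BertModel.from_pretrained("bert-base-uncased")
    bert.eval()

    def utterance_embeddings(utterances):
        """Return one 768-d vector per utterance transcript."""
        batch = tokenizer(utterances, padding=True, truncation=True,
                          return_tensors="pt")
        with torch.no_grad():
            out = bert(**batch)
        mask = batch["attention_mask"].unsqueeze(-1)          # (B, T, 1)
        summed = (out.last_hidden_state * mask).sum(dim=1)    # ignore padding
        return summed / mask.sum(dim=1)                       # mean over real tokens

    emb = utterance_embeddings(["I loved the movie.", "The ending felt rushed."])
    print(emb.shape)  # torch.Size([2, 768])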



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.


Cite this paper

Khalane, A., Shaikh, T. (2022). Context-Aware Multimodal Emotion Recognition. In: Ullah, A., Anwar, S., Rocha, Á., Gill, S. (eds) Proceedings of International Conference on Information Technology and Applications. Lecture Notes in Networks and Systems, vol 350. Springer, Singapore. https://doi.org/10.1007/978-981-16-7618-5_5
