Abstract
To be effective in the human world, robots must respond to human emotional states. This paper focuses on the recognition of the six universal human facial expressions. In the last decade there has been successful research on facial expression recognition (FER) in controlled conditions suitable for human–computer interaction [1,2,3,4,5,6,7,8]. However, the human–robot scenario presents additional challenges, including a lack of control over lighting conditions and over the relative pose and separation of the robot and human, the inherent mobility of robots, and stricter real-time computational requirements dictated by the need for robots to respond in a timely fashion.
Our approach imposes lower computational requirements by specifically adapting model-based techniques to the FER scenario. It comprises adaptive skin color extraction, localization of the entire face and facial components, and specifically learned objective functions for fitting a deformable face model. Experimental evaluation reports a recognition rate of 70% on the Cohn–Kanade facial expression database and 67% in a robot scenario, which compare well to other FER systems.
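The first stage of the pipeline above, skin color extraction, can be illustrated with a minimal sketch. This is not the authors' adaptive, person-specific classifier [46]; it is a simplified stand-in that thresholds pixels in normalized-RGB space, with hypothetical threshold values chosen for illustration only.

```python
import numpy as np

def skin_mask(image_rgb, bounds=None):
    """Label pixels as skin via fixed per-channel thresholds in
    normalized-RGB space. Illustrative only: the paper instead learns
    a person- and context-specific skin color model."""
    img = image_rgb.astype(np.float64)
    total = img.sum(axis=-1) + 1e-8          # avoid division by zero
    r = img[..., 0] / total                  # normalized red
    g = img[..., 1] / total                  # normalized green
    if bounds is None:
        # Hypothetical thresholds; a real system would adapt these.
        bounds = {"r_min": 0.36, "r_max": 0.55,
                  "g_min": 0.26, "g_max": 0.36}
    return ((r >= bounds["r_min"]) & (r <= bounds["r_max"]) &
            (g >= bounds["g_min"]) & (g <= bounds["g_max"]))

# Example: one skin-like pixel and one green (non-skin) pixel.
img = np.array([[[200, 120, 90], [50, 200, 60]]], dtype=np.uint8)
mask = skin_mask(img)
```

The normalized-RGB representation discards overall brightness, which gives some robustness to the uncontrolled lighting mentioned above; an adaptive model goes further by re-estimating the thresholds per person and scene.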
References
Essa, I.A., Pentland, A.P.: Coding, analysis, interpretation, and recognition of facial expressions. IEEE Trans. Pattern Anal. Mach. Intell. 19, 757–763 (1997)
Edwards, G.J., Cootes, T.F., Taylor, C.J.: Face recognition using active appearance models. In: Burkhardt, H., Neumann, B. (eds.) ECCV 1998. LNCS, vol. 1407, pp. 581–595. Springer, Heidelberg (1998)
Pantic, M., Rothkrantz, L.J.M.: Automatic analysis of facial expressions: The state of the art. IEEE TPAMI 22, 1424–1445 (2000)
Chibelushi, C.C., Bourel, F.: Facial expression recognition: A brief tutorial overview. In: CVonline: On-Line Compendium of Computer Vision (2003)
Wimmer, M., Zucker, U., Radig, B.: Human capabilities on video-based facial expression recognition. In: Proc. of the 2nd Workshop on Emotion and Computing – Current Research and Future Impact, Oldenburg, Germany, pp. 7–10 (2007)
Schuller, B., et al.: Audiovisual behavior modeling by combined feature spaces. In: ICASSP, vol. 2, pp. 733–736 (2007)
Fischer, S., et al.: Experiences with an emotional sales agent. In: André, E., et al. (eds.) ADS 2004. LNCS (LNAI), vol. 3068, pp. 309–312. Springer, Heidelberg (2004)
Tischler, M.A., et al.: Application of emotion recognition methods in automotive research. In: Proceedings of the 2nd Workshop on Emotion and Computing – Current Research and Future Impact, Oldenburg, Germany, pp. 50–55 (2007)
Breazeal, C.: Function meets style: Insights from emotion theory applied to HRI. IEEE Trans. on Systems, Man, and Cybernetics, Part C 34, 187–194 (2004)
Rani, P., Sarkar, N.: Emotion-sensitive robots – a new paradigm for human-robot interaction. In: 4th IEEE/RAS International Conference on Humanoid Robots, vol. 1, pp. 149–167 (2004)
Scheutz, M., Schermerhorn, P., Kramer, J.: The utility of affect expression in natural language interactions in joint human-robot tasks. In: 1st ACM SIGCHI/SIGART conference on Human-robot interaction, pp. 226–233. ACM Press, New York (2006)
Picard, R.: Toward agents that recognize emotion. In: Imagina 1998 (1998)
Bartlett, M.S., et al.: Measuring facial expressions by computer image analysis. Psychophysiology 36, 253–263 (1999)
Ikehara, C.S., Chin, D.N., Crosby, M.E.: A model for integrating an adaptive information filter utilizing biosensor data to assess cognitive load. In: Brusilovsky, P., Corbett, A.T., de Rosis, F. (eds.) UM 2003. LNCS, vol. 2702, pp. 208–212. Springer, Heidelberg (2003)
Vick, R.M., Ikehara, C.S.: Methodological issues of real time data acquisition from multiple sources of physiological data. In: Proc. of the 36th Annual Hawaii International Conference on System Sciences - Track 5, p. 1. IEEE Computer Society, Los Alamitos (2003)
Sheldon, E.M.: Virtual agent interactions. PhD thesis (2001)
Kanade, T., Cohn, J.F., Tian, Y.: Comprehensive database for facial expression analysis. In: International Conference on Automatic Face and Gesture Recognition, France, pp. 46–53 (2000)
Ekman, P.: Universals and cultural differences in facial expressions of emotion. In: Nebraska Symposium on Motivation 1971, vol. 19, pp. 207–283. University of Nebraska Press, Lincoln, NE (1972)
Ekman, P.: Facial expressions. In: Handbook of Cognition and Emotion, John Wiley & Sons Ltd., New York (1999)
Friesen, W.V., Ekman, P.: Emotional Facial Action Coding System. Unpublished manuscript, University of California at San Francisco (1983)
Michel, P., Kaliouby, R.E.: Real time facial expression recognition in video using support vector machines. In: Fifth International Conference on Multimodal Interfaces, Vancouver, pp. 258–264 (2003)
Cohn, J., et al.: Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding. Psychophysiology 36, 35–43 (1999)
Sebe, N., et al.: Emotion recognition using a Cauchy naive Bayes classifier. In: Proc. of the 16th International Conference on Pattern Recognition (ICPR 2002), vol. 1, p. 10017. IEEE Computer Society, Los Alamitos (2002)
Cohen, I., et al.: Facial expression recognition from video sequences: Temporal and static modeling. CVIU special issue on face recognition 91(1-2), 160–187 (2003)
Schweiger, R., Bayerl, P., Neumann, H.: Neural architecture for temporal emotion classification. In: André, E., et al. (eds.) ADS 2004. LNCS (LNAI), vol. 3068, pp. 49–52. Springer, Heidelberg (2004)
Tian, Y.L., Kanade, T., Cohn, J.F.: Recognizing action units for facial expression analysis. IEEE TPAMI 23, 97–115 (2001)
Littlewort, G., et al.: Fully automatic coding of basic expressions from video. Technical report, University of California, San Diego, INC., MPLab (2002)
Ekman, P., Friesen, W. (eds.): The Facial Action Coding System: A Technique for The Measurement of Facial Movement. Consulting Psychologists Press, San Francisco (1978)
Bartlett, M., et al.: Recognizing facial expression: machine learning and application to spontaneous behavior. In: CVPR 2005. IEEE Computer Society Conference, vol. 2, pp. 568–573 (2005)
Kim, D.H., et al.: Development of a facial expression imitation system. In: International Conference on Intelligent Robots and Systems, Beijing, pp. 3107–3112 (2006)
Yoshitomi, Y., et al.: Effect of sensor fusion for recognition of emotional states using voice, face image and thermal image of face. In: Proc. of the 9th IEEE International Workshop on Robot and Human Interactive Communication, Osaka, Japan, pp. 178–183 (2000)
Otsuka, T., Ohya, J.: Recognition of facial expressions using HMMs with continuous output probabilities. In: 5th IEEE International Workshop on Robot and Human Communication, Tsukuba, Japan, pp. 323–328 (1996)
Zhou, X., et al.: Real-time facial expression recognition based on boosted embedded hidden Markov model. In: Proceedings of the Third International Conference on Image and Graphics, Hong Kong, pp. 290–293 (2004)
Wimmer, M.: Model-based Image Interpretation with Application to Facial Expression Recognition. PhD thesis, Technische Universität München, Fakultät für Informatik (2007)
Cootes, T.F., Taylor, C.J.: Active shape models – smart snakes. In: Proc. of the 3rd BMVC, pp. 266–275. Springer, Heidelberg (1992)
Cootes, T.F., et al.: The use of active shape models for locating structures in medical images. In: IPMI, pp. 33–47 (1993)
Wimmer, M., Radig, B., Beetz, M.: A person and context specific approach for skin color classification. In: Proc. of the 18th ICPR 2006, vol. 2, pp. 39–42. IEEE Computer Society Press, Los Alamitos, CA, USA (2006)
Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: CVPR, Kauai, Hawaii, vol. 1, pp. 511–518 (2001)
Wimmer, M., et al.: Learning local objective functions for robust face model fitting. IEEE TPAMI (to appear, 2007)
Wimmer, M., et al.: Learning robust objective functions with application to face model fitting. In: DAGM Symp., vol. 1, pp. 486–496 (2007)
Hanek, R.: Fitting Parametric Curve Models to Images Using Local Self-adapting Separation Criteria. PhD thesis, Department of Informatics, Technische Universität München (2004)
Quinlan, R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, California (1993)
Yadav, A.: Human facial expression recognition. Technical report, Electrical and Computer Engineering, University of Auckland, New Zealand (2006)
Jayamuni, S.D.J.: Human facial expression recognition system. Technical report, Electrical and Computer Engineering, University of Auckland, New Zealand (2006)
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Wimmer, M., MacDonald, B.A., Jayamuni, D., Yadav, A. (2008). Facial Expression Recognition for Human-Robot Interaction – A Prototype. In: Sommer, G., Klette, R. (eds) Robot Vision. RobVis 2008. Lecture Notes in Computer Science, vol 4931. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-78157-8_11
DOI: https://doi.org/10.1007/978-3-540-78157-8_11
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-78156-1
Online ISBN: 978-3-540-78157-8