Abstract
We present an HMM-based system for real-time gesture analysis. The system continuously outputs parameters describing the time progression of the gesture and its likelihood. These parameters are computed by comparing the performed gesture with stored reference gestures. The method relies on a detailed modeling of multidimensional temporal curves. Compared to standard HMM systems, the learning procedure is simplified by the use of prior knowledge, allowing the system to be trained with a single example per class. Several applications have been developed using this system in the context of music education, music and dance performance, and interactive installations. Typically, the estimated time progression allows physical gestures to be synchronized to sound files or videos by time-stretching/compressing audio buffers.
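The approach the abstract summarizes can be sketched as a left-to-right HMM in which each frame of the single reference example becomes one state, and the forward algorithm is updated for every incoming observation frame. The sketch below is an illustration under assumed parameters (Gaussian observation width `sigma` and fixed self/next/skip transition probabilities are prior knowledge supplied by hand, not values from the paper); the class name `GestureFollower` and its interface are hypothetical.

```python
import numpy as np

class GestureFollower:
    """Minimal left-to-right HMM gesture follower (illustrative sketch).

    One state per frame of a single reference gesture; observation
    likelihoods are Gaussians centred on the reference frames.
    """

    def __init__(self, reference, sigma=0.1, p_stay=0.3, p_next=0.6, p_skip=0.1):
        ref = np.asarray(reference, dtype=float)
        if ref.ndim == 1:                     # allow scalar-valued gestures
            ref = ref[:, None]
        self.ref = ref
        self.n = len(ref)
        self.sigma = sigma
        self.trans = (p_stay, p_next, p_skip)  # self, +1, +2 transitions
        self.restart()

    def restart(self):
        # All probability mass starts in the first state.
        self.alpha = np.zeros(self.n)
        self.alpha[0] = 1.0

    def _obs_prob(self, x):
        # Unnormalized Gaussian likelihood of x under every state.
        d2 = np.sum((self.ref - x) ** 2, axis=1)
        return np.exp(-0.5 * d2 / self.sigma ** 2)

    def step(self, x):
        """Consume one frame; return (time progression in [0,1], likelihood)."""
        p_stay, p_next, p_skip = self.trans
        pred = p_stay * self.alpha            # forward-algorithm prediction
        pred[1:] += p_next * self.alpha[:-1]
        pred[2:] += p_skip * self.alpha[:-2]
        self.alpha = pred * self._obs_prob(np.asarray(x, dtype=float))
        likelihood = float(self.alpha.sum())  # per-frame likelihood proxy
        if likelihood > 0:
            self.alpha /= likelihood
        # Time progression = expected state index, normalized to [0, 1].
        progression = float(self.alpha @ np.arange(self.n)) / (self.n - 1)
        return progression, likelihood
```

Feeding the follower its own reference frames makes the progression estimate sweep from 0 toward 1, which is exactly the value an application would map to a playback position for time-stretching an audio buffer.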
© 2010 Springer-Verlag Berlin Heidelberg
Bevilacqua, F., Zamborlin, B., Sypniewski, A., Schnell, N., Guédy, F., Rasamimanana, N. (2010). Continuous Realtime Gesture Following and Recognition. In: Kopp, S., Wachsmuth, I. (eds) Gesture in Embodied Communication and Human-Computer Interaction. GW 2009. Lecture Notes in Computer Science(), vol 5934. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-12553-9_7
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-12552-2
Online ISBN: 978-3-642-12553-9