Abstract
The aim of this paper is to identify some of the problems raised by the design of a gesture recognition system dedicated to Sign Language, and to propose suitable solutions. The three topics considered here are the simultaneity of information conveyed by manual signs, the possible temporal or spatial synchrony between the two hands, and the different classes of signs that may occur in a Sign Language sentence.
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Bossard, B., Braffort, A., Jardino, M. (2004). Some Issues in Sign Language Processing. In: Camurri, A., Volpe, G. (eds) Gesture-Based Communication in Human-Computer Interaction. GW 2003. Lecture Notes in Computer Science, vol 2915. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24598-8_9
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-21072-6
Online ISBN: 978-3-540-24598-8