Abstract
This paper presents a computer-animated signing avatar for the Russian and Czech sign languages. The basic principles of sign languages and their implementation in a computer model are briefly outlined. Particular attention is paid to the animation principles of the "talking head", which substantially extends the functionality of the system: it becomes suitable not only for deaf and hard-of-hearing people, but also for blind and non-disabled people, so a universal audio-visual synthesizer is proposed.
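The abstract describes a single synthesizer that drives both a signing avatar and a "talking head" from the same text input, so that one system can serve deaf, blind, and non-disabled users. The sketch below is a minimal illustration of such a pipeline; all class and function names (SynthesisOutput, text_to_sign_glosses, text_to_phonemes, synthesize) are hypothetical placeholders and do not reflect the authors' actual implementation.

```python
# Illustrative sketch of a universal multimodal synthesizer pipeline.
# All names are hypothetical; the real system's modules are not described here.

from dataclasses import dataclass, field
from typing import List


@dataclass
class SynthesisOutput:
    """Output channels produced for one input utterance."""
    audio_wave: bytes = b""                                  # acoustic speech for blind users
    viseme_track: List[str] = field(default_factory=list)    # talking-head lip animation
    sign_track: List[str] = field(default_factory=list)      # manual signs / fingerspelling


def text_to_sign_glosses(text: str) -> List[str]:
    """Placeholder: map written text to a sequence of sign glosses
    (fingerspelling would handle out-of-vocabulary words)."""
    return [word.upper() for word in text.split()]


def text_to_phonemes(text: str) -> List[str]:
    """Placeholder grapheme-to-phoneme step feeding both TTS and the viseme track."""
    return [ch for ch in text if not ch.isspace()]


def synthesize(text: str) -> SynthesisOutput:
    """Drive all output channels from one text input so the same avatar
    can address deaf, blind, and non-disabled users."""
    phonemes = text_to_phonemes(text)
    glosses = text_to_sign_glosses(text)
    return SynthesisOutput(
        audio_wave=b"",                                      # TTS waveform would be produced here
        viseme_track=[f"viseme({p})" for p in phonemes],
        sign_track=[f"sign({g})" for g in glosses],
    )


if __name__ == "__main__":
    out = synthesize("hello world")
    print(out.sign_track)
    print(out.viseme_track[:5])
```

In a real system the three channels would also have to be time-aligned (e.g., phone-viseme synchronization for the talking head), which this sketch does not attempt.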
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Karpov, A., Krňoul, Z., Železný, M., Ronzhin, A. (2013). Multimodal Synthesizer for Russian and Czech Sign Languages and Audio-Visual Speech. In: Stephanidis, C., Antona, M. (eds) Universal Access in Human-Computer Interaction. Design Methods, Tools, and Interaction Techniques for eInclusion. UAHCI 2013. Lecture Notes in Computer Science, vol 8009. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39188-0_56
DOI: https://doi.org/10.1007/978-3-642-39188-0_56
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-39187-3
Online ISBN: 978-3-642-39188-0