Abstract
In the field of social human-robot interaction, and in particular in socially assistive robotics, the ability to recognize a speaker's discourse under very diverse conditions, including when more than one interlocutor is present, plays an essential role. The use of a microphone array that can be mounted on a robot, supported by a voice enhancement module, has been evaluated with the goal of improving the performance of current automatic speech recognition (ASR) systems in multi-speaker conditions. The improvement in intelligibility scores achievable with two off-the-shelf ASR solutions has been assessed in situations representative of the typical scenarios in which a robot with these characteristics operates. The results identify the conditions under which an algorithm with low computational cost can improve intelligibility scores in real environments.
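The enhancement front end described above combines a microphone array with a speech-enhancement stage before ASR. As a minimal illustration of the array-processing idea (not the authors' actual pipeline), the sketch below implements a simple frequency-domain delay-and-sum beamformer for a linear array: each channel is phase-shifted to compensate its far-field propagation delay towards a chosen direction of arrival, and the aligned channels are averaged, attenuating incoherent noise and off-axis interferers. The function name, geometry convention, and parameters are illustrative assumptions.

```python
import numpy as np

def delay_and_sum(frames: np.ndarray, fs: float, mic_positions: np.ndarray,
                  doa_deg: float, c: float = 343.0) -> np.ndarray:
    """Steer a linear microphone array towards `doa_deg` (degrees from
    broadside) and average the time-aligned channels.

    frames:        (n_mics, n_samples) multichannel signal
    mic_positions: (n_mics,) positions along the array axis in metres
    """
    n_mics, n_samples = frames.shape
    # Far-field propagation delay of each microphone for the chosen DOA
    delays = mic_positions * np.sin(np.deg2rad(doa_deg)) / c
    # Compensate the delays as phase shifts in the frequency domain
    spec = np.fft.rfft(frames, axis=1)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spec *= np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    aligned = np.fft.irfft(spec, n=n_samples, axis=1)
    # Averaging coherent speech while noise adds incoherently raises SNR
    return aligned.mean(axis=0)
```

For `n_mics` channels with independent noise, averaging reduces the noise power by roughly a factor of `n_mics` in the steered direction, which is the kind of low-cost gain the paper evaluates as a front end to off-the-shelf ASR.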
Acknowledgements
This work has been funded by the National Research Project TEST-RTI2018-099522-A-C44: "Test-beds for the Evaluation of Social Awareness in Assistance Robotics", and by the collaboration with the CSP group at Imperial College London, funded by the Spanish Ministry of Science, Innovation and Universities through the lecturer mobility programme (José Castillejo 2018 grant). Most of the information about typical life in a retirement home, as well as the robot's name, Felipe, was gathered from experience during the work developed at Vitalia Teatinos, supported by the Regional Project AT17-5509-UMA "ROSI".
Copyright information
© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Martínez-Colón, A., Viciana-Abad, R., Perez-Lorenzo, J.M., Evers, C., Naylor, P.A. (2021). Evaluation of a Multi-speaker System for Socially Assistive HRI in Real Scenarios. In: Bergasa, L.M., Ocaña, M., Barea, R., López-Guillén, E., Revenga, P. (eds) Advances in Physical Agents II. WAF 2020. Advances in Intelligent Systems and Computing, vol 1285. Springer, Cham. https://doi.org/10.1007/978-3-030-62579-5_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-62578-8
Online ISBN: 978-3-030-62579-5
eBook Packages: Intelligent Technologies and Robotics (R0)