Abstract
This chapter introduces a research approach called ‘music scene description’ [232], [225], [228], whose goal is to build a computer system that understands musical audio signals at the level of untrained human listeners, without attempting to extract every musical note. People listening to music can easily hum the melody, clap their hands in time with the beat, notice when a phrase is repeated, and find chorus sections. The brain mechanisms underlying these abilities, however, are not yet well understood, and the abilities themselves have proven difficult to implement on a computer, even though a system possessing them would be useful in applications such as music information retrieval, music production and editing, and music interfaces. Building a music scene description system that can handle complex real-world music signals, like those recorded on commercially distributed compact discs (CDs), is therefore an important challenge.
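To make one of these subtasks concrete: beat tracking at its simplest reduces to finding the dominant periodicity in an onset-strength envelope. The sketch below is not taken from the chapter; it is a minimal, hypothetical illustration (the function name `estimate_tempo` and the synthetic envelope are assumptions for demonstration) of estimating tempo by autocorrelation, one small piece of what a full music scene description system must do.

```python
import numpy as np

def estimate_tempo(onset_env, frame_rate):
    """Estimate tempo in BPM from an onset-strength envelope (hypothetical sketch).

    onset_env  : 1-D array, one onset-strength value per analysis frame
    frame_rate : frames per second of the envelope
    """
    env = onset_env - onset_env.mean()
    # Autocorrelation; keep non-negative lags only
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    # Restrict the search to lags corresponding to 60-240 BPM
    min_lag = int(frame_rate * 60 / 240)
    max_lag = int(frame_rate * 60 / 60)
    lag = min_lag + int(np.argmax(ac[min_lag:max_lag + 1]))
    return 60.0 * frame_rate / lag

# Synthetic envelope: an impulse every 0.5 s at 100 frames/s, i.e. 120 BPM
frame_rate = 100
env = np.zeros(1000)
env[::50] = 1.0
print(estimate_tempo(env, frame_rate))  # 120.0
```

Real music signals are far messier than this impulse train, which is why the chapter frames music scene description as a research challenge rather than a solved problem; a practical system must also handle tempo drift, expressive timing, and weak or ambiguous onsets.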
Copyright information
© 2006 Springer Science+Business Media LLC
Cite this chapter
Goto, M. (2006). Music Scene Description. In: Klapuri, A., Davy, M. (eds) Signal Processing Methods for Music Transcription. Springer, Boston, MA. https://doi.org/10.1007/0-387-32845-9_11
Print ISBN: 978-0-387-30667-4
Online ISBN: 978-0-387-32845-4