Abstract.
This paper gives an overview of the principles and methods for synthesizing complex 3D sound scenes by processing multiple individual source signals. Signal-processing techniques for directional sound encoding and rendering over loudspeakers or headphones are reviewed, as well as algorithms and interface models for synthesizing and dynamically controlling room reverberation and distance effects. A real-time modular spatial-sound-processing software system, called Spat, is presented. It allows the localization of sound sources in three dimensions, and the reverberation of sounds in an existing or virtual space, to be reproduced and controlled. A particular aim of the Spatialisateur project is to provide direct and computationally efficient control over perceptually relevant parameters describing the interaction of each sound source with the virtual space, irrespective of the reproduction format chosen (loudspeakers or headphones). The advantages of this approach are illustrated in practical contexts, including professional audio, computer music, multimodal immersive simulation systems, and architectural acoustics.
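The abstract's notion of controlling perceptually relevant parameters independently of the reproduction format can be illustrated with a minimal sketch. The function below maps a source's distance to a direct-path gain (inverse-distance law) and a roughly distance-independent reverberant level, so that the direct-to-reverberant ratio conveys distance. All names, constants, and the specific mapping are illustrative assumptions for this sketch, not Spat's actual interface or algorithm.

```python
def source_gains(distance_m, ref_distance_m=1.0, room_gain=0.2):
    """Toy mapping from a perceptual distance parameter to DSP gains.

    The direct sound follows an inverse-distance (1/r) attenuation law,
    clamped at a reference distance, while the reverberant level stays
    constant with distance. Such a mapping is format-agnostic: the same
    two gains could feed a binaural, stereo, or multichannel renderer.
    """
    direct = ref_distance_m / max(distance_m, ref_distance_m)
    reverb = room_gain  # distance-independent reverberant energy
    return direct, reverb

# Doubling the distance halves the direct gain (about -6 dB) but leaves
# the reverberant level unchanged, lowering the direct/reverb ratio.
d1, r1 = source_gains(2.0)
d2, r2 = source_gains(4.0)
```

A renderer-independent control layer of this kind is what lets the same scene description drive either a loudspeaker or a headphone reproduction chain.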
Cite this article
Jot, JM. Real-time spatial processing of sounds for music, multimedia and interactive human-computer interfaces. Multimedia Systems 7, 55–69 (1999). https://doi.org/10.1007/s005300050111