Natural sounds are all around us. The sound of my son struggling to get out of his wet raincoat, or rain boots squeaking across the floor. The sound of lemonade pouring into an ice-filled glass, or the ocean crashing at your feet. The sound of an agitated shopping cart plunging down a flight of stairs, or the familiar roar of a campfire. Reality produces these sounds “for free,” but how can we best synthesize them in future computer-simulated realities?

Decades of advances in computer graphics and physics-based simulation have made it possible to convincingly animate a wide range of phenomena, such as contacting rigid and deformable bodies, fracturing solids, splashing water, and roaring fire. Such simulations will inevitably run in real time one day, paving the way for interactive virtual environments. Unfortunately, the realities simulated by current algorithms are essentially “silent movies,” with sound added as an afterthought: recordings are edited in manually for pre-produced animations, or triggered automatically in interactive settings. The former is labor-intensive and inflexible; the latter can produce awkward, repetitive, or implausible results. Prior synthesis techniques simply lack the fidelity to sonify increasingly sophisticated physics-based animations. This situation is a serious obstacle to our ultimate goal: to build wondrous, real-time or offline, multi-sensory experiences on future hardware platforms where graphics, motion, and sound are synchronized and highly engaging.

In this talk, I will describe progress toward these synthesis goals by our group at Cornell, and mention some of the many remaining challenges. I will discuss progress on modeling sound for visually important phenomena such as rigid bodies [4, 5], flexible bodies [8], liquids [6], thin-shell solids [2], brittle fracture [7], fire [3], and clothed virtual characters [1] (Fig. 1). I will also highlight progress on mathematical methods and computer algorithms for reduced-order vibration analysis (linear and nonlinear), wave-based radiation analysis, precomputation techniques, reduced-order collision processing, many-body sound problems, sound propagation and listening, and real-time rendering. No prior knowledge of sound rendering will be assumed.
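To give a flavor of what reduced-order vibration analysis means in the rigid-body setting, a classical starting point is the linear modal model, which approximates an impact sound as a short sum of exponentially decaying sinusoids, one per vibration mode. The sketch below is a minimal NumPy rendering of that idea; the function name and the mode frequencies, decay rates, and amplitudes are illustrative placeholders, not code or data from the systems cited above.

```python
import numpy as np

def modal_impact_sound(freqs_hz, decay_rates, amplitudes, duration=1.0, sr=44100):
    """Render an impact as a sum of exponentially decaying sinusoids,
    the classic linear modal model: sum_i a_i * exp(-d_i * t) * sin(2*pi*f_i*t)."""
    t = np.arange(int(duration * sr)) / sr
    audio = np.zeros_like(t)
    for f, d, a in zip(freqs_hz, decay_rates, amplitudes):
        audio += a * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio  # normalize to [-1, 1]

# Hypothetical modes for a small struck metal object (made-up values):
sound = modal_impact_sound(
    freqs_hz=[830.0, 1940.0, 3260.0],  # modal frequencies (Hz)
    decay_rates=[6.0, 9.0, 14.0],      # per-mode damping (1/s)
    amplitudes=[1.0, 0.5, 0.25],       # impact-dependent gains
)
```

In practice, the mode frequencies and dampings would come from an eigenanalysis of the discretized object, and the gains from contact forces; the point of the sketch is only the reduced-order structure, in which a handful of scalar oscillators stand in for a full vibrating body.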

Fig. 1: Representative images from [2, 6, 7]