Acoustics in Ancient Times

Acoustics is the science of sound. Although sound waves are nearly as old as the universe, the scientific study of sound is generally considered to have its origin in ancient Greece. The word acoustics is derived from the Greek word akouein, to hear, although Sauveur appears to have been the first person to apply the term acoustics to the science of sound in 1701 [2.1].

Pythagoras, who established mathematics in Greek culture during the sixth century BC, studied vibrating strings and musical sounds. He apparently discovered that dividing the length of a vibrating string in simple ratios produced consonant musical intervals. He is also said to have observed how the pitch of a string changed with tension and to have noted the tones generated by striking musical glasses, but these accounts are probably just legends [2.2].

Although the Greeks were certainly aware of the importance of good acoustical design in their many fine theaters, the Roman architect Vitruvius was the first to write about it in his monumental De Architectura, which includes a remarkable understanding and analysis of theater acoustics:

We must choose a site in which the voice may fall smoothly, and not be returned by reflection so as to convey an indistinct meaning to the ear.

Early Experiments on Vibrating Strings, Membranes and Plates

Much of the early acoustical investigation was closely tied to musical acoustics. Galileo reviewed the relationship of the pitch of a string to its vibrating length, and he related the number of vibrations per unit time to pitch. Joseph Sauveur made more-thorough studies of frequency in relation to pitch. The English mathematician Brook Taylor provided a dynamical solution for the frequency of a vibrating string based on the assumed curve for the shape of the string when vibrating in its fundamental mode. Daniel Bernoulli set up a partial differential equation for the vibrating string and obtained solutions which dʼAlembert interpreted as waves traveling in both directions along the string [2.3].

The first solution of the problem of vibrating membranes was apparently the work of S. D. Poisson, and the circular membrane was handled by R. F. A. Clebsch. Vibrating plates are somewhat more complex than vibrating membranes. In 1787 E. F. F. Chladni described his method of using sand sprinkled on vibrating plates to show nodal lines [2.4]. He observed that the addition of one nodal circle raised the frequency of a circular plate by about the same amount as adding two nodal diameters, a relationship that Lord Rayleigh called Chladniʼs law. Sophie Germain wrote a fourth-order equation to describe plate vibrations, and thus won a prize provided by the French emperor Napoleon, although Kirchhoff later gave a more accurate treatment of the boundary conditions. Rayleigh, of course, treated both membranes and plates in his celebrated book Theory of Sound [2.5].

Chladni generated his vibration patterns by strewing sand on the plate, which then collected along the nodal lines. Later he noticed that fine shavings from the hair of his violin bow did not follow the sand to the nodes, but instead collected at the antinodes. Savart noted the same behavior for fine lycopodium powder [2.6]. Michael Faraday explained this as being due to acoustic streaming [2.7]. Mary Waller published several papers and a book on Chladni patterns, in which she noted that particle diameter should exceed 100 μm in order to collect at the nodes [2.8]. Chladni figures of some of the many vibrational modes of a circular plate are shown in Fig. 2.1.

Fig. 2.1 Chladni patterns on a circular plate. The first four have two, three, four, and five nodal lines but no nodal circles; the second four have one or two nodal circles

Speed of Sound in Air

From earliest times, there was agreement that sound is propagated from one place to another by some activity of the air. Aristotle understood that there is actual motion of air, and apparently deduced that air is compressed. The Jesuit priest Athanasius Kircher was one of the first to try listening to a bell in a vacuum chamber, and since he could hear the bell he concluded that air was not necessary for the propagation of sound. Robert Boyle, however, repeated the experiment with a much improved pump and noted the now well-known decrease in sound intensity as the air is pumped out. We now know that sound propagates quite well in rarefied air, and that the decrease in intensity at low pressure is mainly due to the impedance mismatch between the source and the medium as well as the impedance mismatch at the walls of the container.

As early as 1635, Gassendi measured the speed of sound using firearms and assuming that the light of the flash is transmitted instantaneously. His value came out to be 478 m/s. Gassendi noted that the speed of sound did not depend on the pitch of the sound, contrary to the view of Aristotle, who had taught that high notes are transmitted faster than low notes. In a more careful experiment, Mersenne determined the speed of sound to be 450 m/s [2.9]. In 1650, G. A. Borelli and V. Viviani of the Accademia del Cimento of Florence obtained a value of 350 m/s for the speed of sound [2.10]. Another Italian, G. L. Bianconi, showed that the speed of sound in air increases with temperature [2.11].

The first attempt to calculate the speed of sound through air was apparently made by Sir Isaac Newton. He assumed that, when a pulse is propagated through a fluid, the particles of the fluid move in simple harmonic motion, and that if this is true for one particle, it must be true for all adjacent ones. The result is that the speed of sound is equal to the square root of the ratio of the atmospheric pressure to the density of the air. This leads to values that are considerably less than those measured by Newton (at Trinity College in Cambridge) and others.

In 1816, Pierre Simon Laplace suggested that in Newtonʼs and Lagrangeʼs calculations an error had been made in using for the volume elasticity of the air the pressure itself, which is equivalent to assuming the elastic motions of the air particles take place at constant temperature. In view of the rapidity of the motions, it seemed more reasonable to assume that the compressions and rarefactions follow the adiabatic law. The adiabatic elasticity is greater than the isothermal elasticity by a factor γ, which is the ratio of the specific heat at constant pressure to that at constant volume. The speed of sound should thus be given by c = (γp/ρ)^(1/2), where p is the pressure and ρ is the density. This gives much better agreement with experimental values [2.3].
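As an illustrative check with round modern values for air at 20 °C (these numbers are not from the original sources): taking p ≈ 1.013 × 10^5 Pa, ρ ≈ 1.20 kg/m^3, and γ = 1.40, Newtonʼs isothermal estimate gives c = (p/ρ)^(1/2) ≈ 290 m/s, whereas Laplaceʼs adiabatic correction gives c = (γp/ρ)^(1/2) ≈ 344 m/s, close to the measured speed of sound.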

Speed of Sound in Liquids and Solids

The first serious attempt to measure the speed of sound in a liquid was probably that of the Swiss physicist Daniel Colladon, who in 1826 conducted studies in Lake Geneva. In 1825, the Academy of Sciences in Paris had announced as the prize competition for 1826 the measurement of the compressibility of the principal liquids. Colladon measured the static compressibility of several liquids, and he decided to check the accuracy of his measurements by measuring the speed of sound, which depends on the compressibility. The compressibility of water computed from the speed of sound turned out to be very close to the statically measured values [2.12]. Oh yes, he won the prize from the Academy.

In 1808, the French physicist J. B. Biot measured the speed of sound in a 1000 m long iron water pipe in Paris by direct timing of the sound travel [2.13]. He compared the arrival times of the sound through the metal and through the air and determined that the speed is much greater in the metal. Chladni had earlier studied the speed of sound in solids by noting the pitch emanating from a struck solid bar, just as we do today. He deduced that the speed of sound in tin is about 7.5 times greater than in air, while in copper it was about 12 times greater. Biotʼs values for the speed in metals agreed well with Chladniʼs.

Determining Frequency

Much of the early research on sound was tied to musical sound. Vibrating strings, membranes, plates, and air columns were the bases of various musical instruments. Music emphasized the importance of ratios for the different tones. A string could be divided into halves or thirds or fourths to give harmonious pitches. It was also known that pitch is related to frequency. Marin Mersenne (1588–1648) was apparently the first to determine the frequency corresponding to a given pitch. By working with a long rope, he was able to determine how the frequency of a standing wave depends on the length, mass, and tension of the rope. He then used a short wire under tension, and from his rope formula he was able to compute its frequency of oscillation [2.14]. The relationship between pitch and frequency was later refined by Joseph Sauveur, who counted beats between two low-pitched organ pipes differing in pitch by a semitone. Sauveur [2.1] deduced that

the relation between sounds of low and high pitch is exemplified in the ratio of the numbers of vibrations which they both make in the same time.

He recognized that two sounds differing by a musical fifth have frequencies in the ratio 3:2. We have already commented that Sauveur was the first to apply the term acoustics to the science of sound [2.1]:

I have come then to the opinion that there is a science superior to music, and I call it acoustics; it has for its object sound in general, whereas music has for its objects sounds agreeable to the ear.

Tuning forks were widely used for determining pitch by the 19th century. Johann Scheibler (1777–1837) developed a tuning-fork tonometer which consisted of some 56 tuning forks. One was adjusted to the pitch of A above middle C, and another was adjusted by ear to be one octave lower. The others were then adjusted to differ successively by four vibrations per second above the lower A. Thus, he divided the octave into 55 intervals, each of about four vibrations per second. He then measured the number of beats in each interval, the sum total of such beats giving him the absolute frequency. He determined the frequency of the lower A to be 220 vibrations per second and the upper A to be 440 vibrations per second [2.15].
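The arithmetic behind Scheiblerʼs method is simple: the 55 intervals of four beats per second span a total of 55 × 4 = 220 vibrations per second between the lower and upper A. Since the upper A is exactly one octave (a factor of two) above the lower, this difference equals the frequency of the lower fork, giving 220 vibrations per second for the lower A and 440 for the upper.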

Felix Savart (1791–1841) used a rapidly rotating toothed wheel with 600 teeth to produce sounds of high frequency. He estimated the upper frequency threshold of hearing to be 24000 vibrations per second. Charles Wheatstone (1802–1875) pioneered the use of rapidly rotating mirrors to study periodic events. This technique was later used by Rudolph Koenig and others to study speech sounds.

Acoustics in the 19th Century

Acoustics really blossomed in the 19th century. It is impossible to describe more than a fraction of the significant work in acoustics during this century. We will try to provide a skeleton, at least, by mentioning the work of a few scientists. Especially noteworthy is the work of Tyndall, von Helmholtz, and Rayleigh, so we begin with them.

Tyndall

John Tyndall was born in County Carlow, Ireland in 1820. His parents were unable to finance any advanced education. After working at various jobs, he traveled to Marburg, Germany where he obtained a doctorate. He was appointed Professor of Natural Philosophy at the Royal Institution in London, where he displayed his skills in popular lecturing. In 1872 he made a lecture tour in the United States, which was a great success. His first lectures were on heat, and in 1863 these lectures were published under the title Heat as a Mode of Motion.

In 1867 he published his book On Sound with seven chapters. Later he added chapters on the transmission of sound through the atmosphere and on combinations of musical tones. In two chapters on vibrations of rods, plates, and bells, he notes that the longitudinal vibrations excited by rubbing a rod lengthwise with a cloth or leather treated with rosin are of higher frequency than the transverse vibrations. He discusses the determination of the waveform of musical sounds. By shining an intense beam of light on a mirror attached to a tuning fork and then onto a slowly rotating mirror, as Lissajous had done, he spread out the waveform of the oscillations.

Tyndall is well remembered for his work on the effect of fog on transmission of sound through the atmosphere. He had succeeded Faraday as scientific advisor to the Elder Brethren of Trinity House, which supervised lighthouses and pilots in England. When fog obscures the lights of lighthouses, ships depend on whistles, bells, sirens, and even gunfire for navigation warnings. In 1873 Tyndall began a systematic study of sound propagation over water in various weather conditions in the straits of Dover. He noted great inconsistencies in the propagation.

Helmholtz

Hermann von Helmholtz was educated in medicine. He had wanted to study physics, but his father could not afford to support him, and the Prussian government offered financial support for students of medicine who would sign up for an extended period of service with the military. He was assigned a post in Potsdam, where he was able to set up his own laboratory in physics and physiology. The brilliance of his work led to cancelation of his remaining years of army duty and to his appointment as Professor of Physiology at Königsberg. He gave up the practice of medicine and wrote papers on physiology, color perception, and electricity. His first important paper in acoustics appears to have been his On Combination Tones, published in 1856 [2.16].

His book On Sensations of Tone (1862) combines his knowledge of physiology, physics, and music. He worked with little more than a stringed instrument, tuning forks, his siren, and his famous resonators to show that pitch is due to the fundamental frequency but the quality of a musical sound is due to the presence of upper partials. He showed how the ear can separate out the various components of a complex tone. He concluded that the quality of a tone depends solely on the number and relative strength of its partial tones and not on their relative phase.

In order to study vibrations of violin strings and speech sounds, von Helmholtz invented a vibration microscope, which displayed Lissajous patterns of vibration. One lens of the microscope is attached to the prong of a tuning fork, so that a fixed spot appears to move up and down. A spot of luminous paint is then applied to the string, and a bow is drawn horizontally across the vertical string. The point on the horizontally vibrating violin string traces out a Lissajous pattern as it moves. By viewing the patterns for a bowed violin string, von Helmholtz was able to determine the actual motion of the string, and such motion is still referred to as Helmholtz motion.

Much of Helmholtzʼs book is devoted to discussion of hearing. Using a double siren, he studied difference tones and combination tones. He determined that beyond about 30 beats per second, the listener no longer hears individual beats but the tone becomes jarring or rough. He postulated that individual nerve fibers acted as vibrating strings, each resonating at a different frequency. Noting that skilled musicians

can distinguish with certainty a difference in pitch arising from half a vibration in a second in the doubly accented octave,

he concluded that some 1000 different pitches might be distinguished in the octave between 500 and 1000 cycles per second, and since there are 4500 nerve fibers in the cochlea, this represented about one fiber for each two cents of musical interval. He admitted, however,

that we cannot precisely ascertain what parts of the ear actually vibrate sympathetically with individual tones.

Rayleigh

Rayleigh was a giant. He contributed to so many areas of physics, and his contributions to acoustics were monumental. His book Theory of Sound still has an honored place on the desk of every acoustician (alongside von Helmholtzʼs book, perhaps). In addition to his book, he published some 128 papers on acoustics. He anticipated so many interesting things. I have sometimes made the statement that every time I have a good idea about sound Rayleigh steals it and puts it into his book.

John William Strutt, who was to become the third Baron Rayleigh, was born at the family estate in Terling, England in 1842. (Milk from the Rayleigh estate has supplied many families in London to this day.) He enrolled at Eton, but illness caused him to drop out, and he completed his schooling at a small academy in Torquay before entering Trinity College, Cambridge. His ill health may have been a blessing for the rest of the world. After nearly dying of rheumatic fever, he took a long cruise up the Nile river, during which he concentrated on writing his Theory of Sound.

Soon after he returned to England, his father died and he became the third Baron Rayleigh and inherited title to the estate at Terling, where he set up a laboratory. When James Clerk Maxwell died in 1879, Rayleigh was offered the position as Cavendish Professor of Physics at Cambridge. He accepted it, in large measure because there was an agricultural depression at the time and his farm tenants were having difficulties in making rent payments [2.15].

Rayleighʼs book and his papers cover such a wide range of topics in acoustics that it would be impractical to attempt to describe them here. His brilliant use of mathematics set the standard for subsequent writings on acoustics. The first volume of his book develops the theory of vibrations and its applications to strings, bars, membranes, and plates, while the second volume begins with aerial vibrations and the propagation of waves in fluids.

Rayleigh combined experimental work with theory in a very skillful way. Needing a way to determine the intensity of a sound source, he noted that a light disk suspended in a beam of sound tended to line up with its plane perpendicular to the direction of the fluid motion. The torque on the disk is proportional to the sound intensity. By suspending a light mirror in a sound field, he could therefore determine the sound intensity by means of a sensitive optical lever. The arrangement, known as a Rayleigh disk, is still used to measure sound intensity.

Another acoustical phenomenon that bears his name is the propagation of Rayleigh waves on the plane surface of an elastic solid. Rayleigh waves are observed on both large and small scales. Most of the shaking felt from an earthquake is due to Rayleigh waves, which can be much larger than the other seismic waves. Surface acoustic wave (SAW) filters and sensors make use of Rayleigh waves.

George Stokes

George Gabriel Stokes was born in County Sligo, Ireland in 1819. His father was a Protestant minister, and all of his brothers became priests. He was educated at Bristol College and Pembroke College, Cambridge. In 1841 he graduated as senior wrangler (the top First Class degree) in the mathematical tripos and he was the first Smithʼs prize man. He was awarded a Fellowship at Pembroke College and later appointed Lucasian professor of mathematics at Cambridge. The position paid rather poorly, however, so he accepted an additional position as professor of physics at the Government School of Mines in London.

William Hopkins, his Cambridge tutor, advised him to undertake research into hydrodynamics, and in 1842 he published a paper On the steady motion of incompressible fluids. In 1845 he published his classic paper On the theories of the internal friction of fluids in motion, which presents a three-dimensional equation of motion of a viscous fluid that has come to be known as the Navier–Stokes equation. Although he discovered that Navier, Poisson, and Saint-Venant had also considered the problem, he felt that his results were obtained with sufficiently different assumptions to justify publication. The Navier–Stokes equation of motion of a viscous, compressible fluid is still the starting point for much of the theory of sound propagation in fluids.

Alexander Graham Bell

Alexander Graham Bell was born in Edinburgh, Scotland in 1847. He taught music and elocution in Scotland before moving to Canada with his parents in 1868, and in 1871 he moved to Boston as a teacher of the deaf. In his spare time he worked on the harmonic telegraph, a device that would allow two or more electrical signals to be transmitted on the same wire. Throughout his life, Bell was interested in the education of deaf people, an interest that led him to invent the microphone and, in 1876, his electrical speech machine, which we now call a telephone. He was encouraged to work steadily on this invention by Joseph Henry, secretary of the Smithsonian Institution and a highly respected physicist and inventor.

Bellʼs telephone was a great financial, as well as technical, success. Bell set up a laboratory on his estate near Baddeck, Nova Scotia and continued to improve the telephone as well as to work on other inventions. The magnetic transmitter was replaced by Thomas Edisonʼs carbon microphone, the rights to which he obtained as a result of mergers and patent lawsuits [2.15].

Thomas Edison

The same year that Bell was born in Scotland (1847), Thomas A. Edison, the great inventor, was born in Milan, Ohio. At the age of 14 he published his own small newspaper, probably the first newspaper to be sold on trains. Also at 14 he contracted scarlet fever, which destroyed most of his hearing. His first invention was an improved stock ticker, for which he was paid $40000. Shortly after setting up a laboratory in Menlo Park, New Jersey, he invented (in 1877) the first phonograph. This was followed (in 1879) by the incandescent electric light bulb and a few years later by the Vitascope, which led to the first silent motion pictures. Other inventions included the dictaphone, mimeograph and storage battery.

The first published article on the phonograph appeared in Scientific American in 1877 after Edison visited the New York offices of the journal and demonstrated his machine. Later he demonstrated his machine in Washington for President Hayes, members of Congress and other notables. Many others made improvements to Edisonʼs talking machine, but the credit still goes to Edison for first showing that the human voice could be recorded for posterity.

In its founding year (1929), the Acoustical Society of America (ASA) made Thomas Edison an honorary fellow, an honor which was not again bestowed during the 20 years that followed.

Rudolph Koenig

Rudolph Koenig was born in Koenigsberg, Prussia (now Kaliningrad, Russia) in 1832 and attended the university there at a time when von Helmholtz was Professor of Physiology. A few years after taking his degree, Koenig moved to Paris, where he studied violin making under Vuillaume. He started his own business making acoustical apparatus, which he did with great care and talent. He devoted more than 40 years to making the best acoustical equipment of his day, many items of which are still in working order in museums and acoustics laboratories. Koenig, who never married, lived in the small front room of his Paris apartment, which was also his office and stock room, while the building and testing of instruments was done in the back rooms by Koenig and a few assistants. We will attempt to describe but a few of his acoustical instruments, but they have been well documented by Greenslade [2.17], Beyer [2.18], and others. The two largest collections of Koenig apparatus in North America are at the Smithsonian Institution and the University of Toronto.

Koenig made tuning forks of all sizes. A large 64 Hz fork formed the basis for a tuning-fork clock. A set of forks covering a range of frequencies in small steps was called a tonometer by Johann Scheibler. For his own use, Koenig made a tonometer consisting of 154 forks ranging in frequency from 16 to 21845 Hz. Many tuning forks were mounted on hollow wooden resonators. He made both cylindrical and spherical Helmholtz resonators of all sizes.

To his contemporaries, Koenig was probably best known for his invention (1862) of the manometric flame apparatus, shown in Fig. 2.2, which allowed the visualization of acoustic signals. The manometric capsule is divided into two parts by a thin flexible membrane. Sound waves are collected by a funnel, pass down the rubber tube, and cause the membrane to vibrate. Vibrations of the membrane cause a periodic change in the supply of gas to the burner, so the flame oscillates up and down at the frequency of the sound. The oscillating flame is viewed in the rotating mirror.

Fig. 2.2 Koenigʼs manometric flame apparatus. The image of the oscillating flame is seen in the rotating mirror (after [2.17])

Koenig made apparatus for both the Fourier analysis and the synthesis of sound. The Fourier analyzer included eight Helmholtz resonators, tuned to eight harmonics, which fed eight manometric flames; the heights of the eight flame images indicated the coefficients of the corresponding sinusoidal terms. The resonators could be retuned to different frequencies. At the 1876 exhibition the instrument was used to show eight harmonics of a sung vowel. The Fourier synthesizer had 10 electromagnetically driven tuning forks and 10 Helmholtz resonators; a hole in each resonator could be opened or closed by means of keys [2.17].
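A modern counterpart of Koenigʼs analyzer takes only a few lines of code: the relative strengths of the first eight harmonics of a periodic tone can be read off a Fourier transform much as Koenig read them off his flame images. The sketch below uses a synthetic, purely illustrative vowel-like signal rather than any historical data.

```python
import numpy as np

fs = 8000                        # sampling rate in Hz (illustrative)
f0 = 200.0                       # fundamental of the synthetic "vowel" (Hz)
t = np.arange(fs) / fs           # one second of signal, giving 1 Hz frequency resolution
# synthetic vowel-like tone: eight harmonics with decreasing amplitude
signal = sum((1.0 / n) * np.sin(2 * np.pi * n * f0 * t) for n in range(1, 9))

spectrum = np.abs(np.fft.rfft(signal)) / (fs / 2)   # scale so a unit-amplitude sine reads 1.0
for n in range(1, 9):
    print(f"harmonic {n}: amplitude {spectrum[int(n * f0)]:.3f}")
```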

The 20th Century

The history of acoustics in the 20th century could be presented in several ways. In his definitive history, Beyer [2.15] devotes one chapter to each quarter century, perhaps the most sensible way to organize the subject. One could divide the century at the year 1929, the year the Acoustical Society of America was founded. One of the events in connection with the 75th anniversary of this society was the publication of a snapshot history of the Society written by representatives from the 15 technical committees and edited by Henry Bass and William Cavanaugh [2.19]. Since we make no pretense of covering all areas of acoustics nor of reporting all acoustical developments in the 20th century, we will merely select a few significant areas of acoustics and try to discuss briefly some important developments in these. For want of other criteria, we have selected nine areas of acoustics that correspond to technical committees in the Acoustical Society of America.

Architectural Acoustics

Wallace Clement Sabine (1868–1919) is generally considered to be the father of architectural acoustics. He was the first to make quantitative measurements on the acoustics of rooms. His discovery that the product of the total absorption and the duration of residual sound is a constant still forms the basis of sound control in rooms. His pioneering work was not done entirely by choice, however. As a 27-year-old professor at Harvard University, he was assigned by the president of the university to determine corrective measures for the lecture room at Harvardʼs Fogg Art Museum. As he begins his famous paper on reverberation [2.20]:

The following investigation was not undertaken at first by choice but devolved on the writer in 1895 through instructions from the Corporation of Harvard University to propose changes for remedying the acoustical difficulties in the lecture-room of the Fogg Art Museum, a building that had just been completed.

Sabine determined the reverberation time in the Fogg lecture room by using an organ pipe and a chronograph. He found the reverberation time in the empty room to be 5.62 seconds. Then he started adding seat cushions from the Sanders Theater and measuring the resulting reverberation times. He developed an empirical formula T = 0.164 V/A, where T is the reverberation time in seconds, V is the room volume in cubic meters, and A is the total absorption: the average absorption coefficient times the total area (in square meters) of the walls, ceiling and floor. This formula is still called the Sabine reverberation formula.
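A minimal sketch of how the Sabine formula is applied today, in SI units and with the now-standard constant 0.161 (the room data below are illustrative, not Sabineʼs measurements):

```python
def sabine_rt60(volume_m3, surface_areas_m2, absorption_coeffs):
    """Sabine reverberation time T = 0.161 V / A in SI units.

    volume_m3:         room volume in cubic meters
    surface_areas_m2:  list of surface areas in square meters
    absorption_coeffs: matching list of average absorption coefficients
    """
    total_absorption = sum(s * a for s, a in zip(surface_areas_m2, absorption_coeffs))
    return 0.161 * volume_m3 / total_absorption

# Illustrative hall: 2740 m^3, mostly hard surfaces plus some absorptive seating
print(sabine_rt60(2740, [1300, 200], [0.03, 0.30]))  # about 4.5 s, a very live room
```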

Following his success with the Fogg lecture room, Sabine was asked to come up with acoustical specifications for the New Boston Music Hall, now known as Symphony Hall, which would ensure hearing superb music from every seat. Sabine answered with a shoebox shape for the building to keep out street noise. Then, using his mathematical formula for reverberation time, Sabine carefully adjusted the spacing between the rows of seats, the slant of the walls, the shape of the stage, and materials used in the walls to produce the exquisite sound heard today at Symphony Hall (Fig. 2.3).

Fig. 2.3 Interior of Symphony Hall in Boston whose acoustical design by Wallace Clement Sabine set a standard for concert halls

Vern Knudsen (1893–1974), physicist at the University of California, Los Angeles (UCLA) and third president of the Acoustical Society of America, was one of many persons who contributed to architectural acoustics in the first half of the 20th century. His collaboration with Hans Kneser of Germany led to an understanding of molecular relaxation phenomena in gases and liquids. In 1932 he published a book on Architectural Acoustics [2.21], and in 1950, the book Acoustical Designing in Architecture with Cyril Harris [2.22], which summarized most of what was known about the subject by the middle of the century.

In the mid 1940s Richard Bolt, a physicist at the Massachusetts Institute of Technology (MIT), was asked by the United Nations (UN) to design the acoustics for one of the UNʼs new buildings. Realizing the work that was ahead of him, he asked Leo Beranek to join him. At the same time they hired another MIT professor, Robert Newman, to help with the UN work; together they formed the firm of Bolt, Beranek, and Newman (BBN), which was to become one of the foremost architectural consulting firms in the world. This firm has provided acoustical consultation for a number of notable concert halls, including Avery Fisher Hall in New York, the Koussevitzky Music Shed at Tanglewood, Davies Symphony Hall in San Francisco, Roy Thomson Hall in Toronto, and the Center for the Performing Arts in Tokyo [2.23]. They are also well known for their efforts in pioneering the ARPANET, forerunner of the Internet.

Recipients of the Wallace Clement Sabine Medal for accomplishments in architectural acoustics include Vern Knudsen, Floyd Watson, Leo Beranek, Erwin Meyer, Hale Sabine, Lothar Cremer, Cyril Harris, Thomas Northwood, Richard Waterhouse, Harold Marshall, Russell Johnson, Alfred Warnock, William Cavanaugh, John S. Bradley, and Christopher Jaffe. The silver medal in architectural acoustics was awarded to Theodore Schultz. The work of each of these distinguished acousticians could be a chapter in the history of acoustics, but space does not allow it.

Physical Acoustics

Although all of acoustics, the science of sound, incorporates the laws of physics, we usually think of physical acoustics as being concerned with fundamental acoustic wave propagation phenomena, including transmission, reflection, refraction, interference, diffraction, scattering, absorption, and dispersion of sound, as well as with the use of acoustics to study the physical properties of matter and to produce changes in these properties. The foundations for physical acoustics were laid by such 19th century giants as von Helmholtz, Rayleigh, Tyndall, Stokes, Kirchhoff, and others.

Ultrasonic waves, sound waves with frequencies above the range of human hearing, have attracted the attention of many physicists and engineers in the 20th century. An early source of ultrasound was the Galton whistle, used by Francis Galton to study the upper threshold of hearing in animals. More powerful sources of ultrasound followed the discovery of the piezoelectric effect in crystals by Jacques and Pierre Curie. They found that applying an electric field to the plates of certain natural crystals such as quartz produced changes in thickness. Later in the century, highly efficient ceramic piezoelectric transducers were used to produce high-intensity ultrasound in solids, liquids, and gases.

Probably the most important use of ultrasound nowadays is in ultrasonic imaging, in medicine (sonograms) as well as in the ocean (sonar). Ultrasonic waves are used in many medical diagnostic procedures. They are directed toward a patientʼs body and reflected when they reach boundaries between tissues of different densities. These reflected waves are detected and displayed on a monitor. Ultrasound can also be used to detect malignancies and hemorrhaging in various organs. It is also used to monitor real-time movement of heart valves and large blood vessels. Air, bone, and other calcified tissues absorb most of the ultrasound beam; therefore this technique cannot be used to examine the bones or the lungs.

The father of sonar (sound navigation and ranging) was Paul Langevin, who used active echo ranging sonar at about 45 kHz to detect mines during World War I. Sonar is used to explore the ocean and study marine life in addition to its many military applications [2.24]. New types of sonar include synthetic aperture sonar for high-resolution imaging using a moving hydrophone array and computed angle-of-arrival transient imaging (CAATI).
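The principle of active echo ranging is simply that the range follows from half the round-trip travel time of the echo. The sketch below assumes a nominal sound speed of 1500 m/s in seawater, and its numbers are illustrative only.

```python
def echo_range(round_trip_time_s, sound_speed_m_s=1500.0):
    """Range to a target from the round-trip time of an active sonar ping.

    The factor of two accounts for the out-and-back path; 1500 m/s is a
    typical nominal sound speed in seawater.
    """
    return sound_speed_m_s * round_trip_time_s / 2.0

print(echo_range(0.4))  # an echo returning after 0.4 s puts the target about 300 m away
```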

Infrasonic waves, which have frequencies below the range of human hearing, have been less frequently studied than ultrasonic waves. Natural phenomena are prodigious generators of infrasound. When the volcano Krakatoa exploded, windows were shattered hundreds of miles away by the infrasonic wave. The ringing of both earth and atmosphere continued for hours. Even the highest-pitched components of this natural explosion are believed to have been infrasonic, with its dominant components at immeasurably low frequencies. Infrasound from large meteoroids that enter our atmosphere has very large amplitudes, even great enough to break glass windows [2.25]. Ultralow-pitch earthquake sounds are keenly felt by animals and sensitive humans. Quakes occur in distinct stages. Long before the final breaking release of built-up earth tensions, there are numerous brief precursory shocks. Deep shocks produce strong infrasonic impulses that propagate up to the surface as the ground strata heave. Certain animals, such as fish, can actually hear these infrasonic precursors.

Aeroacoustics, a branch of physical acoustics, is the study of sound generated by (or in) flowing fluids. The mechanism for sound or noise generation may be due to turbulence in flows, resonant effects in cavities or wave-guides, vibration of boundaries of structures etc. A flow may alter the propagation of sound and boundaries can lead to scattering; both features play a significant part in altering the noise received at a particular observation point. A notable pioneer in aeroacoustics was Sir James Lighthill (1924–1998), whose analyses of the sounds generated in a fluid by turbulence have had appreciable importance in the study of nonlinear acoustics. He identified quadrupole sound sources in the inhomogeneities of turbulence as a major source of the noise from jet aircraft engines, for example [2.26].

There are several sources of nonlinearity when sound propagates through gases, liquids, or solids. At least since the time of Stokes, it has been known that in fluids compressions propagate slightly faster than rarefactions, which leads to distortion of the wave front and even to the formation of shock waves. Richard Fay (1891–1964) noted that the waveform takes on the shape of a sawtooth. In 1935 Eugene Fubini-Ghiron demonstrated that the pressure in a nondissipative fluid can be expressed as an infinite series in the harmonics of the original signal [2.15]. Several books treat nonlinear sound propagation, including those by Beyer [2.27] and by Hamilton and Blackstock [2.28].
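In modern notation (not Fubiniʼs original), his result for an initially sinusoidal plane wave of amplitude p_0 is usually written, for distances before shock formation (σ = x/x̄ ≤ 1, where x̄ is the shock-formation distance), as

p(x, τ) = p_0 Σ_{n=1}^{∞} [2/(nσ)] J_n(nσ) sin(nωτ) ,

where J_n is the Bessel function of order n and τ is the retarded time; the Bessel-function coefficients describe the growth of each harmonic with distance.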

Measurements of sound propagation in liquid helium have led to our basic understanding of cryogenics and also to several surprises. The attenuation of sound shows a sharp peak near the so-called lambda point at which helium takes on superfluid properties. This behavior was explained by Lev Landau (1908–1968) and others. Second sound, the propagation of waves consisting of periodic oscillations of temperature and entropy, was discovered in 1944 by V. O. Peshkov. Third sound, a surface wave of the superfluid component, was reported in 1958, whereas fourth sound was discovered in 1962 by K. A. Shapiro and Isadore Rudnick. Fifth sound, a thermal wave, has also been reported, as has zero sound [2.15].

While there are a number of ways in which light can interact with sound, the term optoacoustics typically refers to sound produced by high-intensity light from a laser. The optoacoustic (or photoacoustic) effect is characterized by the generation of sound through the interaction of electromagnetic radiation with matter. Absorption of single laser pulses in a sample can effectively generate optoacoustic waves through the thermoelastic effect. After absorption of a short pulse, the heated region thermally expands, creating a mechanical disturbance that propagates into the surrounding medium as a sound wave. The waves are recorded at the surface of the sample with broadband ultrasound transducers.

Sonoluminescence uses sound to produce light. Sonoluminescence, the emission of light by bubbles in a liquid excited by sound, was discovered by H. Frenzel and H. Schultes in 1934, but was not considered very interesting at the time. A major breakthrough occurred when Felipe Gaitan and his colleagues were able to produce single-bubble sonoluminescence, in which a single bubble, trapped in a standing acoustic wave, emits light with each pulsation [2.29].

The wavelength of the emitted light is very short, with the spectrum extending well into the ultraviolet. The observed spectrum of emitted light seems to indicate a temperature in the bubble of at least 10000 °C, and possibly a temperature in excess of one million degrees C. Such a high temperature makes the study of sonoluminescence especially interesting for the possibility that it might be a means to achieve thermonuclear fusion. If the bubble is hot enough, and the pressures in it high enough, fusion reactions like those that occur in the Sun could be produced within these tiny bubbles.

When sound travels in small channels, oscillating heat also flows to and from the channel walls, leading to a rich variety of thermoacoustic effects. In 1980, Nicholas Rott developed the mathematics describing acoustic oscillations in a gas in a channel with an axial temperature gradient, a problem investigated by Rayleigh and Kirchhoff without much success [2.30]. Applying Rottʼs mathematics, Hofler et al. invented a standing-wave thermoacoustic refrigerator in which the coupled oscillations of gas motion, temperature, and heat transfer in the sound wave are phased so that heat is absorbed at low temperature and waste heat is rejected at higher temperature [2.31].

Recipients of the ASA silver medal in physical acoustics since it was first awarded in 1975 have included Isadore Rudnick, Martin Greenspan, Herbert McSkimin, David Blackstock, Mack Breazeale, Allan Pierce, Julian Maynard, Robert Apfel, Gregory Swift, Philip Marston, Henry Bass, Peter Westervelt, and Andrea Prosperetti.

Engineering Acoustics

It is virtually impossible to amplify sound waves. Electrical signals, on the other hand, are relatively easy to amplify. Thus a practical system for amplifying sound includes input and output transducers, together with the electronic amplifier. Transducers have occupied a central role in engineering acoustics during the 20th century.

The transducers in a sound amplifying system are microphones and loudspeakers. The first microphones were Bellʼs magnetic transmitter and the loosely packed carbon microphones of Edison and Berliner. A great step forward in 1917 was the invention of the condenser microphone by Edward Wente (1889–1972). In 1962, James West and Gerhard Sessler invented the foil electret or electret condenser microphone, which has become the most ubiquitous microphone in use. It can be found in everything from telephones to childrenʼs toys to medical devices. Nearly 90% of the approximately one billion microphones manufactured annually are electret designs.

Ernst W. Siemens was the first to describe the dynamic or moving-coil loudspeaker, with a circular coil of wire in a magnetic field and supported so that it could move axially. John Stroh first described the conical paper diaphragm that terminated at the rim of the speaker in a section that was flat except for corrugations. In 1925, Chester W. Rice and Edward W. Kellogg at General Electric established the basic principle of the direct-radiator loudspeaker with a small coil-driven mass-controlled diaphragm in a baffle with a broad mid-frequency range of uniform response. In 1926, Radio Corporation of America (RCA) used this design in the Radiola line of alternating current (AC)-powered radios. In 1943 James Lansing introduced the Altec-Lansing 604 duplex radiator which combined an efficient 15 inch woofer with a high-frequency compression driver and horn [2.32].

In 1946, Paul Klipsch introduced the Klipschorn, a corner-folded horn that made use of the room boundaries themselves to achieve efficient radiation at low frequency. In the early 1940s, the Jensen company popularized the vented box or bass reflex loudspeaker enclosure. In 1951, specific loudspeaker driver parameters and appropriate enclosure alignments were described by Neville Thiele and later refined by Richard Small. Thiele–Small parameters are now routinely published by loudspeaker manufacturers and used by professionals and amateurs alike to design vented enclosures [2.33].

The Audio Engineering Society was formed in 1948, the same year the microgroove 33 1/3 rpm long-play vinyl record (LP) was introduced by Columbia Records. The founding of this new society had the unfortunate effect of distancing engineers primarily interested in audio from the rest of the acoustics engineering community.

Natural piezoelectric crystals were used to generate sound waves for underwater signaling and for ultrasonic research. In 1917 Paul Langevin obtained a large crystal of natural quartz from which 10 × 10 × 1.6 cm slices could be cut. He constructed a transmitter that sent out a beam powerful enough to kill fish in its near field [2.15]. After World War II, materials such as potassium dihydrogen phosphate (KDP), ammonium dihydrogen phosphate (ADP) and barium titanate replaced natural quartz in transducers. There are several piezoelectric ceramic compositions in common use today: barium titanate, lead zirconate titanate (PZT) and modified iterations such as lead lanthanum zirconate titanate (PLZT), lead metaniobate and lead magnesium niobate (PMN, including electrostrictive formulations). The PZT compositions are the most widely used in applications involving light shutters, micro-positioning devices, speakers and medical array transducers.

Recipients of the ASA silver medal in engineering acoustics have included Harry Olson, Hugh Knowles, Benjamin Bauer, Per Bruel, Vincent Salmon, Albert Bodine, Joshua Greenspon, Alan Powell, James West, Richard Lyon, John Bouyoucos, Allan Zuckerwar, and Gary Elko. Interdisciplinary medals have gone to Victor Anderson, Steven Garrett, and Gerhard Sessler.

Structural Acoustics

The vibrations of solid structures were discussed at some length by Rayleigh, Love, Timoshenko, Clebsch, Airey, Lamb, and others during the 19th and early 20th centuries. Nonlinear vibrations were considered by G. Duffing in 1918. R. N. Arnold and G. B. Warburton solved the complete boundary-value problem of the free vibration of a finite cylindrical shell. Significant advances have been made in our understanding of the radiation, scattering, and response of fluid-loaded elastic plates by G. Maidanik, E. Kerwin, M. Junger, and D. Feit.

Statistical energy analysis (SEA), championed by Richard Lyon and Gideon Maidanik, had its beginnings in the early 1960s. In the 1980s, Christian Soize developed the fuzzy structure theory to predict the mid-frequency dynamic response of a master structure coupled with a large number of complex secondary subsystems. The structural and geometric details of the latter are not well defined and are therefore labeled as fuzzy.

A number of good books have been written on the vibrations of simple and complex structures. Especially noteworthy, in my opinion, are books by Cremer et al. [2.34], Junger and Feit [2.35], Leissa [2.36], [2.37], and Skudrzyk [2.38]. Statistical energy analysis is described by Lyon [2.39]. Near-field acoustic holography, developed by Jay Maynard and Earl Williams, uses pressure measurements in the near field of a vibrating object to determine the source distribution on its vibrating surface [2.40]. A near-field acoustic hologram of a rectangular plate driven at a point is shown in Fig. 2.4.
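At the heart of planar near-field acoustic holography is a k-space back-propagation of the measured pressure toward the source. The following is only a minimal sketch of that step, assuming a complex pressure field sampled on a regular grid parallel to the source plane; the grid spacing, the single-frequency formulation, and the crude evanescent-wave cutoff are illustrative choices, not the specific procedure of [2.40].

```python
import numpy as np

def nah_backpropagate(p_holo, dx, freq, dz, c=343.0):
    """Back-propagate a measured pressure hologram toward the source plane.

    p_holo: complex pressure on the hologram plane, shape (Ny, Nx)
    dx:     grid spacing in meters (assumed equal in x and y)
    freq:   analysis frequency in Hz
    dz:     distance (m) from the hologram plane back toward the source plane
    """
    ny, nx = p_holo.shape
    k = 2.0 * np.pi * freq / c
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))  # imaginary for evanescent waves

    P = np.fft.fft2(p_holo)              # angular-spectrum (k-space) representation
    P = P * np.exp(-1j * kz * dz)        # inverse propagator: evanescent parts are amplified
    P[KX**2 + KY**2 > (1.5 * k)**2] = 0  # crude k-space filter to keep that growth in check
    return np.fft.ifft2(P)               # reconstructed pressure closer to the source
```

In practice the simple cutoff above is replaced by a smooth, regularized k-space filter, since measurement noise in the evanescent components grows exponentially during back-propagation.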

Fig. 2.4 Near-field hologram of pressure near a rectangular plate driven at 1858 Hz at a point (courtesy of Earl Williams)

The ASA has awarded its Trent–Crede medal, which recognizes accomplishment in shock and vibration, to Carl Vigness, Raymond Mindlin, Elias Klein, J. P. Den Hartog, Stephen Crandall, John Snowdon, Eric Ungar, Miguel Junger, Gideon Maidanik, Preston Smith, David Feit, Sabih Hayek, Jerry Ginsberg, and Peter Stepanishen.

Underwater Acoustics

The science and technology of underwater sound in the 20th century are based on the remarkable tools of transduction that the 19th century gave us. They were partly motivated by the two world wars and the cold war and the threats raised by submarines and underwater mines. Two nonmilitary commercial fields that have been important driving forces in underwater acoustics are geophysical prospecting and fishing. Oil extracted from beneath the seafloor now accounts for 25% of our total supply [2.41].

Essential to understanding underwater sound propagation is detailed knowledge of the speed of sound in the sea. In 1924, Heck and Service published tables on the dependence of sound speed on temperature, salinity, and pressure [2.42]. Summer conditions, with strong solar heating and a warm atmosphere, give rise to sound speeds that are higher near the surface and decrease with depth, while winter conditions, with cooling of the surface, reverse the temperature gradient. Thus, sound waves will bend downward under summer conditions and upward in the winter.
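For a modern sense of these dependences, a widely used simplified formula due to Medwin (quoted here only for illustration; it is not the Heck and Service tabulation) gives the sound speed from temperature, salinity, and depth:

```python
def sound_speed_seawater(T, S, z):
    """Medwin's simplified formula for the speed of sound in seawater (m/s).

    T: temperature in degrees C, S: salinity in parts per thousand, z: depth in m.
    Quoted for illustration; intended for typical oceanic conditions.
    """
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

print(sound_speed_seawater(T=10.0, S=35.0, z=100.0))  # about 1492 m/s
```

Because the speed increases with temperature, a warm surface layer refracts sound downward, as described above.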

Submarine detection can be either passive (listening to the sounds made by the submarine) or active (transmitting a signal and listening for the echo). Well into the 1950s, both the United States and United Kingdom chose active high-frequency systems, since passive systems at that time were limited by the shipʼs radiated noise and the self noise of the arrays. During World War II, underwater acoustics research results were secret, but at the end of the war, the National Defense Research Council (NDRC) published the results. The Sub-Surface Warfare Division alone produced 22 volumes [2.41]. Later the NDRC was disbanded and projects were transferred to the Navy (some reports have been published by IEEE).

The absorption in seawater was found to be much higher than predicted by classical theory. O. B. Wilson and R. W. Leonard concluded that this was due to a relaxation process associated with magnesium sulfate, which is present in low concentration in the sea [2.43]. Ernest Yeager and Fred Fisher found that boric acid in small concentrations exhibits a relaxation frequency near 1 kHz. In 1950 the papers of Tolstoy and Clay discussed propagation in shallow water. At the Scripps Institution of Oceanography, Fred Fisher and Vernon Simmons made resonance measurements of seawater in a 200 l glass sphere over a wide range of frequencies and temperatures, confirming the earlier results and improving the empirical absorption equation [2.41].
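The two relaxation processes appear explicitly in the empirical absorption formulas that grew out of this work. Thorpʼs often-quoted low-frequency approximation, given here only as an illustration (with f in kHz and α in dB/km; it is not the specific equation of [2.41]), is roughly

α ≈ 0.11 f²/(1 + f²) + 44 f²/(4100 + f²) + 2.75 × 10⁻⁴ f² + 0.003 ,

where the first term reflects the boric acid relaxation near 1 kHz, the second the magnesium sulfate relaxation, and the third the classical (viscous) absorption.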

Ambient noise in the sea is due to a variety of causes, such as ships, marine mammals, snapping shrimp, and dynamic processes in the sea itself. Early measurements of ambient noise, made under the direction of Vern Knudsen, came to be known as the Knudsen curves. Wittenborn made measurements with two hydrophones, one in the sound channel and the other below it. A comparison of the noise levels showed about a 20 dB difference over the low-frequency band but little difference at high frequency. It has been suggested that a source of low-frequency noise is the collective oscillation of bubble clouds.

The ASA pioneers medal in underwater acoustics has been awarded to Harvey Hayes, Albert Wood, Warren Horton, Frederick Hunt, Harold Saxton, Carl Eckart, Claude Horton, Arthur Williams, Fred Spiess, Robert Urick, Ivan Tolstoy, Homer Bucker, William Kuperman, Darrell Jackson, Frederick Tappert, Henrik Schmidt, William Carey, and George Frisk.

Physiological and Psychological Acoustics

Physiological acoustics deals with the peripheral auditory system, including the cochlear mechanism, stimulus encoding in the auditory nerve, and models of auditory discrimination.

This field of acoustics probably owes more to Georg von Békésy (1899–1972) than to any other person. Born in Budapest, he worked for the Hungarian Telephone Co., the University of Budapest, the Karolinska Institute in Stockholm, Harvard University, and the University of Hawaii. In 1962 he was awarded the Nobel prize in physiology or medicine for his research on the ear. He determined the static and dynamic properties of the basilar membrane, and he built a mechanical model of the cochlea. He was probably the first person to observe eddy currents in the fluid of the cochlea. Jozef Zwislocki (1922–) reasoned that the existence of such fluid motions would inevitably lead to nonlinearities, although Helmholtz had pretty much assumed that the inner ear was a linear system [2.44].

In 1971 William Rhode succeeded in making measurements on a live cochlea for the first time. Using the Mössbauer effect to measure the velocity of the basilar membrane, he made a significant discovery. The frequency tuning was far sharper than that reported for dead cochleae. Moreover, the response was highly nonlinear, with the gain increasing by orders of magnitude at low sound levels. There is an active amplifier in the cochlea that boosts faint sounds, leading to a strongly compressive response of the basilar membrane. The work of Peter Dallos, Bill Brownell, and others identified the outer hair cells as the cochlear amplifiers [2.45].

It is possible, by inserting a tiny electrode into the auditory nerve, to pick up the electrical signals traveling in a single fiber of the auditory nerve from the cochlea to the brain. Each auditory nerve fiber responds over a certain range of frequency and pressure. Nelson Kiang and others have determined that the tuning curve of each fiber shows a maximum in sensitivity at a characteristic frequency. Within several hours after death, the basilar membrane response decreases, the frequency of maximum response shifts down, and the response curve broadens.

Psychological acoustics or psychoacoustics deals with subjective attributes of sound, such as loudness, pitch, and timbre and how they relate to physically measurable quantities such as the sound level, frequency, and spectrum of the stimulus.

At the Bell Telephone Laboratories, Harvey Fletcher, first president of the Acoustical Society of America, and W. A. Munson determined contours of equal loudness by having listeners compare a large number of tones to pure tones of 1000 Hz. These contours of equal loudness came to be labeled by an appropriate number of phons. S. S. Stevens is responsible for the loudness scale of sones and for ways to calculate the loudness in sones. His proposal to express pitch in mels was not as widely adopted, however, probably because musicians and others prefer to express pitch in terms of the musical scale.
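Stevensʼ sone scale is tied to the phon scale by a simple rule: loudness doubles for every 10 phon increase above 40 phons. In the usual modern formulation (quoted here for illustration), the loudness of a tone with loudness level L_N phons is

N = 2^((L_N − 40)/10) sones ,

so a 1000 Hz tone at 40 phons has a loudness of 1 sone, at 50 phons 2 sones, and at 60 phons 4 sones.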

The threshold for detecting pure tones is mostly determined by the sound transmission through the outer and middle ear; to a first approximation the inner ear (the cochlea) is equally sensitive to all frequencies, except the very highest and lowest. In 1951, J. C. R. Licklider (1915–1990), who is well known for his work on developing the Internet, put the results of several hearing surveys together [2.46]. Masking of one tone by another was discussed in a classic paper by Wegel and Lane who showed that low-frequency tones can mask higher-frequency tones better than the reverse [2.47].

Two major theories of pitch perception gradually developed on the basis of experiments in many laboratories. They are usually referred to as the place (or frequency) theory and the periodicity (or time) theory. By observing wavelike motions of the basilar membrane caused by sound stimulation, Békésy provided support for the place theory. In the late 1930s, however, J. F. Schouten and his colleagues performed pitch-shift experiments that provided support for the periodicity theory of pitch. Modern theories of pitch perception often combine elements of both of these [2.48].

Recipients of the ASA von Békésy medal have been Jozef Zwislocki, Peter Dallos, Murray Sachs, William Rhode, and Charles Liberman, while the silver medal in psychological and physiological acoustics has been awarded to Lloyd Jeffress, Ernest Wever, Eberhard Zwicker, David Green, Nathaniel Durlach, Neal Viemeister, Brian Moore, Steven Colburn, and William Yost.

Speech

The production, transmission, and perception of speech have always played an important role in acoustics. Harvey Fletcher published his book Speech and Hearing in 1929, the same year as the first meeting of the Acoustical Society of America. The first issue of the Journal of the Acoustical Society of America included papers on speech by G. Oscar Russell, Vern Knudsen, Norman French and Walter Koenig, Jr.

In 1939, Homer Dudley invented the vocoder, a system in which speech was analyzed into component parts consisting of the fundamental (pitch) frequency of the voice, the noise component, and the intensities of the speech in a series of band-pass filters. This machine, which was demonstrated at the New York Worldʼs Fair, could speak simple phrases.

An instrument that is particularly useful for speech analysis is the sound spectrograph, originally developed at the Bell Telephone Laboratories around 1945. This instrument records a time–frequency plot of a brief sample of speech, in which the sound level is represented by the degree of blackness, as shown in Fig. 2.5. Digital versions of the sound spectrograph are used these days, but the display format is similar to that of the original machine.
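Digital spectrograms of the kind shown in Fig. 2.5 can be produced with standard signal-processing tools; the following is a minimal sketch, in which the file name and analysis parameters are illustrative rather than taken from any particular study.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs, speech = wavfile.read("i_can_see_you.wav")   # illustrative file name
if speech.ndim > 1:
    speech = speech.mean(axis=1)                 # mix to mono if the recording is stereo

# short analysis windows (about 5 ms) give the wideband display of a classic spectrograph
f, t, Sxx = spectrogram(speech, fs=fs, nperseg=int(0.005 * fs))
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), cmap="gray_r")  # darker = more intense
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```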

Fig. 2.5 Speech spectrogram of a simple sentence (I can see you) recorded on a sound spectrograph

Phonetic aspects of speech research blossomed at the Bell Telephone Laboratories and elsewhere in the 1950s. Gordon Peterson and his colleagues produced several studies of vowels. Gunnar Fant published a complete survey of the field in Acoustic Theory of Speech Production [2.49]. The pattern playback, developed at Haskins Laboratories, dominated early research using synthetic speech in the United States. James Flanagan (1925–) demonstrated the importance of fluid dynamics in analyzing the behavior of the glottis. Kenneth Stevens and Arthur House noted that the bursts of air from the glottis had a triangular waveform that led to a rich spectrum of harmonics.

Speech synthesis and automatic recognition of speech have been important topics in speech research. Dennis Klatt (1938–1988) developed a system for synthesizing speech, and shortly before his death he presented to the ASA the first paper delivered entirely in completely intelligible synthesized speech [2.50]. Fry and Denes [2.51] constructed a system in which speech was fed into an acoustic recognizer that compares

the changing spectrum of the speech wave with certain reference patterns and indicates the occurrence of the phoneme whose reference pattern best matches that of the incoming wave.

Recipients of the ASA silver medal in speech communication have included Franklin Cooper, Gunnar Fant, Kenneth Stevens, Dennis Klatt, Arthur House, Peter Ladefoged, Patricia Kuhl, Katherine Harris, Ingo Titze, Winifred Strange, and David Pisoni.

Musical Acoustics

Musical acoustics deals with the production of musical sound, its transmission to the listener, and its perception. Thus this interdisciplinary field overlaps architectural acoustics, engineering acoustics, and psychoacoustics. The study of the singing voice also overlaps the study of speech. In recent years, the scientific study of musical performance has also been included in musical acoustics.

Because the transmission and perception of sound have already been discussed, we will concentrate on the production of musical sound by musical instruments, including the human voice. It is convenient to classify musical instruments into families in accordance with the way they produce sound: string, wind, percussion, and electronic.

Bowed string instruments were probably the first to attract the attention of scientific researchers. The modern violin was developed largely in Italy in the 16th century by Gasparo da Salò and the Amati family. In the 18th century, Antonio Stradivari, a pupil of Nicolo Amati, and Giuseppe Guarneri created instruments with great brilliance that have set the standard for violin makers since that time. Outstanding contributions to our understanding of violin acoustics have been made by Felix Savart, Hermann von Helmholtz, Lord Rayleigh, C. V. Raman, Frederick Saunders, and Lothar Cremer, all of whom also distinguished themselves in fields other than violin acoustics. In more recent times, the work of Professor Saunders has been continued by members of the Catgut Acoustical Society, led by Carleen Hutchins. This work has made good use of modern tools such as computers, holographic interferometers, and fast Fourier transform (FFT) analyzers. One noteworthy product of modern violin research has been the development of an octet of scaled violins, covering the full range of musical performance.

The piano, invented by Bartolomeo Cristofori in 1709, is one of the most versatile of all musical instruments. One of the foremost piano researchers of our time is Harold Conklin. After he retired from the Baldwin Piano Co., he published a series of three papers in JASA (J. Acoustical Society of America) that could serve as a textbook for piano researchers [2.52]. Gabriel Weinreich explained the behavior of coupled piano strings and the aftersound which results from this coupling. Others who have contributed substantially to our understanding of piano acoustics are Anders Askenfelt, Eric Jansson, Juergen Meyer, Klaus Wogram, Ingolf Bork, Donald Hall, Isao Nakamura, Hideo Suzuki, and Nicholas Giordano. Many other string instruments have been studied scientifically, but space does not allow a discussion of their history here.

Pioneers in the study of wind instruments included Arthur Benade (1925–1987), John Backus (1911–1988), and John Coltman (1915–). Backus, a research physicist, studied both brass and woodwind instruments, especially the nonlinear flow control properties of woodwind reeds. He improved the capillary method for measuring input impedance of air columns, and he developed synthetic reeds for woodwind instruments. Benadeʼs extensive work led to greater understanding of mode conversion in flared horns, a model of woodwind instrument bores based on the acoustics of a lattice of tone holes, characterization of wind instruments in terms of cutoff frequencies, and radiation from brass and woodwind instruments. His two books Horns, Strings and Harmony and Fundamentals of Musical Acoustics have both been reprinted by Dover Books. Coltman, a physicist and executive at the Westinghouse Electric Corporation, devoted much of his spare time to the study of the musical, historical, and acoustical aspects of the flute and organ pipes. He collected more than 200 instruments of the flute family, which he used in his studies. More recently, flutes, organ pipes, and other wind instruments have been studied by Neville Fletcher and his colleagues in Australia.

The human voice is our oldest musical instrument, and its acoustics has been extensively studied by Johan Sundberg and colleagues in Stockholm. A unified discussion of speech and the singing voice appears in this handbook.

The acoustics of percussion instruments from many different countries have been studied by Thomas Rossing and his students, and many of them are described in his book Science of Percussion Instruments [2.53] as well as in his published papers.

Electronic music technology was made possible by the invention of the vacuum tube early in the 20th century. In 1919 Leon Theremin invented the aetherophone (later called the Theremin), an instrument whose vacuum-tube oscillators can be controlled by the proximity of the playerʼs hands to two antennae. In 1928, Maurice Martenot built the Ondes Martenot. In 1935 Laurens Hammond used magnetic tone-wheel generators as the basis for his electromechanical organ, which became a very popular instrument. Analog music synthesizers became popular around the middle of the 20th century. In the mid 1960s, Robert Moog and Donald Buchla built successful voltage-controlled music synthesizers, which led to a revolution in the way composers could synthesize new sounds. Gradually, however, analog music synthesizers gave way to digital techniques making use of digital computers. Although many people contributed to the development of computer music, Max Mathews is often called the father of computer music, since he developed the MUSIC I program that begat many successful music synthesis programs and blossomed into a rich resource for musical expression [2.54].

The ASA has awarded its silver medal in musical acoustics to Carleen Hutchins, Arthur Benade, John Backus, Max Mathews, Thomas Rossing, Neville Fletcher, Johan Sundberg, Gabriel Weinreich, and William Strong.

Conclusion

This brief summary of acoustics history has only scratched the surface. Many fine books on the subject appear in the list of references, and readers are urged to explore the subject further. The science of sound is a fascinating subject that draws from many different disciplines.