1 Auditory Evoked Potentials (AEPs)

In 1924, Hans Berger, a German psychiatrist at Jena University, described his invention of the electroencephalogram (EEG) recording system and recorded human EEGs for the first time in history [1] (Fig. 1.1).

Fig. 1.1

The first recordings of the EEG, by Hans Berger in 1924, were taken from epidural electrodes placed directly on the dura of a patient via a local craniotomy [1]

In 1930, cochlear microphonic potentials were recorded from the cochlea by Wever and Bray [2]. In Fig. 1.2, the lower trace is the sound stimulus and the upper trace is the resultant cochlear microphonic (CM), which follows the pattern of the sound stimulus. Stimulation of an occluded ear did not elicit a definable CM.

Fig. 1.2

Cochlear microphonic potentials recorded by Wever and Bray in 1930 [2]

In 1938, Loomis described the K complex on the EEG, a cortical response to auditory and tactile stimuli or to inspiratory interruptions during stage II sleep [3]. In Fig. 1.3, the EEG tracings change simultaneously with the K complex, and two phases following the sound stimuli are evident in the recordings.

Fig. 1.3

Examples of the K complex elicited by acoustic stimuli in modern EEGs

In 1939, Davis PA described V potentials on the EEG evoked by acoustic stimuli [4]. In Fig. 1.4, arrows indicate the acoustic stimuli. The V wave indicates that the brain responded to these stimuli.

Fig. 1.4

V potentials found on the EEG as reported by Davis, PA [4]. On-effects and modifications of spontaneous rhythms in response to sounds. The frequency employed is indicated in each case. No measurements of loudness were reported. O is the monopolar occipital record; V is the monopolar record from the vertex. The reference electrodes on the ear lobes are connected in parallel. Paired tracings do not represent simultaneous records. Calibrations are 1 sec and 100 μV throughout. (a) Resolution of the alpha rhythm and the on-effect on the alpha rhythm in an alpha subject. (b) On- and off-effects in a non-alpha subject. Also, note the resolution of beta waves in the vertex record. (c) The same as in (b) but in another subject. (d) Resolution of fast (beta) frequencies in another non-alpha subject. (e) Typical on-effects. “Anticipatory” reactions are shown in the second tracing. (f) Resolution of the alpha rhythm and on-effects from a pair of identical twins, D-94 and D-95, and the effect of a verbal command

The K complex and the V potential elicited on the EEG were ultimately found to be the same cortical response to acoustic stimulation. In 1947, Dawson reported his finding of a long latency response (LLR), which he illustrated by superimposing the individual EEG tracings following repeated acoustic stimuli [5]. In Fig. 1.5, the amplitude of the LLR is influenced by the depth of sleep (Stages II and III), with Stage III eliciting more robust responses. After the discovery of the LLR, Geisler, in 1958 [6], found a middle latency response (MLR) at a latency of about 10–50 msec following acoustic stimulation (Fig. 1.6). At that time, the auditory brainstem response (ABR) had yet to be discovered. More than two decades later, the MLR contributed to the discovery of the auditory steady-state response (ASSR). In the same year, 1958, Davis H described a summating potential (SP) which was mixed in with and preceded the CM recordings [7] (Fig. 1.7). The polarity of the SP was positive at the basal turn of the cochlea and negative at the third turn.

Fig. 1.5

Changes in the LLR as a function of the depth of sleep [5]

Fig. 1.6

(a) MLR responses as reported by Geisler in 1958 [6]. (b) Under the rubric of AEPs, the shaded area outlines the MLR

Fig. 1.7

Summating potentials (SP) were found in cochlear microphonic (CM) recordings by Davis H in 1958 [7]

In 1963, Nelson Kiang discovered a postauricular response to sound stimuli, evoked by a loud click [8] (Fig. 1.8). Another name given to this response was the inion response. The response was soon forgotten but has recently come to be appreciated as a vestibular-evoked myogenic potential (VEMP).

Fig. 1.8

(a) The postauricular response as reported by Kiang, N in 1963 [8]. (b) Inion response. Both responses correspond to the postauricular muscle response

In 1964, Walter described the contingent negative variation (CNV), which can be regarded as a type of event-related potential [9] (Fig. 1.9). An irregular but predictable sequence of stimuli is presented: a warning auditory stimulus (a click) is followed by a visual flash to which the subject must respond, and the subject is tasked with anticipating whether and when the next stimulus will appear. The subject’s expectation of the upcoming stimulus generates the cortical CNV. The CNV was an epoch-making potential because it is evoked by the activity of higher brain functions involved in perception, judgement, and decision-making.

Fig. 1.9

Contingent negative variation (CNV), a type of event-related potential, was reported by Walter, WG in 1964 [9]. (a) Auditory evoked potential stimulated by a click, (b) Visually evoked potential stimulated by a flash 1 sec after the click, (c) Evoked potentials (a + b), (d) CNV evoked by clicks and flashes terminated by a button press (a + b). The CNV is also called the expectancy wave, (e) CNV evoked by a click (S1) and flash (S2) terminated by a button press in a 15-year-old patient with auditory agnosia. The CNV is evoked after the click (S1)

In 1967, Sutton described the P300, an event-related potential consisting of a positive wave occurring about 300 msec after a discrete auditory stimulus [10]. Two different stimuli (tones) are presented to a subject: a target tone presented rarely (20% of the time) and a nontarget tone presented frequently (80%). In Fig. 1.10, the black trace shows the response to the frequent stimuli, and the dotted trace shows a large positive response around 300 msec to the rare stimuli [11].
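To make the 20%/80% oddball logic concrete, the following Python sketch, which is illustrative only and not taken from the source, simulates an oddball sequence, adds a toy positivity near 300 msec to the rare trials only, and then averages the rare and frequent epochs separately; all variable names and numeric values are assumptions.

```python
# Illustrative oddball simulation (assumed values, not from the source).
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                      # sampling rate, Hz (assumed)
t = np.arange(0, 0.6, 1 / fs)  # 600-ms analysis epoch

def simulated_epoch(is_rare: bool) -> np.ndarray:
    """One trial of EEG: background noise plus, for rare targets only,
    a positive deflection peaking near 300 ms (a toy P300)."""
    eeg = rng.normal(0, 5.0, t.size)               # ongoing EEG, microvolts
    if is_rare:
        eeg += 8.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return eeg

# 20% rare targets, 80% frequent nontargets
is_rare = rng.random(400) < 0.20
epochs = np.array([simulated_epoch(r) for r in is_rare])

rare_avg = epochs[is_rare].mean(axis=0)
frequent_avg = epochs[~is_rare].mean(axis=0)

# The rare-stimulus average shows the positivity around 300 ms;
# the frequent-stimulus average does not.
print("peak of rare average:     %.1f uV at %d ms"
      % (rare_avg.max(), t[rare_avg.argmax()] * 1000))
print("peak of frequent average: %.1f uV at %d ms"
      % (frequent_avg.max(), t[frequent_avg.argmax()] * 1000))
```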

Fig. 1.10

P300, an event-related potential, was reported by Sutton S in 1967. Two different tones, as rare and frequent stimuli, are presented to the listener [10]

Figure 1.11 shows typical P300 responses to rare versus frequent stimuli from a normal-hearing subject. The upper trace (a) was recorded when the subject pushed a button with a right-hand finger for each target, the middle trace (b) when the subject pushed a button with a left-hand finger, and the lower trace (c) when the subject mentally counted the rare stimuli. P300s were recorded to the rare stimuli in all three tasks but not to the frequent stimuli.

Fig. 1.11

Grand averages of P300 recordings from Fz, Cz, and Pz, superimposed, following rare and frequent stimuli in normal subjects [11]. ▲: P300

In 1967, the electrocochleographic response (ECoG) was described simultaneously by Yoshie, in Japan [12], and Portmann, in France [13]. In Fig. 1.12, the upper traces show the wave configurations of the rarefaction and condensation clicks used as auditory stimuli. The middle traces show the evoked responses to these rarefaction and condensation stimuli. The lower traces show the ECoG derived by A + B and the CM derived by A − B. The discovery of the ECoG soon led to the discovery of the ABR.
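As a small illustration of the A + B and A − B derivation described above, the Python sketch below, an assumption for illustration rather than code from the source, combines averaged responses to rarefaction clicks (A) and condensation clicks (B): the CM reverses polarity with stimulus polarity and cancels in the sum, whereas the neural response does not reverse and cancels in the difference.

```python
# Illustrative sketch of deriving ECoG components from A and B averages.
import numpy as np

def derive_ecochg_components(a: np.ndarray, b: np.ndarray):
    """a: averaged response to rarefaction clicks (A)
       b: averaged response to condensation clicks (B)
    Returns (neural/AP-dominant trace, CM-dominant trace)."""
    ap_trace = (a + b) / 2.0  # polarity-reversing CM cancels; AP and SP remain
    cm_trace = (a - b) / 2.0  # non-reversing AP and SP cancel; CM remains
    return ap_trace, cm_trace
```

Dividing by two keeps the derived traces on the same amplitude scale as the individual averages.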

Fig. 1.12

The ECoG as first reported by Yoshie, N in Japan and Portmann, M in France in 1967 [12, 13]

1970 was the most important year in the history of the ABR. Jewett, in the USA, and Sohmer, in Israel, concurrently reported, for the first time, their discovery of the ABR [14, 15]. Figure 1.13 shows examples of human and cat ABRs [14]. Figure 1.14 illustrates a human ABR (a) and a cat ABR (b) as published by Sohmer. The human and the cat ABR waves are similar in shape. As a result of these publications, Jewett and Sohmer have been regarded as the pioneers of the ABR.

Fig. 1.13

Recordings of the ABR were initially published by Jewett, DL in 1970 [14]. Examples of human and cat ABRs. (a) Human ABRs as a function of stimulus intensity, (b) Cat ABRs as a function of stimulus intensity

Fig. 1.14

Simultaneously, Sohmer in 1970 published his ABRs [15]: (a) human, (b) cat

The ABR became a new and extremely valuable tool for localizing and diagnosing neuropathology within the eighth cranial nerve and along its length from the cochlea to the pons. This far-field response is generated acoustically by presenting, via earphones, a large number of sharp clicks (around 2000) to the subject. The recorded responses to each of these clicks are summed and computer-averaged to eliminate the background EEG signals. The recording window is 10 msec long. The summed and averaged responses over this 10 msec reveal up to seven prominent waves in a normal-hearing subject. The generators of each of these waves have been determined neurophysiologically [14, 16], which allows brainstem pathologies to be localized. In Fig. 1.15, the left figure shows ABRs recorded from a cat. The distribution and amplitude of each wave are a function of the nuclei through which the neural response passes [16]. The right figure further illustrates the neural substrate underlying each ABR wave.
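As a rough illustration of this sum-and-average procedure, the Python sketch below simulates a toy ABR buried in EEG-like noise under assumed values (a 10-msec window sampled at 20 kHz, 2000 sweeps) and shows that averaging the click-locked epochs reduces the uncorrelated background activity by roughly the square root of the number of sweeps; none of the numbers or waveforms come from the source.

```python
# Illustrative simulation of click-locked signal averaging (assumed values).
import numpy as np

rng = np.random.default_rng(1)
fs = 20_000                          # sampling rate, Hz (assumed)
t = np.arange(0, 0.010, 1 / fs)      # 10-ms recording window

abr = 0.5 * np.sin(2 * np.pi * 900 * t) * np.exp(-t / 0.003)  # toy ABR, uV
n_sweeps = 2000
sweeps = abr + rng.normal(0, 10.0, (n_sweeps, t.size))        # EEG noise, uV

average = sweeps.mean(axis=0)        # summed and averaged response

noise_single = np.std(sweeps[0] - abr)
noise_avg = np.std(average - abr)
print("noise in a single sweep:   %.2f uV" % noise_single)
print("noise after %d sweeps:   %.2f uV (about 1/sqrt(N) of the above)"
      % (n_sweeps, noise_avg))
```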

Fig. 1.15

(a) The distribution of auditory evoked potentials in the brainstem of a cat [16]. (b) Schema of the origins of each peak of the human ABR [16, 17]

Since the ABR was discovered by Jewett and Sohmer, many subsequent researchers have applied different names to it. In 1979, at the US-Japan ABR Seminar in Hawaii (Fig. 1.16), Professor Jun-Ichi Suzuki of Teikyo University in Japan proposed that the participants choose the best nomenclature from among the six candidates. They voted for ABR, and that term is now used universally to describe this technique.

Fig. 1.16

US-Japan ABR Seminar in Hawaii, Jan. 1979

In 1978, a new type of response was discovered by Kemp [18] (Fig. 1.17), which he called the transient-evoked otoacoustic emission (TOAE). In the following year, Kemp described another OAE, which he called the distortion product otoacoustic emission (DPOAE) [19] (Fig. 1.18). DPOAEs have since been used routinely in clinical practice, and they contributed to establishing the concept of auditory neuropathy (AN), which was reported by Starr [20] and Kaga [21] simultaneously in 1996.

Fig. 1.17

Transient otoacoustic emissions (TOAEs) were discovered by Kemp, DT in 1978 [18]. Examples of a normal TOAE (a) and no TOAE response (b)

Fig. 1.18

Distortion product OAEs (DPOAEs) were also discovered by Kemp, DT in 1979 [19], and they have been used clinically in the evaluation of cochlear function. Examples of a normal DPOAE (a) and no DPOAE response (b)

In 1978, another new potential was discovered by Näätänen in Finland [22] (Fig. 1.19). He called it mismatch negativity (MMN); it is also a type of event-related potential and is automatically elicited by any discriminable change in a repetitive sound or sound pattern. Two different auditory stimuli are presented to a subject, but no task such as button-pushing or mental counting is used. Deviant and standard stimuli are presented, and the brain automatically discriminates the difference between the two. The N200 amplitude is more robust to deviant stimuli than it is to standard stimuli, and this difference in response amplitude quantifies the MMN as an index of central auditory system plasticity.
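The amplitude comparison described above is commonly expressed as a deviant-minus-standard difference wave. The short Python sketch below is a hedged illustration only; the sampling rate, latency window, and names are assumptions rather than values from the source.

```python
# Illustrative computation of an MMN difference wave (assumed parameters).
import numpy as np

def mmn_difference_wave(deviant_avg: np.ndarray,
                        standard_avg: np.ndarray,
                        fs: float = 1000.0,
                        window_ms: tuple = (150.0, 250.0)):
    """Subtract the averaged standard response from the averaged deviant
    response and read out the most negative value in an assumed MMN
    latency window (150-250 ms here)."""
    diff = deviant_avg - standard_avg
    start = int(window_ms[0] / 1000.0 * fs)
    stop = int(window_ms[1] / 1000.0 * fs)
    mmn_amplitude = diff[start:stop].min()   # the MMN is a negativity
    return diff, mmn_amplitude
```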

Fig. 1.19

Mismatch negativity (MMN), an event-related potential, was found by Näätänen in 1978 [22]

In 1981, Galambos described the 40-Hz auditory steady-state response (ASSR) [23]. The ASSR is somewhat similar to the ABR in that both are electrophysiologic responses to rapid acoustic stimuli, presented via earphones or ear inserts, and both are recorded from external electrodes arranged in a particular montage on the scalp. The difference between the two techniques lies in the rate of the presented stimuli: ASSR stimuli are presented at a high repetition rate, whereas ABR stimuli are presented at a relatively lower rate. In addition, the evoked ASSR waves are identified by a mathematical algorithm, which increases the objectivity of the analysis. The lower recording illustrated in Fig. 1.20 shows the stimulus sound wave, a sinusoidal amplitude modulation, and the upper recording shows the 40-Hz sinusoidal evoked potentials.
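The mathematical algorithm varies between systems; as one simple, hedged example of objective frequency-domain detection (not necessarily the method used in the source), the Python sketch below checks whether the spectral amplitude of the recorded EEG at the modulation rate clearly exceeds that of neighbouring frequency bins. The function name, SNR threshold, and bin choices are assumptions for illustration.

```python
# Illustrative frequency-domain detection of a steady-state response.
import numpy as np

def assr_detected(eeg: np.ndarray, fs: float, mod_rate: float = 40.0,
                  snr_threshold: float = 2.0) -> bool:
    """Return True if the spectral amplitude at the modulation rate clearly
    exceeds the mean of nearby 'noise' bins (assumes the recording is long
    enough that bins on both sides of the signal bin exist)."""
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(eeg.size, 1.0 / fs)
    signal_bin = int(np.argmin(np.abs(freqs - mod_rate)))
    # neighbouring bins (excluding the signal bin) estimate the noise floor
    neighbours = np.r_[spectrum[signal_bin - 6:signal_bin - 1],
                       spectrum[signal_bin + 2:signal_bin + 7]]
    return spectrum[signal_bin] > snr_threshold * neighbours.mean()
```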

Fig. 1.20

When stimuli are presented at 30 Hz, 40 Hz, and 50 Hz, the sinusoidal peak amplitude at 40 Hz is larger than at the other rates (Galambos R, 1981) [23]

In 1992, a vestibular-evoked myogenic potential (VEMP) was discovered by Colebatch and Halmagyi in Australia [24]. This wave configuration is remarkably similar to the postauricular muscle response (Fig. 1.21) [24, 25].

Fig. 1.21

(a) Example of a typical VEMP, (b) Schema of the vestibular neural myogenic pathway [24, 25]

In 1995, several reports of clinically obtained multiple-frequency ASSRs were published in the literature by Picton’s Canadian and Australian groups simultaneously. Pure-tone audiograms were generated from multiple-frequency ASSRs [26]. In Fig. 1.22, the upper panel displays a simple audiogram, such as a pure-tone audiogram, whereas the lower panel shows an estimated audiogram generated by the algorithm.

Fig. 1.22

Examples of audiograms (a, b) generated by multiple-frequency ASSR [26]

Table 1.1 presents, in chronological order, the development of auditory evoked potentials. Many years have passed since the discovery and recording of the EEG by Hans Berger in 1924.

Table 1.1 The year of publication of various evoked potentials

2 Electrically Evoked Auditory Brainstem Responses (EABRs)

In 1979, the electrically evoked auditory brainstem response (EABR) was first recorded by Starr and Brackmann [27] from scalp electrodes in response to biphasic square-wave electrical stimuli delivered through the implanted electrodes of single-channel cochlear implants in the cochleas of three patients. Over the last 40 years, many clinical studies have described applications of the EABR with multichannel cochlear implants, using it to evaluate the integrity of cochlear implants during surgery and during later auditory rehabilitation.

In 1985, Gardi presented intracochlear EABRs from three patients who rested quietly on a hospital bed in a darkened room, showing that EABRs could be reliably recorded [28].

The EABR can be used to functionally evaluate the integrity of the auditory brainstem tracts during the initial activation of the cochlear implant and during its long-term use [29].

The evoked positive peaks of the EABR, eII, eIII, and eV, have a slightly different nomenclature than those of the ABR (Fig. 1.23). However, the familiar positive peaks I, VI, and VII of acoustically evoked ABRs are not discerned in the EABR. Wave I of the ABR is masked by the stimulus artifact. Waves VI and VII of the ABR originate from the brachium of the inferior colliculus and the medial geniculate bodies; why eVI and eVII have not been recorded in the EABR remains an ongoing research question. The waveforms of the EABR are evoked by electrical stimuli delivered to the cochlear nerve and thus differ from those evoked by acoustic stimuli presented to the ear via headphones or insert earphones. The eV latency represents the total transmission time through the brainstem following discrete electrical stimulation of the cochlear nerve. This transmission time is essentially identical to the interpeak latency (IPL) interval (brainstem transmission time) measured in the click-evoked ABR.

Fig. 1.23

Typical EABRs recorded from an apical electrode of a cochlear implant (MED-EL)

EABRs have been helpful as a practical method in the programming of cochlear implants in young children and in evaluating patients with inner ear malformations or cochlear nerve deficiencies [30].

EABRs, using stimuli delivered through cochlear implant electrodes, have also been used to evaluate auditory neuronal responses in the brainstem of patients with or without inner ear malformations [31,32,33]. EABRs are a reliable and effective way to objectively confirm cochlear implant function and the responsiveness of the peripheral auditory neurons to implant stimulation, up to the level of the brainstem, in patients with inner ear malformations and cochlear nerve deficiencies.