This chapter presents the basics of human binocular vision: the longitudinal horopter, horizontal binocular disparity, binocular disparity gradients, binocular rivalry, spatio-temporal frequency processing, and visual pathways. Vertical disparity will not be discussed; for that topic, see Tyler (1983) and Tyler and Scott (1979).

1 Horopter and Binocular Disparity

Figure 2.1 depicts a top-down view of two eyes looking out into the visual field and fixating on stimulus F. Now imagine an arc, called the “horopter”, that passes through the fixation point. Positions along the horopter define the locations of objects in the visual field whose left- and right-eye images stimulate corresponding retinal points in the two eyes. Those locations possess zero binocular disparity. The horopter is important because it defines a set of baseline locations in space from which relative depth is judged. This is an important point: stereopsis is not perceived depth relative to the observer; rather, it is perceived depth relative to the horopter (see Fig. 2.1).

Fig. 2.1

Drawing depicting the basics of stereoscopic viewing. The two circles represent a top-down view of the two eyes, with fixation point F, the horopter passing through the fixation point, Panum’s fusional area, and objects X and Y. When point F is fixated, as shown in the drawing, the images from F stimulate corresponding retinal points (the foveae) in the two eyes and are fused. Object X is positioned in front of the horopter and thus carries a crossed disparity, but the images from X, which stimulate non-corresponding (disparate) retinal points in the two eyes, are fused because X is located within Panum’s fusional area. Object Y is positioned farther in front of the horopter and also carries a crossed disparity, but the images from Y, which also stimulate disparate retinal points in the two eyes, are seen as diplopic (double) because Y is located outside Panum’s fusional area. Because Y carries a large crossed disparity, and thus its two retinal images fall on very disparate retinal areas, the image of Y in the left eye may stimulate a retinal area that corresponds to an area in the right eye stimulated by an image z from a different object in the visual field (not shown), which would provoke binocular rivalry. Reproduced from Figure 1 of Patterson (2007), Human factors of 3D displays, Journal of the Society for Information Display, 15, 861–871. Copyright Society for Information Display. By permission

The horopter defined empirically by psychophysical measurements is not the same as the horopter defined geometrically (Ogle, 1964; Shipley & Rawlings, 1970). The geometric horopter is the Vieth-Müller circle, which passes through the nodal points of the two eyes and the point of fixation (not shown in Fig. 2.1). The empirical horopter is based on one of several criteria, such as common perceived direction (the nonius method) or the equidistant plane. The nonius horopter is the more appropriate measure from a physiological perspective (Shipley & Rawlings, 1970), and references to the ‘horopter’ in this book mean the nonius horopter.

The horopter is a reference or baseline depth plane passing through fixation from which the depth of objects located in other depth planes is judged. Objects not positioned along the horopter possess a non-zero magnitude of disparity: their images stimulate non-corresponding, or disparate, retinal points in the two eyes. Objects positioned in depth in front of the horopter give rise to crossed disparity, and objects positioned in depth behind the horopter give rise to uncrossed disparity. These terms are simply labels for the direction of an object’s depth relative to the horopter: in front of it (crossed) or behind it (uncrossed).
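
To make the geometry concrete, here is a minimal numerical sketch (not from the chapter) of how the disparity projected by a midline object follows from the fixation and object distances, using the standard small-angle approximation; the function name and the 6.3 cm interpupillary distance are illustrative assumptions.

```python
import math

def binocular_disparity_arcmin(ipd_m, fixation_dist_m, object_dist_m):
    """Approximate horizontal disparity of an object on the midline.

    Small-angle approximation: disparity (radians) is roughly the
    interpupillary distance times the difference of inverse distances.
    Positive = crossed (object in front of the horopter);
    negative = uncrossed (object behind it).
    """
    disparity_rad = ipd_m * (1.0 / object_dist_m - 1.0 / fixation_dist_m)
    return math.degrees(disparity_rad) * 60.0   # radians -> arcmin

# Fixating at 1.0 m with an assumed 6.3 cm interpupillary distance:
print(binocular_disparity_arcmin(0.063, 1.0, 0.9))   # ~ +24 arcmin (crossed)
print(binocular_disparity_arcmin(0.063, 1.0, 1.1))   # ~ -20 arcmin (uncrossed)
```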

There is a spatial zone surrounding the horopter (yellow in Fig. 2.1) called Panum’s fusional area. Objects positioned within Panum’s fusional area give rise to left- and right-eye images that are perceptually fused and seen as single objects. Objects positioned outside Panum’s area give rise to left- and right-eye images that cannot be perceptually fused and thus are seen as double images (diplopia). The ability to fuse disparate images depends upon disparity magnitude; the largest disparity at which fusion occurs is called the disparity limit of fusion. This limit is measured with the diplopia threshold, the threshold at which fusion is lost and double images are perceived. It can be measured by having an observer report whether he or she perceives a briefly exposed (duration = 160 ms) stereo stimulus as single and fused or as two unfused images, as the disparity magnitude of the stimulus is systematically varied. The stimulus is presented briefly so that vergence eye movements cannot alter the sign or magnitude of disparity; the latency of vergence eye movements is about 160 ms.
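
As an illustration of such a measurement, the sketch below outlines one block of a method-of-constant-stimuli procedure; `present_stereo_pair` and `get_response` are hypothetical stand-ins for display and response-collection code, and the disparity levels are illustrative.

```python
import random

DISPARITIES_ARCMIN = [5, 10, 15, 20, 25, 30, 35, 40]   # illustrative levels
TRIALS_PER_LEVEL = 20

def run_block(present_stereo_pair, get_response):
    """Estimate the proportion of 'fused' reports at each disparity level.

    Each stimulus is shown for 160 ms, shorter than vergence latency, so
    eye movements cannot alter the disparity being tested.
    """
    fused_counts = {d: 0 for d in DISPARITIES_ARCMIN}
    trials = DISPARITIES_ARCMIN * TRIALS_PER_LEVEL
    random.shuffle(trials)                       # randomize presentation order
    for disparity in trials:
        present_stereo_pair(disparity, duration_ms=160)
        if get_response():                       # True if reported as fused
            fused_counts[disparity] += 1
    # The disparity limit of fusion can be estimated as the level at which
    # the proportion of "fused" reports falls to 50 %.
    return {d: n / TRIALS_PER_LEVEL for d, n in fused_counts.items()}
```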

Many factors affect the disparity limit of fusion (see Arditi, 1986, for review). There are factors that affect this limit that display designers could manipulate or control, such as stimulus size and stimulus retinal eccentricity, as shown in Table 2.1.

Table 2.1 Two stimulus sizes and two retinal eccentricities and their effects on the disparity limits for binocular fusion, patent stereopsis, and qualitative stereopsis

The terms patent stereopsis and qualitative stereopsis are sometimes used (e.g., Ogle, 1964). Patent stereopsis refers to the interval of z-axis depth over which perceived depth increases monotonically with disparity magnitude, either away from the horopter with increasing crossed disparity (stimulus moving toward the observer) or with increasing uncrossed disparity (stimulus moving away from the observer). As the limit of patent stereopsis is approached, binocular fusion is lost and double images (diplopia) are seen; the stimulus is now outside Panum’s fusional area (Fig. 2.2). Outside the range of patent stereopsis, depth perception with diplopia is called qualitative stereopsis, and perceived depth becomes unreliable: further increases in crossed or uncrossed disparity continue to produce diplopia, and perceived depth collapses inward toward the horopter. With diplopia, one of the two monocular images may be perceptually suppressed via a process called binocular rivalry (see below). Stimulus size and stimulus retinal eccentricity also affect the disparity limits of patent and qualitative stereopsis, again as shown in Table 2.1.

Fig. 2.2

Depiction of relative versus absolute disparity. In the left panel, the observer is fixating on object X, whose images stimulate the foveae of the two eyes (‘F’); object X thus projects a zero disparity value to the visual system (stimulation of corresponding retinal points). The curved dashed line shows the horopter passing through object X. Object Y projects a crossed disparity to the visual system. In the right panel, the observer converges the eyes and shifts fixation to object Y, whose images now stimulate the foveae of the two eyes (‘F’); object Y thus now projects a zero disparity value to the visual system. The curved dashed line shows the horopter passing through object Y. Object X now projects an uncrossed disparity to the visual system. In both cases, the relative disparity between objects X and Y remains the same

When vergence eye movements are made, fixation and the horopter are shifted to various positions in the visual field. An object that initially stimulates the binocular visual system with crossed disparity may end up stimulating the visual system with uncrossed disparity, or vice versa. In this case, the relative disparity between stationary objects in the visual field remains constant, but their absolute disparity as projected to the visual system, which is the relevant cue for stereopsis (Cumming & Parker, 1999), will change whenever vergence eye movements are executed (Patterson, 2007; see Fig. 2.2). This is an important point: if you want to know precisely the disparity sign and magnitude that is stimulating the visual system, then you must know where the observer is looking.
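
The invariance of relative disparity across fixation changes can be verified numerically. The sketch below (illustrative distances, using the same small-angle approximation as the earlier sketch) places object X at 1.0 m and object Y at 0.8 m and computes the absolute disparities projected under each fixation.

```python
import math

def disparity_arcmin(ipd_m, fix_m, obj_m):
    # Small-angle approximation, as in the earlier sketch:
    # disparity (radians) ~ IPD * (1/object_distance - 1/fixation_distance).
    return math.degrees(ipd_m * (1.0 / obj_m - 1.0 / fix_m)) * 60.0

IPD, X_DIST, Y_DIST = 0.063, 1.0, 0.8   # illustrative: X at 1.0 m, Y at 0.8 m

# Fixate X: X projects zero disparity, Y a crossed (positive) disparity.
abs_x1 = disparity_arcmin(IPD, X_DIST, X_DIST)   # 0.0
abs_y1 = disparity_arcmin(IPD, X_DIST, Y_DIST)   # ~ +54 arcmin
# Shift fixation to Y: Y now projects zero disparity, X an uncrossed one.
abs_x2 = disparity_arcmin(IPD, Y_DIST, X_DIST)   # ~ -54 arcmin
abs_y2 = disparity_arcmin(IPD, Y_DIST, Y_DIST)   # 0.0

# The absolute disparities change with fixation, but the relative
# disparity between X and Y is unchanged (~54 arcmin in both cases).
print(abs_y1 - abs_x1, abs_y2 - abs_x2)
```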

Vergence eye movements may serve to increase the disparity range over which reliable depth perception occurs. A mental representation of the visual field may be constructed over time by the integration of depth information across vergence eye movements (Patterson et al., 2006; Patterson & Martin, 1992).

Voluntary eye movements have been shown to increase the disparity limits of fusion, from about 24–27 arcmin without eye movements to several degrees with eye movements (Yeh & Silverstein, 1990), and to improve stereoscopic depth perception (Foley & Richards, 1972). In other ways, however, the effects of eye movements on stereopsis are complex: the longitudinal horopter lies in the frontoparallel plane with symmetric convergence, but it rotates horizontally with asymmetric convergence (i.e., fixation off the midsagittal plane; Ogle, 1964; Shipley & Rawlings, 1970), which changes the regions of the visual field that support fusion and stereopsis.

It is commonly thought that vergence eye movements produce a conflict with accommodation when stereo displays are viewed: the stimuli for accommodation are images on the surface of the display, so when a user converges to a virtual object appearing in depth in front of or behind the display, the vergence angle can be mismatched relative to the accommodative response. This issue is discussed in detail in Chap. 4, where a general remedy is given; such a conflict should occur only at short viewing distances and thus should not be a general problem when vergence eye movements are made.

The perception of relative depth from disparity differs from depth perception based on an observer making vergence eye movements. Although a change in vergence angle can be induced by variation in disparity, changes in vergence provide only indirect information about relative depth: relative depth in this case would be given by sensing the difference between two vergence positions via proprioception, not by disparity directly. Depth estimates from proprioception would be relatively imprecise compared with those from disparity (Patterson & Martin, 1992). Information from proprioception may nevertheless augment the perception of depth, as discussed in Chap. 9.

2 Binocular Disparity Gradient

A concept called the ‘binocular disparity gradient’ is important for achieving binocular fusion, because it can limit an individual’s ability to binocularly fuse and process multiple stimuli presented in stereoscopic depth. Given two objects that are laterally separated and positioned in different depth planes, the binocular disparity gradient is defined as the difference in absolute disparity between the two objects divided by the mean angular separation between the images of one object and the images of the other (roughly, the lateral separation between the objects). This concept is depicted in Fig. 2.3, which shows two viewing situations involving a horizontal gradient of disparity.

Fig. 2.3

Depiction of the binocular disparity gradient. Top-down view of two eyes (L.E. = left eye; R.E. = right eye) viewing two objects in the visual field, Object ‘O’ (fixated) and Object ‘X.’ The two lower boxes give an analysis of the disparity and separation in each drawing. Disparity gradient is defined as the angular disparity between the images of two objects divided by the angular separation. Angular separation is defined as the angle between the mean direction of the images of one object and the mean direction of the images of the other object (mean direction is given by the vertical dashed lines in the lower boxes). The two objects O and X in the left panel have a disparity gradient of less than 2, while the two objects in the right panel have a disparity gradient of 2. Reproduced from Figure 2.7 of Howard and Rogers (1995), Binocular vision and stereopsis. Oxford, UK: Oxford University Press. By permission of Oxford University Press, USA

In both panels of Fig. 2.3, the observer is fixating on object O, and object X is positioned slightly to the left of and behind object O. In the left panel the two objects have a horizontal disparity gradient of less than 2, while in the right panel they have a horizontal disparity gradient of 2. The critical disparity gradient is 1.0, a value above which the two disparities cannot be simultaneously fused (Burt & Julesz, 1980; Tyler, 1973). Burt and Julesz (1980) reported that when two objects have a disparity gradient greater than 1 (i.e., the difference in disparity exceeds the mean angular separation between the objects’ images), only one object can be perceptually fused at a time. The disparities of multiple objects located in different depth planes can be fused more easily if the objects have sufficient horizontal and/or vertical separation between them when viewed from the observer’s position.
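
Because the gradient is just a ratio, it reduces to a one-line computation. The sketch below (with illustrative values) applies the critical value of 1.0 reported by Burt and Julesz (1980):

```python
def disparity_gradient(disparity_a_arcmin, disparity_b_arcmin,
                       angular_separation_arcmin):
    """Disparity gradient: the difference in absolute disparity between two
    objects divided by the mean angular separation of their images."""
    return abs(disparity_a_arcmin - disparity_b_arcmin) / angular_separation_arcmin

# Two objects 30 arcmin apart whose disparities differ by 40 arcmin:
g = disparity_gradient(10.0, 50.0, 30.0)
print(g)        # ~1.33, above the critical value of 1.0
print(g > 1.0)  # True: expect only one of the two objects to be fused at a time
```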

The disparity gradient may affect the visual system’s ability to fuse disparity information when stereo displays are viewed if objects with relatively large disparities are located too close together in the x/y-plane when viewed from the observer’s location. This means that it may be prudent to keep objects with relatively large disparities sufficiently separated in the x/y-plane if possible. If not, then the observer may experience loss of fusion with one or more of the objects. This topic deserves to be investigated empirically with the kind of stereo displays employed in real-world applications to determine how serious the issue may be with real-world viewing.

3 Binocular Rivalry

A potential problem arises when viewing a stereo display whenever the images for the left and right eyes, coming from different display channels and/or optical systems, are misaligned or distorted to the point that observers cannot perceptually fuse portions of the two eyes’ views. In that case, a visual process called binocular rivalry will be provoked. Binocular rivalry (Blake, 1989, 2001; Breese, 1899; Howard, 2002; Howard & Rogers, 1995; Levelt, 1965; Patterson, Winterbottom, Pierce, & Fox, 2007) refers to a state of competition between the eyes when they view discordant stimuli, such that one eye inhibits the visual processing of the other eye. The visibility of the images in the two eyes fluctuates, with one eye’s view being visible while the other eye’s view is rendered invisible and suppressed, a pattern that reverses over time.

Binocular rivalry can be elicited by differences in attributes or characteristics between the images seen by the two eyes, such as differences in orientation, hue, luminance, contrast polarity, form, size, and/or motion velocity, and it can occur over a wide range of light levels throughout the visual field (Blake, 2001, pp. 8–9). The visual tolerance levels for interocular differences in stimulation are (Rash, Mozo, McEntire, & Licina, 1996; Tsou & Shenker, 2000): a horizontal misalignment of up to ±23 arcmin and a vertical misalignment of up to ±11.5 arcmin, horizontal or vertical differences in image size of up to 1.5 %, a rotational difference of up to ±10–12 arcmin, and a deviation between the centers of the two eyes’ views of up to 0.18 prism diopters.

Patterson et al. (2006) also suggested that an interocular difference in luminance of greater than 30 % will likely provoke rivalry. Moreover, a given stimulus viewed by one eye will typically dominate a rival stimulus seen by the other eye if the former possesses greater contour density, higher contrast, a wider range of spatial frequencies, or faster motion. Practice over a number of days (e.g., 10 days) may help individuals control the rate of rivalry alternations (Lack, 1969). As an example, Fig. 2.4 depicts left-eye and right-eye views of stimuli that would provoke vigorous binocular rivalry due to differences in orientation and size (i.e., spatial frequency) of the bars making up the patterns.
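
As a design aid, the tolerances cited above can be folded into a simple checker. The sketch below is illustrative only; the function and parameter names are assumptions, and the thresholds are the values from Rash et al. (1996), Tsou and Shenker (2000), and Patterson et al. (2006).

```python
def rivalry_risk(h_misalign_arcmin, v_misalign_arcmin, size_diff_pct,
                 rotation_arcmin, center_dev_prism_diopters, lum_diff_pct):
    """Flag interocular differences that exceed the cited tolerances.

    Thresholds: +/-23 arcmin horizontal, +/-11.5 arcmin vertical, 1.5 % size,
    ~+/-10 arcmin rotation (lower end of the cited 10-12 arcmin range),
    0.18 prism diopters center deviation, 30 % luminance difference.
    """
    violations = []
    if abs(h_misalign_arcmin) > 23.0:
        violations.append("horizontal misalignment")
    if abs(v_misalign_arcmin) > 11.5:
        violations.append("vertical misalignment")
    if size_diff_pct > 1.5:
        violations.append("image size difference")
    if abs(rotation_arcmin) > 10.0:
        violations.append("rotational difference")
    if center_dev_prism_diopters > 0.18:
        violations.append("center deviation")
    if lum_diff_pct > 30.0:
        violations.append("luminance difference")
    return violations   # empty list: within all cited tolerances

print(rivalry_risk(5.0, 14.0, 1.0, 2.0, 0.05, 35.0))
# ['vertical misalignment', 'luminance difference']
```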

Fig. 2.4

Left-eye view and right-eye view of stimuli that provoke binocular rivalry due to differences in orientation and size (i.e., spatial frequency) of the bars making up the patterns. To induce rivalry, set up a viewing arrangement (try a pocket mirror—and see Fig. 3.2) in which the left eye’s view is presented only to the left eye, and the right eye’s view is presented only to the right eye. The visibility of the images in your two eyes will fluctuate, with one eye’s view being visible while the other eye’s view is invisible and suppressed, which reverses over time. Note that rivalry can be induced with more subtle differences between the images seen by the two eyes; see text for details

When wavelengths are different enough to produce percepts of different hues, such differences can provoke rivalry (Hollins & Leung, 1978). This has implications for stereo displays that employ the anaglyph technique in which the two eyes’ views are separated via the use of different bands of wavelengths (e.g., red images to one eye, blue or green images to the other eye). The anaglyph technique may be prone to inducing binocular rivalry. In this author’s experience, as much as 15 % of individuals with normal stereo vision may experience chromatic rivalry when the anaglyph technique is used.

The inhibition provoked by binocular rivalry occurs at many levels of the visual system (Blake, 2001); it can make visual processing unstable and unpredictable, and it can impair the ability of observers to visually guide and direct attention to targets in the visual field (Schall, Nawrot, Blake, & Yu, 1993). It is therefore important to ensure that the two eyes’ views of a stereo display are fusible and that no rivalry is provoked by misalignment or distortion of the two display channels and/or optical systems.

4 Spatio-Temporal Frequency Processing

When discussing certain aspects of binocular vision and stereoscopic depth perception, it is necessary to cover the visual processing of the spatial frequency and temporal frequency of luminance modulation. One of the basic visual abilities is the detection of luminance contrast in space and in time. This processing is usually placed within the context of ‘frequency filtering’: the idea that the early visual system performs a filtering operation on the spatial and temporal distribution of luminance within the visual field (e.g., modeled as Fourier analysis), filtering the retinal image according to the spatial-frequency and temporal-frequency content of stimulation, where frequency is defined as the rate of luminance modulation. At a higher stage of processing, the visual system is thought to integrate the frequency information into a composite that represents various objects and their movements.

The human visual system can process spatial frequencies from about 30 cycles per degree (cyc/deg) of visual angle (20/20 vision) or higher at the high end down to about 0.1 cyc/deg at the low end, depending on conditions (Campbell & Robson, 1968; De Valois & De Valois, 1988). This range of spatial frequencies is processed by different sets of neurons, each of which responds to a smaller band of frequencies (i.e., a spatial-frequency visual ‘channel’). Some sets of neurons, the high spatial-frequency channels, respond to high spatial frequencies, which correspond to fine spatial detail; such neurons possess high spatial acuity (and respond to the uppermost waveforms in the top panel of Fig. 2.5). Other sets of neurons, the low spatial-frequency channels, respond to low spatial frequencies, which correspond to coarse spatial detail; such neurons possess poor spatial acuity (and respond to the lowermost waveforms in the top panel of Fig. 2.5). The collection of channels represents neural responses to the entire range of spatial frequencies, responses which are integrated into composite representations of objects and elements in the visual field at higher stages of processing.
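
The channel idea can be illustrated computationally. The sketch below is a toy model, not a physiological one; the octave-band boundaries are arbitrary assumptions. It decomposes a one-dimensional luminance profile into spatial-frequency bands via the Fourier transform, mimicking a bank of spatial-frequency channels.

```python
import numpy as np

def channel_responses(luminance, degrees_of_visual_angle,
                      bands_cyc_deg=((0.1, 0.5), (0.5, 2.0),
                                     (2.0, 8.0), (8.0, 32.0))):
    """Band-limited 'channel' energies of a 1-D luminance profile."""
    n = luminance.size
    spectrum = np.fft.rfft(luminance - luminance.mean())       # drop mean level
    freqs = np.fft.rfftfreq(n, d=degrees_of_visual_angle / n)  # in cyc/deg
    responses = {}
    for lo, hi in bands_cyc_deg:
        mask = (freqs >= lo) & (freqs < hi)        # pass only this band
        band = np.fft.irfft(spectrum * mask, n=n)  # band-limited image
        responses[(lo, hi)] = np.sqrt(np.mean(band ** 2))  # RMS "energy"
    return responses

# A 4 cyc/deg grating spanning 10 degrees should excite the 2-8 cyc/deg band:
x = np.linspace(0.0, 10.0, 2048, endpoint=False)   # degrees of visual angle
grating = np.sin(2.0 * np.pi * 4.0 * x)
print(channel_responses(grating, 10.0))
```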

Fig. 2.5

Depiction of different spatial frequencies (upper panel) and temporal frequencies (bottom panel). In the figure, the x-axis represents space (degrees of visual angle; top panel) or time (seconds; bottom panel) and the y-axis represents relative luminance level (i.e., absolute position along the y-axis is to be discounted). The top panel depicts different rates of luminance modulation across space, or different spatial frequencies (in units of cycles per degree, or cyc/deg), and the bottom panel depicts different rates of luminance modulation in time, or different temporal frequencies (in units of cycles per second, or Hz). In each panel, a range of frequencies is depicted, from a low frequency positioned at the bottom of each panel to a high frequency positioned at the top of each panel; the frequencies are offset from one another along the y-axis arbitrarily

The visual system can process temporal frequencies from about 50–60 cycles per second (Hz) at the high end down to 0 Hz (i.e., a steady-state stimulus) at the low end, depending on conditions (De Lange, 1952, 1954; Kelly, 1971). Some sets of neurons (high temporal-frequency channels) respond to high rates of temporal variation in luminance and thus possess high temporal acuity (they respond to the upper waveforms in the bottom panel of Fig. 2.5). Other sets of neurons (low temporal-frequency channels) respond to lower rates of temporal variation and thus possess poor temporal acuity (they respond to the lower waveforms in the bottom panel of Fig. 2.5). At higher stages of processing, the visual system integrates temporal-frequency information into composite representations of movement and temporal structure.

Across the spatio-temporal frequency spectrum, sensitivity to high spatial frequencies is typically associated with sensitivity to lower temporal frequencies, and sensitivity to low spatial frequencies is associated with sensitivity to higher temporal frequencies. High spatial acuity and low temporal acuity are characteristics of a pathway that projects from the central retina to higher visual cortical areas (the ventral cortical stream, or VCS) and that detects small binocular disparities (small disparity limit of fusion, fine depth discrimination). Low spatial acuity and moderate-to-high temporal acuity are characteristics of a pathway that projects from the central and peripheral retina to different higher cortical areas (the dorsal cortical stream, or DCS) and that detects large disparities (large disparity limit of fusion, poor depth discrimination). These two pathways, the VCS and DCS, are discussed in more detail below.

The spatial frequency-temporal frequency content of displayed information will determine the range of available disparities that can be fused and processed by the binocular visual system. This topic is discussed more fully in Chap. 6.

5 Visual Pathways

This section briefly covers the visual pathways in primate vision with a particular focus on stereo processing. The functional significance of these visual pathways for human factors issues will be discussed in subsequent chapters. To anticipate, we will learn, for example, that the high spatial acuity/low temporal acuity pathway (ventral cortical stream) that detects small binocular disparities, and therefore supports performance on tasks such as fine stereo depth discrimination, may be impaired by spatial multiplexing methods that entail decreased display spatial resolution. On the other hand, the low spatial acuity/moderate or high temporal acuity pathway (dorsal cortical stream) that detects large disparities, and therefore supports performance on tasks such as heading control, may be impaired by temporal multiplexing (field sequential) methods that involve decreased display temporal resolution.

In a basic sketch of the primate visual system (Fig. 2.6; see Blake and Sekuler, 2005, for an overview), visual processing begins in the retina with light being transduced into neural signals by the rods and cones. From the retina, signals project to layers of the thalamus in an area called the lateral geniculate nucleus, or LGN. In this projection, cells from the inner half of the retina of the left eye (the nasal hemi-retina) cross the midline of the body at the optic chiasma and combine with cells from the outer half of the retina of the right eye (the temporal hemi-retina) to project to the LGN on the right side of the body (in the right hemisphere of the brain). Because the left half of the visual field projects onto the nasal hemi-retina of the left eye and the temporal hemi-retina of the right eye, both of which project to the LGN on the right side, information located in the left visual field projects to the right side of the brain. Likewise, cells from the nasal hemi-retina of the right eye cross at the optic chiasma and combine with cells from the temporal hemi-retina of the left eye to project to the LGN on the left side of the brain; thus, information located in the right visual field projects to the left side of the brain.
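
The routing logic described above reduces to a small lookup, sketched below for illustration (the names are mine, not the chapter’s): nasal fibers cross at the chiasma, temporal fibers do not, so each half of the visual field ends up in the contralateral hemisphere.

```python
def lgn_hemisphere(eye, hemiretina):
    """Hemisphere of the LGN receiving a given (eye, hemi-retina) pair."""
    # Nasal fibers cross the midline at the optic chiasma; temporal do not.
    crosses = (hemiretina == "nasal")
    if eye == "left":
        return "right" if crosses else "left"
    else:  # right eye
        return "left" if crosses else "right"

# The left visual field falls on the left eye's nasal hemi-retina and the
# right eye's temporal hemi-retina; both routes end in the right hemisphere.
print(lgn_hemisphere("left", "nasal"), lgn_hemisphere("right", "temporal"))
# right right
```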

Fig. 2.6

Basic sketch of a top-down view of the primate visual system. Visual processing begins in the retina (L.E. = left eye; R.E. = right eye). From the retina, signals project to layers in the thalamus in an area called the lateral geniculate nucleus (or body), or LGN. In this projection, cells from the inner half of the retina of the left eye (nasal hemi-retina) cross the midline of the body at the optic chiasma and combine with cells from the outer half of the retina from the right eye (temporal hemi-retina) and project to the LGN in the right hemisphere of the brain. Because the left half of the visual field projects onto the nasal hemi-retina of the left eye and the temporal hemi-retina of the right eye, both of which project to the LGN on the right side, information located in the left visual field projects to the right side of the brain. Cells from the nasal hemi-retina of the right eye cross at the optic chiasma and combine with cells from the temporal hemi-retina of the left eye and project to the LGN on the left side of the brain. The same organizational scheme thus applies to the right half of the visual field: the right half of the visual field projects to the LGN on the left side, thus information located in the right visual field projects to the left side of the brain. From the LGN, signals project into the occipital lobe of the visual cortex in area V1 in both hemispheres

From the LGN, signals project into the occipital lobe of the cortex in area V1 in both hemispheres. (Other pathways projecting to subcortical structures will not be discussed.) From V1, signals project into area V2, also located in the occipital lobe (not shown). From V2, signals project to various areas in parietal and temporal cortex, such as areas V3, V4, and V5 (also not shown). The neural projections from the retina into visual cortex are thought to comprise two parallel pathways, the ventral cortical stream and the dorsal cortical stream, as shown in Fig. 2.7 (Livingstone & Hubel, 1988; Milner & Goodale, 1995; Schiller, Logothetis, & Charles, 1990; Ungerleider & Mishkin, 1982).

Fig. 2.7

Drawing showing the left side of the human brain. The rightmost region of the drawing (near the origin of the two arrows) is the occipital cortex and area V1, from which two functionally distinct pathways emerge. The ventral cortical stream (V.C.S. in the figure) projects into the temporal lobe and is thought to be involved in the functional analysis of spatial pattern information and object identification. The dorsal cortical stream (D.C.S. in the figure) projects into the parietal lobe and is thought to be involved in the functional analysis of motion information, heading control during locomotion, and action priming. See text for detail

6 Parallel Pathways

The ventral cortical stream (VCS, or V.C.S. in Fig. 2.7) draws connections mainly from the central retina and projects to areas in visual cortex such as V1, V2, and V4. The neurons in this pathway have a sluggish, sustained response, high spatial acuity, poor temporal acuity, and chromatic sensitivity. The VCS is thought to be involved in the functional analysis of spatial pattern information, object identification, and conscious perception (Milner & Goodale, 1995).

The dorsal cortical stream (DCS, or D.C.S. in Fig. 2.7) draws connections from both the central and peripheral retina and projects to areas in visual cortex such as V1, V2, V5, and MST. Cortical areas in the dorsal stream process optic flow information for heading control (Peuskens, Sunaert, Dupont, Van Hecke, & Orban, 2001) and biological motion (Grossman & Blake, 2001; Grossman et al., 2000), and they integrate vision with action (e.g., Ts’o & Roe, 1995; Van Essen & DeYoe, 1995; Yabuta, Sawatari, & Callaway, 2001). The neurons in this pathway have a transient response, high temporal acuity, poor spatial acuity, and no chromatic sensitivity. The DCS is thought to be involved in the functional analysis of motion information, heading control during locomotion, and action priming (Milner & Goodale, 1995).

Farivar (2009) suggested that the ventral and dorsal cortical streams are not fully dissociated, anatomically or functionally. The dorsal stream processes dynamic 3D shape cues, which suggests that it plays a role in object recognition, a role originally thought to belong exclusively to the ventral stream. Farivar concluded that the dorsal stream extracts 3D shape information from certain dynamic cues and relates those representations to cue-invariant and view-invariant representations of objects in the ventral stream, an inter-pathway interaction (see also Hegde & Felleman, 2007).

Both pathways in the primate visual system, ventral and dorsal, contain areas responsive to binocular disparity (e.g., Backus, Fleet, Parker, & Heeger, 2001; Burkhalter & Van Essen, 1986; Georgieva, Peeters, Kolster, Todd, & Orban, 2009; Patten & Murphy, 2012; Tsao et al., 2003). For example, in a functional magnetic resonance imaging (fMRI) study, Likova and Tyler (2007) reported that a region of the dorsal stream in human cortex was activated by a purely disparity-defined stimulus moving along the z-axis.

As mentioned above, the functional significance of disparity processing in both the ventral and dorsal streams for human factors issues will be discussed in subsequent chapters. We now turn to a discussion of the stimulus arrangement for creating stereoscopic displays.