1 Introduction

By far the most natural way to move through our environment is locomotion. However, the seemingly effortless act of walking is an extremely complex process, and a comprehensive understanding of it involves scientific and clinical studies at different levels of analysis. Locomotion requires preparing the body posture before movement is initiated, initiating and terminating movement, coordinating the rhythmic activation patterns of the muscles of the limbs and of the trunk, and maintaining dynamic stability of the moving body [77]. There is also a need to modulate the speed of locomotion, to avoid obstacles, to select appropriate, stable foot placement, to accommodate different terrains, to change the direction of locomotion, and to guide locomotion towards endpoints that are not visible from the start. To this end, locomotion engages many different sensory systems, such as the visual, proprioceptive, auditory and vestibular systems, making it a particularly interesting multisensory problem. Importantly, these are also factors that must be considered when developing a realistic walking interface to be used with Virtual Reality (VR).

Although many of these aspects of locomotion have received extensive scientific attention, much of the earlier laboratory-based research, though highly valuable, has lacked ecological validity. Ultimately, scientific research should, when possible, evaluate human behaviors as they occur under natural, cue-rich, ecologically valid conditions. To this end, VR technology has provided researchers with the opportunity to present natural, yet tightly controlled, stimulus conditions, while also maintaining the capacity to create unique experimental scenarios that would (or could) not occur in the real world [16, 24, 68, 105]. An integral part of VR is to allow participants to move through the Virtual Environments (VE) as naturally as possible. Until recently, a very common way of having observers navigate through VEs was to have them manipulate unnatural control devices such as joysticks, computer mice, and keyboards. Despite having some advantages over mere visual stimulation, such rudimentary motion control devices are severely limited. With such devices, the physical actions that drive self-motion are very different from the actions of natural locomotion they are intended to replace (e.g., clicking a mouse button to move forward versus stepping). Moreover, the sensory input is mainly visual, and other important sensory information is lacking, notably proprioceptive feedback from the legs and vestibular feedback. Fortunately, more natural locomotion interfaces, such as bicycles, treadmills and fully-tracked free-walking spaces, are becoming more common (see [24] for a review). Although these solutions bring locomotion much closer to real-life movement, they are still constrained in important ways. In the case of the bicycle, for instance, there is no absolute one-to-one relationship between the metrics of visual space and those of the proprioceptive movements, because the scale of one pedal rotation is unknown (it depends on the gear, for instance). Fully-tracked walking spaces are constrained by the size of the actual space within which they are contained. Treadmill setups are restrictive, as most of them are rather small [94] and only allow walking in one direction. Indeed, in everyday navigational tasks we rarely walk completely straight over extended periods of time. In short, it remains difficult to allow people to walk freely through large-scale VEs in an unconstrained manner.

It is this unsatisfactory situation that prompted some of the work reported in this volume, and it likewise prompted the CyberWalk project. The goal of this project was the development of a novel, multimodal, omnidirectional walking interface with, at its core, a 4 \(\times \) 4 m omnidirectional treadmill. The project encompassed an international consortium dedicated to both scientific and technological research. The CyberWalk platform is the first truly omnidirectional treadmill of its size that allows for natural walking in any direction through arbitrarily large Virtual Environments. It is a major step towards having a single setup that allows for the study of the many facets of human locomotion, ranging from the biomechanical to the cognitive processes involved in navigating large areas. The platform consists of segmented belts mounted on two large chains in the shape of a torus, which allows it to move the walking surface in both horizontal directions and thereby enables indefinite omnidirectional walking and turning (see Fig. 6.7). It is integrated with additional VR capabilities so that a virtual world is presented through a head-mounted display (HMD) and updated as a function of the movements of the user. The platform is described more fully in [95] and in Sect. 6.5 of this chapter. More detailed descriptions of specific technological and engineering aspects of the platform can be found elsewhere [29, 87–89, 94, 112].

The technological development of the platform had a strong human-centered approach and was guided by human gait and psychophysical research conducted at the Max Planck Institute for Biological Cybernetics (MPI), one of the consortium partners. Here we report on a selection of these studies. Since a major objective was to develop a platform that enables natural walking, we studied basic gait parameters during natural, unconstrained outdoor walking as a general reference. The CyberWalk platform has a treadmill at its core, and thus we investigated potential differences between normal overground walking and treadmill walking. Studies also focused on the multisensory processes at play during human walking. While there is a wealth of research on the role of vision in locomotion, relatively little is known about the interactions between the different non-visual senses. Consequently, a series of studies was conducted to examine the interaction between vestibular and proprioceptive information during walking. Finally, a number of studies investigated long-range human navigation with and without the use of visual information.

2 Gait and Biomechanics

One of the major goals of the CyberWalk project was to enable natural and unconstrained walking on a treadmill-based system. This challenge raised many questions, two of which we highlight here. First, in order to enable natural and unconstrained gait, a description of typical gait characteristics was needed. For instance, at what speed do people normally walk, how do they start and stop walking, and how often and how much do they turn? Second, there is still a debate in the literature as to whether gait characteristics during treadmill walking are the same as during overground walking. Thus, we conducted a series of studies to address these questions. The results were intended to place tangible constraints on a system intended to support natural walking (e.g., on the accelerations required and the size of the walking surface).

2.1 Natural Unconstrained Walking

There is, in fact, very little literature on natural unconstrained walking. One reason for this is a previous lack of measurement technologies suitable to capture gait with sufficient accuracy. In recent years, however, Global Positioning Systems (GPS) have been providing a promising solution to this problem [70, 101]. For instance, Terrier and colleagues used a highly accurate GPS to show that inter- and intra-subject variability of gait characteristics can be measured outdoors [107, 108]. Moreover, GPS data can be combined with Inertial Measurement Unit (IMU) technologies to develop highly accurate measurement systems with high data rates [104]. Nevertheless, the few available studies that report GPS data are still highly constrained in a fashion reminiscent of laboratory research. For instance, participants are often asked to follow a modulated pace/frequency [86, 106, 108], which is known to significantly increase energy cost [115], or to walk/run along a predefined path [32, 104, 106, 107]. In studies where walking behavior was not constrained, data were collected over several days at very low sampling rates to form a picture of overall “behaviors” rather than basic gait parameters such as step length and frequency [26, 70, 76, 85, 110].

We conducted a study of unconstrained outdoor human walking that differed from previous studies in that we observed people walking for an extended period of time (1 h) and completely at their own volition [97]. We measured the position of the trunk and the rotational rates of the trunk and head. The high accuracy required to capture trunk position outdoors was achieved by using a Carrier-Phase Differential GPS setup (C-DGPS). The C-DGPS utilizes a secondary static GPS unit (master station) to correct for errors in a mobile rover GPS (Novatel Propak, V3–L1). The rover was combined with an aviation-grade, extremely light and compact antenna that was mounted onto a short pole fixed to a frame inside a backpack. Data were output at 5 Hz with a typical accuracy between 2 and 10 cm, depending on environmental conditions (tree cover, reflections, etc.). For additional measures of trunk movement we used a 6-axis IMU (Crossbow Technology, IMU300), with measurement ranges of \(\pm 19.6\) m/s\(^{2}\) and \(\pm 100\,^\circ \!/{\text{ s }}\). The measuring unit was rigidly fixed to the bottom of the GPS antenna frame and logged data at 185 Hz. To measure head movements we used a custom-built 3-axis IMU (ADXL202 and ADXRS150, logging at 1028 Hz) that was mounted on a head brace worn by the participants (total weight of less than 150 g). A strobe signal was used to align the data streams in post-processing. All devices, plus data loggers and battery packs, fit into the backpack (just under 9 kg in total).

A task was designed that would induce the normal variability in walking behavior without imposing a stereotypical walking pattern. Fourteen participants walked through a residential area while searching for 30 predefined objects (e.g., street signs, statues) using a map of the area. The locations of the objects were indicated on the map by flags, and participants were asked to note the time when they reached the location of an object. They were instructed to optimize the order in which they visited the targets such that they would visit the largest number of objects within one hour. Using recordings of the 3D position of the trunk, a wide range of walking parameters was computed, including step length (SL), step frequency (SF), and their ratio SL/SF, also known as the walk ratio (WR). This ratio has been found to be invariant within a range of walking speeds [48, 90], and has been linked to optimal energy expenditure [56, 115]. Evidence of invariance in WR has been reported for walking at manipulated speeds along a 100 m straight athletic track [107] and a 400 m oval track [108], but never under free walking conditions. We also measured walking speed during straight and curved walking trajectories, as well as starting and stopping behavior. Walking speed was calculated from the distance between consecutive trunk positions in the horizontal (GPS) frame. Table 6.1 presents some individual and mean basic gait parameters computed from the GPS data. For a complete description of the results please refer to [97].
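
How such parameters follow from the raw measurements can be summarized in a few lines. The following is a minimal sketch, assuming a 5 Hz horizontal trunk-position trace and step-event times (e.g., heel strikes detected from the IMU); the function and variable names are illustrative, not the actual analysis pipeline of [97].

```python
import numpy as np

def gait_parameters(xy, fs_gps, step_times):
    """Basic gait parameters from a horizontal trunk trace and step events.

    xy:         (N, 2) array of horizontal GPS positions in meters
    fs_gps:     GPS sampling rate in Hz (5 Hz in the study)
    step_times: 1-D array of step-event times in seconds (e.g., from IMU)
    """
    # Walking speed: distance between consecutive positions times the rate.
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) * fs_gps
    mean_speed = speed.mean()

    # Step frequency (steps/s) from the intervals between step events.
    step_freq = 1.0 / np.diff(step_times).mean()

    # Step length and walk ratio follow from speed = SL x SF.
    step_len = mean_speed / step_freq
    walk_ratio = step_len / step_freq  # in m/(steps/s)
    return mean_speed, step_len, step_freq, walk_ratio
```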

Table 6.1 Individual and mean descriptive statistics of 1 h of unconstrained walking

Results demonstrated that when people walked on a straight path, the average walking speed was 1.53 m/s, very similar to field survey data [41, 65]. Perhaps not surprisingly, walking speed decreased when people walked on a curved path. The magnitude of the decrease depended on both the radius and the angle of the turn. Walking speed decreased linearly with turn angle, from 1.32 m/s at 45\(^\circ \) turns to around 1 m/s at complete turnarounds (i.e., 180\(^\circ \)). These values are in close agreement with those observed in a controlled experiment conducted in a fully-tracked indoor lab space [98]. As for turn radius, walking speed was approximately constant for turns with radii \({\ge }10\) m (1.49 m/s) and for turns with radii \({\le }5\) m (1.1 m/s), while between these radii walking speed changed in a fairly linear fashion.

Consistent with previous literature [90, 107], we found that WR was relatively invariant with respect to walking speed. After correcting for participant height (see [90]), we found that most of the adjusted values of WR were close to 0.4 m/(steps/s). There were some outliers at slower walking speeds (i.e., below 1 m/s), which is again consistent with earlier reports [90], and the WR at these slower walking speeds was also more variable. The relative invariance of WR in natural (and controlled) walking underlines its usefulness not only as a clinical diagnostic tool for detecting abnormal gait but also for the scientific study of human locomotion in general.
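
To see what this invariance implies for the individual gait parameters, note that walking speed is the product of step length and step frequency, so a fixed walk ratio determines both for any given speed:

\[ v = SL \cdot SF, \qquad WR = \frac{SL}{SF} \;\Rightarrow\; SL = \sqrt{WR \cdot v}, \qquad SF = \sqrt{v/WR}. \]

As a worked example using the group-mean values reported above, at the straight-path speed of \(v = 1.53\) m/s with \(WR = 0.4\) m/(steps/s), this gives \(SL \approx 0.78\) m and \(SF \approx 1.96\) steps/s.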

Fig. 6.1
figure 1

Starting and stopping. The time (a) and accelerations (b) during starts (solid line) and stops (dotted line) as a function of steady walking speed. Shaded regions indicate standard errors of the means. Also plotted in panel (a) are the individual results (small dots) and mean results (large dots) from two earlier studies. To illustrate the general trends, linear regressions are shown for all individual results (black lines)

The time that it takes to reach a steady walking speed depends on the desired speed (see Fig. 6.1). It took an average of 2 and 3 s to reach walking speeds of 0.5 and 2 m/s, respectively. The relationship between stopping time and walking speed was very much the same. This dependence on walking speed, however, contradicts findings by Breniere and Do [11], who found that the time it takes to reach the desired walking speed is independent of that speed. A dependence on walking speed has been found by others [60, 71], although we observed that in natural walking humans take more time to start and stop than in laboratory settings. To illustrate these differences, Fig. 6.1 also includes the data from Breniere and Do [11] and Mann et al. [71] together with our own results. One possible cause for this difference is the protocol used in laboratory experiments [96]. Specifically, whereas earlier studies typically used an external “go” signal, our participants were free to start and stop as they pleased.

2.2 Overground Versus Treadmill Walking

While treadmills allow for the observation of walking behavior over extended periods of time, it is still a matter of debate as to whether gait during treadmill walking differs from gait during overground walking [3]. There is evidence that treadmill walking can significantly alter the temporal [3, 30, 100, 114], kinematic [3], and energetic [78] characteristics of walking. One apparently robust finding is that walking on a (motorized) treadmill increases step frequency (cadence) by approximately 6 % [3, 30, 100, 114]. Many researchers have therefore concluded that motorized treadmills may produce misleading or erroneous results and that care should be taken in their interpretation. At the same time, there are also studies that do not find any significant differences between overground and treadmill walking [75, 84]. Two possible sources for this discrepancy that we have addressed in our research are differences between walking surfaces and the availability of relevant visual feedback about self-motion during treadmill versus overground walking.

Treadmills are typically more compliant than the regular laboratory walking surfaces used in past studies, and it has been speculated that it is this difference in surface stiffness that affects locomotion patterns when directly comparing treadmill walking with overground walking (e.g., [30, 31]). Such speculations are warranted by other research showing significant effects of walking surface compliance on basic gait parameters such as step frequency and step length [72]. Interestingly, the one study that compared overground with treadmill walking using similar walking surfaces found no differences in gait parameters [84].

Another potential factor to consider is that participants typically have visual information available during walking. During natural, overground walking, dynamic visual information (i.e., optic flow) is consistent with the non-visual information specifying movement through space. During treadmill walking, however, a considerable sensory conflict is created between the proprioceptive information and the visual (and vestibular) information (see also Sect. 6.3.2): the former informs participants that they are moving, yet the latter informs them that they are in fact stationary. Although it is not obvious how such a conflict might specifically alter gait parameters, there is evidence that walking parameters are affected by whether visual feedback is available. For instance, Sheik-Nainar and Kaber [91] evaluated different aspects of gait, such as speed, cadence, and joint angles, when walking on a treadmill. They evaluated the effects of presenting participants with congruent, updated visuals (via an HMD projecting a simulated version of the lab space) compared to stationary visuals (the real-world lab space with a reduced FOV to approximate the HMD). These two conditions were compared to natural, overground walking. Results indicated that while both treadmill conditions caused participants to walk slower and take smaller steps, when optic flow was consistent with the walking speed, gait characteristics more closely approximated those of overground walking. Further, Hallemans et al. [50] compared the gait patterns of people with and without a visual impairment, as well as those of normally sighted participants under full-vision and no-vision conditions. Results demonstrated that participants with a visual impairment walked with a shorter step length than sighted individuals, and that sighted participants who were blindfolded showed similar changes in gait (see also [74]). Blindfolded participants also walked slower and with lower step frequencies than when full vision was available, which was hypothesized to reflect a more cautious walking strategy in the absence of visual information. However, it is not known whether walking is differentially affected by the presence and absence of congruent visual feedback.

Humans have a strong tendency to stabilize the head during walking (and various other locomotor tasks) in the sense that they minimize the dispersion of the angular displacement of the head [13]. Interestingly, visual feedback does not appear to be important for this stabilization [80]. However, the walking conditions under which this has been studied have been very limited. Participants were asked to walk at their own preferred speed or to step in place [80]. Very little is known about the generality of this lack of an effect of vision and whether there are differences between overground and treadmill walking.

Fig. 6.2
figure 2

The circular treadmill (CTM) at the Max Planck Institute for Biological Cybernetics. It consists of a large motorized wooden disc (Ø = 3.6 m) covered with a slip-resistant rubber surface and a motorized handlebar. The disc and handlebar can be actuated independently of each other. The disc’s maximum angular velocity is 73 \(^\circ \)/s, and the handlebar can reach a maximum velocity of 150 \(^\circ \)/s. Walking on the CTM is natural and intuitive and does not require any explicit training (see also [42]). For the overground versus treadmill study (Sect. 6.2.2) the setup was equipped with a TrackIR: Pro 4 (NaturalPoint) optical tracking device for tracking the position and orientation of the head. It was fixed on top of the depicted laptop monitor that was mounted in front of the participant. The device has a \(46^\circ \) field of view and provides 6 DOF tracking with mm and sub-degree precision for position and orientation, respectively. For the experiments described in Sects. 6.3.2 and 6.2.3, a custom-built pointing device was mounted on the handlebar within comfortable reaching distance of the right hand (at a radius of 0.93 m from the center of the disc). The pointing device consisted of a USB mechanical rotary encoder (Phidgets, Inc.) with a pointing rod attached and encased in plastic (see also [42]). Note that the CTM has since moved to the department of Cognitive Neurosciences at Bielefeld University. (Photograph courtesy of Axel Griesch)

Fig. 6.3
figure 3

Overground versus treadmill walking. Group means with SEM for a step length, b step frequency, c walk ratio, d head sway from left to right, and e vertical head bounce, as a function of walking speed (in m/s). Walking was either overground (OG) or stationary walking on the treadmill (TM), and there was either visual feedback (V\(+\)) or not (V\(-\)). Fourteen participants, between the ages of 19 and 33 (7 females), walked 3 times for 30 s at a constant velocity for each condition. Dependent measures were obtained from measurements of the position of the head, and step length and frequency were corrected for individual heights (see also Sect. 6.2.1)

We investigated the effects of walking surface and visual feedback on basic gait parameters and on the movement of the head in an integrated manner. This experiment was conducted using a circular treadmill (CTM) at the MPI (see Fig. 6.2 and its caption for additional details). The effect of surface stiffness on gait characteristics was controlled for by having participants walk in place and walk through space on the same treadmill surface. Specifically, overground walking consisted of simply leading the participant around on the stationary disc using the motorized handlebar, while stationary (“treadmill”) walking consisted of walking in place on the moving disc without moving through space. If the difference in surface were a major determinant of the previously reported differences between overground and treadmill walking, then we would expect this difference to disappear in this experiment. Visual feedback was also manipulated by having people walk with or without a blindfold. Walking speeds were controlled by moving either the disc or the handlebar at one of four velocities (see the caption of Fig. 6.3) for the stationary and walking-through-space conditions, respectively. The results demonstrated that there were indeed very few differences between the gait parameters measured during stationary walking and overground walking. Step length (Fig. 6.3a) and walk ratio (Fig. 6.3c) were comparable across walking speeds. The exception was that at the slowest walking speed (0.7 m/s), the overground walking condition produced larger step lengths and walk ratios than stationary walking. This particular effect is consistent with previous findings of higher walk ratios at slower overground walking speeds (e.g., [90]). The higher walk ratio at the slowest walking speed is likely due to an increase in step length, given that step frequency was virtually identical across all conditions (see Fig. 6.3b). Results also demonstrated that during stationary walking there was a significant decrease in head sway (Fig. 6.3d) and head bounce (Fig. 6.3e) compared to overground walking. As for the effect of vision, the results demonstrated that, irrespective of the walking condition, step length and frequency were unaffected by the presence or absence of visual feedback. This contrasts with the above-described studies that did find significant decreases in both step length and frequency [50, 74].

In summary, with respect to basic gait parameters, there were hardly any differences between overground walking and stationary walking. Most notable was the complete absence of an effect on step frequency, which has typically been the most consistently observed difference in earlier studies. Our results are, however, consistent with several earlier studies that also did not find a difference between overground and treadmill walking [75, 84], and lend support to the notion that previously reported differences may be (partially) due to the fact that walking surfaces were not controlled for. Another interesting finding is that stationary walking significantly reduced the lateral (sway) and vertical (bounce) head movements. It is currently unclear what causes this change. However, head stabilization behavior is thought to help organize the inputs from the visual, vestibular, and even somatosensory systems [13]. It is possible that during treadmill walking head movements are reduced in order to establish a more stable reference frame, because of the registered discrepancy between the proprioceptive sense, which signals movement, and the vestibular and visual senses, which signal a stationary position. As for visual feedback, the only statistically reliable effect of the visual manipulation was a reduction of the vertical head movements at the highest walking speeds during overground walking as compared to stationary walking. Removing visual feedback also produced some trends in the gait parameters (increases in step frequency and decreases in step length and walk ratio), although these were not statistically significant.

2.3 Potential Implications for CyberWalk

One specific finding that impacted the design specifications of the CyberWalk platform was that walkers took at least 2 s to accelerate to even the very slow speed of 0.5 m/s. As we will see in the following section, providing vestibular inputs by allowing movement through space is an important part of simulating natural locomotion. From this perspective, the CyberWalk platform therefore ideally needed to be big enough to accommodate such start-up accelerations. The finding that stationary walking does not change the main walking parameters of step length and step frequency is encouraging, as it means that walking data collected on the treadmill should be representative of normal gait. This also affected the design of the platform, albeit in a more indirect fashion. We surmised that the platform should ideally have a surface that is as stiff as possible, since the most typically studied walking surfaces are very stiff (e.g., sidewalks). Head movements, on the other hand, did change during stationary walking in that they were less pronounced than during overground walking. This might seem advantageous in light of the fact that on the CyberWalk, head-mounted displays (HMDs) are the primary means of visually immersing the user in VR, and less head bounce would therefore reduce visual motion artifacts and potential tracking lags for rapid movements. However, it does raise the possibility that the normal head stabilization function during walking (e.g., [80]) may be different during treadmill walking, which may affect the role of the proprioceptive receptors in the neck and also the role of coincident vestibular inputs.

3 Multisensory Self-Motion Perception

A veridical sense of self-motion during walking is a crucial component for obtaining ecological validity in VR. Of particular interest to us is the multisensory nature of self-motion perception. Information about the extent, speed, and direction of egocentric motion is available through most of our sensory systems (e.g., visual, auditory, proprioceptive, vestibular), making self-motion perception during locomotion a particularly interesting problem with respect to multisensory processing. During self-motion perception there are important roles for the visual system (e.g., optic flow), the vestibular system (the inner-ear organs, including the otoliths and semicircular canals), the proprioceptive system (the muscles and joints), and efference copy signals representing the commands issued to generate our movements. There is also some suggestive evidence for a role of the auditory system (e.g., [99]) and the somatosensory system (e.g., [33]). Much work has been done to understand how each of these sensory modalities contributes to self-motion perception individually; however, researchers have only recently begun to evaluate how the modalities are combined to form a coherent percept of self-motion, and what the relative influence of each cue is when more than one is available.

3.1 Multisensory Nature of Walking

Since no single sense is capable of operating accurately under all circumstances, the brain has evolved to exploit multiple sources of sensory information in order to ensure both a reliable perception of our environment (see [20]) and appropriate actions based on that perception [37]. A fundamental question in the cognitive neurosciences is what mechanisms the central nervous system uses to merge all of these sources of information into a coherent and robust percept. The brain appears to employ two strategies to achieve robust perception. The first strategy, sensory combination, describes interactions between sensory signals that are not redundant, that is, signals specified in different coordinate systems or units. The second strategy, sensory integration, reduces the variance of redundant sensory estimates, thereby increasing their reliability [37].

Human locomotion is particularly interesting from the perspective of sensory integration as it involves a highly dynamic system, meaning that the sensory inputs are continuously changing as a function of our movements. For instance, with each stride (i.e., from the heel strike of one foot to the next heel strike of the same foot) the head moves up and down twice in a near sinusoidal fashion [62, 106], thereby generating continuously changing accelerations that are registered by the vestibular system. Similarly, with each stride, the musculoskeletal system generates a set of dynamically changing motor signals, the consequences of which are registered by the proprioceptive system. Finally, the visual flow is likewise marked with periodic vertical and horizontal components. Thus, the various pertinent sensory inputs are in a systematic state of flux during walking. Moreover, findings that visual [54], somatosensory [116], and vestibular [6] signals exhibit phase-dependent influences on postural control during walking suggest the interesting possibility that the reliabilities of the sensory signals are also continuously changing and possibly in phase with the different stages of the gait cycle.
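
As a simple idealization of this periodicity (assuming purely sinusoidal vertical head motion, which the cited work only approximates), the vertical head position during steady walking can be written as

\[ z(t) \approx z_0 + A \sin (2\pi f_{SF}\, t), \]

where \(f_{SF}\) is the step frequency (two head oscillations per stride) and \(A\) the oscillation amplitude. The otoliths then register the corresponding vertical acceleration \(\ddot{z}(t) = -A (2\pi f_{SF})^2 \sin (2\pi f_{SF}\, t)\), which, like the proprioceptive and visual signals, waxes and wanes within every step cycle.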

A particularly influential group of models of multisensory integration has considered the problem from the point of view of efficiency. These efforts are often referred to as the “Bayesian approach”, which was originally applied to visual perception (e.g., [15, 17, 64]). It is acknowledged that neural processes are noisy [38] and that, consequently, so are sensory estimates. The goal is then for the brain to come up with the most reliable estimate, in which case the variance (i.e., noise) of the final estimate should be reduced as much as possible. If the assumption is made that the noise attributable to the individual estimates is independent and Gaussian, then the estimate with the lowest variance is obtained using Maximum Likelihood Estimation (MLE) [35]. MLE models have three general characteristics. First, information from two or more sensory modalities is combined using a weighted average. Second, the corresponding weights are based on the relative reliabilities of the unisensory cues (i.e., the inverse of their variances); the cue with the lowest unimodal variance is weighted highest when the cues are combined. Third, as a consequence of integration, the variance of the integrated estimate will be lower than that of either of the individual estimates. There is now mounting evidence that humans combine information from across the senses in such a “statistically optimal” manner (e.g., [37]). Most of this work has been aimed at modeling cue integration between the exteroceptive senses such as vision, haptics, and hearing [2, 4, 12, 35, 36], or within the visuomotor system (e.g., [63, 66]), but very few studies have considered whether the same predictions apply to multisensory self-motion perception.
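
For the two-cue case these three characteristics take a compact form (the standard MLE formulation, e.g., [35]): given unisensory estimates \(\hat{S}_1\) and \(\hat{S}_2\) with variances \(\sigma_1^2\) and \(\sigma_2^2\),

\[ \hat{S} = w_1 \hat{S}_1 + w_2 \hat{S}_2, \qquad w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2}, \qquad \sigma_{12}^2 = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}, \]

so that the combined variance \(\sigma_{12}^2\) is never larger than the smaller of the two unisensory variances.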

The Bayesian perspective is now just starting to be considered in the field of human locomotion (e.g., [25]), and self-motion in particular [18, 19, 21, 23, 39, 42]. For instance, a study by Campos et al. [23] highlights the dynamic manner in which optic flow and body-based cues are integrated during walking in the real world. The study shows that the notion of optic flow as an all-inclusive solution to self-motion perception [46] is too simplistic. In fact, when body-based cues (e.g., proprioceptive and vestibular inputs) are available during natural walking, they can dominate over visual inputs in dynamic spatial tasks that require the integration of information over space and time (see also [21] for supporting evidence in VR). Other studies have attempted to look at body-based cues in isolation and investigate how these individual sources interact with visual information. For instance, a number of studies have considered the integration of optic flow and vestibular information for different aspects of self-motion perception (e.g., [19, 39, 40, 51, 61]). Evidence from both humans ([18, 39]; see also [69]) and non-human primates [40, 49] shows that visual-vestibular integration is statistically optimal when making heading judgments. This is reflected by a reported reduction in variance during combined-cue conditions, compared to the response patterns when either cue is available alone. Interestingly, when the visual signal lacks stereoscopic information, visual-vestibular integration may no longer be optimal for many observers [19]. To date, the work on visual-vestibular interactions has been the most advanced with respect to cue integration during self-motion, in the sense that it has allowed for careful quantitative predictions. Studies on the combinations of other modalities during self-motion perception have also started to provide qualitative evidence that supports the MLE model. For instance, Sun et al. [102] looked at the relative contributions of optic flow information and proprioceptive information to human performance on relative path length estimation (see also [103]). They found evidence for a weighted averaging of the two sources, but also that the availability of proprioceptive information increased the accuracy of relative path length estimation based on visual cues. These results are supported by a VR study [21] which demonstrated a higher influence of body-based cues (proprioceptive and vestibular) when estimating walked distances and a higher influence of visual cues during passive movement. This VR study further showed that although both proprioceptive and vestibular cues contributed to travelled distance estimates, a higher weighting of vestibular inputs was observed. These results were effectively described using a basic linear weighting model.

3.2 Integration of Vestibular and Proprioceptive Information in Human Locomotion

Consider walking through an environment that is covered in fog or walking in the pitch dark. While these scenarios render visual information less reliable, evidence shows that humans are still very competent in various locomotion tasks even in the complete absence of vision (e.g., [22, 34, 67, 73, 83, 103, 109]). Past research often reports that when either walking without vision or when passively moved through space, body-based cues are often sufficient for estimating travelled distance [7, 21, 24, 51, 58, 67, 73, 92, 102, 103] and to some extent self-velocity [7, 22, 58, 92].

A series of studies has also looked specifically at the interactions between the two main sources of body-based cues: the proprioceptive system and the vestibular system. Studies that have investigated the role of vestibular and/or proprioceptive information in self-motion perception have done so by systematically isolating or limiting each cue independently. Typical manipulations include having participants walk on a treadmill (mainly proprioceptive information), or passively transporting them through space in a vehicle (mainly vestibular information specifying translations through space). The logic is that walking in place (WIP) on a treadmill produces proprioceptive but no vestibular inputs associated with self-motion through space, while during passive movement (PM) there are vestibular inputs but no relevant proprioceptive information from the legs specifying movement through space. These conditions can then be compared to normal walking through space (WTS), which combines the proprioceptive and vestibular inputs of the unisensory WIP and PM conditions. For instance, Mittelstaedt and Mittelstaedt [73] reported that participants could accurately estimate the length of a travelled path when walking in place (proprioception), or when being passively transported (vestibular). In their study, even though both cues appeared sufficient in isolation, when both were available at the same time (i.e., when walking through space) proprioceptive information was reported to dominate vestibular information. But what this study could not specify was by how much it dominates or, more generally, what the relative weights of the individual cues are.

There is, however, a fundamental problem that makes it very difficult to assess cue weighting and to study the multisensory nature of self-motion in general. The problem is that there is a very tight coupling between vestibular and proprioceptive information during normal walking. The two signals are confounded in the sense that under normal circumstances there can be no proprioceptive activity (consistent with walking) without concurrent vestibular excitation. In fact, this strong coupling has led Frissen et al. [42] to argue for a “mandatory integration” hypothesis, which holds that during walking the brain has adopted a strategy of always integrating the two signals. It also leads to substantial experimental difficulty when attempting to obtain independent measures from the individual senses (see also [24]). Consequently, during the often-used “proprioceptive only” walking-in-place condition, vestibular inputs are in fact concurrently present, yet they specify a stationary position. This creates a potential sensory conflict when the aim is to obtain unbiased unisensory estimates. The reverse conflict occurs in the “vestibular only” PM condition, where the proprioceptive input specifies a stationary position. In this case, however, it should be noted that there are numerous instances in which vestibular excitation is experienced without contingent proprioceptive information from the legs, such as whenever we move our head or travel in a vehicle. In other words, in the case of passive movements, the coupling may not be as tight.

Despite the fact that it is difficult to obtain unisensory proprioceptive and vestibular estimates, it is possible to create conditions in which the conflict between the vestibular and proprioceptive cues is much reduced and, moreover, controllable. This enables the relative weighting of the individual cues to be determined. One way is to use a rotating platform in combination with a handlebar that can be moved independently. An early example of this was a platform used by Pick et al. [79], which consisted of a small motorized turntable (radius 0.61 m) with a horizontal handle mounted on a motorized post extending vertically through the center. Using this setup, Bruggeman et al. [14] introduced conflicts between proprioceptive and vestibular inputs while participants stepped around their earth-vertical body axis. Participants always stepped at a rate of 10 rotations per minute (rpm) (constituting the proprioceptive input), but because the platform rotated in the opposite direction, participants were moved through space at various different rates (constituting the vestibular input). They found that when the proprioceptive and vestibular inputs were of different magnitudes, the perceived velocity fell somewhere between the two presented unisensory velocities, suggesting that the brain uses a weighted average of vestibular and proprioceptive information, as predicted by MLE (see also [5]). However, a limitation of this type of relatively small setup is that it only allows participants to perform rotations around the body axis. That is, it allows participants to step in place, which is a very constrained and rather unnatural mode of locomotion with biomechanics that are different from normal walking.

Such restrictions do not apply to the CTM (Fig. 6.2), which allows for full-stride curvilinear walking. This unique setup also allows us to manipulate vestibular, proprioceptive (and visual) inputs independently during walking. In one of our recent studies we assessed multisensory integration during self-motion using a spatial updating paradigm that required participants to walk through space with and without conflicting proprioceptive and vestibular cues [42]. The main condition was the multisensory, “walking through space” condition, during which both the vestibular and proprioceptive systems indicated self-motion. This condition consisted of both congruent and incongruent trials. In the congruent trials, participants walked behind the handlebar while the treadmill disc remained stationary. Thus, the vestibular and proprioceptive inputs conveyed the same movement velocities; in other words, the proprioceptive-vestibular gain was 1.0. In the incongruent trials, systematic conflicts were introduced between the vestibular and proprioceptive inputs. This was achieved by having participants walk at one rate while the disc was moved at a different rate. Specifically, proprioceptive gains of 0.7 and 1.4 were applied to two vestibular velocities (25 \(^\circ \)/s and 40 \(^\circ \)/s). To achieve a gain of 0.7, the disc moved in the same direction as the handlebar but at 30 % of its speed. To achieve a gain of 1.4, the disc moved at 40 % of the handlebar speed but in the opposite direction. We also tested two additional conditions. In the “walking in place” condition, participants walked in place on the treadmill but did not move through space. As in previous studies, participants were instructed to use the proprioceptive information from their legs to update their egocentric position as if they were moving through space at the velocity specified by the CTM. In the “passive movement” condition, participants stood still while they were passively moved by the CTM. Spatial updating was measured using a continuous pointing task similar to that introduced by Campos et al. [22] and Siegle et al. [92], which expanded upon a paradigm originally developed by Loomis and colleagues [43, 67]. The task requires the participant to continuously point at a previously viewed target during self-motion in the absence of vision. A major advantage of this method is that it provides continuous information about the perceived target-relative location, and thus about self-velocity, during the entire movement trajectory. The results were consistent with an MLE model in that participants updated their position using a weighted combination of the vestibular and proprioceptive cues, and performance was less variable when both cues were available.
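
The gain manipulations follow from simple kinematics. Assuming the participant keeps pace with the handlebar, the motion through space (the vestibular input) is set by the handlebar, while the stepping speed relative to the surface (the proprioceptive input) is the difference between the handlebar and disc velocities:

\[ \omega_{vest} = \omega_{hb}, \qquad \omega_{prop} = \omega_{hb} - \omega_{disc}, \qquad g = \frac{\omega_{prop}}{\omega_{vest}} = 1 - \frac{\omega_{disc}}{\omega_{hb}}. \]

With the disc moving in the same direction as the handlebar at 30 % of its speed, \(g = 1 - 0.3 = 0.7\); with the disc moving in the opposite direction at 40 % of the handlebar speed, \(g = 1 + 0.4 = 1.4\), matching the conditions above.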

Fig. 6.4
figure 4

Relative weighting of vestibular information. The relative weighting of the vestibular and proprioceptive inputs was investigated by fixing the proprioceptive input to a single value and varying the vestibular input (note that in [42] the proprioceptive inputs were varied). Eight participants were tested with a 2-IFC paradigm with four standards that all had the same walking speed but different vestibular inputs, as explained below. The conflicts we tested differed in size and in direction. In the first condition there was no conflict and the vestibular and proprioceptive inputs were the same (i.e., both at 40 \(^\circ \)/s). In the second condition, the vestibular input was slower (20 or 30 \(^\circ \)/s) than the proprioceptive input, and in the third condition, the vestibular input was larger (50 \(^\circ \)/s) than the proprioceptive input. To illustrate how this last condition was achieved: the handlebar was moved at 50 \(^\circ \)/s to establish the vestibular input. However, since this by itself would also give a proprioceptive input of 50 \(^\circ \)/s rather than the desired 40 \(^\circ \)/s, the difference was created by moving the disc at 10 \(^\circ \)/s in the same direction as the handlebar. a The group means for the PSEs (and SEMs) for the main experiment (black markers) and the control experiment (grey markers, see text for details). The dotted diagonal lines illustrate hypothetical vestibular weighting schemes. b The estimated vestibular weights extracted from the results in panel (a). The horizontal dotted lines at the top and bottom of the panel represent hypothetical instances in which the perceived walking speed is entirely determined by the vestibular (top) or proprioceptive (bottom) input

Unfortunately, those results did not allow us to determine the relative weighting of the two cues (see [42]). We therefore conducted a new experiment which employed a standard psychophysical 2-interval forced choice (2-IFC) paradigm (see [45] for an introduction). Experimental details are provided in the caption of Fig. 6.4. In each trial participants walked twice and indicated in which of the two intervals they had walked faster. In one interval (the standard), participants walked under various conditions of conflicting vestibular and proprioceptive signals, while in the second interval (the comparison) they walked through space without cue conflict. By systematically changing the comparison (i.e., the handlebar velocity) we can determine the point at which the standard and comparison were perceptually equivalent (i.e., the point of subjective equality, or PSE).
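
As an illustration of this analysis step, the following is a minimal sketch of how a PSE can be estimated from 2-IFC data by fitting a cumulative Gaussian psychometric function to the proportion of “comparison faster” responses. The response proportions below are invented for illustration only; they are not the data underlying Fig. 6.4.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Comparison walking speeds (deg/s) and the proportion of trials on which
# the comparison interval was judged faster (illustrative values).
comp_speed = np.array([25.0, 30.0, 35.0, 40.0, 45.0, 50.0, 55.0])
p_comp_faster = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

def psychometric(x, pse, sigma):
    """Cumulative Gaussian: P('comparison faster') as a function of speed."""
    return norm.cdf(x, loc=pse, scale=sigma)

# The fitted location parameter is the PSE; the scale reflects precision.
(pse, sigma), _ = curve_fit(psychometric, comp_speed, p_comp_faster,
                            p0=[40.0, 5.0])
print(f"PSE = {pse:.1f} deg/s, sigma = {sigma:.1f} deg/s")
```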

Figure 6.4a shows the mean PSEs as a function of vestibular input. In the conditions with conflicting inputs, the PSEs lie between the two extreme cases (solid horizontal and diagonal lines). Also, the PSEs do not fall on a straight line, indicating that the relative weighting depends on the vestibular input. This is illustrated in Fig. 6.4b, where the vestibular weights are plotted for the different conflict conditions. The proprioceptive input is weighted higher in the two conditions where the vestibular input was smaller (20 or 30 \(^\circ \)/s) than the proprioceptive input (40 \(^\circ \)/s). However, when the vestibular input was larger (50 \(^\circ \)/s) than the proprioceptive input, their respective weights were practically equal. This raises the question of whether, contrary to the instruction to judge their walking speed, participants were simply using their perceived motion through space (i.e., the vestibular input) to perform the task. This alternative interpretation is unlikely given the results of a control experiment in which two new participants were tested in exactly the same experiment but with explicit instructions to judge how fast they were moving through space and to ignore how fast they were walking. The results are clearly different from those of the main experiment (Fig. 6.4a, grey markers). The PSEs are now close to the theoretical line for complete vestibular dominance. However, the PSEs are not exactly on the line but show an influence of the proprioceptive input, which is what we would expect under the mandatory integration hypothesis (i.e., even though participants were told to ignore their stepping speed, these proprioceptive cues still influenced their responses).

3.3 “Vection” from Walking

Under the mandatory integration hypothesis we expect that walking conditions, even with extreme conflicts between the proprioceptive and vestibular signals, will show evidence of weighted averaging. Once again, walking in place creates a particularly interesting condition. Averaging a zero input (vestibular) with a non-zero input (proprioceptive) necessarily leads to a non-zero estimate. We therefore expect participants in this condition to experience illusory self-motion in the absence of actual movement through space (i.e., non-visual “vection”). There is indeed evidence that walking in place elicits nystagmus [9], pseudo-Coriolis effects [10], and self-motion aftereffects [8].

In one experiment we created five extreme sensory conflict conditions. Participants were moved through space at \(-\)10, \(-\)5, 0, 5, or 10 \(^\circ \)/s while walking at a fixed speed of 40 \(^\circ \)/s. Negative values indicate that the participant moved backwards through space. Thus, in two conditions the inputs were of the same sign (i.e., physical movement was in the same direction as stepping), but widely different in magnitude. In two other conditions, the signs were opposite, such that participants stepped forward while being moved backwards through space. In the last condition, participants walked in place. We used the same pointing task as in Frissen et al. [42] to measure perceived self-motion.

Fig. 6.5
figure 5

Self-motion perception during walking with extreme conflicts between proprioceptive and vestibular signals. a The perceived self-motion for eleven participants after averaging across the six replications of each condition. The solid black line represents the fit of the MLE model to the group means. The asterisk indicates a significant difference between test velocity and mean pointing rate. b Data categorized according to whether motion was perceived as backward (open black circles) or forward (filled black circles). The sizes of the circles reflect the relative proportion of trials that contributed to the represented mean value. Clearly there were a substantial number of cases in which the direction was confused. Pointing rates were virtually mirror images for the forward- and backward-perceived trials. To illustrate this, the grey circles show the backward-perceived motion but with the sign inverted. The solid lines represent fits of the adapted MLE model to the group means. The annotations show the estimates for the proprioceptive weights that correspond to the fitted model

Figure 6.5a shows the perceived self-motion. An estimate of the proprioceptive weight was obtained by fitting the MLE model to the group means; it was 0.07, with a corresponding vestibular weight of 0.93. The fit is, however, rather poor and, except for the \(-\)5 \(^\circ \)/s condition, none of the pointing rates were significantly different from the test velocity, suggesting that participants used the vestibular input only. However, all participants at some point did experience illusory motion through space in the walking-in-place condition. Moreover, participants also confused the direction of motion on at least several trials. For instance, backward motion at 10 \(^\circ \)/s was perceived as forward movement on 30 % of the trials. Therefore, simply averaging the signed mean pointing rates would give an incorrect impression of the actually perceived motion. If we categorize the data according to whether the motion was perceived as backward or forward, we obtain the two curves shown in Fig. 6.5b. For about 58 % of the trials the motion was perceived as forward (at \(\sim \)7 \(^\circ \)/s) and for about 42 % of the trials as backward (at \(\sim \)6 \(^\circ \)/s). Thus, walking in place clearly induces an illusion of self-motion. Interestingly, these new trends can still be described by a simple weighted averaging. The difference is that only the magnitudes of the inputs are used, irrespective of direction. Thus, the magnitudes of the trends in Fig. 6.5b are well described by \(\hat{S} = \sum _{i} w_{i} \left| S_{i} \right| \), where \(\hat{S}\) is the multisensory estimate, \(\left| S_{i} \right| \) the magnitude of the individual inputs, and \(w_{i}\) their relative weights. Estimates of the proprioceptive weights were obtained by fitting this adapted model to the group means. They were 0.12 and 0.07 for the motion perceived as forward and backward, respectively, which makes the corresponding vestibular weights 0.88 and 0.93.

What is most surprising about these results is that the odds of perceiving forward motion as opposed to backward motion were close to 1:1. This is surprising because the proprioceptive input is directionally unambiguous. Two subsequent experiments, in which we manipulated either the walking speed or the walking direction, clearly showed that the proprioceptive input does affect the proportion of trials perceived as forward or backward motion. For instance, the proportion of trials perceived as forward was, as before, close to 50 % when mechanically walking forward in place, but dropped to around 25 % when mechanically walking backwards. In other words, stepping backwards made participants feel as if they were moving backwards most of the time, but not always. The contribution of the proprioceptive input to the perceived direction is therefore only partial. It remains an open question what all the determining factors for perceived direction are.

3.4 Potential Implications for CyberWalk

Taken together, these studies reveal the clear importance of vestibular inputs for self-motion perception during walking. The vestibular sense primarily registers accelerations and will gradually stop responding once a constant speed has been reached. However, this cessation of sensory stimulation does not imply a lack of motion information; after all, if no change in velocity occurs, this indicates that self-motion has not ceased [92]. Nevertheless, the most salient moments are the acceleration phase (starting to walk) and the deceleration phase (stopping). When simulating normal walking on a treadmill, it is therefore important to retain these inertial cues as accurately as possible. The CyberWalk effectively achieves this. Specifically, when users start to walk from a standstill, they initially walk on a stationary surface and accelerate through space as they would during normal, overground walking. Only once a user approaches a constant walking speed does the treadmill start to move. The treadmill then gradually brings the user back to the center of the platform (ideally with sub-threshold accelerations) by moving them backwards through space while they continue to walk. Similarly, when the user stops walking or changes walking direction, the treadmill responds only gradually, allowing the normal inertial input to the vestibular system to occur. For this scheme to work, the walking surface has to be large enough to accommodate several steps without large changes in treadmill speed. In preliminary studies this system was shown to work very well for controlling treadmill speed on a large linear treadmill [94]. Through these studies, we determined that the minimum size of the walking surface needed to accommodate this control scheme is 6 \(\times \) 6 m. However, financial and mechanical considerations limited the eventual size of the CyberWalk to 4 \(\times \) 4 m.
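
To make the flavor of such a control scheme concrete, here is a minimal, deliberately simplified sketch of an acceleration-limited centering controller for one axis. The gains, the acceleration cap, and the way walking speed is estimated are illustrative assumptions, not the CyberWalk control law described in [94].

```python
import numpy as np

DT = 0.01      # control period (s)
KP = 0.5       # centering gain (1/s): pull toward the platform center
A_MAX = 0.4    # belt acceleration cap (m/s^2), kept well below the
               # user's own start-up acceleration to preserve inertial cues

def belt_velocity_update(belt_v, user_pos, user_v):
    """One control step of an acceleration-limited centering controller.

    belt_v:   current belt speed (m/s, positive = surface moving backwards)
    user_pos: user displacement from the platform center (m)
    user_v:   user velocity in the room frame (m/s)
    """
    walk_v = belt_v + user_v           # estimated stepping speed on the surface
    target_v = walk_v + KP * user_pos  # match stepping speed, plus a pull to center
    dv = np.clip(target_v - belt_v, -A_MAX * DT, A_MAX * DT)
    return belt_v + dv

# Toy run: the user accelerates from standstill to 1.4 m/s. Because the
# belt acceleration is capped, the user initially moves through space
# (preserving the inertial start-up cue) and is only slowly re-centered.
belt_v, user_pos = 0.0, 0.0
for step in range(12000):
    walk_v = min(1.4, 0.7 * step * DT)  # user's stepping speed
    user_v = walk_v - belt_v            # resulting room-frame velocity
    user_pos += user_v * DT
    belt_v = belt_velocity_update(belt_v, user_pos, user_v)
print(f"after {12000 * DT:.0f} s: belt {belt_v:.2f} m/s, offset {user_pos:.2f} m")
```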

4 Large Scale Navigation

One field in which the CyberWalk is expected to have a large impact is human navigation. Navigation requires estimates of perceived direction and position while moving through our environment. To achieve this we can use external devices such as maps, street signs, compasses or GPS systems, or we can use internal representations of space that derive from multiple cognitive and sensory sources. Much of what we know about human spatial navigation has come from studies involving spaces of relatively small scale (i.e., room size or smaller), while comparatively few human studies have considered large-scale navigation. In one recent extensive real-world study by our group, we evaluated the extent to which humans are able to maintain a straight course through a large-scale environment consisting of unknown terrain without reliable directional references [93]. Participants were transported to the Tunisian Sahara desert or to the Bienwald forest in western Germany and were asked to walk along a completely straight trajectory. The area used for the forest experiment was selected because it was large enough to walk in a constant direction for several hours and had minimal changes in elevation. The thick tree cover also made it impossible to locate distant landmarks that could aid direction estimation.

According to a belief often referred to in popular culture, humans tend to walk in circles in the types of desert or forest scenarios described above, yet there had been no previous empirical evidence to support this. The Souman et al. [93] study showed that people do indeed walk in circles while trying to maintain a straight course, but only in the absence of reliable external directional references. This was particularly true when participants walked in a dense forest on a cloudy day, with the sun hidden behind the clouds. Most participants also repeatedly crossed their own path without any awareness of having done so. However, when directional references such as landmarks or the solar azimuth were available, people were able to maintain a fairly straight path, even in an environment riddled with obstacles, such as a forest. A popular explanation for walking in circles is based on the assumption that people tend to be asymmetrical with respect to, for instance, leg length or leg strength. If this were true, a given individual should always veer in the same direction. However, this was not the case: inconsistency in turning and veering direction was very common across participants. Moreover, neither measured differences in leg strength nor in leg length could explain the turning behavior.

Interestingly, the recorded walking trajectories show exactly the kind of behavior that would be expected if the subjective sense of straight ahead were to follow a correlated random walk. With each step, a random error is added to the subjective straight ahead, causing it to drift away from the true straight ahead. As long as the accumulated deviation stays close to zero, people walk along randomly meandering paths; when the deviation becomes large, they walk in circles. This implies that walking in circles is not necessarily an indication of a systematic bias in walking direction, but can instead be caused by random fluctuations in the subjective straight ahead resulting from accumulating noise in the sensorimotor system, in particular the vestibular and/or motor system.
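This account is easy to illustrate with a short simulation. The sketch below (with an arbitrary, assumed noise level, not a fit to the data of [93]) adds a small random error to the subjective straight ahead at every step; as long as the accumulated deviation stays small the simulated path meanders, and once it drifts far from zero the path closes into circles.

```python
import numpy as np

# Sketch of the correlated-random-walk account of veering.
# Step length and noise level are assumed, illustrative values.
rng = np.random.default_rng(0)

n_steps = 5000           # number of steps taken
step_length = 0.7        # m per step (typical stride, assumed)
heading_noise_sd = 0.02  # rad of random error added to the subjective
                         # straight ahead at each step (assumed)

heading = 0.0
x, y = [0.0], [0.0]
for _ in range(n_steps):
    heading += rng.normal(0.0, heading_noise_sd)  # noise accumulates in the
                                                  # subjective straight ahead
    x.append(x[-1] + step_length * np.cos(heading))
    y.append(y[-1] + step_length * np.sin(heading))

# While the accumulated heading deviation stays near zero the path
# meanders; once it drifts far from zero the walker loops back on
# themselves, producing circles without any systematic left/right bias.
```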

Another possible contribution to deviating from a straight path, not considered in the Souman et al. [93] study, is the instantaneous orientation of the head with respect to the trunk. It has been shown that eccentric eye (e.g., [82]) and head orientations tend to be related to the direction of veer from a straight course. The most common finding is that people veer in the direction of eye/head orientation [113]. For instance, in a series of driving experiments, Readinger et al. [82] consistently found that eccentric gaze biased steering in the direction of fixation. They tested a large range of eye positions, between \(-\)45 \(^\circ \) and \(+\)45 \(^\circ \). Interestingly, the effect was maximal at an eccentric eye position of as little as 10 \(^\circ \) and leveled off beyond that; even a deviation of only 5 \(^\circ \) created a significant bias. A very similar bias has been found during visually guided walking [28, 111]. Jahn et al. [59] asked participants to walk straight towards a previously seen target placed 10 m away while they were blindfolded. Their results demonstrated, contrary to all previous work, that with the head rotated to the left, participants' paths deviated to the right, and vice versa. The effect of eye position showed the same pattern, but was not significant. The authors interpreted this as a strategy compensating for an apparent deviation in the direction of gaze in the absence of appropriate visual feedback.

Fig. 6.6

The effect of head and eye orientation on veering. Thirteen participants walked in a large, fully tracked lab with different combinations of eye and head orientations. Each combination was tested 5 times, for a total of 3 (Head: Left, Straight, Right) \(\times \) 3 (Eyes: Left, Straight, Right) \(\times \) 5 (repetitions) = 45 randomized trials. Head orientation was blocked, with block order randomized across participants; within each head-orientation block, eye position was pseudo-randomized. On each trial, the participant viewed the target straight ahead until they had a good mental image of its position and then walked to the target under the specified combination of eye and head orientation. Except when looking at the target, the participant was always blindfolded while walking. For safety, an experimenter was always in the room with the participant and provided specific instructions prior to each trial. The participant's position was recorded using a Vicon tracking system. To specify eye position, a pair of safety goggles was customized with three red LEDs positioned on the outer surface such that looking at them would create an angular eye position of approximately \(45^\circ \) to the left or to the right, or straight ahead, relative to the head. To control head orientation, no explicit reference was provided; instead, participants were instructed to turn their heads as far as possible (to the left or right) without causing any discomfort, and to hold their head there for the duration of a trial. Compliance was checked by the experimenter. To prevent any view of the environment, an opaque black veil was donned after the eyes and head were oriented. As a result, the head's angle relative to the trunk was somewhat variable across trials and participants. The participant wore a wireless headset playing noise to mask any auditory feedback.

Intrigued by the counterintuitive results of the Jahn et al. [59] study, we conducted a very similar experiment in an attempt to replicate their results. Our results (see Fig. 6.6 and its caption for details) suggest a bias to veer in the same direction as the head turn. The bias was asymmetric, being larger when the head was turned to the left than when it was turned to the right. There was also an apparent interaction between head and eye orientation, such that the bias tended to diminish when the eyes were turned away from straight ahead and was stronger when the head and the eyes were oriented in opposite directions. Statistical analyses, however, showed only marginally significant effects of head orientation and of its interaction with eye position. Whereas these results are qualitatively consistent with those of Cutting et al. [28] and Readinger et al. [82], they are opposite to those of Jahn et al. [59]. In fact, when comparing average values, our results and those of Jahn et al. are highly negatively correlated (\(\mathrm{{r}}=-0.84\)). We can speculate that spontaneous head turns may have contributed to the veering from a straight trajectory observed by Souman et al. [93], especially in the desert and forest experiments, where participants were free to look around as they pleased.

4.1 Potential Implications for CyberWalk

The large-scale navigation studies described above demonstrate the need for a platform like the CyberWalk more than they constrain its design. Specifically, they demonstrate the real need for a laboratory setup that allows a walker to go in circles or to walk along meandering paths. They also show that more controlled environments are essential for studying human navigation. For instance, the forest experiment revealed that one apparently major factor in being able to stay on a straight trajectory was whether or not the sky was overcast. The CyberWalk achieves such environmental control through the use of VR technologies, which allow us to create large-scale visual environments with high fidelity and with control over factors that are normally beyond experimental reach, such as the presence and position of the sun.

5 Putting it All Together: The CyberWalk Platform

The CyberWalk treadmill (Fig. 6.7) consists of 25 segmented belts, each 5 m long and 0.5 m wide, which are mounted on two large chains in the shape of a torus. The entire setup is embedded in a raised floor. The belts provide one direction of motion, while the chains provide the perpendicular direction. The chains are capable of speeds up to 2 m/s, while the belts can run at 3 m/s. The chains are driven by four powerful motors placed at the corners of the platform, and each belt segment has its own smaller motor. The drives are controlled such that they provide a constant speed independent of belt load. The walking surface is large enough to accommodate several steps without large changes in treadmill speed. This size keeps changes in treadmill speed low enough to maintain the postural stability of the user, but makes it unavoidable that these accelerations will sometimes be noticeable. To what extent this affects self-motion perception remains to be determined, although Souman et al. [95] found that walking behavior and spatial updating on the CyberWalk treadmill approached those of overground walking.

The high-level control system determines how the treadmill responds to changes in the user's walking speed and direction in such a way that the user can execute natural walking movements in any direction. It tries to keep the user as close to the center of the platform as possible, while taking into account perceptual thresholds for the sensed acceleration and speed of the moving surface. The control law has been designed at the acceleration level to respect the limitations of both the platform and the human user, while ensuring a smoothly changing velocity input to the platform (see [29]). The treadmill velocity is controlled using the tracked head position of the user. The control scheme includes a dead-zone in the center of the treadmill, within which changes in the user's position are ignored while the user is standing still. This makes it much more comfortable for users to look around in the VE while standing still [95]. Users wear a safety harness connected to the ceiling to prevent them from falling or stepping over the edge of the platform.
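For illustration, a one-dimensional sketch of such an acceleration-level controller with a central dead-zone is given below. The gains, thresholds, and function names are assumptions made for this example; the actual CyberWalk control law is derived in [29].

```python
# One-dimensional sketch of an acceleration-level re-centering controller
# with a central dead-zone. All gains and limits are assumed, illustrative
# values; the actual CyberWalk control law is derived in [29].

DEAD_ZONE = 0.3      # m: head displacements ignored while standing still (assumed)
STAND_THRESH = 0.05  # m/s: walking speed below which the user counts as standing (assumed)
K_POS = 0.5          # 1/s^2: gain pulling the user back toward the center (assumed)
K_VEL = 1.0          # 1/s:  gain matching belt speed to walking speed (assumed)
MAX_ACCEL = 0.2      # m/s^2: acceleration limit near the perceptual threshold (assumed)

def belt_acceleration(head_pos, user_vel, belt_vel):
    """Return a belt acceleration command from the tracked head position
    (relative to the platform center), the user's walking velocity over
    ground, and the current belt velocity."""
    if abs(user_vel) < STAND_THRESH and abs(head_pos) < DEAD_ZONE:
        # Dead-zone: ignore small position changes while the user stands
        # still, letting the belt coast smoothly to a stop.
        target = -K_VEL * belt_vel
    else:
        # Track the user's speed while gently biasing them back to the center.
        target = K_VEL * (user_vel - belt_vel) + K_POS * head_pos
    return max(-MAX_ACCEL, min(MAX_ACCEL, target))
```

Designing the command at the acceleration level, as here, is what guarantees a smoothly changing velocity input to the platform: the commanded velocity can never jump, only ramp within the clamped acceleration bound.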

The setup is installed in a large hall with a 12 \(\times \) 12 m walking area. The hall is equipped with a 16-camera Vicon MX13 optical tracking system (Vicon, Oxford, United Kingdom) that is used to track the position and orientation of the participant's head. To this end, participants wear a helmet with reflective markers. The tracking data are used to update the visualization presented through a head-mounted display (HMD) and to control the treadmill velocity. Presently, the HMD is an eMagin Z800 3DVisor (eMagin, Bellevue, USA) custom built into goggles that prevent the participant from seeing anything but the image on the displays. One advantage of this HMD is that it is lighter (\(<\)227 g) and less obtrusive than most other HMD systems, although it also has a reduced field of view. If required, user responses can be collected via a wireless gamepad. When not in use, the treadmill can be covered with wooden boards with a thick rubber coating, creating one continuous, fully tracked walking area.

The omnidirectional capabilities of the platform form its largest contribution to the scientific study of human walking biomechanics. By definition, locomotion serves to transport us from one place to another, yet one of the major constraints on locomotion research has been space. For a typical research facility, a large instrumented, but otherwise empty, room is extremely expensive to maintain and difficult to justify. Most locomotion laboratories are therefore rather small, especially in comparison to the scale of real walking. There is of course a relatively simple solution to the space limitation: put the participant on a treadmill so that they can walk indefinitely. However, virtually all such treadmills are relatively small and linear, so the space limitation is resolved for only one dimension. In short, none of these restricted spaces enables truly normal walking behaviors such as negotiating corners and walking along nonlinear trajectories. None of these spatial limitations applies to the CyberWalk platform, which opens up a large range of possibilities for human locomotion research. One straightforward opportunity is the possibility of replicating the outdoor natural walking experiments described above (see Sect. 6.2.1). One issue with the natural walking study was that turn angle and turn radius did not vary independently of each other; another was the need for a 9 kg backpack to hold all of the recording equipment. By utilizing a carefully designed virtual environment, it becomes possible to control turn angles and radii independently. The backpack is no longer necessary, since most of the measurements can be made directly through the optical tracking system, while other measurements (i.e., from the IMU) can be implemented such that there is no additional load on the walker. Such a study would effectively be an ideal marriage of the outdoor experiment [97] and the laboratory study on head-trunk interactions [98].

More generally, the platform's optical tracking system is capable of full-body tracking, which has enormous potential for extending studies of biomechanics and dynamics (e.g., [30]) to real, unconstrained walking. Understanding unconstrained walking is not only of scientific value but can also advance computer vision technologies for tracking and recognizing human locomotion behavior (e.g., [1]). The platform's tracking capability can be extended with a portable eye tracking device to support gaze tracking, which is of great value to the study of eye, head, and trunk coordination while making turns [52, 53, 57, 81, 98]. Space has also been a major limitation in earlier research using tracking technologies. Walkers have typically been tracked while walking short distances, making predefined turns (e.g., [27, 57, 81]), or walking in repetitive artificial patterns such as circles [47], figure eights [52] or cloverleaf patterns (e.g., [53]). Sreenivasa et al. [98] had participants walk along trajectories consisting of turns of various angles (between \(45^\circ \) and \(135^\circ \), plus \(180^\circ \) turns) interspersed with straight sections, in an attempt to simulate more closely the series of turns that occur in natural day-to-day walking. With the help of VE technologies, it is also possible to strictly control the amount of visual information provided about upcoming turns. So far, the effects of head/eye orientation on veering have only been studied over walking distances of a few meters. As the large-scale navigation studies suggest, however, more complete evaluations become possible when these effects are assessed over longer periods of time and across longer distances.

Fig. 6.7

The CyberWalk platform

The CyberWalk platform also opens up particularly large potential for human navigation research. Recall, for instance, the desert/forest experiments described in Sect. 6.4, for which it was necessary to travel to the Sahara desert. Without that level of effort and expense, such experiments would be extremely difficult to conduct in the real world, because they require a completely sparse environment through which an individual can walk for hours. Such large-scale experiments are now possible in the lab. VEs allow us to manipulate particular characteristics of the simulated world (e.g., the position of the sun, or the time of day) to evaluate the exact causes of any observed veering behavior, while still allowing limitless walking in any direction. Other questions can now be addressed as well. Although such large-scale environments are relatively easy to create and manipulate thanks to visual VE development programs, the platform is the first to enable truly unconstrained exploration of them. It thereby also creates much more ecologically valid, multisensory conditions for studying questions about spatial cognition, and unique opportunities for studying behavior in unfamiliar environments (e.g., [55]).

In conclusion, being able to physically walk through large VEs in an unrestricted manner opens up opportunities that go beyond the study of gait biomechanics, cognition, and spatial navigation in naturalistic environments [16, 105]. It also provides new possibilities for rehabilitation training [44], for edutainment (gaming, virtual museums), for design (architecture, industrial prototyping), and for various other applications. In short, the CyberWalk treadmill has brought us a significant step closer to natural walking in large VEs.