Abstract
The CyberWalk treadmill is the first truly omnidirectional treadmill of its size that allows for near natural walking through arbitrarily large Virtual Environments. The platform represents advances in treadmill and virtual reality technology and engineering, but it is also a major step towards having a single setup that allows the study of human locomotion and its many facets. This chapter focuses on the human behavioral research that was conducted to understand human locomotion from the perspective of specifying design criteria for the CyberWalk. The first part of this chapter describes research on the biomechanics of human walking, in particular, the nature of natural unconstrained walking and the effects of treadmill walking on characteristics of gait. The second part of this chapter describes the multisensory nature of walking, with a focus on the integration of vestibular and proprioceptive information during walking. The third part of this chapter describes research on large-scale human navigation and identifies possible causes for the human tendency to veer from a straight path, and even walk in circles when no external references are made available. The chapter concludes with a summary description of the features of the CyberWalk platform that were informed by this collection of research findings and briefly highlights the current and future scientific potential for this platform.
Keywords
- Human locomotion
- Omnidirectional treadmill
- Gait
- Biomechanics
- Multisensory integration
- Navigation
- Cognition
1 Introduction
By far the most natural way to move through our environment is through locomotion. However, the seemingly effortless act of walking is an extremely complex process and a comprehensive understanding of this process involves scientific and clinical studies at different levels of analysis. Locomotion requires preparing the body posture before initiating locomotion, initiating and terminating locomotion, coordinating the rhythmic activation patterns of the muscles, of the limbs and of the trunk, and maintaining dynamic stability of the moving body [77]. There is also a need to modulate the speed of locomotion, to avoid obstacles, to select appropriate, stable foot placement, to accommodate different terrains, change the direction of locomotion, and guide locomotion towards endpoints that are not visible from the start. To this end, locomotion engages many different sensory systems, such as the visual, proprioceptive, auditory and vestibular systems, making it a particularly interesting multisensory problem. Importantly, these are also factors that must be considered when developing a realistic walking interface to be used with Virtual Reality (VR).
Although many of these aspects of locomotion have received extensive scientific attention, much of the earlier laboratory-based research, though highly valuable, has lacked ecological validity. Ultimately, scientific research should, when possible, evaluate human behaviors as they occur under natural, cue-rich, ecologically valid conditions. To this end, VR technology has been giving researchers the opportunity to present natural, yet tightly controlled, stimulus conditions, while also maintaining the capacity to create unique experimental scenarios that would (or could) not occur in the real world [16, 24, 68, 105]. An integral part of VR is to also allow participants to move through the Virtual Environments (VE) as naturally as possible. Until recently, a very common way of having observers navigate through VEs was to have them manipulate unnatural control devices such as joysticks, computer mice, and keyboards. Despite having some advantages over mere visual stimulation, such rudimentary motion control devices are severely limited. While using such devices, the physical actions which drive self-motion are very different from the actions of natural locomotion which they are intended to replace (e.g. clicking a mouse button to move forward versus stepping). Moreover, the sensory input is mainly visual and other important sensory information is lacking, notably proprioceptive feedback from the legs and vestibular feedback. Fortunately, more natural locomotion interfaces, such as bicycles, treadmills and fully-tracked free-walking spaces, are becoming more common (see [24] for a review). Although with these solutions locomotion is much closer to real life movements, they are still constrained in important ways.
In the case of the bicycle, for instance, there is no absolute one-to-one relationship between the metrics of visual space and those of the proprioceptive movements because of the unknown scale of one pedal rotation (i.e., this would depend on the gear, for instance). Fully-tracked walking spaces are constrained by the size of the actual space within which they are contained. Treadmill setups are restrictive as most of them are rather small [94] and only allow walking in one direction. Indeed, in everyday navigational tasks, we rarely walk completely straight over extended periods of time. In short, today it is still difficult to allow people to freely walk through large scale VEs in an unconstrained manner.
It is this unsatisfactory situation that prompted some of the work reported in this volume and it likewise prompted the CyberWalk project. The goal of this project was the development of a novel, multimodal, omnidirectional walking interface with, at its core, a 4 \(\times \) 4 m omnidirectional treadmill. The project encompassed an international consortium dedicated to both scientific and technological research. The CyberWalk platform is the first truly omnidirectional treadmill of its size that allows for natural walking in any direction through arbitrarily large Virtual Environments. It is a major step towards having a single setup that allows for the study of the many facets of human locomotion, ranging from the biomechanical to the cognitive processes involved in navigating large areas. The platform consists of segmented belts which are mounted on two large chains in the shape of a torus, which allows it to move the walking surface in both horizontal directions and thereby enables indefinite omnidirectional walking and turning (see Fig. 6.7). It is integrated with additional VR capabilities so that a virtual world is presented through a head-mounted display (HMD) and updated as a function of the movements of the user. The platform is described more fully in [95] and in Sect. 6.5 of this chapter. More detailed descriptions of specific technological and engineering aspects of the platform can be found elsewhere [29, 87–89, 94, 112].
The technological development of the platform had a strong human-centered approach and was guided by human gait and psychophysical research conducted at the Max Planck Institute for Biological Cybernetics (MPI), one of the consortium partners. Here we report on a selected number of these studies. Since a major objective was to develop a platform that enables natural walking, we studied basic gait parameters during natural unconstrained outdoor walking as a general reference. The CyberWalk platform has at its core a treadmill, and thus we investigated potential differences between normal overground walking and treadmill walking. Studies were also focused on the multisensory processes at play during human walking. While there is a wealth of research on the role of vision in locomotion, relatively little is known about the interaction between the different non-visual senses. Consequently, a series of studies was conducted to look at the interaction between vestibular and proprioceptive information during walking. Finally, a number of studies on human navigation were conducted on long-range navigational capabilities with and without the use of visual information.
2 Gait and Biomechanics
One of the major goals of the CyberWalk project was to enable natural and unconstrained walking on a treadmill based system. This original challenge introduced many questions and we highlight two of those here. First, in order to enable natural and unconstrained gait, a description of typical gait characteristics was needed. For instance, at what speed do people normally walk, how do they start and stop walking, how often and how much do they turn? Second, there is still a debate in the literature as to whether gait characteristics during treadmill walking are the same as during overground walking. Thus, we conducted a series of studies to address these questions. The results were intended to assign tangible constraints on a system intended to support natural walking (e.g., on the accelerations required and the size of the walking surface).
2.1 Natural Unconstrained Walking
There is, in fact, very little literature on natural unconstrained walking. One reason for this is a previous lack of measurement technologies suitable to capture gait with sufficient accuracy. In recent years, however, Global Positioning Systems (GPS) have been providing a promising solution to this problem [70, 101]. For instance, Terrier and colleagues used a highly accurate GPS to show that inter- and intra-subject variability of gait characteristics can be measured outdoors [107, 108]. Moreover, GPS data can be combined with Inertial Measurement Unit (IMU) technologies to develop highly accurate measurement systems with high data rates [104]. Nevertheless, the few available studies that report GPS data are still highly constrained in a fashion reminiscent of laboratory research. For instance, participants are often asked to follow a modulated pace/frequency [86, 106, 108], which is known to significantly increase energy cost [115], or to walk/run along a predefined path [32, 104, 106, 107]. In studies where walking behavior was not constrained, data were collected over several days at very low sampling rates to form a picture of overall “behaviors” rather than basic gait parameters such as step length and frequency [26, 70, 76, 85, 110].
We conducted a study of unconstrained outdoor human walking that differed from previous studies in that we observed people walking for an extended period of time (1 h) and completely at their own volition [97]. We measured the position of the trunk and rotational rates of the trunk and head. The high accuracy required to capture trunk position outdoors was achieved by using a Carrier-Phase Differential GPS setup (C-DGPS). The C-DGPS utilizes a secondary static GPS unit (master station) to correct for errors in a mobile rover GPS (Novatel Propak, V3–L1). The rover was combined with an aviation grade, extremely light and compact antenna that was mounted onto a short pole fixed to a frame inside a backpack. Data were output at 5 Hz with a typical accuracy between 2 and 10 cm depending on environmental conditions (tree cover, reflections, etc.). For additional measurements of trunk movement we used a 6-axis IMU (Crossbow Technology, IMU300), with measurement ranges of \(\pm 19.6\) m/s\(^{2}\) and \(\pm 100\,^\circ \!/{\text{ s }}\). The measuring unit was rigidly fixed to the bottom of the GPS antenna frame and logged data at 185 Hz. To measure head movements we used a custom-built 3-axis IMU (ADXL202 and ADXRS150, logging at 1028 Hz) that was mounted on a head brace worn by the participants (total weight of less than 150 g). A strobe signal was used to align the data streams in post-processing. All devices plus data loggers and battery packs were fit in the backpack (just under 9 kg).
A task was designed that would induce the normal variability in walking behavior without imposing a stereotypical walking pattern. Fourteen participants walked through a residential area while searching for 30 predefined objects (e.g., street signs, statues) using a map of the area. The locations of the objects were indicated on the map by flags and participants were asked to note the time when they reached the location of an object. They were instructed to optimize the order in which they visited the targets such that they would visit the largest number of objects within one hour. Using recordings of the 3D position of the trunk, a wide range of walking parameters were computed, including step length (SL), step frequency (SF), and their ratio, also known as the walk ratio (WR). This ratio has been found to be invariant within a range of walking speeds [48, 90], and has been linked to optimal energy expenditure [56, 115]. Evidence of invariance in WR has been reported for walking at manipulated speeds along a 100 m straight athletic track [107] and a 400 m oval track [108], but never under free walking conditions. We also measured walking speed during straight and curved walking trajectories and starting and stopping behavior. Walking speed was calculated from the displacement between consecutive trunk positions in the horizontal (GPS) frame, scaled by the sampling rate. Table 6.1 presents some individual and mean basic gait parameters computed from the GPS data. For a complete description of results please refer to [97].
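To make these computations concrete, the sketch below estimates walking speed, step frequency, step length, and walk ratio from a trunk trajectory. It is an illustrative reconstruction, not the study's actual analysis pipeline: the function name, the sign-change step detector, and the treatment of the vertical signal are all assumptions made for the example.

```python
import numpy as np

def gait_parameters(xy, vertical, fs):
    """Estimate mean walking speed, step frequency, step length and walk
    ratio from trunk trajectory data sampled at `fs` Hz. `xy` is an (N, 2)
    array of horizontal positions (m); `vertical` is trunk height (m)."""
    # Walking speed: displacement between consecutive horizontal positions,
    # scaled by the sampling rate.
    displacement = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    speed = np.mean(displacement) * fs  # m/s

    # Step frequency: each step produces one vertical trunk oscillation,
    # so count sign changes of the mean-removed height signal
    # (two sign changes per oscillation).
    z = vertical - np.mean(vertical)
    sign_changes = np.sum(z[:-1] * z[1:] < 0)
    duration = (len(vertical) - 1) / fs
    step_freq = (sign_changes / 2.0) / duration  # steps/s

    step_length = speed / step_freq              # m/step
    walk_ratio = step_length / step_freq         # m/(steps/s)
    return speed, step_freq, step_length, walk_ratio
```

On a synthetic straight walk at 1.5 m/s with a 2 Hz trunk bob, this recovers the expected values (speed 1.5 m/s, SF 2 steps/s, WR near 0.375), which matches the order of magnitude of the values reported above.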
Results demonstrated that when people walked on a straight path, the average walking speed was 1.53 m/s. This value is very similar to field survey data [41, 65]. Perhaps not surprisingly, walking speed decreased when people walked on a curved path. The magnitude of the decrease depended on both the radius and angle of the turn taken. For turn angle, walking speed decreased linearly with angle. Thus, it changed from 1.32 m/s at 45\(^\circ \) angles to around 1 m/s at complete turnarounds (i.e., 180\(^\circ \)). These values are in strong agreement with those observed in a controlled experiment conducted in a fully-tracked indoor lab space [98]. As for turn radius, walking speed was seemingly constant for turns with radii \({\ge }10\) m (1.49 m/s) and for turns with radii \({\le }5\) m (1.1 m/s), while in between these radii values, walking speed changed in a fairly linear fashion.
Consistent with previous literature [90, 107] we found that WR was relatively invariant with respect to walking speed. After correcting for participant height (see [90]), we found that most of the adjusted values of WR were close to 0.4 m/(steps/s). There were some outliers at slower walking speeds (i.e., below 1 m/s), which is again consistent with earlier reports [90], and the WR at these slower walking speeds was also more variable. The relative invariance of WR in natural (and controlled) walking underlines its usefulness both as a clinical diagnostic tool for detecting abnormal gait and for the scientific study of human locomotion in general.
The time that it takes to reach a steady walking speed depends on the desired speed (see Fig. 6.1). It took an average of 2 and 3 s to reach walking speeds of 0.5 and 2 m/s, respectively. The relationship between the time it took to stop and walking speed was very much the same. The dependence on walking speed, however, contradicts findings by Breniere and Do [11] who found that the time it takes to reach the desired walking speed is independent of walking speed. Dependence on walking speed has been found by others [60, 71], although we observe that in natural walking humans take more time to start and stop than in laboratory settings. To illustrate these differences Fig. 6.1 also includes the data from Breniere and Do [11] and Mann et al. [71] together with our own results. One possible cause for this difference is the protocol used in laboratory experiments [96]. Specifically, whereas earlier studies typically used an external “go” signal, our participants were free to start and stop as they pleased.
2.2 Overground Versus Treadmill Walking
While treadmills allow for the observation of walking behavior over extended periods of time, it is still a matter of debate as to whether gait during treadmill walking differs from gait during overground walking [3]. There is evidence that treadmill walking can significantly alter the temporal [3, 30, 100, 114], kinematic [3], and energetic [78] characteristics of walking. One apparently robust finding is that walking on a (motorized) treadmill increases step frequency (cadence) by approximately 6 % [3, 30, 100, 114]. It has, therefore, been concluded by many researchers that motorized treadmills may produce misleading or erroneous results and that care should be taken in their interpretation. At the same time there are also studies that do not find any significant differences between overground and treadmill walking [75, 84]. Two possible sources for this discrepancy that we have addressed in our research are differences between walking surfaces and the availability of relevant visual feedback about self-motion during treadmill versus overground walking.
Treadmills are typically more compliant than the regular laboratory walking surfaces used in past studies, and it has been speculated that it is this difference in surface stiffness that affects locomotion patterns when directly comparing treadmill walking with overground walking (e.g., [30, 31]). Such speculations are warranted by other research showing significant effects of walking surface compliance on basic gait parameters such as step frequency and step length [72]. Interestingly, the one study that compared overground with treadmill walking using similar walking surfaces found no differences in gait parameters [84].
Another potential factor to consider is that participants typically have visual information available during walking. During natural, overground walking, dynamic visual information (i.e. optic flow) is consistent with the non-visual information specifying movement through space. However, during treadmill walking, a considerable sensory conflict is created between the proprioceptive information and the visual (and vestibular) information (see also Sect. 6.3.2) such that the former informs participants that they are moving, yet the latter informs them they are in fact stationary. Although it is not obvious how such a conflict might specifically alter gait parameters, there is evidence that walking parameters are affected by whether visual feedback is available or not. For instance, Sheik-Nainar and Kaber [91] evaluated different aspects of gait, such as speed, cadence, and joint angles, when walking on a treadmill. They evaluated the effects of presenting participants with congruent and updated visuals (via an HMD projecting a simulated version of the lab space), compared to stationary visuals (real world lab space with reduced FOV to approximate an HMD). These two conditions were compared to natural, overground walking. Results indicated that while both treadmill conditions caused participants to walk slower and take smaller steps, when optic flow was consistent with the walking speed, gait characteristics more closely approximated those of overground walking. Further, Hallemans et al. [50] compared gait patterns in people with and without a visual impairment and compared the gait patterns of normally sighted participants under full vision and no vision conditions. Results demonstrated that participants with a visual impairment walked with a shorter step length than sighted individuals and that sighted participants who were blindfolded also showed similar changes in gait (see also [74]).
Further, in the absence of vision, normally sighted participants walked slower and had lower step frequencies when blindfolded compared to when full vision was available, which was hypothesized to reflect a more cautious walking strategy when visual information was absent. However, it is not known whether walking is differentially affected by the presence and absence of congruent visual feedback.
Humans have a strong tendency to stabilize the head during walking (and various other locomotor tasks) in the sense that they minimize the dispersion of the angular displacement of the head [13]. Interestingly, visual feedback does not appear to be important for this stabilization [80]. However, the walking conditions under which this has been studied have been very limited. Participants were asked to walk at their own preferred speed or to step in place [80]. Very little is known about the generality of this lack of an effect of vision and whether there are differences between overground and treadmill walking.
We investigated the effects of walking surface and visual feedback on basic gait parameters and on the movement of the head in an integrated manner. This experiment was conducted using a circular treadmill (CTM) at the MPI (see Fig. 6.2 and caption for additional details). The effect of surface stiffness on gait characteristics was controlled for by having participants walk in place and walk through space on the same treadmill surface. Specifically, overground walking consisted of simply leading the participant around on the stationary disc using the motorized handlebar. Stationary (“treadmill”) walking consisted of walking in place on the moving disc without moving through space. If the difference in surface is a major determinant in causing the previously reported differences between overground and treadmill walking, then we would expect this difference to disappear in this experiment. Visual feedback was also manipulated by having people walk while wearing a blindfold or not. Walking speeds were controlled by moving either the disc or the handlebar at one of four velocities (see caption of Fig. 6.3), for the stationary and walking through space conditions, respectively. The results demonstrated that there were indeed very few differences observed between the gait parameters measured during stationary walking versus overground walking. Step length (Fig. 6.3a) and walk ratio (Fig. 6.3c) were comparable across walking speeds. The exception was that for the slowest walking speed (0.7 m/s), the overground walking condition produced larger step lengths and walk ratios in comparison to stationary walking. This particular effect is consistent with previous findings that reflected higher walk ratios at slower overground walking speeds (e.g., [90]). This higher walk ratio at the slowest walking speed is likely due to an increase in step length given that step frequency was virtually identical across all conditions (see Fig. 6.3b). 
Results also demonstrated that during stationary walking there was a significant decrease in head sway (Fig. 6.3d) and head bounce (Fig. 6.3e) compared to overground walking. As for the effect of vision, the results demonstrated that, irrespective of the walking condition, step length and frequency were unaffected by the presence or absence of visual feedback. This is in contrast with the above-described studies that did find significant decreases in both step length and frequency [50, 74].
In summary, with respect to basic gait parameters, there were hardly any differences between overground walking and stationary walking. Most notable was the complete absence of an effect on step frequency, which has typically been the most consistently observed difference in earlier studies. Our results are, however, consistent with several other earlier studies that also did not find a difference between overground and treadmill walking [75, 84] and lend support to the notion that previously reported differences may be (partially) due to the fact that walking surfaces were not controlled for. Another interesting finding is that stationary walking significantly reduced the lateral (sway) and vertical (bounce) head movements. It is currently unclear what the cause for this change is. However, it is thought that head stabilization behavior helps organize the inputs from the visual, vestibular, and even somatosensory systems [13]. It is possible that during treadmill walking head movements are reduced in order to establish a more stable reference frame because of the registered discrepancy between the proprioceptive sense that signals movement, and the vestibular and visual senses that signal a stationary position. As for visual feedback, the only statistically reliable effect of the visual manipulation was a reduction of the vertical movements of the head at the highest walking speeds during overground walking as compared to stationary walking. When visual feedback was not available, this produced some trends in the gait parameters (increases in step frequency and decreases in step length and walk ratio), although these were not statistically significant.
2.3 Potential Implications for CyberWalk
One specific finding that impacted the design specifications of the CyberWalk platform was that it took at least 2 s to accelerate to the very slow speed of 0.5 m/s. As we will see in the following section, providing vestibular inputs by allowing movement through space is an important part of simulating natural locomotion. Thus, from this perspective, the CyberWalk platform ideally needed to be large enough to accommodate such start-up accelerations. The finding that stationary walking does not change the main walking parameters of step length and step frequency is encouraging as it means that the walking data on the treadmill should be representative of normal gait. This also affected the design of the platform, albeit in a more indirect fashion. We surmised that the platform should ideally have a surface that is as stiff as possible since the most typically studied walking surfaces are very stiff (e.g., sidewalks). Head movements, on the other hand, did change during stationary walking in that they were less pronounced than during overground walking. This might seem advantageous in light of the fact that on the CyberWalk, HMDs are the primary means of visually immersing the user in VR, and having less head bounce would reduce visual motion artifacts and potential tracking lags for rapid movements. However, it does raise the possibility that the normal head stabilization function during walking (e.g., [80]) may be different during treadmill walking, which may affect the role of the proprioceptive receptors in the neck and also the role of coincident vestibular inputs.
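The surface-size implication admits a simple back-of-envelope check. If the walker immediately moves at the target speed while the belt ramps up linearly from rest over the observed start-up time, the walker drifts across the surface by roughly half the target speed times the ramp time. The sketch below is only this idealized calculation; a linear belt ramp is an assumption, not a measured CyberWalk control profile.

```python
def rampup_distance(target_speed, ramp_time):
    """Distance (m) a walker moving at `target_speed` (m/s) drifts across
    the belt while the belt accelerates linearly from rest over
    `ramp_time` (s): integral of (v - belt speed) dt = v*t - v*t/2."""
    return 0.5 * target_speed * ramp_time

slow = rampup_distance(0.5, 2.0)  # drift at a slow start-up
fast = rampup_distance(2.0, 3.0)  # drift at a brisk start-up
```

For the natural start-up times reported above, this gives about 0.5 m of drift at 0.5 m/s and about 3 m at 2 m/s, consistent with the need for a walking surface several metres per side.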
3 Multisensory Self-Motion Perception
A veridical sense of self-motion during walking is a crucial component for obtaining ecological validity in VR. Of particular interest to us is the multisensory nature of self-motion perception. Information about the extent, speed, and direction of egocentric motion is available through most of our sensory systems (e.g. visual, auditory, proprioceptive, vestibular), making self-motion perception during locomotion a particularly interesting problem with respect to multisensory processing. During self-motion perception there are important roles for the visual system (e.g. optic flow), the vestibular system (the inner ear organs including the otoliths and semicircular canals), the proprioceptive system (the muscles and joints), and efference copy signals representing the commands issued to generate our movements. There is also some suggestive evidence for a role of the auditory system (e.g., [99]) and somatosensory system (e.g., [33]). Much work has been done to understand how each of these sensory modalities contributes to self-motion individually; however, researchers have only recently begun to evaluate how they are combined to form a coherent percept of self-motion and the relative influences of each cue when more than one is available.
3.1 Multisensory Nature of Walking
Since no single sense is capable of operating accurately under all circumstances, the brain has evolved to exploit multiple sources of sensory information in order to ensure both a reliable perception of our environment (see [20]) and appropriate actions based on that perception [37]. A fundamental question in the cognitive neurosciences asks what mechanisms are used by the central nervous system to merge all of these sources of information to form a coherent and robust percept. It seems that it employs two strategies to achieve robust perception. The first strategy, sensory combination, describes interactions between sensory signals that are not redundant. That is, information is specified in different coordinate systems or units. The second strategy, sensory integration, reduces the variance of redundant sensory estimates, thereby increasing their reliability [37].
Human locomotion is particularly interesting from the perspective of sensory integration as it involves a highly dynamic system, meaning that the sensory inputs are continuously changing as a function of our movements. For instance, with each stride (i.e., from the heel strike of one foot to the next heel strike of the same foot) the head moves up and down twice in a near sinusoidal fashion [62, 106], thereby generating continuously changing accelerations that are registered by the vestibular system. Similarly, with each stride, the musculoskeletal system generates a set of dynamically changing motor signals, the consequences of which are registered by the proprioceptive system. Finally, the visual flow is likewise marked with periodic vertical and horizontal components. Thus, the various pertinent sensory inputs are in a systematic state of flux during walking. Moreover, findings that visual [54], somatosensory [116], and vestibular [6] signals exhibit phase-dependent influences on postural control during walking suggest the interesting possibility that the reliabilities of the sensory signals are also continuously changing and possibly in phase with the different stages of the gait cycle.
A particularly influential group of models of multisensory integration has considered the problem from the point of view of efficiency. These efforts are often referred to as the “Bayesian approach”, which was originally applied to visual perception (e.g., [15, 17, 64]). It is acknowledged that neural processes are noisy [38] and consequently, so are sensory estimates. The goal is then for the brain to come up with the most reliable estimate, in which case the variance (i.e., noise) of the final estimate should be reduced as much as possible. If the assumption is made that the noise attributable to individual estimates is independent and Gaussian, then the estimate with the lowest variance is obtained using Maximum Likelihood Estimation (MLE) [35]. MLE models have three general characteristics. First, information from two or more sensory modalities is combined using a weighted average. Second, the corresponding weights are based on the relative reliabilities of the unisensory cues (i.e., the inverse of their variances); the cue with the lowest unimodal variance will be weighted highest when the cues are combined. Third, as a consequence of integration, the variance of the integrated estimate will be lower than those observed in either of the individual estimates. There is now mounting evidence that humans combine information from across the senses in such a “statistically optimal” manner (e.g., [37]). Most of this work has been aimed at modeling cue integration between the exteroceptive senses such as vision, haptics, and hearing [2, 4, 12, 35, 36], or within the visuomotor system (e.g., [63, 66]), but very few studies have considered whether the same predictions apply to multisensory self-motion perception.
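The three characteristics of the MLE scheme just described can be written down in a few lines. The sketch below fuses independent Gaussian cue estimates by inverse-variance weighting; the function name and the example numbers are illustrative, not values drawn from any of the cited studies.

```python
import numpy as np

def mle_fuse(means, variances):
    """Fuse independent Gaussian cue estimates via Maximum Likelihood.
    Weights are the normalized inverse variances, so the more reliable
    cue (lower variance) receives the higher weight."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    fused_mean = np.sum(weights * means)                 # weighted average
    fused_variance = 1.0 / np.sum(1.0 / variances)       # <= min(variances)
    return fused_mean, fused_variance

# Hypothetical heading estimates: visual 10 deg (var 4), vestibular 14 deg (var 2)
m, v = mle_fuse([10.0, 14.0], [4.0, 2.0])
```

In this example the vestibular cue gets weight 2/3 and the visual cue 1/3, and the fused variance (4/3) falls below either unimodal variance, the reduction-in-variance signature mentioned above.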
The Bayesian perspective is now just starting to be considered in the field of human locomotion (e.g., [25]), and self-motion in particular [18, 19, 21, 23, 39, 42]. For instance, a study by Campos et al. [23] highlights the dynamic nature in which optic flow and body-based cues are integrated during walking in the real world. The study shows that the notion of optic flow as an all-inclusive solution to self-motion perception [46] is too simplistic. In fact, when body-based cues (e.g. proprioceptive and vestibular inputs) are available during natural walking they can dominate over visual inputs in dynamic spatial tasks that require the integration of information over space and time (see also [21] for supporting evidence in VR). Other studies have attempted to look at body-based cues in isolation and investigate how these individual sources interact with visual information. For instance, a number of studies have considered the integration of optic flow and vestibular information for different aspects of self-motion perception (e.g., [19, 39, 40, 51, 61]). Evidence from both humans ([18, 39]; see also [69]) and non-human primates [40, 49] shows that visual-vestibular integration is statistically optimal when making heading judgments. This is reflected by a reported reduction in variance during combined cue conditions, compared to the response patterns when either cue is available alone. Interestingly, when the visual signal lacks stereoscopic information, visual-vestibular integration may no longer be optimal for many observers [19]. To date, the work on visual-vestibular interactions has been the most advanced with respect to cue integration during self-motion in the sense that it has allowed for careful quantitative predictions. Studies on the combinations of other modalities during self-motion perception have also started to provide qualitative evidence that supports the MLE model. For instance, Sun et al.
[102] looked at the relative contributions of optic flow information and proprioceptive information to human performance on relative path length estimation (see also [103]). They found evidence for a weighted averaging of the two sources, but also that the availability of proprioceptive information increased the accuracy of relative path length estimation based on visual cues. These results are supported by a VR study [21] which demonstrated a higher influence of body-based cues (proprioceptive and vestibular) when estimating walked distances and a higher influence of visual cues during passive movement. This VR study further showed that although both proprioceptive and vestibular cues contributed to travelled distance estimates, a higher weighting of vestibular inputs was observed. These results were effectively described using a basic linear weighting model.
3.2 Integration of Vestibular and Proprioceptive Information in Human Locomotion
Consider walking through an environment that is covered in fog, or walking in the pitch dark. While these scenarios render visual information less reliable, evidence shows that humans remain very competent in various locomotion tasks even in the complete absence of vision (e.g., [22, 34, 67, 73, 83, 103, 109]). Past research reports that, whether walking without vision or being passively moved through space, body-based cues are often sufficient for estimating travelled distance [7, 21, 24, 51, 58, 67, 73, 92, 102, 103] and, to some extent, self-velocity [7, 22, 58, 92].
A series of studies have also looked specifically at the interactions between the two main sources of body-based cues: the proprioceptive system and the vestibular system. Studies that have investigated the role of vestibular and/or proprioceptive information in self-motion perception have done so by systematically isolating or limiting each cue independently. Typical manipulations include having participants walk on a treadmill (mainly proprioceptive information), or passively transporting them through space in a vehicle (mainly vestibular information specifying translations through space). The logic is that walking in place (WIP) on a treadmill produces proprioceptive but no vestibular inputs associated with self-motion through space, while during passive movement (PM) there are vestibular inputs but no relevant proprioceptive information from the legs specifying movement through space. These conditions can then be compared to normal walking through space (WTS), which combines the proprioceptive and vestibular inputs of the unisensory WIP and PM conditions. For instance, Mittelstaedt and Mittelstaedt [73] reported that participants could accurately estimate the length of a travelled path when walking in place (proprioception) or when being passively transported (vestibular). In their study, even though both cues appeared sufficient in isolation, when both were available at the same time (i.e., when walking through space), proprioceptive information was reported to dominate vestibular information. What this study could not specify, however, was by how much it dominates or, more generally, what the relative weights of the individual cues are.
There is, however, a fundamental problem that makes it very difficult to assess cue weighting, and to study the multisensory nature of self-motion in general. The problem is that there is a very tight coupling between vestibular and proprioceptive information during normal walking. The two signals are confounded in the sense that under normal circumstances there can be no proprioceptive activity (consistent with walking) without concurrent vestibular excitation. In fact, this strong coupling has led Frissen et al. [42] to argue for a “mandatory integration” hypothesis, which holds that during walking the brain has adopted a strategy of always integrating the two signals. It also leads to substantial experimental difficulty when attempting to obtain independent measures from the individual senses (see also [24]). Consequently, during the often-used “proprioceptive-only” walking-in-place condition, vestibular inputs are in fact concurrently present, yet specify a stationary position. This creates a potential sensory conflict when the aim is to obtain unbiased unisensory estimates. The reverse conflict occurs in the “vestibular-only” PM condition, where the proprioceptive input specifies a stationary position. It should be noted, though, that there are numerous instances in which vestibular excitation is experienced without contingent proprioceptive information from the legs, such as whenever we move our head or travel in a vehicle. In other words, in the case of passive movements the coupling may not be as tight.
Although it is difficult to obtain unisensory proprioceptive and vestibular estimates, it is possible to create conditions in which the conflict between the vestibular and proprioceptive cues is much reduced and, moreover, controllable. This enables us to determine the relative weighting of the individual cues. One way is to use a rotating platform in combination with a handlebar that can be moved independently. An early example of this was a platform used by Pick et al. [79], which consisted of a small motorized turntable (radius 0.61 m) with a horizontal handle mounted on a motorized post extending vertically through the center. Using this setup, Bruggeman et al. [14] introduced conflicts between proprioceptive and vestibular inputs while participants stepped around their earth-vertical body axis. Participants always stepped at a rate of 10 rotations per minute (rpm) (constituting the proprioceptive input), but because the platform rotated in the opposite direction, participants were moved through space at various rates (constituting the vestibular input). They found that when the proprioceptive and vestibular inputs were of different magnitudes, the perceived velocity fell somewhere between the two presented unisensory velocities, suggesting that the brain uses a weighted average of vestibular and proprioceptive information as predicted by MLE (see also [5]). However, a limitation of this type of relatively small setup is that it only allows participants to perform rotations around the body axis. That is, it allows participants to step in place, which is a very constrained and rather unnatural mode of locomotion with biomechanics that differ from normal walking.
Such restrictions do not apply to the CTM (Fig. 6.2) which allows for full stride curvilinear walking. This unique setup also allows us to manipulate vestibular, proprioceptive (and visual) inputs independently during walking. In one of our recent studies we assessed multisensory integration during self-motion using a spatial updating paradigm that required participants to walk through space with and without conflicting proprioceptive and vestibular cues [42]. The main condition was the multisensory, “walking through space” condition during which both vestibular and proprioceptive systems indicated self-motion. This condition consisted of both congruent and incongruent trials. In the congruent trials, participants walked behind the handlebar while the treadmill disk remained stationary. Thus, the vestibular and proprioceptive inputs conveyed the same movement velocities; in other words, the proprioceptive-vestibular gain was 1.0. In the incongruent trials, systematic conflicts were introduced between the vestibular and proprioceptive inputs. This was achieved by having participants walk at one rate, while the disk was moved at a different rate. Specifically, proprioceptive gains of 0.7 and 1.4 were applied to two vestibular velocities (25 \(^\circ \)/s and 40 \(^\circ \)/s). To achieve a gain of 0.7, the disk moved in the same direction as the handlebar but at 30 % of its speed. To achieve a gain of 1.4, the disk moved at 40 % of the handlebar speed but in the opposite direction. We also tested two additional conditions. In the “walking in place” condition, participants walked in place on the treadmill but did not move through space. Like in previous studies, participants were instructed to use the proprioceptive information from their legs to update their egocentric position as if they were moving through space at the velocity specified by the CTM. In the “passive movement” condition, participants stood still while they were passively moved by the CTM. 
Spatial updating was measured using a continuous pointing task similar to that introduced by Campos et al. [22] and Siegle et al. [92], which expanded upon a paradigm originally developed by Loomis and colleagues [43, 67]. The task requires the participant to continuously point at a previously viewed target during self-motion in the absence of vision. A major advantage of this method is that it provides continuous information about perceived target-relative location and thus about self-velocity during the entire movement trajectory. The results were consistent with an MLE model in that participants updated their position using a weighted combination of the vestibular and proprioceptive cues, and that performance was less variable when both cues were available.
Unfortunately, the results did not allow us to determine the relative weighting of the two cues (see [42]). We therefore conducted a new experiment which employed a standard psychophysical two-interval forced choice (2-IFC) paradigm (see [45] for an introduction). Experimental details are provided in the caption of Fig. 6.4. In each trial participants walked twice and indicated in which of the two intervals they had walked faster. In one interval (the standard) participants walked under various conditions of conflicting vestibular and proprioceptive signals, while in the second interval (the comparison) they walked through space without cue conflict. By systematically changing the comparison (i.e., handlebar velocity) we can determine the point at which the standard and comparison were perceptually equivalent (i.e., the point of subjective equality, or PSE).
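As a rough sketch of how a PSE is extracted from such 2-IFC data, one can fit a cumulative Gaussian psychometric function to the proportion of “comparison faster” responses. The comparison velocities and response proportions below are invented for illustration, and a coarse grid search stands in for a proper maximum-likelihood fit:

```python
# Hedged sketch: fit a cumulative Gaussian psychometric function to made-up
# 2-IFC data and read off the PSE (its mean) and slope (its sigma).
import math

comparison = [25.0, 30.0, 35.0, 40.0, 45.0, 50.0, 55.0]   # deg/s
p_faster   = [0.05, 0.10, 0.30, 0.55, 0.80, 0.95, 0.98]   # P(comparison judged faster)

def norm_cdf(x, mu, sigma):
    """Cumulative Gaussian used as the psychometric function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Coarse grid search over candidate PSEs (mu) and slopes (sigma), 0.1 deg/s steps.
best = None
for mu10 in range(250, 551):
    for sig10 in range(10, 151):
        mu, sig = mu10 / 10.0, sig10 / 10.0
        err = sum((norm_cdf(x, mu, sig) - p) ** 2
                  for x, p in zip(comparison, p_faster))
        if best is None or err < best[0]:
            best = (err, mu, sig)

_, pse, sigma = best
print(pse, sigma)  # PSE lands just below 40 deg/s for these made-up data
```

The PSE is simply the comparison velocity at which the fitted function crosses 0.5, i.e., the velocity at which the conflicting standard feels as fast as conflict-free walking.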
Figure 6.4a shows the mean PSEs as a function of vestibular input. In the conditions with conflicting inputs, the PSEs lie between the two extreme cases (solid horizontal and diagonal line). Also, the PSEs are not on a straight line, indicating that the relative weighting depends on the vestibular input. This is illustrated in Fig. 6.4b, where the vestibular weights are plotted for the different conflict conditions. The proprioceptive input is weighted higher in the two conditions where the vestibular input was smaller (20 or 30 \(^\circ \)/s) than the proprioceptive input (40 \(^\circ \)/s). However, when the vestibular input was larger (50 \(^\circ \)/s) than the proprioceptive input, their respective weights were practically equal. This raises the question of whether, contrary to the instruction to judge their walking speed, participants were simply using their perceived motion through space (i.e., the vestibular input) to perform the task. This alternative interpretation is unlikely given the results of a control experiment in which two new participants were tested in the exact same experiment but with explicit instructions to judge how fast they were moving through space and to ignore how fast they were walking. The results are clearly different from those of the main experiment (Fig. 6.4a, grey markers). The PSEs are now close to the theoretical line for complete vestibular dominance. However, the PSEs are not exactly on the line but show an influence of the proprioceptive input, which is what we would expect under the mandatory integration hypothesis (i.e., even though participants were told to ignore their walking speed, the proprioceptive cues still influenced their responses).
3.3 “Vection” from Walking
Under the mandatory integration hypothesis we expect that walking conditions, even those with extreme conflicts between the proprioceptive and vestibular signals, will show evidence of weighted averaging. Once again, walking in place creates a particularly interesting condition. Averaging a zero input (vestibular) with a non-zero input (proprioceptive) necessarily leads to a non-zero estimate. We therefore expect participants in this condition to experience illusory self-motion in the absence of actual movement through space (i.e., non-visual “vection”). There is indeed evidence that walking in place elicits nystagmus [9], pseudo-Coriolis effects [10], and self-motion aftereffects [8].
In one experiment we created five extreme sensory conflict conditions. The participants were moved through space at \(-\)10, \(-\)5, 0, 5, or 10 \(^\circ \)/s while walking at a fixed speed of 40 \(^\circ \)/s. Negative values indicate that the participant moved backwards through space. Thus, in two conditions the inputs were of the same sign (i.e., physical movement was in the same direction) but widely different in magnitude. In two other conditions the signs were opposite, such that participants stepped forward while being moved backwards through space. In the last condition they walked in place. We used the same pointing task as in Frissen et al. [42] to measure perceived self-motion.
Figure 6.5a shows the perceived self-motion. An estimate of the proprioceptive weight, obtained by fitting the MLE model to the group means, was 0.07, with a corresponding vestibular weight of 0.93. The fit is, however, rather poor and, except for the \(-\)5 \(^\circ \)/s condition, none of the pointing rates were significantly different from the test velocity, suggesting that participants used the vestibular input only. However, all participants at some point did experience illusory motion through space in the walking in place condition. Moreover, participants also confused the direction of motion on at least several trials. For instance, backward motion at 10 \(^\circ \)/s was perceived as forward movement on 30 % of the trials. Therefore, simply averaging the signed mean pointing rate would give an incorrect impression of the actually perceived motion. If we categorize the data according to whether the motion was perceived as backward or forward, this results in the two curves shown in Fig. 6.5b. For about 58 % of the trials this motion was perceived as forward (at \(\sim \)7 \(^\circ \)/s) and for about 42 % of the trials as backward (at \(\sim \)6 \(^\circ \)/s). Thus, walking in place clearly induces an illusion of self-motion. Interestingly, these new trends can still be described by a simple weighted averaging. The difference is that only the magnitudes of the inputs are used, irrespective of direction. Thus, the magnitude of the trends in Fig. 6.5b is well described by \(\hat{S} = \sum \limits _{i} {w_{i} \left| {S_{i} } \right| }\), where \(\hat{S}\) is the multisensory estimate, \(\left| {S_{i} } \right| \) the magnitude of the individual inputs, and \(w_{i}\) their relative weights. Estimates of the proprioceptive weights were obtained by fitting the adapted model to the group means. They were 0.12 and 0.07 for the motion that was perceived as forward and backward, respectively, which makes the corresponding vestibular weights 0.88 and 0.93.
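The adapted model is simple enough to compute directly. In this sketch, the proprioceptive weight of 0.12 is the value fitted above for motion perceived as forward, while the function and example conditions are illustrative:

```python
# Adapted weighted-averaging model: S_hat = sum_i w_i * |S_i|, i.e. only the
# magnitudes of the inputs enter, irrespective of their direction.
def magnitude_weighted_estimate(s_prop, s_vest, w_prop):
    """Combine proprioceptive and vestibular input magnitudes (deg/s)."""
    w_vest = 1.0 - w_prop
    return w_prop * abs(s_prop) + w_vest * abs(s_vest)

# Walking in place: 40 deg/s proprioceptive input, zero vestibular input.
# The model necessarily predicts a non-zero (illusory) self-motion estimate.
est_wip = magnitude_weighted_estimate(40.0, 0.0, 0.12)
print(est_wip)  # ~4.8 deg/s

# Stepping forward at 40 deg/s while moved backwards at 10 deg/s: the sign
# of the vestibular input is discarded; only its magnitude contributes.
est_conflict = magnitude_weighted_estimate(40.0, -10.0, 0.12)
print(est_conflict)  # ~13.6 deg/s
```

The walking-in-place case makes the key point explicit: with a non-zero proprioceptive weight, the combined estimate cannot be zero, so some illusory self-motion is predicted.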
What is most surprising about these results is that the odds of perceiving forward motion as opposed to backward motion were close to 1:1. This surprise comes from the fact that the proprioceptive input is directionally unambiguous. Two subsequent experiments, in which we manipulated either the walking speed or the walking direction, clearly showed that there is an effect of the proprioceptive input on the distribution of the number of trials that are perceived as forward or backward motion. For instance, the proportion of trials perceived as forward was, as before, close to 50 % when mechanically walking forward in place, but dropped to around 25 % when mechanically walking backwards. In other words, stepping backwards also made the participant feel like they were moving backwards most of the time, but not always. The contribution of the proprioceptive input to the perceived direction is therefore only partial. It remains an open question as to what all of the determining factors are for perceived direction.
3.4 Potential Implications for CyberWalk
Taken together, these studies reveal the clear importance of vestibular inputs for self-motion perception during walking. The vestibular sense primarily registers accelerations and will gradually stop responding once a constant speed has been reached. However, this cessation of sensory stimulation does not mean that there is a lack of motion information: as long as no change in velocity occurs, self-motion has not ceased [92]. Nevertheless, the most salient moments are the acceleration phase (starting to walk) and the deceleration phase (stopping). When simulating normal walking on a treadmill, it is therefore important to retain these inertial cues as accurately as possible. The CyberWalk effectively achieves this. Specifically, when the user starts to walk from a standstill, the user initially walks on a stationary surface and accelerates through space just as during normal, overground walking. Only once the user approaches a constant walking speed does the treadmill start to move. The treadmill then gradually brings the user back to the center of the platform (ideally sub-threshold) by moving them backwards through space while they continue to walk. Similarly, when the user stops walking or changes walking direction, the treadmill responds only gradually, allowing the normal inertial input to the vestibular system to occur. For this scheme to work, the walking surface has to be large enough to accommodate several steps without large changes in treadmill speed. In preliminary studies this system has been shown to work very well for controlling treadmill speed on a large linear treadmill [94]. Through these studies, we determined that the minimum size of the walking surface needed to accommodate this control scheme is 6 \(\times \) 6 m. However, financial and mechanical considerations limited the eventual size of the CyberWalk to 4 \(\times \) 4 m.
4 Large Scale Navigation
One field in which the CyberWalk is expected to have a large impact is human navigation. Navigation requires estimates of perceived direction and position while moving through our environments. To achieve this we can use external devices such as maps, street signs, compasses or GPS systems, or we can use internal representations of space that draw on multiple cognitive and sensory sources. Much of what we know about human spatial navigation has come from studies involving spaces of relatively small scale (i.e., room size or smaller), while comparatively few human studies have considered large-scale navigation. In one recent extensive real-world study by our group, we evaluated the extent to which humans are able to maintain a straight course through a large-scale environment consisting of unknown terrain without reliable directional references [93]. Participants were taken either to the Tunisian Sahara desert or to the Bienwald forest in western Germany and asked to walk along a completely straight trajectory. The area used for the forest experiment was selected because it was large enough to walk in a constant direction for several hours and has minimal changes in elevation. The thick tree cover also made it impossible to locate distant landmarks to aid direction estimation.
According to a belief often repeated in popular culture, humans tend to walk in circles in the types of desert or forest scenarios described above, yet there had been no previous empirical evidence to support this. The Souman et al. [93] study showed that people do indeed walk in circles while trying to maintain a straight course, but only in the absence of reliable external directional references. This was particularly true when participants walked in a dense forest on a cloudy day, with the sun hidden behind the clouds. Most participants also repeatedly crossed their own path without any awareness of having done so. However, under conditions in which directional references such as landmarks or the solar azimuth were present, people were able to maintain a fairly straight path, even in an environment riddled with obstacles, such as a forest. A popular explanation for walking in circles is based on the assumption that people tend to be asymmetrical with respect to, for instance, leg length or leg strength. If this were true, a particular individual should always turn in the same direction. However, this was not the case. In fact, inconsistency in turning and veering direction was very common across participants. Moreover, measured differences in leg strength could not explain the turning behavior, nor could leg length.
Interestingly, the recorded walking trajectories show exactly the kind of behavior that would be expected if the subjective sense of straight ahead were to follow a correlated random walk. With each step, a random error is added to the subjective straight ahead, causing it to drift away from the true straight ahead. As long as the deviation stays close to zero, people walk in randomly meandering paths. When the deviation becomes large, it results in walking in circles. This implies that circles are not necessarily an indication of a systematic bias in the walking direction but can be caused by random fluctuations in the subjective straight ahead resulting from accumulating noise in the sensorimotor system, in particular the vestibular and/or motor system.
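The correlated-random-walk account can be illustrated with a toy simulation in which Gaussian noise is added to the subjective straight ahead at every step. The step length and noise magnitude below are arbitrary assumptions, not parameters estimated from the recorded trajectories:

```python
# Correlated random walk of the subjective straight ahead: with each step a
# small random heading error accumulates, so the walker meanders and, once
# the accumulated deviation grows large, can end up walking in circles.
import math
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def walk(n_steps, step_len=0.7, heading_noise_deg=6.0):
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        # Random error is added to the subjective straight ahead each step.
        heading += random.gauss(0.0, math.radians(heading_noise_deg))
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        path.append((x, y))
    return path

path = walk(5000)                      # roughly 3.5 km of walking
end_x, end_y = path[-1]
dist_from_start = math.hypot(end_x, end_y)
# The net displacement is typically much smaller than the 3500 m of path
# actually walked, even though no systematic turning bias was built in.
print(dist_from_start)
```

Note that the simulated walker has no left/right asymmetry at all; meandering and looping emerge purely from accumulating, unbiased noise, which is the point of the correlated-random-walk explanation.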
Another possible contribution to deviating from a straight path, not considered in the Souman et al. [93] study, is the instantaneous orientation of the head with respect to the trunk. It has been shown that eccentric eye orientation (e.g., [82]) and head orientation tend to be related to the direction of veer from a straight course. The most common finding is that people veer in the direction of eye/head orientation [113]. For instance, in a series of driving experiments, Readinger et al. [82] consistently found that deviations in a driver’s gaze can lead to significant deviations from a straight course. Specifically, steering was biased in the direction of fixation. They tested a large range of eye positions, between \(-\)45 \(^\circ \) and \(+\)45 \(^\circ \). Interestingly, the effect was largest at an eccentric eye position of as little as 10 \(^\circ \) and leveled off beyond that; even a deviation as small as 5 \(^\circ \) created a significant bias. A very similar bias has been found during visually guided walking [28, 111]. Jahn et al. [59] asked participants to walk straight towards a previously seen target placed 10 m away while they were blindfolded. Their results demonstrated, contrary to all previous work, that with the head rotated to the left, participants’ paths deviated to the right, and vice versa. The effect of eye position showed the same pattern but was not significant. The authors interpreted this as a compensation strategy for an apparent deviation in the direction of gaze due to the lack of appropriate visual feedback.
Intrigued by the counterintuitive results of the Jahn et al. [59] study, we conducted a very similar experiment in an attempt to replicate these results. The results (see Fig. 6.6 and caption for details) suggest a bias to veer in the same direction as the head turn. The bias was asymmetric in that it was larger when the head was turned to the left than when it was turned to the right. There was also an apparent interaction between head and eye orientation, such that the bias tended to diminish when the eyes were turned away from straight ahead and was stronger when the head and the eyes were oriented in opposite directions. Statistical analyses, however, showed only marginally significant effects of head orientation and its interaction with eye position. Whereas these results are qualitatively consistent with those of Cutting et al. [28] and Readinger et al. [82], they are opposed to those of Jahn et al. [59]. In fact, when we compare the average values, our results and Jahn et al.’s are highly negatively correlated (\(\mathrm{{r}}=-0.84\)). We can speculate that spontaneous head turns contributed to the veering from a straight trajectory observed by [93], especially in the desert and forest experiments, where participants were free to look around as they pleased.
4.1 Potential Implications for CyberWalk
The above-described large-scale navigation studies demonstrate the need for a platform like the CyberWalk more than they constrain its design. Specifically, they demonstrate the real need for a laboratory setup that allows a walker to go in circles or to walk along meandering paths. Moreover, they show that more controlled environments are essential for studying human navigation. For instance, the forest experiment revealed that one apparently major factor in being able to stay on a straight trajectory was whether the sky was overcast. The CyberWalk achieves environmental control through the use of VR technologies, which allow us to create large-scale visual environments with high fidelity and control over environmental factors that are normally beyond control, such as the presence and position of the sun.
5 Putting it All Together: The CyberWalk Platform
The CyberWalk treadmill (Fig. 6.7) consists of 25 segmented belts, each 5 m long and 0.5 m wide, which are mounted on two large chains in the shape of a torus. The entire setup is embedded in a raised floor. The belts constitute one direction of motion, while the chains form the perpendicular direction. The chains are capable of speeds up to 2 m/s, while the belts can run at up to 3 m/s. The chains are driven by four powerful motors placed at the corners of the platform, and each belt segment has its own smaller motor. The drives are controlled such that they provide a constant speed independent of belt load. The walking surface is large enough to accommodate several steps without large changes in treadmill speed. This size keeps the changes in treadmill speed low enough to maintain the postural stability of the user, but makes it unavoidable that these accelerations will sometimes be noticeable. To what extent this affects self-motion perception remains to be determined, although Souman et al. [95] found that walking behavior and spatial updating on the CyberWalk treadmill approached that of overground walking.
The high-level control system determines how the treadmill responds to changes in walking speed and direction of the user in such a way that it allows the user to execute natural walking movements in any direction. It tries to keep the user as close to the center of the platform as possible, while at the same time taking into account perceptual thresholds for sensed acceleration and speed of the moving surface. The control law has been designed at the acceleration level to take into account the limitations of both the platform and the human user, while ensuring a smoothly changing velocity input to the platform (see [29]). The treadmill velocity is controlled using the head position of the user. The control scheme includes a dead-zone in the center of the treadmill where changes in the position of the user are not used when the user is standing still. This makes it much more comfortable for users to look around in the VE while standing still [95]. Users wear a safety harness connected to the ceiling to prevent them from falling and reaching the edge of the platform with their feet.
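The ingredients of this control scheme — recentering, acceleration limits, and the dead-zone — can be caricatured in one dimension. Everything below (the gains, the thresholds, and the control law itself) is a hypothetical sketch for illustration, not the actual CyberWalk controller described in [29]:

```python
# One-dimensional caricature of a recentering treadmill controller with a
# perceptual acceleration limit and a central dead-zone. All parameters
# (gains, thresholds, limits) are invented for illustration.
def treadmill_velocity(user_pos, user_vel, belt_vel, dt,
                       k_pos=0.3, k_vel=1.0,
                       dead_zone=0.3, max_accel=0.5):
    """Return the belt velocity for the next control step (m/s)."""
    standing_still = abs(user_vel) < 0.05
    if standing_still and abs(user_pos) < dead_zone:
        # Dead-zone: ignore small postural sway while the user stands still.
        desired = 0.0
    else:
        # Counter the user's velocity and gently pull them back to center.
        desired = k_vel * user_vel + k_pos * user_pos
    # Rate-limit the change so surface accelerations stay small.
    max_step = max_accel * dt
    delta = max(-max_step, min(max_step, desired - belt_vel))
    return belt_vel + delta

# User starts walking forward at 1 m/s, 0.5 m ahead of center, belt at rest:
v = treadmill_velocity(user_pos=0.5, user_vel=1.0, belt_vel=0.0, dt=0.01)
print(v)  # 0.005 m/s -- the belt spins up gradually rather than abruptly
```

The acceleration limit is what lets the user experience the natural inertial cues of starting to walk: the controller deliberately lags behind the walker and only then recenters them.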
The setup is installed in a large hall (12 \(\times \) 12 m walking area). The hall is equipped with a 16-camera Vicon MX13 optical tracking system (Vicon, Oxford, United Kingdom) that is used to track the position and orientation of the participant’s head. To this end, participants wear a helmet with reflective markers. The tracking data are used to update the visualization presented through a head-mounted display (HMD) and to control the treadmill velocity. The HMD presently used is an eMagin Z800 3DVisor (eMagin, Bellevue, USA) custom built into goggles, which prevents the participant from seeing anything but the image on the displays. One advantage of this HMD is that it is lighter (\(<\)227 g) and less obtrusive than most other HMD systems, but it also has a reduced field of view. If required, user responses can be collected via a wireless gamepad. When not in use, the treadmill can be covered with wooden boards with a thick rubber coating, creating one continuous, fully tracked walking area.
The omnidirectional capabilities of the platform form its largest contribution to the scientific study of human walking biomechanics. By definition, locomotion serves to transport us from one place to another. However, one of the major constraints on research has been space. For a typical research facility it is extremely expensive to maintain, and difficult to justify, a large instrumented but otherwise empty room. Most locomotion laboratories are therefore rather small, especially in comparison to the scale of real walking. There is of course a relatively simple solution to the space limitation, and that is to put the participant on a treadmill so that she/he can walk indefinitely. However, virtually all of these treadmills are relatively small and linear, so the space limitation is only resolved for one dimension. In short, none of these restricted spaces enable truly normal walking behaviors like negotiating corners and walking along nonlinear trajectories. None of these spatial limitations apply to the CyberWalk platform, which opens up a large range of possibilities for human locomotion research. One straightforward opportunity is the possibility of replicating the outdoor natural walking experiments described above (see Sect. 6.2.1). One issue with the natural walking study was that turn angle and turn radius did not vary independently of each other; another was the need for the 9 kg backpack that held all of the recording equipment. By utilizing a carefully designed virtual environment it becomes possible to control turn angles and radii. The backpack is no longer necessary since most of the measurements can be made directly through the optical tracking system, while other measurements (i.e., from the IMU) can be implemented such that there is no additional load on the walker. Such a study would effectively be an ideal marriage of the outdoor experiment [97] and the laboratory study on head-trunk interactions [98].
More generally, the platform’s optical tracking system is capable of full body tracking which has enormous potential for extending studies of biomechanics and dynamics (e.g., [30]) during real, unconstrained walking. Understanding unconstrained walking is not only of scientific value but can also advance computer vision technologies for tracking and recognizing human locomotion behavior (e.g., [1]). The platform’s tracking capability can be extended to support gaze tracking by including a portable eye tracking device, which is of great value to the study of the coordination of the eye, head, and trunk while making turns [52, 53, 57, 81, 98]. Space has also been a major limitation to earlier research using tracking technologies. Thus, walkers have typically been tracked while walking short distances, making predefined turns (e.g., [27, 57, 81]), or walking in repetitive artificial patterns like circles [47], figure eights [52] or cloverleaf patterns (e.g., [53]). Sreenivasa et al. [98] had participants walk along trajectories that consisted of turns of various angles (between \(45^\circ \) and \(135^\circ \), and \(180^\circ \) turns) interspersed with straight sections, in an attempt to simulate more closely the series of turns that occur in natural day-to-day walking. With the help of VE technologies it is also possible to strictly control the amount of visual information provided about upcoming turns. The effects of head/eye orientation on veering have only been studied when having participants walk for several meters. However, as the large scale navigation studies suggest, more complete evaluations are possible when assessing the effects of head/eye orientation on veering during walking trajectories that occur over longer periods of time, or across longer distances.
The CyberWalk platform also opens up particularly large potential for human navigation research. For instance, recall the desert/forest experiments described in Sect. 6.4, for which it was necessary to travel to the Sahara desert. Without that level of effort and expense, such experiments would be extremely difficult to conduct in the real world, because they require a completely sparse environment through which an individual can walk for hours. Such large scale experiments are now possible in the lab. VEs allow us to manipulate particular characteristics of the simulated world (e.g., the position of the sun, or the time of day) as a way of evaluating the exact causes of any observed veering behaviors, while still allowing limitless walking in any direction. Other questions can now be addressed as well. Although these large scale environments are relatively easy to create and manipulate thanks to visual VE development programs, the platform is the first to enable truly unconstrained exploration of them. It thereby creates much more ecologically valid, multisensory circumstances for studying questions about spatial cognition, as well as unique opportunities for studying behavior in unfamiliar environments (e.g., [55]).
In conclusion, being able to physically walk through large VEs in an unrestricted manner opens up opportunities that go beyond the study of gait biomechanics, cognition, and spatial navigation in naturalistic environments [16, 105]. It also provides new possibilities for rehabilitation training [44], for edutainment (gaming, virtual museums), for design (architecture, industrial prototyping), and for various other applications. The CyberWalk treadmill has brought us a significant step closer to natural walking in large VEs.
References
Aggarwal JK, Cai Q (1999) Human motion analysis: a review. Comp Vis Image Underst 73(3):428–440
Alais D, Burr D (2004) The ventriloquist effect results from near-optimal bimodal integration. Curr Biol 14(3):257–262
Alton F, Baldey L, Caplan S, Morrissey MC (1998) A kinematic comparison of overground and treadmill walking. Clin Biomech 13(6):434–440
Battaglia PW, Jacobs RA, Aslin RN (2003) Bayesian integration of visual and auditory signals for spatial localization. J Opt Soc Am A 20(7):1391–1397
Becker W, Nasios G, Raab S, Jürgens R (2002) Fusion of vestibular and podokinesthetic information during self-turning towards instructed targets. Exp Brain Res 144(4):458–474
Bent LR, Inglis JT, McFadyen BJ (2004) When is vestibular information important during walking? J Neurophysiol 92(3):1269–1275
Berthoz A, Israël I, Georges-François P, Grasso R, Tsuzuku T (1995) Spatial memory of body linear displacement: what is being stored? Science 269(5220):95–98
Bles W (1981) Stepping around: circular vection and coriolis effects. In: Long J, Baddeley A (eds) Attention and performance IX. Lawrence Erlbaum, Hillsdale, NJ
Bles W, Kapteyn TS (1977) Circular vection and human posture: I. Does the proprioceptive system play a role? Agressologie 18(6):325–328
Bles W, de Wit G (1978) La sensation de rotation et la marche circulaire [Sensation of rotation and circular walking]. Agressologie 19(A):29–30
Breniere Y, Do MC (1986) When and how does steady state gait movement induced from upright posture begin? J Biomech 19(12):1035–1040
Bresciani J-P, Dammeier F, Ernst MO (2006) Vision and touch are automatically integrated for the perception of sequences of events. J Vis 6(5):554–564
Bril B, Ledebt A (1998) Head coordination as a means to assist sensory integration in learning to walk. Neurosci Biobehav Rev 22(4):555–563
Bruggeman H, Piuneu VS, Rieser JJ, Pick HL Jr (2009) Biomechanical versus inertial information: stable individual differences in perception of self-rotation. J Exp Psychol: Hum Percept Perform 35(5):1472–1480
Bülthoff HH, Mallot HA (1988) Integration of depth modules: stereo and shading. J Opt Soc Am 5(10):1749–1758
Bülthoff HH, van Veen HJ (2001) Vision and action in virtual environments: modern psychophysics. In: Jenkin ML, Harris L (eds) Vision and attention. Springer, New York, pp 233–252
Bülthoff HH, Yuille A (1991) Bayesian models for seeing shapes and depth. Comments on Theor Biol 2(4):283–314
Butler JS, Smith ST, Campos JL, Bülthoff HH (2010) Bayesian integration of visual and vestibular signals for heading. J Vis 10(11):Article 23
Butler JS, Campos JL, Bülthoff HH, Smith ST (2011) The role of stereo vision in visual-vestibular integration. Seeing Perceiving 24(5):453–470
Calvert GA, Spence C, Stein BE (2004) The handbook of multisensory processes. MIT Press, Boston
Campos JL, Butler JS, Bülthoff HH (2012) Multisensory integration in the estimation of walked distance. Exp Brain Res 218(4):551–565
Campos JL, Siegle J, Mohler BJ, Loomis JM, Bülthoff HH (2009) Imagined self-motion differs from perceived self-motion: Evidence from a novel continuous pointing method. PLoS ONE 4(11):e7793. doi:10.1371/journal.pone.0007793
Campos JL, Byrne P, Sun H-J (2010) The brain weights body-based cues higher than vision when estimating walked distances. Eur J Neurosci 31(10):1889–1898
Campos JL, Bülthoff HH (2011) Multisensory integration during self-motion in virtual reality. In: Wallace M, Murray M (eds) Frontiers in the neural bases of multisensory processes. Taylor and Francis Group, London
Cheng K, Shettleworth SJ, Huttenlocher J, Rieser JJ (2007) Bayesian integration of spatial information. Psychol Bull 133(4):625–637
Cooper AR, Page AS, Wheeler BW, Griew P, Davis L, Hillsdon M, Jago R (2010) Mapping the walk to school using accelerometry combined with a global positioning system. Am J Prev Med 38(2):178–183
Courtine G, Schieppati M (2003) Human walking along a curved path. I. Body trajectory, segment orientation and the effect of vision. Eur J Neurosci 18(3):177–190
Cutting JE, Readinger WO, Wang RF (2002) Walking, looking to the side, and taking curved paths. Percept Psychophys 64(3):415–425
De Luca A, Mattone R, Robuffo Giordano P, Bülthoff HH (2009) Control design and experimental evaluation of the 2D CyberWalk platform. IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, pp 5051–5058
Dingwell JB, Cusumano JP, Cavanagh PR, Sternad D (2001) Local dynamic stability versus kinematic variability of continuous overground and treadmill walking. J Biomech Eng 123(1):27–32
Dufek JS, Mercer JA, Griffin JR (2009) The effects of speed and surface compliance on shock attenuation characteristics for male and female runners. J Appl Biomech 25(3):219–228
Duncan MJ, Mummery WK, Dascombe BJ (2007) Utility of global positioning system to measure active transport in urban areas. Med Sci Sports Exerc 39(10):1851–1857
Eils E, Nolte S, Tewes M, Thorwesten L, Völker K, Rosenbaum D (2002) Modified pressure distribution patterns in walking following reduction of plantar sensation. J Biomech 35(10):1307–1313
Elliott D (1986) Continuous visual information may be important after all: A failure to replicate Thomson. J Exp Psychol: Human Percept Perform 12(3):388–391
Ernst MO (2006) A Bayesian view on multimodal cue integration. In: Knoblich G, Thornton IM, Grosjean M, Shiffrar M (eds) Perception of the human body from the inside out. Oxford University Press, New York, USA, pp 105–131
Ernst MO, Banks MS (2002) Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415(6870):429–433
Ernst MO, Bülthoff HH (2004) Merging the senses into a robust percept. Trends Cogn Sci 8(4):162–169
Faisal A, Selen LPJ, Wolpert DM (2008) Noise in the nervous system. Nat Rev Neurosci 9(4):292–303
Fetsch CR, Turner AH, DeAngelis GC, Angelaki DE (2009) Dynamic reweighting of visual and vestibular cues during self-motion perception. J Neurosci 29(49):15601–15612
Fetsch CR, DeAngelis GC, Angelaki DE (2010) Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory. Eur J Neurosci 31(10):1721–1729
Finnis KK, Walton D (2008) Field observations to determine the influence of population size, location, and individual factors on pedestrian walking speeds. Ergonomics 51(6):827–842
Frissen I, Campos JL, Souman JL, Ernst MO (2011) Integration of vestibular and proprioceptive signals for spatial updating. Exp Brain Res 212(2):163–176
Fukusima SS, Loomis JM, Da Silva JA (1997) Visual perception of egocentric distance as assessed by triangulation. J Exp Psychol: Human Percept Perform 23(1):86–100
Fung J, Malouin F, McFadyen BJ, Comeau F, Lamontagne A, Chapdelaine S, Beaudoin C, Laurendeau D, Hugheyh L, Richards CL (2004) Locomotor rehabilitation in a complex virtual environment. Proceedings of the 36th annual international conference of the IEEE-EMBS, San Francisco, Sept 1–5, pp 4859–4862
Gescheider GA (1997) Psychophysics: The fundamentals, 3rd edn. Lawrence Erlbaum, Mahwah, NJ
Gibson JJ (1950) Perception of the visual world. Houghton Mifflin, Boston
Grasso R, Glasauer S, Takei Y, Berthoz A (1996) The predictive brain: anticipatory control of head direction for the steering of locomotion. NeuroReport 7(6):1170–1174
Grieve DW, Gear RJ (1966) The relationships between length of stride, step frequency, time of swing and speed of walking for children and adults. Ergonomics 9(5):379–399
Gu Y, Angelaki DE, DeAngelis GC (2008) Neural correlates of multisensory cue integration in macaque MSTd. Nat Neurosci 11(10):1201–1210
Hallemans A, Ortibus E, Meire F, Aerts P (2010) Low vision affects dynamic stability of gait. Gait Posture 32(4):547–551
Harris LR, Jenkin M, Zikovitz DC (2000) Visual and non-visual cues in the perception of linear self-motion. Exp Brain Res 135(1):12–21
Hicheur H, Vieilledent S, Berthoz A (2005a) Head motion in humans alternating between straight and curved walking path: Combination of stabilizing and anticipatory orienting mechanisms. Neurosci Lett 383(1–2):87–92
Hicheur H, Vieilledent S, Richardson MJE, Flash T, Berthoz A (2005b) Velocity and curvature in human locomotion along complex curved paths: a comparison with hand movements. Exp Brain Res 162(2):145–154
Hollands MA, Marple-Horvat DE (1996) Visually guided stepping under conditions of step cycle-related denial of visual information. Exp Brain Res 109(2):343–356
Hölscher C, Büchner SJ, Meilinger T, Strube G (2009) Adaptivity of wayfinding strategies in a multi-building ensemble: The effects of spatial structure, task requirements, and metric information. J Environ Psychol 29(2):208–219
Holt KG, Jeng SF, Ratcliffe R (1995) Energetic cost and stability during human walking at the preferred stride frequency. J Motor Behav 27(2):164–178
Imai T, Moore ST, Raphan T, Cohen B (2001) Interaction of the body, head, and eyes during walking and turning. Exp Brain Res 136(1):1–18
Israël I, Berthoz A (1989) Contributions of the otoliths to the calculation of linear displacement. J Neurophysiol 62(1):247–263
Jahn K, Kalla R, Karg S, Strupp M, Brandt T (2006) Eccentric eye and head positions in darkness induce deviation from the intended path. Exp Brain Res 174(1):152–157
Jian Y, Winter DA, Ishac MG, Gilchrist L (1993) Trajectory of the body COG and COP during initiation and termination of gait. Gait Posture 1(1):9–22
Jürgens R, Becker W (2006) Perception of angular displacement without landmarks: evidence for Bayesian fusion of vestibular, optokinetic, podokinesthetic and cognitive information. Exp Brain Res 174(3):528–543
Kerrigan DC, Viramontes BE, Corcoran PJ, LaRaia PJ (1995) Measured versus predicted vertical displacement of the sacrum during gait as a tool to measure biomechanical gait performance. Am J Phys Med Rehabil 74(1):3–8
Knill DC (2005) Reaching for visual cues to depth: The brain combines depth cues differently for motor control and perception. J Vis 5(2):103–115
Knill DC, Richards W (1996) Perception as Bayesian Inference. Cambridge University Press, Cambridge
Knoblauch RL, Pietrucha MT, Nitzburg M (1996) Field studies of pedestrian walking speed and start-up time. Transp Res Record 1538:27–38
Körding KP, Wolpert DM (2004) Bayesian integration in sensorimotor learning. Nature 427(6971):244–247
Loomis JM, Da Silva JA, Fujita N, Fukusima SS (1992) Visual space perception and visually directed action. J Exp Psychol: Hum Percept Perform 18(4):906–921
Loomis JM, Blascovich JJ, Beall AC (1999) Immersive virtual environment technology as a basic research tool in psychology. Behav Res Methods Instrum Comp 31(4):557–564
MacNeilage PR, Banks MS, Berger DR, Bülthoff HH (2007) A Bayesian model of the disambiguation of gravitoinertial force by visual cues. Exp Brain Res 179(2):263–290
Maddison R, Mhurchu CN (2009) Global positioning system: a new opportunity in physical activity measurement. Int J Behav Nutr Phys Act 6:Article 73. doi:10.1186/1479-5868-6-73
Mann RA, Hagy JL, White V, Liddell D (1979) The initiation of gait. J Bone Jt Surg 61(2):232–239
Menz HB, Lord SR, Fitzpatrick RC (2003) Acceleration patterns of the head and pelvis when walking on level and irregular surfaces. Gait Posture 18(1):35–46
Mittelstaedt ML, Mittelstaedt H (2001) Idiothetic navigation in humans: Estimation of path length. Exp Brain Res 139(3):318–332
Mohler BJ, Campos JL, Weyel M, Bülthoff HH (2007) Gait parameters while walking in a head-mounted display virtual environment and the real world. 13th Eurographics symposium on virtual environments and 10th immersive projection technology workshop (IPT-EGVE 2007), Aire-la-Ville, Switzerland, 85–88
Murray MP, Spurr GB, Sepic SB, Gardner GM, Mollinger LA (1985) Treadmill versus floor walking: kinematics, electromyogram, and heart rate. J Appl Physiol 59(1):87–91
Oliver M, Badland H, Mavoa S, Duncan MJ, Duncan S (2010) Combining GPS, GIS, and accelerometry: Methodological issues in the assessment of location and intensity of travel behaviors. J Phys Act Health 7(1):102–108
Patla AE (1997) Understanding the roles of vision in the control of human locomotion. Gait Posture 5(1):54–69
Pearce ME, Cunningham DA, Donner AP, Rechnitzer PA, Fullerton GM, Howard JH (1983) Energy cost of treadmill and floor walking at self-selected paces. Eur J Appl Physiol 52(1):115–119
Pick HL, Wagner D, Rieser JJ, Garing AE (1999) The recalibration of rotational locomotion. J Exp Psychol: Hum Percept Perform 25(5):1179–1188
Pozzo T, Berthoz A, Lefort L (1989) Head kinematic during various motor tasks in humans. Prog Brain Res 80:377–383
Prévost P, Ivanenko Y, Grasso R, Berthoz A (2002) Spatial invariance in anticipatory orienting behaviour during human navigation. Neurosci Lett 339(3):243–247
Readinger WO, Chatziastros A, Cunningham DW, Bülthoff HH, Cutting JE (2002) Gaze-Eccentricity Effects on Road Position and Steering. J Exp Psychol: Appl 8(4):247–258
Rieser JJ, Ashmead DH, Talor CR, Youngquist GA (1990) Visual perception and the guidance of locomotion without vision to previously seen targets. Perception 19(5):675–689
Riley PO, Paolini G, Della Croce U, Paylo KW, Kerrigan DC (2007) A kinematic and kinetic comparison of overground and treadmill walking in healthy subjects. Gait Posture 26(1):17–24
Rodriguez DA, Brown AL, Troped PJ (2005) Portable global positioning units to complement accelerometry-based physical activity monitors. Med Sci Sports Exerc 37(11 Suppl):S572–S581
Schutz Y, Herren R (2000) Assessment of speed of human locomotion using a differential satellite global positioning system. Med Sci Sports Exerc 32(2):642–646
Schwaiger M, Thümmel T, Ulbrich H (2007a) Cyberwalk: An advanced prototype of a belt array platform. Proceedings of IEEE International Workshop on Haptic Audio Visual Environments and their Applications, Ottawa, Canada
Schwaiger M, Thümmel T, Ulbrich H (2007b) Cyberwalk: Implementation of a ball bearing platform for humans. Proceedings of Conference on Human Computer Interaction, Beijing, China
Schwaiger M, Thümmel T, Ulbrich H (2007c) A 2D-motion platform: the cybercarpet. Proceedings of world haptics conference 2007, Tsukuba, Japan, pp 415–420
Sekiya N, Nagasaki H, Ito H, Furuna T (1996) The invariant relationship between step length and step rate during free walking. J Hum Mov Stud 30(6):241–257
Sheik-Nainar MA, Kaber DB (2007) The utility of a Virtual Reality locomotion interface for studying gait behavior. Hum Factors 49(4):696–709
Siegle J, Campos JL, Mohler BJ, Loomis JM, Bülthoff HH (2009) Measurement of instantaneous perceived self-motion using continuous pointing. Exp Brain Res 195(3):429–444
Souman JL, Frissen I, Sreenivasa MN, Ernst MO (2009) Walking straight into circles. Curr Biol 19(18):1538–1542
Souman JL, Robuffo Giordano P, Frissen I, De Luca A, Ernst MO (2010) Making virtual walking real: perceptual evaluation of a new treadmill control algorithm. ACM Trans Appl Percept 7(2):1–14 Article 11
Souman JL, Robuffo Giordano P, Schwaiger M, Frissen I, Thümmel T, Ulbrich H, De Luca A, Bülthoff HH, Ernst MO (2011) CyberWalk: enabling unconstrained omnidirectional walking through virtual environments. ACM Trans Appl Percept 8(4):Article 24
Sparrow WA, Tirosh O (2005) Gait termination: a review of experimental methods and the effects of ageing and gait pathologies. Gait Posture 22(4):362–371
Sreenivasa M (2007) Statistics of natural walking (Master’s thesis). Universität Duisburg-Essen, Duisburg, Germany
Sreenivasa M, Frissen I, Souman J, Ernst MO (2008) Walking along curved paths of different angles: the relationship between head and trunk turning. Exp Brain Res 191(3):313–320
Stoffregen TA, Pittenger JB (1995) Human echolocation as a basic form of perception and action. Ecol Psychol 7(3):181–216
Stolze H, Kuhtz-Buschbeck JP, Mondwurf C, Boczek-Funcke A, Jöhnk K, Deuschl G, Illert M (1997) Gait analysis during treadmill and overground locomotion in children and adults. Electroencephalogr Clin Neurophysiol 105(6):490–497
Stopher P, FitzGerald C, Zhang J (2008) Search for a global positioning system device to measure person travel. Transp Res Part C 16(3):350–369
Sun H-J, Campos JL, Chan GSW (2004a) Multisensory integration in the estimation of relative path length. Exp Brain Res 154(2):246–254
Sun H-J, Campos JL, Young M, Chan GSW, Ellard C (2004b) The contributions of static visual cues, nonvisual cues, and optic flow in distance estimation. Perception 33(1):49–65
Tan H, Wilson AM, Lowe J (2008) Measurement of stride parameters using a wearable GPS and inertial measurement unit. J Biomech 41(7):1398–1406
Tarr MJ, Warren WH (2002) Virtual reality in behavioral neuroscience and beyond. Nat Neurosci 5:1089–1092
Terrier P, Ladetto Q, Merminod B, Schutz Y (2000) High-precision satellite positioning system as a new tool to study the biomechanics of human locomotion. J Biomech 33(12):1717–1722
Terrier P, Schutz Y (2003) Variability of gait patterns during unconstrained walking assessed by satellite positioning (GPS). Eur J Appl Physiol 90(5–6):554–561
Terrier P, Turner V, Schutz Y (2005) GPS analysis of human locomotion: Further evidence for long-range correlations in stride-to-stride fluctuations of gait parameters. Hum Mov Sci 24(1):97–115
Thomson JA (1983) Is continuous visual monitoring necessary in visually guided locomotion? J Exp Psychol: Hum Percept Perform 9(3):427–443
Troped PJ, Oliveira MS, Matthews CE, Cromley EK, Melly SJ, Craig BA (2008) Prediction of activity mode with global positioning system and accelerometer data. Med Sci Sports Exerc 40(5):972–978
Vallis LA, Patla AE (2004) Expected and unexpected head yaw movements result in different modifications of gait and whole body coordination strategies. Exp Brain Res 157(1):94–110
Van den Bergh M, Koller-Meier E, Van Gool L (2009) Real-time Body Pose Recognition using 2D or 3D Haarlets. Int J Comput Vis 83(1):72–84
Wann JP, Swapp DK (2000) Why you should look where you are going. Nat Neurosci 3(7):647–648
Warabi T, Kato M, Kiriyama K, Yoshida T, Kobayashi N (2005) Treadmill walking and overground walking of human subjects compared by recording sole-floor reaction force. Neurosci Res 53(3):343–348
Zarrugh MY, Radcliffe CW (1978) Predicting metabolic cost of level walking. Eur J Appl Physiol 38(3):215–223
Zehr EP, Stein RB (1999) What functions do reflexes serve during human locomotion. Prog Neurobiol 58(2):185–205
Acknowledgments
The work reported in this chapter was funded by the European 6th Framework Programme, CyberWalk (FP6-511092). We would like to thank Jan Souman for his invaluable contributions to the work reported in this chapter.
© 2013 Springer Science+Business Media New York
Frissen, I., Campos, J., Sreenivasa, M., Ernst, M. (2013). Enabling Unconstrained Omnidirectional Walking Through Virtual Environments: An Overview of the CyberWalk Project. In: Steinicke, F., Visell, Y., Campos, J., Lécuyer, A. (eds) Human Walking in Virtual Environments. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-8432-6_6
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4419-8431-9
Online ISBN: 978-1-4419-8432-6