Introduction

The perception of owning our body and the ability to locate it in space are two fundamental requirements for self-consciousness to develop. While the majority of us take these functions for granted, there are some pathological conditions in which these mechanisms are disrupted (e.g. in autotopagnosia, a condition arising from brain damage to posterior parietal cortices, in which the ability to localise one’s own body parts is affected; Guariglia et al. 2002; Pick 1922; Semenza and Goodglass 1985).

Patients with chronic pain have a distorted body image, leading to difficulties not only in representing the correct size of their affected limb (Moseley 2005), but also its position in space (Lotze and Moseley 2007). This close relationship between the position of our body in space and the processing of sensory input is further supported by evidence in healthy participants that the processing of tactile stimuli to the hands is impaired when the hands are crossed over the body midline (Aglioti et al. 1999; Azañón and Soto-Faraco 2008; Eimer et al. 2003; Yamamoto and Kitazawa 2001). This crossed-hands deficit has been interpreted as a result of the mismatch between somatotopic and space-based frames of reference in determining the position of external stimuli (e.g. when the right hand occupies the left side of space and vice versa). Interestingly, the deficit also extends to the intensity of the sensation, such that tactile or noxious stimuli to the hands are perceived as less intense when the hands are crossed than when they are not (Gallace et al. 2011; Sambo et al. 2013; Torta et al. 2013).

Knowing where our body is allows us to navigate our environment efficiently, avoid obstacles and perform our daily activities. In the healthy population, the central nervous system (CNS) integrates a range of internal and external cues with ongoing motor commands (e.g. the "efference copy"; Holst and Mittelstaedt 1950; Sperry 1950) to generate a unique, coherent, multisensory experience. Although the CNS typically integrates multiple cues from different senses, it is still possible to locate one's own body when a number of sensory cues are not available, for example when vision is occluded (for a comprehensive review of non-visual contributions to body position sense, see Proske and Gandevia 2012). Indeed, neurologically intact people are quite accurate in reaching for one hand with the other while keeping their eyes closed. Furthermore, when information about position is available from both the visual and the proprioceptive modalities, the perceived location of the limb has been shown to align more closely with the visual than with the proprioceptive estimate of its location (van Beers et al. 1999a).

Thus, what is the relative role of vision and proprioception in correctly locating one's own body part? Several studies have investigated this issue (Ernst and Banks 2002; Ernst and Bülthoff 2004; Smeets et al. 2006; van Beers et al. 1998, 1999a, b, 2002) and most of them support the idea that the CNS optimises the estimated position by integrating visual and proprioceptive signals. These studies have also shown that humans are less accurate in judging the position of their hand when they cannot see it directly, which suggests a relevant role for visual information. A consistent finding is that when vision is occluded, the perceived location of the hand drifts towards the body (Block 1890; Paillard and Brouchon 1968; Craske and Crawshaw 1975; Wann and Ibrahim 1992). This drifting effect, however, does not occur immediately after vision is occluded, suggesting that the visually encoded body position maintains an influence on the localisation of one's own body. It has been proposed that this influence diminishes as the visually encoded position decays, after which proprioception takes over (Desmurget et al. 2000; see Table 1).

Table 1 Studies investigating the mislocalisation of limbs under different conditions

Critically, the observed drift occurs not only along the sagittal axis, but also along the transverse axis. During visual occlusion, estimates of hand location decrease in accuracy, leading healthy participants to judge their left hand as more leftward and their right hand as more rightward during both reaching estimation (i.e. localisation by pointing with the seen hand; Crowe et al. 1987; Ghilardi et al. 1995; Haggard et al. 2000) and proprioceptive estimation (i.e. no movement of the seen hand; Jones et al. 2010). This directional bias has been explained in terms of a misperception of the hand location relative to the body midline (Jones et al. 2010) and further confirms the predominant role of vision in localising one's own hands (Newport et al. 2001).

Despite the increasing evidence for the importance of vision in localising the hands, the time course of the interaction between vision and proprioception during visual occlusion remains unclear. We suggest that, with one hand hidden from view, participants will initially rely more on vision, locating the hidden hand where they last saw it, on the basis of its visually encoded position. Over time, this visual trace will decay, such that the estimation of location will become more dependent on proprioceptive inputs (Chapman et al. 2000). This suggestion is in line with the maximum likelihood estimation rule (Ernst and Banks 2002). This theory states that, in order to create a unified percept of a stimulus sensed through different modalities, the nervous system combines the information coming from the different sensory modalities in a statistically optimal fashion, such that the sensory modality that carries less variance dominates in determining the final percept. Furthermore, the variance is both direction-specific and sense-specific: proprioception-based localisations are more precise in the radial direction (with reference to the shoulder; thus carrying more variance in the azimuthal direction), whereas vision-based localisations are more precise in the azimuthal direction (with reference to the cyclopean eye; thus carrying more variance in the radial direction; van Beers et al. 1999a, b).
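
To illustrate this rule, a minimal numerical sketch is given below; the noise values are purely illustrative assumptions and are not the parameters estimated by van Beers et al. (1999a, b). Each cue is weighted by its inverse variance, so the less variable cue dominates the combined estimate.

```python
# Minimal sketch of maximum likelihood estimation (MLE) cue combination.
# The variances used here are illustrative assumptions, not fitted values.

def mle_combine(x_vis, var_vis, x_prop, var_prop):
    """Optimally combine a visual and a proprioceptive position estimate."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)   # reliability-based weight
    w_prop = 1 - w_vis
    x_hat = w_vis * x_vis + w_prop * x_prop                # combined position estimate
    var_hat = 1 / (1 / var_vis + 1 / var_prop)             # never larger than either cue's variance
    return x_hat, var_hat

# Example: a seen position of 9 cm from the midline (low visual variance) and a
# felt position of 20 cm from the midline (higher proprioceptive variance).
x_hat, var_hat = mle_combine(x_vis=9.0, var_vis=1.0, x_prop=20.0, var_prop=4.0)
print(x_hat, var_hat)   # 11.2 cm, 0.8: the estimate lies close to the visual cue
```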

In the present study, we investigated how vision and proprioception interact over time in localising one's own hidden hand, by using a bodily visual illusion that alters the perception of where the hand is; that is, the hand appears to be located where it is not. In order to test our hypothesis, we used a new illusion based on the disappearing hand trick (DHT; Newport and Gilpin 2011), delivered using the MIRAGE system. This illusion allowed us to manipulate the relationship between the seen and felt location of the right hand.

We hypothesised that, when making hand localisation judgements, participants would initially rely primarily on the visually encoded position of the hand. However, we expected that, over time, there would be a shift towards relying more heavily on proprioception as the visually encoded position decays. Specifically, we hypothesised that in the 3 min following an illusory condition, in which the visually encoded (perceived) position of the hand is rendered incongruent with its proprioceptively encoded (physical) position, we would observe a faster and larger drift towards the hidden right hand than in a non-illusory condition in which the visually and proprioceptively encoded positions are congruent. A drift towards the right for the right hand is expected in every condition, but according to our hypothesis its nature would differ between conditions. In the Congruent conditions, in which the visually encoded position of the hand is not manipulated (i.e. it is congruent with the proprioceptively encoded position), we predict that participants will localise their hidden hand as more rightwards, in line with the previous research reported above. In the Incongruent condition, in which the visually encoded position of the hand has been manipulated (i.e. only the proprioceptively encoded position of the hand is correct), we instead predict a summation of the directional bias towards the right and an increasing reliance on proprioception over time, which would lead to a larger and faster drift towards the right than in the Congruent conditions during the three minutes following the illusion.

Additionally, in order to better clarify the role of vision in hand localisation, we manipulated the rate of decay of the visually encoded position by asking participants, after their right hand was occluded, either to close their eyes (during which the decay of the visually encoded position should be accelerated; Chapman et al. 2000) or to continue to look at the blank space (during which the decay of the visually encoded position should be slower; Chapman et al. 2000). Furthermore, an increase in the amount of visual exposure to an incorrect visual trace has been found to decrease the reliance on proprioception during a reaching task (Holmes and Spence 2005). As a consequence, a faster decay of the visual trace might accelerate the reliance on proprioception and thus produce a larger and faster drift towards the right when the eyes are closed prior to the localisation judgements.
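
The predicted time course can be made concrete with the hypothetical sketch below, which treats the decay of the visual trace as a growth in the variance of the remembered visual position and models eye closure as a faster growth rate; all rates are assumptions chosen purely for illustration.

```python
import numpy as np

# Hypothetical illustration of the predicted time course: after occlusion the
# variance of the remembered (visually encoded) position grows, so the weight
# given to vision under MLE falls and proprioception takes over. Closing the
# eyes is modelled as a faster growth of visual variance. All rates are assumed.
t = np.arange(0, 181, 15)            # seconds after occlusion (13 time points)
var_prop = 4.0                       # proprioceptive variance, assumed constant
var_vis_eo = 1.0 + 0.05 * t          # eyes open: slower decay of the visual trace
var_vis_ec = 1.0 + 0.20 * t          # eyes closed: faster decay (assumed)

w_vis_eo = (1 / var_vis_eo) / (1 / var_vis_eo + 1 / var_prop)
w_vis_ec = (1 / var_vis_ec) / (1 / var_vis_ec + 1 / var_prop)
# The visual weight drops below 0.5 sooner in the eyes-closed case, i.e. the
# switch to reliance on proprioception is predicted to occur earlier.
```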

Experiment 1

Materials and methods

Participants

Sixteen healthy volunteers (eight males, mean age 31 ± 11 years) participated. All participants had normal or corrected-to-normal visual acuity and were right-handed (self-reported). They had no current or past neurological impairment and no current pain or history of a significant pain disorder. They were also naïve to the purpose of the study. All participants gave written consent prior to their participation in the experiment. The study was performed in accordance with the ethical standards laid down in the 1991 Declaration of Helsinki and was approved by the Human Research Ethics Committee of the University of South Australia.

Apparatus and experimental setup

Participants viewed a real-time video image of their hands in first-person perspective using the MIRAGE system (Newport et al. 2009). A combination of mirrors and a camera allowed participants to view their hands in the same spatial location and from the same perspective as if they were directly viewing their real hands (Newport et al. 2010). The seen position of the participants' right hand could be manipulated and presented in real time via customised in-house software. In particular, the participants' right hand could appear to them in its true location, where vision and proprioception offered congruent input (i.e. the Congruent control conditions), or in an alternative location, in which vision and proprioception were incongruent.

Procedure

In all conditions, participants were seated at a table with their hands resting inside the MIRAGE system (Fig. 1). In this position, they could see an online image of their hands. An opaque fabric bib was secured around each participant's neck, with its bottom edge attached to the MIRAGE, to conceal the position of the elbows and thus remove any additional visual cues to hand location. The height of the chair was adjusted such that participants were able to look inside the MIRAGE and to comfortably raise their hands and forearms above the surface of the table.

Fig. 1
figure 1

Experimental setup. The participants were seated at a table with their hands resting inside the MIRAGE system. A fabric bib was attached to prevent the participants from seeing the position of their elbows. The chair was adjusted for each participant to ensure a comfortable position throughout the experiment. The pictures on the right show the participants' perspective while watching their hands move between the blue bars inside the MIRAGE

Before starting the experiment, participants underwent a training procedure to familiarise themselves with the localisation task. During the training task, participants practised hand localisation by stopping a visual arrow (presented via the MIRAGE software, directly above their actual hand location) when the arrow reached the middle finger of their hidden right hand. The main goals of the training procedure were: (1) to ensure that participants could fixate on a spot within a blank space without being distracted by the movement of the arrow and (2) to ensure that they could stop the arrow accurately, even under time constraints. The training involved three stages, for a total of 22 practice localisations. Participants were allowed to practise until they felt fully confident with the task and its timing. Then, the experimental conditions commenced (see Supplemental Materials for an extensive explanation of the practice trials). Importantly, the training trials were performed at the very beginning of the experimental session, and the aim of this procedure was simply to ensure that participants had fully understood the task and were entirely familiar with it.

In all experimental conditions (see Fig. S1, Supplemental Materials), participants underwent an adaptation procedure in which they were asked to hold their hands approximately 5 cm above the table surface and to maintain the position of their hands between two moving blue bars positioned on either side of each hand (see Supplemental Materials). In all conditions, both hands were initially positioned approximately 13 cm laterally from the body midline. During the adaptation procedure, the positions of the blue bars were shifted laterally, and the seen image of each hand could be moved independently of its real location, so that the position of the hands could be gradually shifted relative to their seen position. The position of the right hand was varied across three conditions (Incongruent, Congruent Outer, Congruent Inner). In the Incongruent condition, the seen image of the right hand moved inwards at approximately 25 mm/s. Thus, in order to maintain the appearance of their right hand remaining stationary, participants were (unknowingly) required to move their right hand outwards at the same rate. This adaptation yielded a visuo-proprioceptive discrepancy between the seen and real positions of the hand. In this illusory condition, the adaptation procedure resulted in the actual position of the participants' right hand being 11 cm further to the right (20 cm from midline) than the seen position (9 cm from midline). Conversely, in the Congruent control conditions, the movement of the visual image was identical to the real movement of the right hand.

There were two Congruent conditions, defined by the final hand position: the Congruent Outer condition (right hand moves from 13 to 20 cm from the midline) and the Congruent Inner condition (right hand moves from 13 to 9 cm from the midline). These two conditions were designed to control for both the seen position of the hand (9 cm from midline) and the real position of the hand (20 cm from midline) in the Incongruent condition. The final true location of the right hand was identical between the Incongruent and the Congruent Outer conditions, and the final seen location of the right hand was identical between the Incongruent and the Congruent Inner conditions. The movement of the left hand seen on the screen was congruent with the participant's real hand movement in all conditions, such that its final position was 9 cm from the body midline (4 cm further inwards than its initial position).
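
For clarity, the final hand positions in the three conditions can be summarised as follows (positions in centimetres from the body midline, taken from the description above; the snippet is only an illustrative summary).

```python
# Final right-hand positions (cm from the body midline) in each condition, as
# described above; the visuo-proprioceptive discrepancy is seen minus real.
conditions = {
    #                  (seen, real)
    "Incongruent":     (9.0, 20.0),   # hand seen 11 cm to the left of where it really is
    "Congruent Outer": (20.0, 20.0),  # matches the real hand position of the Incongruent condition
    "Congruent Inner": (9.0, 9.0),    # matches the seen hand position of the Incongruent condition
}

for name, (seen, real) in conditions.items():
    print(f"{name}: seen {seen:.0f} cm, real {real:.0f} cm, discrepancy {seen - real:+.0f} cm")
```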

Immediately after the adaptation procedure, the experimenter placed the participant’s hands on the table (maintaining their position between the blue bars) and participants kept both hands still. They were instructed to fixate on their right hand. In all conditions, the right hand was then occluded from view (i.e. disappeared from the screen). The participants were then either asked to close their eyes for 20 s (Eyes Closed, EC) or to fixate on the space in which they had seen their right hand (Eyes Open, EO). Thus, each of the three conditions (Incongruent, Congruent Outer and Congruent Inner) was repeated twice—once with the eyes open and once with the eyes closed. In the EC condition, once the eyes were open again, participants were instructed to fixate on the location where they felt their hand to be. Then the localisation task commenced (Fig. S1 Supplemental Materials, see description below). In order to avoid any reaching error bias due to mislocalisation of the non-experimental hand, we used a localisation task that did not require any hand movement (i.e. a moving arrow as used in the training task).

Participants performed the six conditions in a randomised, counterbalanced order: Congruent Inner, EO and EC; Congruent Outer, EO and EC; Incongruent, EO and EC (see Table S1, Supplemental Materials). Following each condition, participants verbally responded to a questionnaire (see Table S2, Supplemental Materials), rating their agreement with each statement on a scale from zero to ten, in order to check whether they were aware of the visual illusion performed in the Incongruent conditions. The questionnaire was a shortened version of that used in the original DHT experiment (Newport and Gilpin 2011). At the very end of the experimental session, the experimenters briefly interviewed the participants. The participants were told that in one or more conditions the seen position of their hands was not their actual position, because a visual illusion had been elicited. They were then asked whether they had been aware of this and to report, if possible, in which condition (or conditions) the illusion had been performed.

Localisation task

The localisation task did not require any movement of either hand. Reaching tasks, which are typically used to localise one's own body parts, require reach planning. Such tasks have been shown to utilise proprioceptive information, to rely on accurate localisation of the non-experimental hand (Jones et al. 2010) and to incorporate effort and motor command components (Proske and Gandevia 2012). As mentioned above, participants fixated on the point of the screen corresponding to the perceived location of the middle finger of their hidden right hand. An arrow (controlled by the experimenter) was displayed centrally in the upper part of the screen, pointing towards the participant. The arrow moved at a constant speed (2.65 cm/s) horizontally in the direction of the right hand (i.e. outwards from the midline). Participants were instructed to say "stop" when they judged the arrow to be aligned vertically with the tip of their hidden right middle finger. This gave the experimenter a numerical value corresponding to the position of the arrow on the screen, which was recorded for each localisation. It was not possible to blind the experimenters to the conditions, so the experimenter controlling the arrow looked away from the screen during the localisation task in order to minimise any possible interference due to expectations about the localisation outcome. The same experimenter also visually monitored the participants' gaze direction. The arrow was first displayed 20 s after the right hand had disappeared from view, during which time the participants either kept looking at the spot where they felt their right hand to be (EO conditions) or kept their eyes closed (EC conditions). The arrow returned to the starting point in the centre of the screen immediately after each localisation. Participants performed the localisation task every 15 s, for a total of 13 localisation values. A second experimenter recorded each value before the arrow was returned to the starting point by the first experimenter. Following the localisation task, participants remained with their hands in position inside the MIRAGE but viewed a blank screen, allowing the experimenters to record the numerical value of the real position of the participant's right hand without revealing it to the participant. This was done using exactly the same procedure as in the localisation task, i.e. recording the numerical value of the arrow when it was aligned exactly with the participant's fingertip.
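
As a rough sketch of the trial timeline implied by these numbers (a 20 s delay after the hand disappears, followed by 13 judgements taken every 15 s), the judgements span approximately 3 min; the short, variable time the arrow takes to travel to the judged position is ignored here.

```python
# Approximate schedule of the 13 localisation judgements, in seconds relative
# to the moment the right hand disappears from view (an idealised sketch).
DELAY_AFTER_OCCLUSION = 20       # eyes closed or fixating the blank space
INTER_JUDGEMENT_INTERVAL = 15
N_JUDGEMENTS = 13

judgement_times = [DELAY_AFTER_OCCLUSION + i * INTER_JUDGEMENT_INTERVAL
                   for i in range(N_JUDGEMENTS)]
print(judgement_times)           # 20, 35, ..., 200 s: the 13 points span 3 min
```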

Data analysis

For each participant and for each condition, the localisation error was calculated as the difference between the participant's judged location and the true location of the hidden hand. The true hand location was set at zero, such that overestimations (i.e. mislocalisations to the right of the hidden hand) were represented by positive values and underestimations (i.e. mislocalisations to the left of the hidden hand) by negative values. Because the data did not satisfy the assumptions of a conventional ANOVA (i.e. homogeneity of the regression slopes), we analysed the error values with a random-effects analysis of variance. A random-effects model is used when the factor levels are intended to be representative of a wider population of possible levels (e.g. when participants are treated as a random sample of all possible participants), so that conclusions based on the sample can be extended to the population of levels; this approach is appropriate when there is a large number of possible levels (e.g. see Larson 2008). Based on the graphical plot of the data and on the Wald Z test, the factor Participants was treated as a random factor, and the factors Congruency (Congruent Outer, Congruent Inner, Incongruent), Sight (Eyes Open, EO; Eyes Closed, EC), Time (13 points over 3 min) and their interactions (Congruency × Sight, Congruency × Time, Time × Sight) were treated as fixed factors. Different models were compared using Schwarz's Bayesian information criterion (BIC), and the best-fitting model, which included a random intercept (Participants) and random slopes (Congruency, Sight, Time), was identified.
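
The original analysis was run in SPSS; purely for illustration, a broadly equivalent mixed-effects model could be specified in Python as sketched below. The data layout, column names and the use of statsmodels are our own assumptions, not the authors' implementation.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of a mixed-effects model analogous to the random-effects analysis
# described above. Assumed long-format data: one row per localisation with
# columns 'participant', 'congruency', 'sight', 'time' (1-13) and 'error' (cm).
df = pd.read_csv("localisation_errors.csv")      # hypothetical file name

model = smf.mixedlm(
    "error ~ C(congruency) * C(sight) + C(congruency) * time + C(sight) * time",
    data=df,
    groups=df["participant"],                    # random intercept per participant
    re_formula="~ C(congruency) + C(sight) + time",  # random slopes
)
result = model.fit(reml=False)                   # ML fit so that BIC values are comparable
print(result.summary())
print("BIC:", result.bic)                        # criterion used to compare candidate models
```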

In order to investigate overall differences in error values between conditions (i.e. participants' accuracy), all error scores were normalised to the first localisation judgement, and a 2 (EO, EC) × 3 (Congruent Outer, Congruent Inner, Incongruent) repeated-measures ANOVA compared error across conditions. Since Mauchly's test of sphericity was significant, a Greenhouse–Geisser correction was applied.
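
A sketch of this normalisation and the repeated-measures ANOVA is shown below, using statsmodels' AnovaRM for illustration; the column names, the averaging of the normalised errors over the 13 time points and the separate handling of the Greenhouse–Geisser correction (which AnovaRM does not apply) are all assumptions.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Normalise each participant x condition series to its first localisation
# judgement, then reduce over the 13 time points and run a 2 (Sight) x 3
# (Congruency) repeated-measures ANOVA. Column names are assumptions.
df = pd.read_csv("localisation_errors.csv")
first = (df[df["time"] == 1]
         .rename(columns={"error": "first_error"})
         [["participant", "congruency", "sight", "first_error"]])
df = df.merge(first, on=["participant", "congruency", "sight"])
df["norm_error"] = df["error"] - df["first_error"]

agg = (df.groupby(["participant", "congruency", "sight"], as_index=False)
         ["norm_error"].mean())
res = AnovaRM(agg, depvar="norm_error", subject="participant",
              within=["congruency", "sight"]).fit()
print(res)   # note: a Greenhouse-Geisser correction would be applied separately
```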

For the questionnaire scores, we performed a one-way repeated-measures ANOVA to compare the participants' rating scores across conditions (Congruent Outer, Congruent Inner, Incongruent) for each of the seven questionnaire items. If Mauchly's test of sphericity was significant, a Greenhouse–Geisser correction was applied.

In addition, Pearson's correlation coefficients were computed to assess the relationship between the error scores at T13 (i.e. the last localisation) and the rating scores for the question "I couldn't tell where my hand was". For each of the main conditions (i.e. Congruent Outer, Congruent Inner and Incongruent), the rating score for this item was calculated as the mean of the ratings given at the end of the Eyes Open condition and at the end of the Eyes Closed condition.
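
A sketch of these correlations using scipy is given below; the data frame layout and column names are again assumptions for illustration.

```python
import pandas as pd
from scipy.stats import pearsonr

# Correlate the error at the last localisation (T13) with the mean rating for
# "I couldn't tell where my hand was" (averaged over the EO and EC runs).
df = pd.read_csv("t13_and_ratings.csv")   # hypothetical: one row per participant x congruency
for cond in ["Congruent Outer", "Congruent Inner", "Incongruent"]:
    sub = df[df["congruency"] == cond]
    r, p = pearsonr(sub["t13_error"], sub["mean_rating"])
    print(f"{cond}: r = {r:.3f}, p = {p:.3f}")
```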

All analyses were carried out using SPSS (IBM SPSS Statistics 21).

Results

There was a significant effect on error values of Congruency [F(2, 36.31) = 105.63, p < 0.001; r = −0.661] (Fig. 2a) and of Time [F(1, 15.03) = 11.64, p < 0.005; r = 0.102]; that is, the overall error value reduced over time. No main effect of Sight [F(1, 18.69) = 0.072, p = 0.791] was detected. This indicates that both Congruency and Time modulated the perceived location of the participants' hidden hand. We observed significant interactions between Congruency and Sight [F(2, 1162.68) = 9.60, p < 0.001] and between Congruency and Time [F(2, 1162.68) = 9.72, p < 0.001] (Fig. 2b, c), suggesting that the main effect of Time was mainly driven by the Incongruent condition (this effect was detected in the Incongruent but not in the Congruent conditions). No other interactions were significant.

Fig. 2
figure 2

Results (Experiment 1). a For the factor Congruency, the error values in the Incongruent conditions were significantly different from those found in the Congruent Inner (p < 0.001) and Congruent Outer (p < 0.001) conditions, in which mean error values were both positive. b The very first localisation was set to zero (and the subsequent error points were recalculated accordingly) in order to highlight the increase in error over time; the significant interaction between Congruency and Time (p < 0.001) showed a larger and quicker drift towards the right in the Incongruent conditions than in either of the Congruent conditions. c The significant interaction between Sight and Congruency. In all panels, mean error scores are plotted and vertical bars represent standard errors

For the factor Congruency (Fig. 2a), in the Incongruent conditions participants' error values across all localisations were negative (i.e. to the left of the actual location of the hand) and were significantly different from those found in the Congruent Inner [t(60.24) = 9.766, p < 0.001] and Congruent Outer [t(60.24) = 9.006, p < 0.001] conditions, in which mean error values were both positive (i.e. to the right of the actual location of the hand). The positive error values in the control conditions indicate an overestimation of the hand position to the right of its real location, in line with previous studies showing a rightward drift when the occluded hand is the right one (Jones et al. 2010). Conversely, the negative error values in the Incongruent conditions indicate an underestimation of hand position, that is, a mislocalisation to the left of the real location of the hand and thus in the opposite direction to the drift found in the Congruent conditions. Since the last seen position was more leftwards than the real position of the hand, these findings support the idea that the initial localisation judgements were captured by the visual trace of the hand.

The Congruency by Time interaction (Fig. 2b) showed that the change in error values over time was larger in the Incongruent condition than in the Congruent Inner condition [b = −0.99, t(1162.68) = −1.98, p = 0.048] or the Congruent Outer condition [b = −1.44, t(1162.68) = −2.85, p = 0.004]. In the Incongruent condition, this change was from larger negative error values to smaller negative error values, i.e. moving towards the correct hand position. In the Congruent conditions, this change was from smaller positive error values to larger positive error values, i.e. moving away from the correct hand position. This result indicates a greater amount of drift over time in the Incongruent condition than in the control conditions. This drift was consistently in a rightward direction, as in the Congruent conditions, but given this significant difference we can hypothesise that the localisations in the Incongruent condition were not merely drifting rightwards but were also moving towards the real location of the hidden hand.

Finally, closing the eyes for 20 s after the hand disappeared led to a more rapid improvement in localisations during the Incongruent conditions but not during the Congruent conditions (Fig. 2c). In the Incongruent conditions, the mean localisation error was −6.53 cm (90 % CI −7.56 to −5.50) for the EO trials and −5.62 cm (90 % CI −6.66 to −4.59) for the EC trials. By contrast, the mean localisation errors were similar between EO and EC trials in the Congruent Inner conditions (EO mean = 1.48 cm, 90 % CI 0.45 to 2.51; EC mean = 1.00 cm, 90 % CI −0.03 to 2.03) and the Congruent Outer conditions (EO mean = 0.86 cm, 90 % CI −0.17 to 1.89; EC mean = 0.17 cm, 90 % CI −0.86 to 1.20).

The analysis of the overall differences in error values between conditions (i.e. participants' accuracy) revealed a significant effect of Congruency [Wilks' Lambda = 0.390, F(2,14) = 10.958, p < 0.001], but no effect of Sight [Wilks' Lambda = 0.970, F(1,15) = 0.460, p = 0.508] and no interaction effect [Wilks' Lambda = 0.861, F(2,14) = 1.126, p = 0.139]. Bonferroni-corrected pairwise comparisons (α = 0.0167) revealed that accuracy was significantly lower in the Incongruent condition than in both the Congruent Inner (p = 0.001) and Congruent Outer (p = 0.007) conditions. No significant difference was found between the two Congruent conditions. We interpreted this result as confirmation that the perceived position of the right hand was indeed deceived and that this deception persisted over time. Alternatively, one may argue that the mislocalisation of the right hand during the Incongruent condition could simply be explained as a visual capture of hand position (Pavani et al. 2000). Thus, in order to check that the participants were indeed unaware of the difference between the Congruent and Incongruent conditions, and so to rule out the possibility that the effect we found was merely due to visual capture, we analysed the questionnaire ratings. Of specific relevance was the question "I couldn't tell where my right hand was", as higher ratings for this question in the Incongruent condition (vs. the Congruent conditions) would suggest that participants were aware of the deception and thus unsure of their actual hand position. The repeated-measures ANOVA comparing the participants' rating scores across conditions (Congruent Outer, Congruent Inner, Incongruent) showed no effect of Congruency for any of the questionnaire items (see Table S1 in the Supplemental Materials). This result supports the conclusion that the participants were naïve to the experimental manipulations, although the null result might simply reflect a low sensitivity of the questionnaire in evaluating the participants' ability to determine where their right hand was. The lack of awareness regarding the experimental manipulations is also supported by the participants' final self-reports: none of the participants claimed to be aware of the illusion and, when asked to identify the condition(s) in which the illusion had been performed, they reported that they were guessing. None of the participants correctly identified both of the Incongruent conditions.

Finally, the Pearson's correlation coefficients, computed for the Congruent Inner, Congruent Outer and Incongruent conditions, did not reveal any significant correlation between the error scores at T13 and the questionnaire scores for the question "I couldn't tell where my hand was" (Congruent Outer: r = −0.292, n = 16, p = 0.273; Congruent Inner: r = −0.42, n = 16, p = 0.105; Incongruent: r = −0.326, n = 16, p = 0.217). This suggests that the amount of error at the last localisation was not related to the participants' perceived ability to locate their hidden hand. Moreover, the absence of any significant correlation supports the idea that the participants did not realise that a visuo-proprioceptive manipulation had been performed only in the Incongruent condition (and not in the Congruent conditions). This was also confirmed by the debriefing with the participants at the end of the experimental session.

Experiment 2

In order to rule out the possibility that the shift towards the right was merely an effect of the direction of the arrow movement used in the localisation task, we designed a second experiment in which we simply varied this direction. The participants performed two conditions (both incongruent) that differed only in the starting point and direction of the arrow.

Materials and methods

Participants

Eighteen healthy volunteers (10 males, mean age 33 ± 9 years) participated. The conditions were randomised and counterbalanced across participants. All participants had normal or corrected-to-normal vision and were right-handed (self-reported). They had no current or past neurological impairment and no current pain or history of a significant pain disorder. They were also naïve to the aims of the study. All participants gave written consent prior to their participation in the experiment. The study was performed in accordance with the ethical standards laid down in the 1991 Declaration of Helsinki and was approved by the Human Research Ethics Committee of the University of South Australia.

Procedures

The participants underwent the original DHT (Newport and Gilpin 2011) twice. This illusion differed from the illusion used in Experiment 1 only in that both hands actually moved. We know from pilot data that this difference does not modulate the effects of the arrow direction on the localisation responses. Importantly, since the aim of this experiment was not merely to replicate the results of Experiment 1, the procedure was changed in order to maximise the effect of the adaptation procedure on the localisation task: because both hands moved, any possible asymmetry in arm movement could be ruled out, allowing participants to focus solely on the localisation task. During the localisation task, in one condition the arrow started from the centre of the screen and moved rightwards (as in Experiment 1), while in the other condition it moved at the same velocity from the right-hand side of the screen towards the centre. The task was otherwise exactly the same as that described above.

Results

We performed a 2 (Arrow Direction: Centre to Right, Right to Centre) × 2 (Time: T0, T12) repeated-measures ANOVA. A main effect of Time [η² = 0.52, F(1,17) = 18.38, p < 0.001] showed that localisation error scores were more accurate (i.e. less negative) on the last judgement (T12 mean = −9.23 cm, SE = 0.86, 95 % CI −11.05 to −7.40) than on the first (T0 mean = −11.64 cm, SE = 0.46, 95 % CI −12.61 to −10.58). There was no main effect of Arrow Direction [F(1,17) = 3.17, p = 0.093] and no significant interaction between Arrow Direction and Time [F(1,17) = 2.06, p = 0.170]. Thus, in line with Experiment 1, participants became more accurate over time, but the direction of the arrow did not influence the extent of the rightward drift (see Fig. 3).

Fig. 3
figure 3

Results (Experiment 2). There was no significant difference in localisation performance whether the arrow moved from the centre of the screen towards the hidden hand or from the right-hand side of the screen towards the centre

Discussion

Our results support our prediction that, when the perceived hand position differs from the physical hand position (due to a visual illusion), in the three minutes following visual occlusion of the hand participants rely less on vision and more on proprioception, such that hand localisation judgements become more accurate (i.e. closer to the physical position of the hand) over time. Conversely, we had hypothesised that providing participants with congruent physical and perceived locations of the hand would result in more accurate hand localisation judgements than when a visuo-proprioceptive incongruence was introduced. We controlled for the role of vision by accelerating the decay of the visual trace, that is, by having participants close their eyes immediately after hand occlusion. When the participants were forced to rely more on proprioception (i.e. when the physical position of the hand differed from its perceived position), the switch to proprioception occurred earlier when they closed their eyes before the localisation task than when they kept them open.

Accuracy in the localisation task decreased over time after the visual occlusion of the hand, as evidenced by the increase in error values over the three minutes following the hand occlusion: when the visually encoded (perceived) hand position was the same as the proprioceptively encoded (physical) position, the localisation judgements diverged from the physical position of the hand over time in line with a directional bias. Our hypothesis that a visuo-proprioceptive incongruence (produced by the illusion) would increase the use of proprioception to localise the hand was also confirmed, by the finding of an acceleration of the drift towards the real position of the hand in the condition in which the illusion was performed. This result is consistent with the maximum likelihood estimation theory of multisensory integration (Ernst and Banks 2002), which suggests that the sensory modality that dominates in a given situation is the one that carries the lowest variance. In the Incongruent condition, the increase in accuracy with time since the last visual confirmation of hand position suggests that remembered visual information has more variance (due to decay of the visually encoded position) than ongoing proprioceptive information; even in stationary sitting, there are continual perturbations, incurred by breathing, cardiac rhythm and postural sway, that are sufficient to activate low-threshold proprioceptive organs (see Proske and Gandevia 2012). This idea is supported by the finding that accelerating the decay of the visual trace, by closing the eyes, improved hand localisation only in the Incongruent condition, in which the visual information was inaccurate. While we did not explicitly predict that the effect of closing the eyes would be specific to the Incongruent condition, it follows naturally from our account: we hypothesised that vision would interfere with correct localisation only when the visual trace is inaccurate and that, when this occurs (i.e. in the Incongruent condition), participants would rely more on proprioception over time, leading to an increase in the accuracy of hand localisation. Thus, an earlier decay of the visual trace could hasten the switch from vision to proprioception. Our results support this idea: closing the eyes only matters when an inaccurate visual trace is provided, and in that case it leads to more accurate localisations compared with keeping the eyes open.

One might argue that the effect we found could be due to a spontaneous return towards the real position. However, once the illusion is in place, the hands are stationary and there would be no reason to update their perceived position. In Newport and Gilpin's study (2011), immediately after the right hand disappeared from view, the participants were required to reach across with their left hand to touch their right hand. All participants failed to touch their disappeared hand, showing that the real position of the hand had not yet been updated. We can therefore argue that, in our experiment, the visually encoded position of the hand is maintained until evidence indicates otherwise. However, our results show that, even though there is no actual or potential motor requirement, the location of the hand is updated on the basis of the available data, in this case proprioceptive input (since visual input is no longer available). If there is a biological advantage in being ready for movement even when none is expected, then this constant updating or recalibration would be helpful. Importantly, the shift in weighting given to proprioception is not immediate and complete, but rather occurs gradually over time.

Alternatively, during the adaptation procedure of the Incongruent condition, a recalibration of the felt position of the hand to the seen position of the hand may have occurred, such that the relationship between proprioceptive and visual information was updated to the detriment of proprioception. A decay of this recalibration between proprioceptive and visual information offers another possible explanation for the increased accuracy of hand localisation judgements over time in the Incongruent condition. Previous work using prism adaptation, in which the seen position of the hand is manipulated, suggests that participants, under certain conditions, might start to use new visuospatial coordinates for their limb (Rossetti et al. 1998). Importantly, when the adaptation is removed, this recalibration spontaneously decays (Newport and Schenk 2012). It may be that our data reflect a corollary of this spontaneous decay seen in prism adaptation; again, the fact that the decay occurred more quickly when visual information was removed would support this idea. Our data are in line with both the maximum likelihood estimation account and the recalibration hypothesis; however, it was not our intention to adjudicate between these theories.

Early prioritisation of vision

In line with our hypotheses, in all conditions participants first localised their hidden right hand at a point towards the last seen location of the hand. This was true both for the Congruent conditions (where the last seen location matched the true location of the hand) and for the Incongruent condition (where it did not). In the Incongruent condition, localisation scores were significantly leftwards (i.e. towards the last seen location) and less accurate than those in the two control conditions, which supports the dominant role of vision in the localisation of our hands. Our data confirm and extend previous findings relating the duration of visual exposure to the reliance on proprioception (Holmes and Spence 2005). Holmes and Spence found that the longer participants were allowed to look at the (incorrect) position of their right hand, the less they relied on proprioception, tending instead to rely on vision. We show that the opposite also holds: when the decay of the visually encoded position is accelerated (by closing the eyes), the relative weighting of incoming sensory information switches sooner to proprioception, to the detriment of vision.

The directional bias and the proprioceptively encoded position of the hand

Regardless of Congruency, a rightward drift was found in all the experimental conditions. A number of studies have shown that mislocalisation of one's own arm and hand occurs when vision is occluded (Block 1890; Paillard and Brouchon 1968; Craske and Crawshaw 1975; Desmurget et al. 2000; Smeets et al. 2006; Wann and Ibrahim 1992). It is well established that when healthy participants are asked to locate their own hidden hand in space, there is a directional bias towards the attended side of space (i.e. the right hand is judged as being more rightwards and the left hand as being more leftwards) (Crowe et al. 1987; Ghilardi et al. 1995; Haggard et al. 2000; Jones et al. 2010; van Beers et al. 1998). Thus, the significant rightward shift in localisations over time in all conditions is consistent with the drift reported in prior studies. Not only did we observe the same drift (in this case towards the right) in all conditions, but we also found that this drift increased over time. We propose that this directional bias is driven by the fact that the participants were engaged in a task occurring in that portion of space. Owing to the well-established decay of the visually encoded position over time after hand occlusion (Chapman et al. 2000), the influence of this bias, although present from the first localisation, seems to become prevalent, leading to localisations that are increasingly shifted towards the side in which the participants were performing the localisation task (i.e. to the right in our experiment). Thus, over time, the ability to localise one's own limb in space becomes less accurate because of the reliance on a rapidly fading visually encoded position. However, if the fading visually encoded position were the only reason for less accurate localisations, the localisation judgements would be distributed randomly around the real hand location, both to its right and to its left. Instead, a specific trend towards the right, beyond the true (or last seen) location, was found. The question addressed here is why, when participants start to become less accurate in localising their hidden right hand, they systematically localise it increasingly towards the right. Our hypothesis accounts for this trend by suggesting that, while the visual trace decays, a bias towards the space in which the experiment is occurring guides the localisations. We can also exclude the possibility that this directional bias was simply the product of the arrow movement, as clearly shown by the results of Experiment 2.

One might argue that the shift towards the right is simply due to a cumulative error effect (i.e. the successive summation of the error produced by each consecutive response in a task; Bock and Arnold 1993; Dijkerman and de Haan 2007) caused by the repeated measures. However, cumulative error effects have been found in, and related to, motor tasks. For example, in Bock and Arnold's study (1993), the cumulative errors were directly related to the motor component of the task. Similarly, Jones et al. (2010), on the basis of work by Dijkerman and de Haan (2007), noted that reaching tasks might lead to kinematic errors that cannot be disentangled from localisation errors. Our protocol did not involve repeated movements, but repeated judgements of an independently moved arrow. Moreover, in the Incongruent condition, our protocol did not show accumulating error, but accumulating accuracy. Finally, even if the drift did reflect an error accumulating relative to the visually encoded location of the hand, it should then be consistent across conditions, which it was not.

Importantly, the drift towards the right side differed significantly between the Congruent and the Incongruent conditions. We interpret this difference as evidence of the contribution of proprioception when there is a visuo-proprioceptive incongruence (i.e. when the physical and perceived positions of the hand are different), but not when vision has simply faded away (i.e. when the hand is merely hidden from view). Equal accuracy in the localisation task across the three conditions would have suggested a reliance primarily on proprioception (i.e. in the Incongruent conditions, no matter where the perceived position was, participants would have correctly localised the position of the hidden hand). By contrast, a similar amount of rightward drift across all conditions would have suggested that the localisations were mainly guided by the directional bias. We contend that the greater rightward drift when the visual trace was unreliable confirms that an updated proprioceptive input drives the rightward shift over and above any generic directional bias. Our findings thus confirm that vision is prioritised over proprioception even when the visual input is inaccurate but that, over time, proprioception is in turn prioritised over the directional bias.

Our experiments do not exclude the possibility that a proprioceptive component was also present in the two control conditions. However, they do suggest that this component is stronger when vision is unreliable. Importantly, the adaptation procedure used in the Incongruent conditions left participants unaware of any difference between the control conditions and the illusion, as reported after the experiment and confirmed by the responses to the questionnaire. Crucially, this indicates that the switch from a visually to a proprioceptively encoded location of the hidden hand occurred entirely outside of participants' awareness. There is clearly a complex interaction between visual, proprioceptive (and task-related) factors in the self-localisation of one's own hand. In particular, we shed light on the relative roles of vision and proprioception over time, concluding that sighted, neurologically healthy participants tend to rely heavily on vision even when the visually encoded position of their hidden hand has decayed and become unreliable, which in turn seems to result in a strong directional bias due to the task itself. In addition, our findings underline the important contribution of proprioception when vision is unreliable. Although in most cases the physical (proprioceptively encoded) position of the hand is ignored (or perhaps just underweighted), there are circumstances in which proprioception can be used effectively to locate one's own body part accurately. Vision gives us distal information about the external world, allowing us to make predictions without directly contacting a potentially dangerous stimulus (Gregory 1997). It therefore seems evolutionarily advantageous to rely heavily on visual information in many situations. However, there are cases in which proprioception becomes not just useful but essential. In particular, people who are blind or partially sighted and who are in a situation similar to the one described here would have to rely on proprioception (indeed, the occluded hand is inserted into a box-like system, making other strategies, such as echolocation, highly unlikely). Knowledge about the relative weighting of sensory inputs for self-localisation is also important for a variety of disorders in which proprioception is known to be damaged. In cerebral palsy, for example, a deficit in the visuo-proprioceptive system has been observed (e.g. Wann 1991). In addition, patients whose sense of touch is severely damaged (as in the case of deafferentation) are unable to locate their body in space and navigate the environment; in order to execute a movement successfully, these patients need to monitor their limbs visually during execution (Cole and Paillard 1995). It is also well known that chronic pain involves disturbances of the motor system (e.g. Moseley 2004) and of body image (e.g. Moseley 2005) that may also disrupt proprioception (see Lotze and Moseley 2007, for a review). Moreover, recent research has highlighted the relationship between the mechanisms underlying the processing of body location and nociception (Gallace et al. 2011; Sambo et al. 2013; see also Moseley et al. 2012, for a review).

Future directions

In order to provide further support for the idea that the drift towards the right (i.e. towards the real position of the hand) in the Incongruent conditions is indeed due to the heavier weight assigned to proprioception, future experiments will need to investigate a condition in which the seen position is to the right of the true hand position (instead of to the left, as in the present experiments). In line with the results of our study, a proprioception-guided drift towards the real position of the hand (i.e. towards the body midline, in the opposite direction to that observed in the present study) should also be found under this condition. However, according to the extant literature (Crowe et al. 1987; Ghilardi et al. 1995; Haggard et al. 2000; Jones et al. 2010), and as confirmed by the results of the present study, a rightward mislocalisation of the right hand (i.e. the directional bias) should also be found. The directional bias, being in the opposite direction to the proprioception-guided drift, might therefore reduce the leftward drift. Further experiments will thus be needed to investigate the effect of the direction of the adaptation procedure on self-localisation performance when the directional bias and the proprioception-guided drift pull in opposite directions.

Conclusion

In conclusion, when the perceived hand position differs from the physical hand position (due to a visual illusion), we demonstrate a time-dependent shift from reliance on visually encoded information to reliance on proprioceptively encoded information, experimentally reversing the seemingly usual dominance of vision in localising the body. In addition, we characterised the time course of self-localisation when visual information becomes less reliable and, possibly, when proprioception becomes more reliable (due to consistent signals coming from the limb to be localised): over time, participants switch from a vision-based localisation strategy to a proprioception-based one. Finally, we provide new evidence supporting the claim that the brain updates limb location even when there is no conscious need to do so (Haggard and Wolpert 2001).