Introduction

The disappearing hand trick (Newport and Gilpin 2011) manipulates vision, proprioception and touch to create the multisensory illusion that the participant's hand is no longer present. Through a process of visuo-proprioceptive adaptation, the illusion gradually separates the seen location of the hand from its real location, without conscious awareness, so that the hand is perceived to be much closer to the midline than it really is. Following this adaptation, the hand can be hidden from view so that, when the participant reaches across to touch its seen location, nothing is felt (because the hand is really somewhere else): the empty table top is seen and felt in its place. Although the illusion was first described in 2011, little is known about the processes that make it, and the accompanying experience of a missing body part, so effective. Newport and Gilpin interpreted the absence of a skin conductance response to threats against the 'disappeared' hand as a sense of disownership over it (Newport and Gilpin 2011), while more recent research (Bellan et al. 2015) suggested that healthy participants lose track of the location of their hidden hand, but not of its existence: they know it is somewhere, but they do not know where.

The ability to localise one's own hand involves assigning weights to inputs from numerous sensory sources and combining this information into a judgement of location [e.g. (Ernst and Bulthoff 2004)]. The first objective of the present study was to determine the extent to which the weighting assigned to a particular area of space (i.e. spatial weighting) influences the perceived location of a limb in that space. To do this, we manipulated spatial weighting through the auditory channel (Experiment 1): brief tones were emitted at either the left or the right side of the participant immediately before they judged the location of their right hand. According to the spatial rule of multisensory integration [see (Spence 2013) for an extensive discussion], such auditory cues interfere with the spatial weighting driven by vision. It is well established that localisation of a target is faster when the target and an auditory cue appear on the same side than when they appear on opposite sides (Bernstein et al. 1969; Bernstein and Edelstein 1971; Ro et al. 2009; Simon and Craft 1970; Spence and McDonald 2004; Spence et al. 2004, 2008; Spence and Driver 1994). More generally, when a new cue is presented in the periphery, attention is automatically redirected towards its location (Jonides 1981); thus, a new auditory cue from the right-hand side elicits a saccade towards its location, changing the weight assigned to the right portion of space (Kean and Crawford 2008). It is also worth noting that the inferior colliculus (IC) plays a crucial role not only in the processing of sounds (the auditory pathway) but also in multisensory and non-auditory processing of inputs (Gruters and Groh 2012). This supports the idea that the processing of, for example, visual information can be affected by concurrent auditory information, and it assigns the auditory system an essential role in alerting. We hypothesised that localisation judgements would be modulated by the location of the auditory cues, being shifted towards the side on which the cue was presented, thus reflecting a contribution of spatial weighting to hand localisation.
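For intuition, in the maximum-likelihood framework referred to above (Ernst and Bulthoff 2004), each sensory estimate is weighted by its relative reliability. Schematically (the symbols are ours):

$$\hat{x} = w_v\,\hat{x}_v + w_p\,\hat{x}_p, \qquad w_v = \frac{1/\sigma_v^{2}}{1/\sigma_v^{2} + 1/\sigma_p^{2}}, \qquad w_p = 1 - w_v,$$

where $\hat{x}_v$ and $\hat{x}_p$ are the visually and proprioceptively estimated hand positions and $\sigma_v^{2}$ and $\sigma_p^{2}$ their variances; the noisier a cue, the less it contributes to the combined estimate $\hat{x}$.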

The second objective of the present study was to investigate the extent to which the weighting of proprioceptively and visually encoded data is modulated by explicit knowledge of incongruence between the two senses (Experiments 2 and 3). After induction of the disappearing hand trick (Newport and Gilpin 2011), in which the last seen location of the hand does not match its true location, participants reached over with their opposite hand to touch the area where they perceived their hand to be. That they feel only the table surface during this illusion evokes a powerful realisation that the hand is not where it is perceived to be. In line with previous research (Bellan et al. 2015), we predicted that this manoeuvre would rapidly increase the weighting placed on somatosensory proprioceptive input, because visual information has been unequivocally proven inaccurate. We hypothesised that this type of incongruent feedback would induce more accurate localisations than those observed without the reaching component, indicating a more rapid shift in relative weighting from vision to proprioception.

To reiterate, we had two primary hypotheses: first, that the weight given to different portions of space contributes to hand localisation accuracy; second, that explicit knowledge that the perceived location of the hand is incorrect accelerates the shift in relative weighting from vision to proprioception.

Methods

Ethical approval

All participants gave written consent prior to participating in the study. The study was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and was approved by the Human Research Ethics Committee of the University of South Australia.

Participants

Eighteen healthy volunteers (10 males, mean age: 33 ± 9 years) took part in Experiment 1; nine randomly selected participants from this group (5 males, mean age: 29 ± 8 years) also took part in Experiment 2. A further nine naïve participants (3 males, mean age: 22 ± 3 years) were recruited for Experiment 3. Sample size was determined a priori on the basis of previous research (Bellan et al. 2015). Within each experiment, the conditions were randomised and counterbalanced across participants. All participants had normal or corrected-to-normal vision and were right-handed (self-reported). They had no current or past neurological impairment involving the upper limbs, and no current pain or history of significant pain disorder. They were also naïve to the aims of the study.

Procedures (Fig. 1)

In all experiments, participants were seated at a table with their hands resting inside a device called MIRAGE, which allows manipulation of a live video image of the participants' hands, seen in the same physical plane as the real hands (Bellan et al. 2015; Newport and Gilpin 2011; Newport et al. 2009). In this position, participants could see a live image of their hands. An opaque fabric bib was secured around the participant's neck, with its bottom edge attached to the MIRAGE, to conceal the position of the elbows and thus remove any additional visual cues to hand location. The height of the chair was adjusted so that participants could look inside the MIRAGE and comfortably raise their hands and forearms above the surface of the table.

Fig. 1 a Actual and seen position of the hands after adaptation. b Distances and angles between the right and left hands and between the hands and the loudspeaker during the localisation task

Before starting the experiment, participants underwent a training procedure to familiarise themselves with the localisation task (Bellan et al. 2015). During training, participants practised hand localisation by stopping a visual arrow (presented via the MIRAGE software, directly above their actual hand location) when it reached the middle finger of their hidden right hand. The main goals of the training were: (1) fixating on a spot within a blank space without being distracted by the movement of the arrow and (2) being able to stop the arrow accurately, even under time constraints. The training involved three stages, for a total of 22 practice localisations. Participants were allowed to practise until they felt fully confident with both the task and its timing; the experimental conditions then commenced. Importantly, the training trials were performed at the very beginning of the experimental session, so the aim of this procedure was simply to ensure that participants fully understood the task and were familiar with it (see Bellan et al. 2015 for further details).

In all experiments, participants underwent the adaptation component of the illusion known as the disappearing hand trick, in which the visual and proprioceptive positions of the hands were rendered incongruent. During this phase, participants held their hands (each initially positioned about 7 cm laterally from the body midline, thus 14 cm apart) approximately 5 cm above the table surface and kept each hand positioned between two moving blue bars (Bellan et al. 2015; Newport and Gilpin 2011). The positions of the blue bars were manipulated laterally so that, by independently moving the seen image of the hands relative to their real locations, the real positions of the hands could be gradually shifted away from their seen positions. In all conditions, the seen image of the hands moved inwards at approximately 25 mm/s; thus, to maintain the appearance of stationary hands, participants were (unknowingly) required to move both hands outwards at the same rate. This adaptation produced a visuo-proprioceptive discrepancy between the seen and real positions of the hands: the actual position of each hand ended up 20 cm from the midline (40 cm from the other hand), whereas the seen position remained 7 cm from the midline (14 cm from the other hand) (see Fig. 1a). After the adaptation procedure, the participants' right hand was hidden from view and participants were asked to perform the localisation task with which they had familiarised themselves before the experimental session. Participants focused on the perceived position of their hidden right hand in the blank space while a red arrow moved from the centre of the screen towards the right-hand side [we have previously shown that the direction of arrow movement does not affect localisation performance (Bellan et al. 2015)], and said 'stop' when they felt the arrow was aligned with their right middle finger. A total of 13 localisation trials (one every 15 s) were performed for each condition [for a detailed description of the localisation task and the training session, see (Bellan et al. 2015)]. Pilot data collected after Experiment 2 suggested that reducing the number of trials to 7 would decrease the cognitive burden without affecting the average localisation errors; thus, in Experiment 3, only 7 localisation trials were performed.
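To make the adaptation geometry concrete, the following minimal sketch (ours, not part of the MIRAGE software) computes the seen-versus-real discrepancy implied by the numbers above, under the simplifying assumption of a single constant, monotonic 25 mm/s drift; the real procedure is more gradual, so the resulting time is a lower bound.

```python
# Constants are taken from the text; the constant-drift assumption is ours.
DRIFT_CM_PER_S = 2.5        # seen image drifts inwards at ~25 mm/s
START_OFFSET_CM = 7.0       # each hand's initial distance from the midline
END_OFFSET_CM = 20.0        # each hand's real distance after adaptation

def discrepancy_cm(t_s: float) -> float:
    """Per-hand seen-versus-real discrepancy (cm) after t_s seconds."""
    return min(DRIFT_CM_PER_S * t_s, END_OFFSET_CM - START_OFFSET_CM)

# Minimum time needed to build the full 13-cm per-hand discrepancy:
t_full = (END_OFFSET_CM - START_OFFSET_CM) / DRIFT_CM_PER_S
print(f"full discrepancy after about {t_full:.1f} s")   # ~5.2 s
```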

In Experiment 1, we addressed the question: do auditory cues that change the weight given to different portions of space affect localisation judgements? According to the phenomenon of 'attentional capture', when a new cue is presented in the periphery, attention is automatically redirected towards its location (Jonides 1981); in other words, during the processing of surrounding events, the portion of space where a new cue is presented gains weight. Auditory cues can also influence saccade direction: a new auditory cue from the right-hand side, for instance, elicits a saccade towards its location, changing the weight assigned to the right portion of space (Kean and Crawford 2008). This idea is generally supported by the spatial rule of multisensory integration (Spence 2013). There were therefore three conditions: in two of them a tone was played at each localisation, and in the third no tone was played (no tone, NT). In the 'tone' trials, each time the arrow started moving, a single 0.1-s tone (44.1-kHz sample rate; retrieved 12 September 2013 from http://www.soundjay.com/button/sounds/beep-08b.mp3) was played. The tones originated from a loudspeaker placed on the left (tone left, TL) or the right (tone right, TR) side of the MIRAGE device, approximately 65–70 cm away from the participant's chest, with the loudspeaker position standardised between participants (see Fig. 1b). The loudspeaker was hidden behind the device so that it was not visible to the participants.
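Purely to illustrate the resulting trial structure, one condition could be scripted as below. This is a sketch under our own assumptions: the functions play_tone, start_arrow and wait_for_stop are hypothetical placeholders, not the actual MIRAGE API.

```python
import random
import time

TRIALS_PER_CONDITION = 13   # one localisation every 15 s (Experiments 1 and 2)
TRIAL_INTERVAL_S = 15.0
TONE_DURATION_S = 0.1

# Hypothetical stand-ins for the hardware/software interface:
def play_tone(side, duration_s):
    print(f"tone on the {side} ({duration_s} s)")

def start_arrow():
    print("red arrow starts moving rightwards")

def wait_for_stop():
    return 0.0  # arrow x-position when the participant says 'stop'

def run_condition(condition):
    """condition: 'TL' (tone left), 'TR' (tone right) or 'NT' (no tone)."""
    judgements = []
    for _ in range(TRIALS_PER_CONDITION):
        if condition == "TL":
            play_tone("left", TONE_DURATION_S)
        elif condition == "TR":
            play_tone("right", TONE_DURATION_S)
        start_arrow()                        # tone coincides with arrow onset
        judgements.append(wait_for_stop())
        time.sleep(TRIAL_INTERVAL_S)         # one trial every 15 s
    return judgements

# Condition order randomised and counterbalanced across participants:
order = random.sample(["TL", "TR", "NT"], k=3)
```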

In the original disappearing hand trick, described in Newport and Gilpin (2011), participants were asked to reach across with their left hand to touch their hidden right hand. Because of the adaptation procedure, they failed to touch the right hand and instead saw their left hand touch only empty workspace, which generated the feeling of a disappeared hand. In Experiment 1 of the current study, however, participants were not allowed to move either hand after the adaptation procedure. The question we addressed in Experiment 2 was therefore: can the reaching procedure modulate the perceived position of the hidden right hand by providing conscious knowledge that the hand is not where it was thought to be? Immediately after the right hand disappeared from view, participants were either told to keep both hands perfectly still (the No Reach condition) or to reach across to their hidden right hand with their left hand (the Reach condition). The localisation task commenced immediately after the left hand had returned to its original position following the reach (Reach condition) or immediately after the right hand had disappeared from view (No Reach condition).

In Experiment 3, we aimed to disentangle whether the difference between the Reach and No Reach conditions was due to the realisation that the right hand was not where it was thought to be or, rather, to the reaching movement itself. Two conditions were therefore performed: the No Reach condition (as in Experiment 2) and a Reach Forward condition. Participants underwent a brief training session in which they learnt to perform the 'reaching forward' movement with their left hand. With both hands in view, the experimenter placed a coin in front of the left hand and asked participants to reach for it with a single smooth movement of that hand. Participants then repeated the same movement while looking at their right hand. Finally, the right hand was hidden from view and participants practised a few times reaching for the coin while looking at the spot in the blank space where they thought their right hand was. Participants were then told that, during the experiment, the coin would be out of view but still present. In fact, the coin was absent during the experiment, in order to recreate the same 'failing to reach' sensation that participants had in the Reach condition of Experiment 2. Participants practised the reaching forward movement until they felt confident with the task, and the experiment then began. The Reach Forward condition was identical to the Reach condition of Experiment 2 except that, after the right hand disappeared from view, participants had to reach forward to touch the coin, return the left hand to its original position and start the localisation task. The order of the conditions was randomised and counterbalanced between participants.

Statistics

Localisation error scores were calculated as the difference between the participants' judged location and the true location of their hidden hand. Scores were adjusted so that the true hand location was set at 0 and mislocalisations to the left of the hidden hand were represented by negative values; thus, negative values approaching zero indicated an increase in accuracy (i.e. a rightward shift). In Experiment 1, to investigate the effect of auditory cueing on the weight given to different portions of space, a repeated-measures within-subject ANOVA (factor: tone; TL, tone left, vs. TR, tone right, vs. NT, no tone) was performed on the average localisation error scores calculated for each condition and each participant. In Experiment 2, to investigate whether explicit knowledge that the perceived location of the hand is incorrect accelerates the shift in relative weighting from vision to proprioception, a within-subject t test (Reach vs. No Reach) was performed on the average localisation error scores calculated for each condition and each participant. In addition, a trend analysis was performed to compare the slopes at each point in time (i.e. each subsequent localisation) for the two conditions (Reach and No Reach). In Experiment 3, to investigate whether a simple movement of the contralateral hand accelerates the shift in relative weighting from vision to proprioception, a within-subject t test (Reach Forward vs. No Reach) was performed on the average localisation error scores calculated for each condition and each participant. In addition, to exclude baseline differences between the two groups, a between-subject t test was performed between the No Reach conditions of Experiments 2 and 3. Finally, to directly compare the effect of explicit knowledge of hand position with the effect of a general movement of the hand in view, a third between-subject t test was conducted (Reach vs. Reach Forward).
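A minimal sketch of this analysis pipeline follows; the variable names and the placeholder data are ours and, as noted below, the real analyses were run on pixel values and converted to centimetres only afterwards.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Placeholder data standing in for the real measurements: one mean
# localisation error per participant per tone condition, scored so that
# the true hand position is 0 and leftward errors are negative.
df = pd.DataFrame({
    "participant": np.repeat(np.arange(18), 3),
    "tone": np.tile(["TL", "TR", "NT"], 18),
    "error": rng.normal(-10.5, 3.0, size=54),
})

# Experiment 1: one-way repeated-measures ANOVA (factor: tone).
print(AnovaRM(data=df, depvar="error", subject="participant",
              within=["tone"]).fit())

# Experiment 2: within-subject (paired) t test, Reach vs. No Reach.
reach = rng.normal(-8.6, 2.7, size=9)
no_reach = rng.normal(-11.3, 2.4, size=9)
print(stats.ttest_rel(reach, no_reach))

# Experiments 2 vs. 3: between-subject t test on the No Reach baselines.
no_reach_exp3 = rng.normal(-10.5, 2.6, size=9)
print(stats.ttest_ind(no_reach, no_reach_exp3))
```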

All analyses were performed on raw data expressed in pixels. To make the findings more meaningful, a conversion from pixels to centimetres was applied after analysis, so that the results (means and standard deviations) could be expressed in centimetres.

At the end of the experimental session, the experimenters collected participants’ self-reported reflections on the experiment (see Supplementary Material).

Results

Tests for normality examining standardised skewness and the Shapiro–Wilk test indicated that the data from all experiments were statistically normal. The hypothesis that modulation of spatial weighting by auditory cueing would modulate hand localisation was not supported: the repeated-measures ANOVA revealed no significant effect of tone [TL, tone left, vs. TR, tone right, vs. NT, no tone; Wilks' lambda = 0.86, F(2, 16) = 1.23, p = 0.134]. Mean localisation error was −11.14 (SD 2.72) cm in the TL condition, −10.13 (SD 3.24) cm in the TR condition and −10.89 (SD 3.21) cm in the NT condition (Fig. 2).

The hypothesis that explicit knowledge that the hand is not where it was perceived to be accelerates the shift in relative weighting from vision to proprioception, thus improving localisation accuracy, seemed to be supported: a paired-sample t test showed more accurate localisation in the Reach (R) condition than in the No Reach (NR) condition [R = −8.64 (SD 2.70) cm; NR = −11.35 (SD 2.35) cm; t(8) = 3.21, p = 0.013]. That is, reaching across to touch the disappeared hand before starting the localisation task increased the accuracy of the localisations (Fig. 3).

Fig. 2 Effect of acoustic signals on the weight given to different portions of space (bars indicate standard deviation). Negative values represent underestimation of hand position (i.e. mislocalisation to the left of the hand)

Fig. 3 Effect of 'reaching' (R, Reach; Rf, Reach Forward; NR, No Reach) on the weight given to different portions of space (bars indicate standard deviation). Negative values represent underestimation of hand position (i.e. mislocalisation to the left of the hand)

We also performed a trend analysis to compare the R and NR conditions over time and found similar slopes but a significant offset between the two conditions (p < 0.001). This suggests that the shift in relative weighting of vision and proprioception occurred immediately, rather than accelerating over time; indeed, the lines do not intersect (see Fig. 4).

Fig. 4 Localisation errors over time (repetitions) in the Reach and No Reach conditions (Experiment 2). The data are averaged across participants (bars indicate standard deviation)
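As an illustration of the trend analysis reported above, the following sketch (our construction) fits a line to the errors in each condition and compares slopes and offsets; the arrays hold placeholder values that mimic the qualitative pattern of Fig. 4, not the real data.

```python
import numpy as np
from scipy import stats

# Placeholder series mimicking Fig. 4 (similar slopes, constant offset);
# these are NOT the real data.
reps = np.arange(1, 14)                       # 13 localisations per condition
rng = np.random.default_rng(1)
reach = -8.6 + 0.05 * reps + rng.normal(0, 0.3, size=13)
no_reach = -11.3 + 0.05 * reps + rng.normal(0, 0.3, size=13)

fit_r = stats.linregress(reps, reach)
fit_nr = stats.linregress(reps, no_reach)
print(fit_r.slope, fit_nr.slope)              # similar slopes...
print(fit_r.intercept - fit_nr.intercept)     # ...with a near-constant offset
```

Parallel lines with a constant offset are consistent with an immediate, rather than accumulating, advantage of the reach.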

Taken together, these results suggest that the advantage produced by the reaching movement affects the very first localisation and is preserved across all subsequent localisations, representing a general advantage in the localisation task.

However, an alternative explanation could be that the increase in accuracy was due to a general update of the sensory–motor system rather than to explicit knowledge of hand position. This second explanation was in fact supported: a paired-sample t test showed more accurate localisation in the Reach Forward (Rf) condition than in the No Reach (NR) condition [Rf = −8.04 (SD 3.39) cm; NR = −10.50 (SD 2.60) cm; t(8) = −3.58, p = 0.008] (Fig. 3). That is, reaching forward to touch the coin before starting the localisation task increased the accuracy of the localisations. Comparisons between Experiments 2 and 3 also showed that the two samples did not perform differently in the No Reach conditions [t(8) = 0.74, p = 0.483] or in the Reach conditions [t(8) = 0.35, p = 0.739] (see Table 1 in the Supplementary Material for raw data from Experiments 2 and 3). Thus, a general reaching movement before starting the localisation task is sufficient to increase the accuracy of the localisations, even when hand position remains unknown.

Discussion

The experiments reported here tested two primary hypotheses: (1) that the weight given to different portions of space contributes to hand localisation accuracy and (2) that explicit knowledge that the perceived location of the hand is inaccurate accelerates the shift in relative weighting from vision to proprioception. Our findings supported neither.

Our results replicated our previous observation (Bellan et al. 2015) and extended it by showing that proprioception plays a powerful role in localising the body when vision is, or suddenly becomes, unreliable. The shift in localisations seems most likely to be driven by a shift in the relative weighting of vision and proprioception, rather than by a shift in the relative weighting of space. That is, the current findings do not support the idea that the rightward drift in localisation was due to greater weight being assigned to the right side of space (i.e. the side towards which participants were attending): if that were true, a sound from the opposite (left) side should have impaired the drift, and it did not. We can therefore rule out the possibility that the effect found here, as well as in Bellan et al. (2015), was simply due to spatial weighting rather than to proprioception.

The results of our second experiment show that a reaching movement towards the hidden right hand accelerated the shift in relative weighting from vision to proprioception. In the original disappearing hand trick (Newport and Gilpin 2011), the procedure included this reaching movement, that is, reaching across with the left hand to touch the hidden right hand; disconfirming the right hand's location led to reported sensations of disownership over it. Our post-experiment interviews, which used open-ended questions, did not reveal such disownership, but they did show that participants genuinely did not know where their hand was. The crucial aspect of this interpretation is that, regardless of beliefs about actual hand position (Table 2, Supplementary Material), participants were consistently more accurate after gaining explicit knowledge that they had been wrong. This cannot be explained by their presuming the hand to be further right: of the whole cohort, only one participant predicted that his right hand had been more rightwards than where he reached. That is, the improved accuracy was not simply a logical deduction; if it were, we would expect it to resemble the cognitive explanations participants gave.

There are at least two processes that might play a role in enhancing the participants' accuracy. The first is a sensory–cognitive process: explicit knowledge driven by tactile information (i.e. not touching the right hand and, in turn, not feeling the right hand being touched) disconfirms the visually driven perceived hand location. According to this explanation, participants explicitly interpreted the (unexpected) lack of tactile input on both hands as an indication that their right hand was not where they thought it was. However, when participants remained unaware that a manipulation of hand localisation had taken place [i.e. no tactile information was involved, as in the NR conditions, in Experiment 1 and in our previous findings (Bellan et al. 2015)], some unconscious process led them to shift their localisations to the right anyway (i.e. closer to the real location of the hidden hand). Thus, explicit knowledge driven by tactile input is not necessary for localisations to improve.

Instead, a sensory–motor process driven by proprioception might be more plausible. Previous studies using tendon vibration to manipulate perceived hand position or features of a body part (Ehrsson et al. 2005; Lackner 1988; Longo et al. 2009) have found that, after a new proprioceptive input is induced, participants rapidly readjust their body representation and position accordingly. For example, it has been shown that the perceived orientation of the entire body can be modified simply by vibrating the biceps tendon (Lackner 1988); indeed, that work induced impossible perceived configurations of body parts, such as the fist being inside the head, on the basis of proprioceptive input alone. These findings suggest that proprioceptive information by itself can significantly alter body localisation. It should also be considered that the sensory–motor system provides information about the position of different body parts relative to each other [e.g. (Dijkerman and de Haan 2007)] and that proprioceptive organs can be differentially involved in different tasks, for example pointing versus matching (Tsay et al. 2014). When both hands rest on the table without moving, the sensory–motor system will not detect any changes beyond the subtle inputs produced by normal bodily sway or respiration. Conversely, when the left hand performs the reaching movement, the sensory–motor system is required to update the current position of the body, because the right hand is not located where it was last seen. This recalibration of body position could, in turn, increase the weighting placed on proprioception when making hand localisation judgements. Nonetheless, research has demonstrated that amputees can learn to perform a physiologically impossible movement with their intact phantom limb (Moseley and Brugger 2009), supporting the idea that modifications of the body representation do not necessarily depend on proprioceptive input but can also be induced by purely top-down mechanisms.

If the sensory–motor recalibration explanation (i.e. recalibration triggered by the movement of the participant's left hand) were true, we would expect an increase in accuracy not only after a movement directed specifically at the hidden hand, but also after any generic reaching movement, either active or passive, as this would still be sufficient to induce the sensory–motor system to update information about body position. This alternative explanation was tested in Experiment 3. The results show that a reaching movement of the hand in view towards a neutral, non-visible object (a coin) yields significantly more accurate localisations than when no movement is performed (the No Reach condition). Interestingly, localisation errors after reaching for the coin and after reaching towards the hidden hand were not significantly different, confirming that explicit knowledge of hand position is not a crucial factor in increasing localisation accuracy. The possibility that a baseline difference in localisation ability between the two groups of participants (Experiments 2 and 3) could better explain the results was ruled out by comparing the corresponding No Reach conditions; this comparison was not significant.

Our findings highlight a switch between the roles of vision and somatosensory proprioceptive input: the position of the hidden hand seems to be recalculated on the basis of new sensory information (after the reaching movement). However, the idea that somatosensory proprioceptive input is given more weight only once visual input has been proven unreliable was not supported by our data. This is interesting because previous research on hand localisation has typically investigated the roles of vision and proprioception by rendering one or the other unavailable (i.e. hiding the hand from view). We extended those findings by rendering the visual position of the hand inaccessible (that is, making it disappear from view) and, in addition, by manipulating proprioception (via the proprioceptive adaptation manoeuvre).

Finally, our auditory cueing task did not modulate localisation accuracy, although a direction-specific trend, with the mean localisation error furthest from zero in the TL condition (−11.14 cm) and closest to zero in the TR condition (−10.13 cm; NT condition: −10.89 cm), raises the possibility that we were underpowered to detect an effect. Even so, any such effect is clearly quite small, and the contribution of spatial weighting is therefore relatively low in comparison with that of proprioceptive input.

Part of our motivation for this study lies in the disorders of spatial processing and perceptual acuity that we have observed in people with pathological pain states (Moseley et al. 2009, 2012a, 2013; Reid et al. 2015; see Moseley et al. 2012b for review). Those studies raise the possibility that localisation problems reflect differential spatial weighting centred on the body midline. The current findings argue against this possibility, although it remains possible that, in a disordered system such as that found in pathological pain, a different relationship between spatial weighting and localisation exists.