Introduction

Peripersonal space is the multisensory space immediately surrounding the body or, more specifically, the sector of space that closely surrounds a given body part. A crucial factor distinguishing the perception of immediate surroundings from that of more distant space is the potential for interaction with objects within peripersonal space, that is, the possibility of reaching, grasping or avoiding them.

In evolutionary terms, visually based detection of nearby objects would be highly useful for preparing appropriate motor actions. In higher vertebrates, nearby objects can be detected by integrating multiple sources of sensory information. An ever-growing body of evidence suggests that the visual space surrounding the body (peripersonal space) is coded in the monkey through multisensory integration at the single-neuron level (Rizzolatti et al. 1981; Duhamel et al. 1991; Graziano and Gross 1995; Duhamel et al. 1998; Graziano and Gross 1998; Rizzolatti et al. 1998). The putamen and some parietal and premotor areas contain multisensory neurons with tactile and visual receptive fields whose locations are roughly matched in space. These neurons respond both to cutaneous stimuli and to visual stimuli presented close to the body part (e.g., the head or the hand) where the tactile receptive field is located, and their visual responses decrease as stimulus distance increases. In addition, these bimodal neurons operate to some degree in body part-centered coordinates, in that the visual receptive field remains spatially anchored to the tactile receptive field when the body part moves. Owing to these functional properties, it has been suggested that the premotor cortex, parietal areas and putamen form an interconnected system for multisensory coding of near peripersonal space centered on body parts (Colby et al. 1993; Fogassi et al. 1996; Graziano et al. 1997; Duhamel et al. 1998; Fogassi et al. 1999).

Cerebral areas specialized in the processing of peripersonal space have also been identified in humans. A recent functional magnetic resonance imaging study located areas in the human intraparietal sulcus and lateral occipital complex that represent nearby visual space with respect to the hands (Makin et al. 2007), and Lloyd et al. (2006) observed activations in the posterior parietal cortex when humans perceived aversive objects in peripersonal space.

In humans, however, most evidence for the existence of a multisensory system representing peripersonal space comes from neuropsychological studies of patients suffering from cross-modal extinction after a right hemisphere stroke. In these studies, a visual stimulus presented near the ipsilesional (right) hand often extinguished the perception of a simultaneous tactile stimulus on the contralesional (left) hand, whereas when the right visual stimulus was presented far from the hand, the degree of extinction was lower (di Pellegrino et al. 1997; Ladavas et al. 1998a). Furthermore, when the hands were held in a crossed position (such that the left hand lay in the right hemispace and vice versa), visual stimulation near the right hand still induced extinction of left-hand tactile stimuli, indicating that this visuo-tactile interaction is anchored to the hand itself rather than to a fixed region of external space.

Recently, Reed et al. (2006) reported attentional prioritization of the space near the hand in neurologically normal subjects. The authors showed that targets are detected faster near the hand than away from it. Moreover, analysis of data obtained with a visual covert-orienting paradigm led to the conclusion that hand presence affected the attentional prioritization of space rather than the shifting of attention. A series of four control experiments demonstrated that target detection time near the hand decreased when either proprioceptive or visual information about hand location was available, but not with arbitrary visual anchors or distant targets. These interesting results raise two further questions. Firstly, is visual sensitivity increased in the near-hand space, or does the decrease in detection time merely reflect a lower response criterion resulting from a greater readiness to react to approaching objects? Secondly, in the study by Reed and colleagues, visual stimuli far from the hand were also in the visual hemifield contralateral to the visible hand, while near-hand stimuli occurred in the same hemifield as the hand. Consequently, near-hand stimuli projected to the same hemisphere that processes visual, tactile and proprioceptive inputs from the visible hand, raising the possibility that the decrease in detection time reflects within-hemisphere or within-hemispace facilitation.
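
In signal detection terms, this first question separates sensitivity (d′) from response criterion (c): a genuine gain in sensitivity raises d′, whereas mere readiness to respond shifts c. As a minimal sketch (the hit and false-alarm rates below are hypothetical, not data from any study):

```python
from scipy.stats import norm

def dprime_and_criterion(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Compute the standard signal detection indices d' and c."""
    z_hit = norm.ppf(hit_rate)          # z-transform of the hit rate
    z_fa = norm.ppf(fa_rate)            # z-transform of the false-alarm rate
    d_prime = z_hit - z_fa              # sensitivity
    criterion = -0.5 * (z_hit + z_fa)   # negative c = more liberal responding
    return d_prime, criterion

# Hypothetical rates giving similar sensitivity but a more liberal
# criterion in the second case:
print(dprime_and_criterion(0.90, 0.20))  # approx. (2.12, -0.22)
print(dprime_and_criterion(0.95, 0.35))  # approx. (2.03, -0.63)
```

A near-hand advantage in d′ would indicate genuinely sharper vision; an unchanged d′ with a more negative c would indicate only a lowered response criterion.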

In the present study, we determined whether visual sensitivity is increased in the near-hand space in neurologically normal subjects. In a series of four experiments, we compared visual detection and spatial discrimination near (1 cm) and far (40 cm) from the hand, using a flashing light-emitting diode (LED) as the stimulus. The paradigms of Experiments 1–3 were comparable to that of Reed et al. (2006), except that the task was difficult enough to yield errors, which were analyzed together with reaction times. In Experiment 4, subjects' hands were placed in the midsagittal plane to rule out the possibility of within-hemisphere or within-hemispace facilitation.

Experiment 1

Subjects

The experiment involved eight subjects (two females, six males, mean age = 26 years). Informed consent was obtained, and the study was performed according to the 1964 Helsinki Declaration and was approved by the local ethics committee. All subjects were volunteers and were naïve with respect to the experimental hypotheses being tested. All subjects reported normal or corrected-to-normal vision, and had no history of neurological disorders.

Materials and procedure

The stimuli were two red LEDs subtending 0.8° of visual angle, fixed on a table 50 cm from the eyes and 40 cm on either side of the midsagittal plane. A green LED (luminance = 24 cd/m2) served as a fixation point and was fixed to the table in the midsagittal plane, also at a 50 cm distance. Subjects were asked to place the middle finger of their non-dominant hand 1 cm from the target LED on the same side (Fig. 1). The responding hand was placed under the table, equidistant from the two target LEDs.
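
For reference, a target subtending visual angle θ at viewing distance D has physical size s = 2D tan(θ/2); taking the nominal 50 cm viewing distance, the 0.8° target LEDs would measure roughly 2 × 50 cm × tan(0.4°) ≈ 0.7 cm across.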

Fig. 1

Schematic drawing of the setting for Experiment 1

Subjects were seated with the viewing distance kept constant by a chin rest. Straight-ahead fixation on the green LED was monitored with an eye-tracker (ISCAN-ETL 500) with a spatial resolution of 2 min of visual angle. On each trial, the fixation LED and one of the target LEDs were switched on for 500 ms. In Experiments 1–3, simultaneous onset of the fixation and target LEDs was adopted to keep the experimental conditions as similar as possible to those of Experiment 4, in which the task focused on target color (see the "Materials and procedures" section of Experiment 4 for further details). The luminance of the target LEDs was randomly chosen from one of the following values: 0.08, 1.2, 2.4, 5.8, 11.8, 19.3 or 23.4 cd/m2. Subjects were instructed to respond as fast as possible, while making as few errors as possible, when the right or left target LED was switched on together with the fixation point. Subjects answered by pressing one of two response buttons (left when the left target LED was switched on, right when the right target LED was switched on). The target was presented with equal probability on each side of the midsagittal plane. The inter-trial interval was randomly set at 800, 1,200 or 1,600 ms. Subjects were instructed to keep their eyes on the green central LED and not to make eye movements toward the red target LEDs. Trials in which eye movements exceeded 0.5° of visual angle were discarded, and the corresponding stimulus was presented again immediately thereafter. Less than 1% of the trials met this rejection criterion.
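
The trial logic described above can be summarized in a short sketch (the function names and data structure are ours, not the authors' experiment software; the parameter values follow the text):

```python
import random

# Parameter values as stated in the text.
LUMINANCES_CD_M2 = [0.08, 1.2, 2.4, 5.8, 11.8, 19.3, 23.4]
ITIS_MS = [800, 1200, 1600]
TARGET_DURATION_MS = 500   # fixation and target are switched on together
GAZE_LIMIT_DEG = 0.5       # fixation-control threshold

def next_trial() -> dict:
    """Draw one trial: target side is equiprobable; luminance and
    inter-trial interval are picked at random from the fixed sets."""
    return {
        "side": random.choice(["left", "right"]),
        "luminance_cd_m2": random.choice(LUMINANCES_CD_M2),
        "iti_ms": random.choice(ITIS_MS),
        "duration_ms": TARGET_DURATION_MS,
    }

def keep_trial(max_eye_deviation_deg: float) -> bool:
    """Rejection rule: trials with eye movements beyond 0.5 deg of visual
    angle are discarded and the same stimulus is presented again."""
    return max_eye_deviation_deg <= GAZE_LIMIT_DEG
```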

Results

Mean response times and mean percentages of correct responses as a function of target luminance and position relative to the hand are shown in Fig. 2. The mean percentage of correct responses was higher when the target was in the near-hand space than in the far-hand space (F [1,7] = 6.311, P = 0.04) (Fig. 2a). Although the interaction between luminance and stimulus distance was not significant (F [6,42] = 1.55, P = 0.19), comparisons with the Newman–Keuls test indicated that the two conditions differed from each other at luminance levels of 0.08, 1.2, 19.3 and 23.4 cd/m2. Further evidence is required to establish whether the interaction exists and whether the differences at 2.4, 5.8 and 11.8 cd/m2 would reach significance with a larger sample. Reaction times did not differ significantly between the near- and far-hand conditions (F [1,7] = 0.09, P = 0.78; Fig. 2b), suggesting that subjects did not trade accuracy for speed. Furthermore, the finding that error rates, but not reaction times, varied with stimulus–hand proximity suggests that subjects favored accuracy over speed, possibly because the task was difficult: the lowest-luminance stimuli were close to detection threshold.
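
For readers wishing to reproduce this kind of analysis, the following sketches a 2 (distance) × 7 (luminance) repeated-measures ANOVA with statsmodels; the file name and column names are our assumptions, and any long-format table with one accuracy value per subject × condition cell would serve:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per subject x distance x luminance cell,
# holding that cell's mean percentage of correct responses.
# (Illustrative layout only; not the study's actual data file.)
df = pd.read_csv("experiment1_accuracy.csv")
# expected columns: subject, distance ("near"/"far"), luminance, pct_correct

anova = AnovaRM(
    data=df,
    depvar="pct_correct",
    subject="subject",
    within=["distance", "luminance"],
).fit()
print(anova)  # F and P values for both main effects and their interaction
```

Note that statsmodels does not implement the Newman–Keuls procedure used here; pairwise paired t tests with a multiple-comparison correction would be the closest readily available substitute.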

Fig. 2

a Mean percentage of correct responses (±SEM) as a function of stimulus luminance and stimulus distance relative to the seen hand. b Mean response time (±SEM) as a function of stimulus luminance and stimulus distance relative to the seen hand

Experiment 2

To determine whether the enhanced visual performance in the near-hand space was merely due to attention being drawn to that region by the presence of an additional object, Experiment 1 was repeated with a piece of wood (14 × 6 × 3) placed near one of the target LEDs. Both hands were placed under the table, and subjects still responded with their dominant hand.

Eight subjects (three females, five males, mean age = 27.6 years) who had not participated in the previous experiment were enrolled in this second experiment. All subjects reported normal or corrected-to-normal vision, and had no history of neurological disorders.

Results

Mean response times and mean percentages of correct responses as a function of target luminance and position relative to the piece of wood are shown in Fig. 3. Detection performance was similar whether the target LED was near or far from the piece of wood (F [1,7] = 0.11, P = 0.74 and F [1,7] = 0.4, P = 0.55 for reaction times and correct response rates, respectively), and there was no significant interaction between luminance and stimulus distance (F [6,42] = 0.25, P = 0.96). These observations indicate that the results of Experiment 1 were not due to the hand simply acting as an additional object near the target.

Fig. 3

a Mean percentage of correct responses (±SEM) as a function of stimulus luminance and stimulus distance relative to the piece of wood. b Mean response time (±SEM) as a function of stimulus luminance and stimulus distance relative to the piece of wood

Experiment 3

This experiment examined whether proprioceptive information plays a significant role in near-hand processing, or whether only visual information about the hand leads to enhanced processing of its near space. Experiment 1 was replicated with subjects in total darkness, thus preventing any view of the hand placed on the table near one of the target LEDs. The luminance of the target LEDs was randomly selected from 0.08, 1.2, 1.6, 2, 2.4, 2.8 or 3.2 cd/m2. Since the task was performed in complete darkness, the luminance levels had to be set lower than in Experiments 1 and 2 to avoid ceiling effects.

Eight subjects (four females, four males, mean age = 27.2 years) who had not participated in the previous experiments were enrolled. All subjects reported normal or corrected-to-normal vision, and had no history of neurological disorders.

Results

Mean response times and mean percentages of correct responses as a function of target luminance and position relative to the hand are shown in Fig. 4. Detection performance was similar whether the target LED was near or far from the unseen hand (F [1,7] = 0.02, P = 0.88 and F [1,7] = 0.95, P = 0.37 for reaction times and correct response rates, respectively). No significant interaction was evident between luminance and stimulus distance (F [6,36] = 0.73, P = 0.63). Thus, proprioceptive information about hand position alone did not enhance visual processing in the near-hand space.

Fig. 4

a Mean percentage of correct responses (±SEM) as a function of stimulus luminance and stimulus distance relative to the unseen hand. b Mean response time (±SEM) as a function of stimulus luminance and stimulus distance relative to the unseen hand

Experiment 4

Experiments 1–3 were designed to extend the findings of Reed et al. (2006) by providing further evidence of sensory modulation in the near-hand space. However, the possibility that the superior near-hand performance reflects within-hemisphere or within-hemispace facilitation could not be excluded, since visual stimuli far from the hand were also in the visual hemifield contralateral to the hand, while near-hand stimuli occurred in the same hemifield as the visible hand. Three additional controls were therefore included. Firstly, to avoid the hemispace/hemifield confound, the subject's hand was placed in the midsagittal plane. Secondly, in the previous experiments, the spatial forced-choice paradigm was designed to prevent response strategies that could easily arise in a simple response paradigm (i.e., target flashed or not). While the distributions of reaction times and correct responses suggest that subjects did not trade accuracy for speed, it remains possible that under maximal uncertainty the right–left response alternative biased responses toward the hand side. To test this possibility, the response alternative (red versus green LED) was made independent of the dimension of interest, namely the distance of the visual stimulus from the hand (near versus far). Thirdly, since only the non-dominant hand was placed near the visual stimuli in Experiments 1–3, in this experiment the effect of near-hand stimuli was tested for both hands in separate trial blocks.

Subjects

Ten subjects (five females, five males, mean age = 26.2 years) who had not participated in the previous experiments were enrolled. All subjects were right handed and reported normal or corrected-to-normal vision, and had no history of neurological disorders.

Materials and procedures

Three pairs of LEDs (each comprising one green and one red LED) served as visual stimuli. Far-hand LEDs were placed 40 cm to each side of a red fixation LED positioned on the subject's midsagittal plane. Near-hand LEDs were placed 1 cm from the subject's hand, likewise 40 cm from the fixation stimulus (Fig. 5). The luminance of all LEDs was set at 14.5 cd/m2. On each trial, the red fixation LED and one LED of one of the target pairs were switched on simultaneously for 500 ms. Simultaneous onset of the fixation and target LEDs was adopted to avoid any response bias induced by the fixation LED color. Subjects were asked to report whether the target LED was red or green. The non-responding hand remained on the table 1 cm from the left, right or midsagittal target pair. Consequently, three hand positions were defined: (a) left hand near the left-side targets, (b) right hand near the right-side targets and (c) non-dominant (left) hand near the midsagittal targets. The order of these three conditions was randomized across subjects. Only the target pairs appearing in the left or right hemispace served as far-hand targets; when subjects placed their hand near the left or right target LEDs, the midsagittal pair was not used as a far-hand stimulus, since it was too close to the subject's body and could thus induce a near-body space effect. Subjects were instructed to respond as rapidly as possible with minimal errors, pressing the right button when the target LED was red and the left button when it was green.
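
The logic of this design can be made explicit in a short sketch (the labels and data structure are ours): within every hand-position block, target color (the response dimension) is fully crossed with, and therefore orthogonal to, hand proximity (the dimension of interest).

```python
from itertools import product

# Hand-position blocks as described above; each entry names the target
# pair near the hand and the pair(s) that may serve as far targets.
HAND_BLOCKS = {
    "left_hand_left_hemispace":   {"near": "left_pair",  "far": ["right_pair"]},
    "right_hand_right_hemispace": {"near": "right_pair", "far": ["left_pair"]},
    "left_hand_midsagittal":      {"near": "mid_pair",   "far": ["left_pair", "right_pair"]},
}

# Crossing color with proximity means a response bias toward the hand
# side cannot masquerade as a proximity effect.
CONDITIONS = list(product(["red", "green"], ["near", "far"]))
print(CONDITIONS)
# [('red', 'near'), ('red', 'far'), ('green', 'near'), ('green', 'far')]
```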

Fig. 5

Schematic drawing of the setting for Experiment 4

Results

Mean response times and mean percentages of correct responses for each target position relative to the hand are shown in Fig. 6. Correct response rates were higher when the visual target appeared in the near-hand space than in the far-hand space (F [2,18] = 8.43, P < 0.01). Accuracy was significantly higher for both the midsagittal and the left hemispace hand positions when the visual stimulus appeared near, rather than far from, the hand (P < 0.01, Newman–Keuls test).

Fig. 6

a Mean percentage of correct responses (±SEM) as a function of stimulus distance relative to the seen hand. b Mean response time (±SEM) as a function of stimulus distance relative to the seen hand

The mean reaction-time advantage of the near-hand over the far-hand condition was comparable for stimuli presented near the left and the right hand (99 and 91 ms, respectively), and this effect did not differ significantly between the two hands (F [1,9] = 0.22, P = 0.65). Since only the left hand was placed on the midsagittal plane of these right-handed subjects, Fig. 6 reports reaction times for the left hand in both the midsagittal and left hemispace positions. In contrast to Experiments 1–3, reaction times varied significantly with target–hand distance (F [2,18] = 4.12, P = 0.03). Because the response alternative (red versus green) was orthogonal to the near–far position of the visual stimuli with respect to the hand, response strategies favouring the hand side under uncertainty cannot explain the superior near-hand visual performance.

Discussion

The present study addressed whether visual stimuli appearing close to the hand are processed more precisely than those at a distance. The study was motivated by growing evidence that neurons and cortical regions are highly specialized for processing stimuli appearing in the peripersonal space, especially close to the head and the upper limbs. Our data extend the previous finding of superior processing of visual stimuli close to the hand by Reed et al. (2006). Consistent with the earlier results, visual stimuli (LEDs) were detected more easily near (1 cm) than away from (40 cm) the hand. This effect was due neither to reduced spatial uncertainty produced by the presence of the hand, nor to a response bias whereby subjects responded toward the hand side when the stimulus was not clearly seen. Furthermore, visual stimuli appearing near the hand were processed more precisely and rapidly independently of the hand's position in space. This finding rules out the possibility that the hand–stimulus distance effect merely reflects within-hemisphere or within-hemispace facilitation, since when the hand was placed in the midsagittal position, both hemispheres received equivalent visual input. In contrast to the observations of Reed and co-workers, our data suggest that enhanced visual processing is essentially linked to the view of the hand and does not appear to rely on proprioceptive information. This observation accords with a neuropsychological study showing, in brain-damaged patients with visuo-tactile extinction, strong modulatory effects of vision on touch perception when a visual stimulus was presented near the seen hand and only mild effects when the view of the hand was prevented (Ladavas et al. 2000). The authors suggested that proprioceptive signals specifying the current hand position in space are not decisive in determining the cross-modal interaction between vision and touch. However, it is also possible that proprioception simply plays a weaker role than vision in the processing of peripersonal-space stimuli; the difference between the two reports may then be attributed to the greater statistical power of the earlier study, which examined 27 subjects, compared to our 8. Further investigation is required to assess the relative roles of vision and proprioception in near-body space processing.

The effect of the distance of visual stimuli from the hand thus appears to reflect an increase in visual sensitivity, mediated by the prioritization of attention in that region, as suggested by Reed et al. (2006). The shorter response times in the near-hand space observed both in the earlier study and in our Experiment 4 are therefore unlikely to be accounted for by a lower response criterion resulting from a greater readiness to react to approaching objects.

The mechanism underlying superior visual perception in the near-hand space remains unknown. This enhancement may rely on the activity of an interconnected system for integrated (visual–tactile) coding of peripersonal space centered on body parts. This network, involving the putamen and the parietal and frontal lobes, is activated when visual stimuli are located in spatial proximity to a particular body part (e.g., face or hand), and it contains neurons responding to both visual and tactile stimuli (Rizzolatti et al. 1981; Graziano and Gross 1995; Iriki 1996; Duhamel et al. 1998). More precisely, visual processing of near-hand stimuli might recruit areas of the posterior parietal cortex that have been shown to play an important role in the visuospatial encoding of aversive stimuli. A primary function of the posterior parietal cortex is the integration of visuospatial and somatosensory information to shape an appropriate motor response [see Grefkes and Fink (2005) for a recent account]. In the monkey, the inferior and superior posterior parietal areas chiefly receive visual inputs from striate cortex but are also the first regions along the dorsal visual stream to integrate these retinally derived signals with other sensory signals (such as somatosensory and proprioceptive afferents) to form a higher-order representation of visual space (Driver and Mattingley 1998). Rizzolatti and Matelli (2003) have proposed a separation of the dorsal-stream parietal areas into two distinct "sub-streams", ventral and dorsal. In particular, the ventral part of the dorsal stream comprises inferior parietal regions and supports visual representations of space for the purpose of organizing action. These regions have also been associated with action intention (Andersen and Buneo 2002) and are extensively connected with frontal premotor areas (Shipp et al. 1998; Rizzolatti and Matelli 2003). It might be hypothesized that, through this network, the visual field around the body parts constantly receives enhanced and/or specific processing compared to the far-body visual field. An unequal distribution of attention between near and far space might explain the enhanced processing in the near-body space. This idea is supported by neuropsychological findings in patients with space representation disorders such as unilateral neglect, in whom clinical dissociations have been described documenting neglect restricted to personal space (Bisiach et al. 1986; Guariglia and Antonucci 1992; Beschin and Robertson 1997; Peru and Pinna 1997), near personal space (Halligan and Marshall 1991; Ladavas et al. 1998b) or far external space (Cowey et al. 1994). Since unilateral neglect and extinction are thought to arise largely from attentional deficits, the possibility that related attentional mechanisms govern the division between near and far space in such patients is an interesting hypothesis.

The mechanisms underlying the superior visual detection and discrimination in the near-hand space could not be elucidated in the present study. Increased allocation of visual attention in the near-hand space might be a plausible explanation. However, other mechanisms such as additional visual processes in the near-body visual field or sharpened visual receptive fields in near-body neurons might also explain the findings. Nonetheless, the existence of an interconnected system for integrated (visual–tactile) coding of peripersonal space centered on body parts and comprising bimodal neurons, similar to those found in non-human primates, is a reasonable contention. Furthermore, it would be interesting to investigate a possible relationship between enhanced visual processing in the near-hand space and motor action programming in response to threatening stimuli.