Abstract
Peri-hand space is the area surrounding the hand. Objects within this space may be subject to increased visuospatial perception, increased attentional prioritization, and slower attentional disengagement compared to more distal objects. This may result from kinesthetic and visual feedback about the location of the hand that projects from the reach and grasp networks of the dorsal visual stream back to occipital visual areas, which in turn, refines cortical visual processing that can subsequently guide skilled motor actions. Thus, we hypothesized that visual stimuli that afford action, which are known to potentiate activity in the dorsal visual stream, would be associated with greater alterations in visual processing when presented near the hand. To test this, participants held their right hand near or far from a touchscreen that presented a visual array containing a single target object that differed from 11 distractor objects by orientation only. The target objects and their accompanying distractors either strongly afforded grasping or did not. Participants identified the target among the distractors by reaching out and touching it with their left index finger while eye-tracking was used to measure visual search times, target recognition times, and search accuracy. The results failed to support the theory of enhanced visual processing of graspable objects near the hand as participants were faster at recognizing graspable compared to non-graspable targets, regardless of the position of the right hand. The results are discussed in relation to the idea that, in addition to potentiating appropriate motor responses, object affordances may also potentiate early visual processes necessary for object recognition.
Introduction
There are two cortical visual pathways: the ventral and dorsal visual streams (Milner and Goodale 2006). Projections from primary visual cortex (V1) to the occipitotemporal cortex (OTC) make up the ventral stream, while projections from V1 to the posterior parietal cortex (PPC) make up the dorsal stream (Gallivan and Goodale 2018). The dorsal stream receives the majority of its inputs from magnocellular retinal ganglion cells and is primarily involved in the visual control of action. In contrast, the ventral stream receives the majority of its inputs from parvocellular retinal ganglion cells and is primarily involved in visual perception and recognition (Baizer et al. 1991; Gallivan and Goodale 2018; Perry et al. 2015). The prominent role that the dorsal stream plays in processing stimuli that afford action is supported by brain imaging studies that have observed increased activity in the occipito-parietal junction (OPJ) of the dorsal stream in response to changes in the orientation of viewed objects (Valyear et al. 2006); however, this increased activity in the OPJ was only observed when the image depicted an object that affords grasping (Rice et al. 2007). Thus, object features that indicate action affordances, such as an object’s graspability and orientation, may bias visual processing towards the dorsal visual stream to enable the control of fine motor movements towards such objects (Gallivan and Goodale 2018; Valyear et al. 2006; Rice et al. 2007).
In most cases, dorsal stream processing of vision is thought to proceed in a relatively linear fashion from the lateral geniculate nucleus (LGN) to V1, V2, and V5, and then to the dorsomedial (reach) and dorsolateral (grasp) circuits, which consist of separate connections from the intraparietal sulcus (IPS) in the PPC to the premotor cortex. Nonetheless, a number of subcortical projections allow visual information to bypass V1 and project directly to V5 and downstream reach and grasp circuits. These may derive directly from the LGN, directly from the pulvinar, or indirectly from the superior colliculus via the pulvinar (Brown et al. 2008; Lui et al. 2017; Makin et al. 2012; Schendel and Robertson 2004). In addition, action-relevant information is transmitted from parietal areas back to early cortical visual areas. For example, Greenberg et al. (2012) used fMRI and diffusion spectrum imaging to measure activity in, and connections between, the IPS and V1–3 during an attentional task. They observed large white matter tracts projecting from the posterior IPS to the early visual cortex and found that as attentional demands increased, activity in the IPS increased, which corresponded with increased activity in V1–3. Additional research has observed increased activity in the retinotopic zones of V1–3 corresponding to both a target object and the to-be-placed area prior to the onset of grasping and placing of an object (Gallivan et al. 2019). Together, these findings suggest that attention-modulating areas of the IPS may send feedback to early cortical visual areas, which may enhance visual attention to action-relevant information in the environment prior to movement onset.
It has been suggested that placing a hand near an object may alter visual processing within these brain networks and thus alter visual processing of such objects. Altered visual processing appears to occur when the object is presented up to 30 cm away from the palm of the hand (Bonifazi et al. 2007; Serino et al. 2015). However, this distance may be plastic, as it can be lengthened by tool use (Magosso et al. 2010; Reed et al. 2010). To quantify how visual processing is altered within peri-hand space, Reed et al. (2006) investigated attentional prioritization for stimuli within this space by presenting a cue on either side of a screen followed by a target in the cued or uncued location while participants rested their hand either near to or far from the screen. Targets that appeared near the hand were detected faster than those away from the hand, regardless of the cued location. In another study, Abrams et al. (2008) found that participants took longer to complete a visual search task and were slower to disengage attention from distractor stimuli when the hand was located near the visual search display. Finally, additional investigators have found that placing a hand near visual stimuli can increase the number of stimuli that can be remembered (Tseng and Bridgeman 2011) as well as improve visual working memory for action-relevant information (Kelly and Brockmole 2014). Altogether, it appears as though the presence of the hand results in enhanced visual perception, as characterized by prolonged and prioritized visual attention for, more thorough evaluation of, and increased working memory for objects located near the hand.
While many studies have found evidence in support of enhanced visual processing for objects near the hand, many others have failed to support this theory. Dosso and Kingstone (2018) attempted to closely replicate Reed et al. (2006) but failed to find a convincing peri-hand effect. They also failed to find any peri-hand effects when using visual stimuli meant to bias visual processing towards the magnocellular pathway. Thomas and Sunny (2019) were unable to reproduce any peri-hand effects in a visual search task similar to that of Abrams et al. (2008). Andringa et al. (2018) failed to find a peri-hand effect in a more complex and ecologically valid visual search task. Lastly, Bush and Vecera (2014) found improved spatial (parvocellular-biased) discrimination but impaired temporal (magnocellular-biased) discrimination for visual stimuli near the hand. These results suggest that peri-hand space effects are unstable across conditions, making it difficult to identify exactly what types of stimuli or tasks enable these effects.
One possibility is that stimuli that strongly afford action may be the most likely to induce peri-hand space effects. Gibson (1979) introduced the term ‘affordances’ to refer to the specific actions that an object is most likely to elicit. For example, a chair affords sitting while an apple affords grasping. Object affordances can be learned through direct experience (Borghi et al. 2012) or by observing how others interact with objects (Jacquet et al. 2012), but they are also context dependent in the sense that a chair strongly affords sitting after one has finished running a marathon but not after a 15-hour plane ride. Importantly, when an object is encountered in a context that strongly affords action, this is thought to activate the dorsal stream (Breveglieri et al. 2015; Chao and Martin 2000; Grèzes et al. 2003) and trigger relevant actions towards that object. In relation to peri-hand space effects, it has been found that when the hand is in a position that affords grasping, that is, with the palm facing the stimuli, visuospatial attention is improved compared to when the palm is facing away (Colman et al. 2017). This suggests that a hand position that affords action is an important factor for eliciting a peri-hand effect and should be carefully controlled in peri-hand space research. Relatedly, Chan et al. (2013) found that image recognition was slower for high spatial frequency (parvocellular-biased) images near the hand; however, image recognition speed was improved for low spatial frequency (magnocellular-biased) images if the objects depicted in the images afforded grasping. Thus, it may be the case that the extent to which both the visual stimulus and the hand afford action determines the extent to which a peri-hand effect will be observed.
This view is supported by recent work suggesting that the proximity of one’s hand to an object may alter visual perception by influencing how visual information is processed within the dorsal stream. Specifically, visual information near the hand may travel via one of the subcortical shortcut pathways directly to V5 and then on to the dorsomedial and dorsolateral reach and grasp circuits (Brown et al. 2008; di Pellegrino and Frassinetti 2000; Makin et al. 2012; Schendel and Robertson 2004). Visual, kinesthetic, and motor information needed to direct motor responses are integrated within these circuits, which then project back and influence neural activity in V2 (Makin et al. 2012; Perry and Fallah 2017; Perry et al. 2015). Area V2 is an important brain area for detecting the orientation of objects (Perry et al. 2016) and it shares connections with the dorsomedial and dorsolateral circuits of the dorsal visual stream which have been implicated in reaching and grasping (Kastner et al. 2017; Perry and Fallah 2017). Thus, refined visual processing in V2 could subsequently facilitate improved orienting of the hand when subsequently reaching to grasp viewed objects near the hand.
Given that altered vision within peri-hand space is thought to result from feedback from reach and grasp networks in the dorsal stream (Perry and Fallah 2017) and that grasp-relevant object features appear to improve target detection accuracy for subsequent reach and grasp movements (Hannus et al. 2005; Bekkering and Neggers 2002), we hypothesized that objects that afford grasping and are easily defined by their orientation would facilitate visual perception within peri-hand space. To test this, participants were asked to complete a visual search task in which they had to identify a single target image among an array of eleven distractor images. The target consisted of an object that either strongly afforded grasping or did not. Distractor images were identical to the target image and differed from the target only in their orientation, an action-relevant feature of graspable objects that selectively activates the OPJ region of the dorsal stream (Valyear et al. 2006; Rice et al. 2007). Targets were either easy or difficult to differentiate from the distractors based on their orientation. Previous research suggests that participants may spend more time looking at distractor stimuli when searching for a target in peri-hand space, possibly because prolonged attentional resources are allocated to each distractor near the hand to ensure a more accurate search (Abrams et al. 2008; Thomas and Sunny 2017). Still, once the target image is fixated upon, participants may recognize it more quickly, due to faster visual processing of the target image in peri-hand space (Bröhl et al. 2017; Thomas and Sunny 2017).
Thus, participants wore eye-tracking glasses while completing the visual search task to determine whether hand proximity might produce differential effects on visual search time, defined as the time from the appearance of the visual array to when the participant initially fixated on the target, and target recognition time, defined as the time from when the participant initially fixated on the target to when they released the spacebar to touch the target on the screen.
Methods
Participants
Thirty-five individuals were recruited from introductory psychology courses at Thompson Rivers University and from the general public via social media (age range 18–46, M = 24.63, SD = 8.53; male, n = 12, female, n = 23). Participants self-reported as being right-hand dominant for writing. Only right-handed participants were included in the study as peri-hand space effects may be stronger for the right hand of right-handed individuals (Grivaz et al. 2017; Colman et al. 2017). Participation was voluntary and each participant signed an informed consent form prior to participating. Each participant was tested individually in the psychology laboratory and received 1% bonus credit towards their introductory psychology course or a $5.00 gift card for participating. The study was approved by the Thompson Rivers University Research Ethics Board prior to running the experiment.
Design
The experiment consisted of a 2 Hand Position (near vs. far) × 2 Graspability (graspable vs. non-graspable) × 2 Difficulty (easy orientation vs. hard orientation) within-subjects factorial design in which all participants completed the visual search task under every combination of the independent variables, presented in a randomized or counterbalanced order.
Procedure
Stimuli and materials
For this experiment, eight line drawings of objects, consisting of a white foreground and a black outline, were selected. Line drawings were used instead of photographs to control for differences in low-level visual properties such as colour, shading, brightness, and texture, as line drawings and photographs of real objects produce similar performance in memory recall (Snow et al. 2014). Four of the images depicted graspable objects and four depicted non-graspable objects. Graspable and non-graspable objects were matched based on shape (chess piece vs. person, mug vs. chair, hammer vs. palm tree, and banana vs. canoe; Fig. 1). All stimuli, including the target stimuli, were presented at an image size of 4 × 4 cm. Therefore, all of the images were substantially smaller than their real-world size. Adobe Photoshop (adobe.com) was used to rotate each image from 0° to 330° in 30° increments, producing a total of 12 orientations of each object. Easy target images were those positioned at 0°, 90°, 180°, and 270°, while hard target images were those positioned at any angle in between (Fig. 2).
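The orientation scheme can be sketched as follows; this is an illustrative reconstruction of the easy/hard split described above, not the authors' actual code:

```python
# Stimulus orientations: 30-degree increments over a full rotation,
# yielding 12 orientations per object (illustrative sketch only).
angles = list(range(0, 360, 30))  # [0, 30, 60, ..., 330]

# Cardinal orientations served as "easy" targets; oblique ones as "hard".
easy_angles = [a for a in angles if a % 90 == 0]   # [0, 90, 180, 270]
hard_angles = [a for a in angles if a % 90 != 0]   # the remaining 8 angles
```

Under this scheme each object has four easy and eight hard orientations, so an easy target always differs from an oblique distractor by at least 30°.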
A custom visual search task was programmed in C# using the Unity game engine and Visual Studio, and the images were displayed on an E230t HP touch screen monitor. The program functioned such that when the spacebar was held down, the touch screen displayed a fixation cross for 1000 ms, followed by the target stimulus for 2000 ms, followed by a visual search array containing the target along with 11 distractors. The distractors differed from the target object only by their orientation (Fig. 3). The program recorded the time from the presentation of the visual array to the time the spacebar was released (total search time), the time from when the spacebar was released to the time when the screen was touched (movement duration), and the location on the screen that was touched (accuracy). Visual fixations were assessed using a Positive Science eye-tracking system (https://www.positivescience.com) that recorded at a rate of 30 frames per second. Yarbus software was used to calibrate the eye-tracking system and LiveCapture software was used to track eye movements. Frame-by-frame analysis in the Kinovea video player was used to calculate fixation durations on the eye-tracking video. The frame numbers at which the array fully appeared, the target was first fixated, and the array fully disappeared were recorded for each accurate trial.
Experimental task
Each participant sat facing the computer screen, which was adjusted to the participant’s eye level and positioned 17 cm back from the edge of the table. The spacebar was marked with a touch location that was aligned with the center of the screen, and the keyboard was positioned directly below and in front of the screen. Depending on the location of the target stimulus in the array, the distance between the touch location on the spacebar and the target stimulus on the screen was 12–30 cm. Participants put on the eye-tracker and received instructions on how to complete the task. They were told to press the spacebar down with their left index finger until they found the target object. Importantly, the left hand was always in a prone posture with the palm facing down and away from the stimuli (Fig. 4), based on the finding of Colman et al. (2017) that no peri-hand space effects were observed when the palm was facing away from the stimuli. Upon identifying the target in the array, participants were required to release the spacebar with their left hand and reach out and touch the target image in the array with their left index finger. They were instructed to complete the task as quickly and as accurately as possible. Each participant completed practice trials until they had performed approximately five trials correctly.
Prior to the start of the experiment, participants were told to put their right hand either on their lap or on the side of the computer screen in an open hand position with the palm facing the visual array. Participants held their left index finger on the marked location of the spacebar while a black fixation cross appeared in the middle of a blank screen for 1000 ms. Subsequently, the fixation cross disappeared and a target object appeared for 2000 ms. Following the target screen, a visual array appeared in which the target was presented along with the 11 distractors varying in orientation from the target. Distractors and targets appeared in randomized positions in the circular array and were located between 5 and 30 cm from the right hand when it was positioned on the right side of the screen. The order in which the visual arrays were presented, the location of the target image within the array, and the image that served as the target within each array were randomized and counterbalanced across participants and trials. Of the 12 image locations in the array, the target image only ever appeared in the outer eight locations; the four center locations were never chosen as target locations because they were too close to the initial fixation point of the eyes. After completing 40 trials, participants were asked to switch the position of their right hand from the screen to their lap or vice versa and then complete the next set of trials, for a total of two counterbalanced blocks (hands close vs. hands far) of 40 trials each, or 80 trials overall.
Data analysis
Three dependent variables were analyzed using the data collected from the custom visual search software and the frame-by-frame video analysis. First, accuracy was calculated as the proportion of trials on which the participant correctly identified the target object among the distractors. Second, target recognition time was calculated by subtracting movement duration (time from finger lift to screen touch) from the total time participants fixated on the target object (time from target fixation to complete disappearance of the array). Third, visual search time was calculated by subtracting the target recognition time from the total search time (time from presentation of the array to spacebar release). The frame-by-frame analysis procedure required an experimenter’s judgement of the frame numbers that would be used to calculate visual search times and target recognition times. As such, these measures were tested for inter-rater reliability between two raters using two-way mixed, average-measures intraclass correlation coefficients with absolute agreement, which revealed very high inter-rater reliability (visual search time ICC = 0.999; target recognition time ICC = 0.963). Means were calculated separately for each participant under each experimental condition, for a total of eight means for each dependent variable.
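For concreteness, the arithmetic behind these two timing measures can be sketched as follows. This is an illustrative reconstruction with hypothetical variable and function names, not the authors' analysis code; the 30 fps frame rate comes from the eye-tracking system described in the Methods:

```python
FRAME_MS = 1000 / 30  # eye-tracking video was recorded at 30 frames per second

def trial_measures(fixation_frame, array_off_frame, total_search_ms, movement_ms):
    """Compute visual search time and target recognition time for one trial.

    Frame numbers come from the frame-by-frame video coding; the two
    millisecond values come from the search software (total_search_ms is
    array onset to spacebar release; movement_ms is finger lift to touch).
    """
    # Total time the target was fixated: first fixation to array offset.
    fixation_ms = (array_off_frame - fixation_frame) * FRAME_MS
    # Target recognition time: fixation time minus the reaching movement.
    recognition_ms = fixation_ms - movement_ms
    # Visual search time: total search time minus recognition time,
    # i.e., time from array onset to the first fixation on the target.
    search_ms = total_search_ms - recognition_ms
    return search_ms, recognition_ms
```

For example, with a first fixation 21 frames (700 ms) before array offset, a 100 ms reach, and a 2400 ms total search time, this yields a 600 ms recognition time and an 1800 ms search time.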
Statistical analyses
Two participants were eliminated due to accuracy that was two standard deviations below the sample mean and one participant was eliminated from eye-tracking analyses due to technical issues (Table 1). From the remaining 32 participants, a total of 1135 out of 2560 trials were excluded from analysis: 542 trials because the participant selected the incorrect image, 179 because the participant lifted their finger before fixating on the target object, 67 because of errors in the calibration of the eye tracker, and 347 (described below) because the participant fixated on the target image more than once before responding. Regarding the last category, it was noted during the frame-by-frame analysis of the eye-tracking data that participants sometimes looked at the target image more than once before responding to it. Such trials were associated with longer visual search times (M = 2701 ms, SE = 13 ms) compared to other trials (M = 1820 ms, SE = 73 ms), t = − 8.910, p < 0.001, d = 1.50, as well as significantly faster final target recognition times (M = 425 ms, SE = 19 ms) compared to trials on which the participant looked at the target only once (M = 587 ms, SE = 17 ms), t = 8.480, p < 0.001, d = 1.60. Inclusion of these trials in the final statistical analysis revealed a significant main effect of difficulty (F(1,31) = 10.403, p = 0.003, ηp2 = 0.251) for the measure of visual search time, because participants were more likely to look back and forth between the target image and a given distractor image on difficult trials compared to easy trials. No other measures were affected by the inclusion versus exclusion of these trials from the overall analysis. As such, these multiple-fixation trials were also excluded from the final statistical analysis.
A three-way repeated-measures ANOVA was conducted on each of the three dependent variables (accuracy, visual search time, and target recognition time) to analyze the effects of hand position (near vs. far), object graspability (graspable vs. non-graspable), and target orientation difficulty (easy vs. hard). Four planned t-tests were conducted for each dependent variable based on our predictions of longer search times, shorter target recognition times, and improved accuracy when the hand was positioned near graspable stimuli. The purpose of these comparisons was to further examine any interactions between graspability and hand position. They included: hands close graspable versus hands close non-graspable; hands far graspable versus hands far non-graspable; hands close graspable versus hands far graspable; and hands close non-graspable versus hands far non-graspable. Only planned comparisons that were significant or trending towards significance are reported in the results.
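A 2 × 2 × 2 repeated-measures ANOVA of this kind can be run in Python with statsmodels. The sketch below uses simulated per-participant condition means; the data and variable names are illustrative, not the authors' code or data:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Simulate one mean target recognition time per participant per condition
# (8 means per participant, as in the design described above).
rng = np.random.default_rng(0)
rows = []
for subject in range(32):
    for hand in ("near", "far"):
        for grasp in ("graspable", "non-graspable"):
            for difficulty in ("easy", "hard"):
                rows.append({"subject": subject, "hand": hand,
                             "grasp": grasp, "difficulty": difficulty,
                             "rt_ms": 580 + rng.normal(0, 30)})
df = pd.DataFrame(rows)

# Three two-level within-subject factors: each of the 7 effects (3 main
# effects, 3 two-way interactions, 1 three-way interaction) is tested on
# F(1, 31), matching the degrees of freedom reported in the Results.
res = AnovaRM(df, depvar="rt_ms", subject="subject",
              within=["hand", "grasp", "difficulty"]).fit()
print(res.anova_table)  # F Value, Num DF, Den DF, Pr > F for each effect
```

With 32 participants and two-level factors, every effect has 1 numerator and 31 denominator degrees of freedom, which is why all F statistics in the Results are reported as F(1,31) (or F(1,32) for accuracy, where all 33 participants were retained).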
Results
Target detection accuracy
Accuracy refers to the proportion of trials on which the participant correctly identified the target image among the distractors. The statistical analysis revealed that there was no significant difference between the hands close (M = 0.813, SE = 0.020) and hands far (M = 0.820, SE = 0.019) conditions on target detection accuracy, F(1,32) = 0.226, p = 0.638, ηp2 = 0.007. There was also no significant difference between graspable (M = 0.818, SE = 0.022) and non-graspable (M = 0.816, SE = 0.020) objects on target detection accuracy F(1,32) = 0.013, p = 0.909, ηp2 = 0.000. However, there was a significant main effect of difficulty such that accuracy was higher for easy orientation trials (M = 0.907, SE = 0.013) than hard orientation trials (M = 0.726, SE = 0.023), F(1,32) = 109.002, p < 0.001, ηp2 = 0.773 (Fig. 5). There was no significant interaction between hand position and graspability F(1,32) = 0.259, p = 0.615, ηp2 = 0.008, nor hand position and difficulty F(1,32) = 0.013, p = 0.910, ηp2 = 0.000. There was also no significant three-way interaction between hand position, graspability, and difficulty on accuracy F(1,32) = 0.314, p = 0.564, ηp2 = 0.011. Together, these results indicate that neither hand position nor graspability influenced accuracy and that only target orientation difficulty affected accuracy rates.
Visual search time
Visual search time refers to the amount of time participants searched through the array before initially fixating on the target image in the array. The statistical analysis revealed that there was no significant difference between the hands close (M = 1786 ms, SE = 76 ms) and hands far (M = 1842 ms, SE = 79 ms) conditions, F(1,31) = 0.526, p = 0.474, ηp2 = 0.017; no significant difference between graspable (M = 1842 ms, SE = 58 ms) and non-graspable (M = 1786 ms, SE = 95 ms) target objects, F(1,31) = 0.927, p = 0.343, ηp2 = 0.029; and no significant difference between easy (M = 1757 ms, SE = 85 ms) and hard target orientations (M = 1871 ms, SE = 66 ms), F(1,31) = 3.452, p = 0.073, ηp2 = 0.100, on visual search time (Fig. 6). There were also no significant interactions between hand position and graspability, F(1,31) = 0.818, p = 0.373, ηp2 = 0.026; hand position and difficulty, F(1,31) = 0.145, p = 0.706, ηp2 = 0.005; or hand position, graspability, and difficulty, F(1,31) = 0.174, p = 0.680, ηp2 = 0.006, on visual search time. Together, these results indicate that neither hand position, graspability, nor target orientation difficulty influenced visual search time.
Target recognition time
Target recognition time refers to the amount of time from when the participant first fixated on the target image in the array to when they lifted their left index finger from the space bar to touch the target image. It indicates how long it took participants to recognize that the image they were looking at was indeed the target image. The statistical analysis revealed that there was no significant difference in target recognition time when the hand was close (M = 589 ms, SE = 22 ms) compared to far (M = 576 ms, SE = 16 ms), F(1,31) = 0.728, p = 0.400, ηp2 = 0.023, from the array. There was, however, a significant main effect of graspability such that participants displayed shorter target recognition times for graspable target objects (M = 562 ms, SE = 18 ms) compared to non-graspable target objects (M = 602 ms, SE = 18 ms), F(1,31) = 34.520, p < 0.001, ηp2 = 0.527. There was also a significant effect of difficulty such that participants displayed shorter target recognition times for easy (M = 542 ms, SE = 17 ms) compared to hard (M = 622 ms, SE = 20 ms) target orientations, F(1,31) = 54.421, p < 0.001, ηp2 = 0.637 (Fig. 7). The interaction between hand position and graspability was not significant, F(1,31) = 2.718, p = 0.109, ηp2 = 0.081. However, planned comparisons revealed that the results were trending towards longer target recognition times for non-graspable objects near the hand (M = 616 ms) compared to far away (M = 588 ms), t(31) = 1.599, p = 0.060, d = 0.248 (Fig. 8). This was the only planned comparison that was close to reaching a level of significance. There was also no significant interaction between hand position and difficulty, F(1,31) = 0.314, p = 0.579, ηp2 = 0.010, and no three-way interaction between hand position, graspability, and difficulty, F(1,31) = 0.328, p = 0.571, ηp2 = 0.010, on target recognition time. Together, these results indicate that hand position had no effect on target recognition times. 
Nonetheless, participants were faster at recognizing easy target orientations than hard target orientations as well as faster at recognizing graspable objects compared to non-graspable objects.
Discussion
Altered visual processing within peri-hand space is thought to be enabled by dorsal stream structures that mediate visually-guided action (Brozzoli et al. 2012; Makin et al. 2012; Perry and Fallah 2017). The aim of the present study was to test the hypothesis that objects that afford action may be more likely to recruit dorsal stream processing and, as such, would receive the greatest visual processing advantage when positioned near the hand. Participants were asked to complete a visual search task while their right hand was positioned either close to or far from visual stimuli on a touchscreen. Target images consisted of objects that differed in the extent to which they afforded action (graspable versus non-graspable) and were differentiated from distractors by a single affordance-relevant feature (orientation). We predicted that participants would show the greatest peri-hand space effects, defined as increased target detection accuracy, longer visual search times, and shorter target recognition times, when searching for target objects that afforded grasping and that were positioned in an easily identified orientation. Altogether, we did not find any hand position effects. Instead, we found that participants were faster at recognizing graspable objects compared to non-graspable objects regardless of hand position.
The strengths of this study include the use of eye-tracking equipment to measure how stimuli are visually attended near the hand and the assessment of how task difficulty influences peri-hand space effects. Much of the research on peri-hand space has predominantly used total reaction time as a measure of visual attention. However, eye movements often precede hand movements (Bekkering and Neggers 2002). Thus, the use of an eye-tracking device allowed us to separate visual search time from target recognition time, enabling us to differentiate the time spent searching for the target from the time spent recognizing it. Furthermore, task difficulty is a factor that is not often accounted for in peri-hand space research. By dividing the orientation trials into easy and hard trials, we were able to assess the extent to which measures of visuospatial processing near the hand are impacted by task difficulty.
One possible shortcoming of this study is that the right hand was positioned on the screen in the hand close condition, but the left hand always acted on the stimuli regardless of condition. Our decision to use this experimental setup was based on previous research by Colman et al. (2017), who found that the peri-hand space effect tends to be stronger for the right hand of right-handed individuals and that positioning the palm of the hand towards the experimental stimuli may enhance peri-hand space effects because it prepares the motor system for action. Also, Tseng and Bridgeman (2011) used a change detection task and alternated between left- and right-hand responding while the opposite hand rested near the stimuli. They found that performance was only improved when the right hand was near the stimuli and the left hand responded to the stimulus. In fact, many studies have found peri-hand space effects using a paradigm similar to the one presented here, with the right hand near the stimuli and the left hand responding to the stimuli (Colman et al. 2017; Reed et al. 2010, 2006; Thomas and Sunny 2017, 2019). Nonetheless, it is possible that had we required participants to reach out and touch the target image with the same hand that was positioned near the array, this could have enhanced the affordance effect of the graspable objects in the array, produced greater dorsal stream activation, and increased the ecological validity of the task. This possibility is currently being examined in a subsequent study.
Another potential limitation of this study may be the use of black and white line drawings as stimuli, which are not technically graspable, as opposed to pictures of real objects. Yet, a number of previous studies have found significant hand position effects when participants are required to respond to line drawings of graspable versus non-graspable objects. For example, Chan et al. (2013) used modified line drawings of graspable and non-graspable objects as stimuli in a target recognition task. Participants had to indicate whether the real-life size of the object depicted in the line drawing was larger or smaller than a shoebox by pressing one of two response keys on a keyboard. They found that when the participant's hand was located near the display, they responded more quickly to line drawings of graspable objects compared to line drawings of non-graspable objects. Chainay et al. (2011) presented vertical or horizontal lines followed by line drawings of objects that are naturally grasped vertically or horizontally. They found that when the line's orientation was congruent with the object's natural grasping position, participants were faster at distinguishing between vertically and horizontally grasped objects than when incongruent lines or circles were presented prior to the graspable object. This effect did not occur for horizontal and vertical blocks or for words representing graspable objects. Finally, in an object recall task, Snow et al. (2014) found that memory performance was best for real objects; however, memory performance was similar between photographs and line drawings of objects. Together, these findings suggest that line drawings of graspable objects are sufficient to elicit graspability affordances in participants.
While real objects would likely elicit the strongest grasping affordance, which may in turn enhance memory for the object (Snow et al. 2014), the use of line drawings allowed us to efficiently manipulate orientation so that each object was rotated by the same degree at each angle. Line drawings also allowed us to control for other factors that may influence visual processing, such as colour, luminance, and detail, as previous research suggests that features such as colour may negatively impact visual memory when the hand is near the stimulus (Kelly and Brockmole 2014). Finally, a substantial number of researchers have observed increases in visual attention to two-dimensional shapes when they are presented near the hand (Abrams et al. 2008; Bush and Vecera 2014; Colman et al. 2017; di Pellegrino and Frassinetti 2000; Gozli et al. 2012; Kelly and Brockmole 2014; Perry et al. 2015; Reed et al. 2006, 2010; Thomas and Sunny 2017; Tseng and Bridgeman 2011).
Another consideration is that both the graspable and non-graspable stimuli used in the present study afforded the same required action, reaching out to touch the target on the screen. It could be argued that this might lead to equivalent affordance-related activation in the dorsal stream. Nonetheless, Rice et al. (2007) required participants to passively watch images of graspable and non-graspable objects that either stayed in the same position or changed by orientation. Images of objects that afforded grasping led to enhanced activation of the dorsal visual stream during orientation changes, whereas images of non-graspable objects showed no dorsal stream activation during orientation changes, even when no subsequent action was required. Furthermore, Chan et al. (2013) found that participants were faster at responding to modified line drawings of graspable objects, even though both the graspable and non-graspable stimuli in their study also afforded the same required action of pressing a button. Thus, previous research suggests that depictions of graspable objects are more effective than depictions of non-graspable objects at priming activity in the dorsal visual stream regardless of the subsequent action to be performed. Still, future research could address this question by requiring participants to reach out and “grasp” the graspable stimuli versus reach out and “touch” the non-graspable stimuli.
The results of the present study indicated that hand position and stimulus graspability had no effect on the accuracy with which participants identified the target image in the array. Overall, accuracy was affected only by orientation difficulty, such that participants were more accurate at identifying the target image when it was of an easy, as compared to hard, orientation. Target detection difficulty is generally not a variable incorporated into peri-hand space research, making it difficult to predict any effect it would have on target detection accuracy near the hand. The present results likely indicate one of two things: either large differences in task difficulty overshadowed any increased accuracy for identifying images in peri-hand space, or the procedures used in this study did not activate the dorsal visual stream to the extent predicted by our hypothesis, leaving difficulty as the only factor affecting accuracy rates.
We also did not find an effect of hand position, graspability, or orientation difficulty on the measure of visual search time. Our original prediction that search times would increase in peri-hand space was based on the findings of Abrams et al. (2008), who found that participants were slower to disengage visual attention when stimuli were located near the hand. Other theories suggest that differences in visual search time may reflect a difference in temporal and spatial processing near the hand. Specifically, when the hand is present, the magnocellular pathway may be more active, resulting in higher temporal processing and lower spatial processing of nearby objects. This theory suggests that differences in visual search times may depend on whether the task relies primarily on temporal or spatial processing (Goodhew et al. 2015). We cannot conclude, however, the extent to which each of these processes was at play in the present study. This could be further tested by comparing performance on spatial versus temporal tasks using stimuli that afford action versus those that do not, both in and out of peri-hand space, to determine how these factors interact.
We did find that both object graspability and orientation difficulty influenced the speed with which participants recognized the target object once they fixated on it, although these two factors did not interact. These results could be interpreted in a number of ways. It could be that faster recognition of the graspable objects occurred due to a difference in difficulty between the graspable and non-graspable objects used as stimuli in this experiment. This is unlikely, however, because no graspability effects were found for measures of accuracy or visual search time. Alternatively, because the left hand was preparing to perform an action (reaching out to touch the screen), this may have facilitated recognition of graspable objects regardless of the position of the right hand. However, research has found that while grasping movements enhance the detection of orientation changes, pointing movements do not (Bekkering and Neggers 2002; Gutteling et al. 2011). Based on this research, we did not predict that the pointing action of the left hand would alter visual processing in the present study. Nonetheless, to our knowledge there is no research specifically examining whether pointing facilitates the recognition of graspable over non-graspable objects, which should be addressed in future research.
Another interpretation is that object affordances (Chan et al. 2013; Borghi et al. 2012; Chainay et al. 2011; Jacquet et al. 2012) may potentiate both object recognition and subsequent motor responses without influencing attentional prioritization during a visual search task. Evidence for this comes from Yamani et al. (2016), who asked participants to search for a right-handled cup among left-handled distractors and vice versa. Participants indicated that they had found the target using either the hand that was congruent or incongruent with the target handle and used the other hand to respond to target-absent trials. The results revealed that initial visual search efficiency was equivalent for left and right target handle orientations regardless of whether participants used the congruent or incongruent hand; however, post-search response selection and execution was potentiated when the responding hand was congruent with the target. In another study, Ariga et al. (2016) eliminated the visual search portion of the task and focused only on target recognition and response times. Right-handed participants made judgments as to whether a cup with a left- or right-facing handle appeared first. This time participants were faster at detecting cups with a right-facing handle. Together, these results suggest that object affordances have no effect on visual search time, but may speed object recognition and motor responses once the visual search is complete. The graspable objects in our study did not favour either the left or right hand and thus could have potentiated both object recognition and left-hand responses regardless of the position of the right hand. The interaction between hand position and object affordances could be investigated more directly in future research by modifying the task to compare right- versus left-handed graspable targets near the right hand.
A fourth interpretation is based on the research of Kveraga et al. (2007), which suggests that magnocellular projections to the occipitotemporal cortex (OTC) of the ventral visual stream via the orbitofrontal cortex (OFC) may enable fast or “gist” recognition of objects that afford action based on low spatial frequency information (Chan et al. 2013). In other words, there may be magnocellular projections to ventral stream areas that enable speeded recognition of objects that afford action, as well as magnocellular projections to dorsal stream areas that allow for fast reactions towards such objects. The faster recognition of graspable objects observed in the present study may therefore be due to this magnocellular projection to the ventral stream.
In sum, the pursuit of peri-hand space research is enticing because there are a variety of ways that any interaction between object affordances and visual processing near the hand could be assessed and potentially applied. For instance, a better understanding of peri-hand space could provide novel insights into the mechanisms by which visually-guided actions are impaired in a variety of clinical populations, such as individuals with developmental coordination disorder, autism, cerebral palsy, or traumatic brain injury. It could also lead to novel rehabilitative approaches that use the hand to strengthen the connections between the reach and grasp circuits in parietofrontal cortex, object recognition circuits in occipitotemporal cortex, and the visual cortex, with the potential for improving fine motor actions. However, a more complete understanding of the circumstances under which peri-hand space effects occur, and the neural processes that underlie these effects, is critical before any clinical applications can be pursued.
References
Abrams RA, Davoli CC, Du F, Knapp WH III, Paull D (2008) Altered vision near the hand. Cognition 107:1035–1047. https://doi.org/10.1016/j.cognition.2007.09.006
Andringa R, Boot WR, Roque NA, Ponnaluri S (2018) Hand proximity effects are fragile: a useful null result. Cogn Res Princ Implic 3:1–12. https://doi.org/10.1186/s41235-018-0094-7
Ariga A, Yamada Y, Yamani Y (2016) Early visual perception potentiated by object affordances: Evidence from a temporal order judgment task. Iperception 7:1–7. https://doi.org/10.1177/2041669516666550
Baizer JS, Ungerleider LG, Desimone R (1991) Organization of visual inputs to the inferior temporal and posterior parietal cortex in macaques. J Neurosci 11:168–190. https://doi.org/10.1523/jneurosci.11-01-00168.1991
Bekkering H, Neggers SFW (2002) Visual search is modulated by action intentions. Psychol Sci 13:370–374. https://doi.org/10.1111/j.0956-7976.2002.00466.x
Bonifazi S, Farnè A, Rinaldesi L, Làdavas E (2007) Dynamic size-change of peri-hand space through tool-use: Spatial extension or shift of the multi-sensory area. J Neuropsychol 1:101–114. https://doi.org/10.1348/174866407X180846
Borghi AM, Flumini A, Natraj N, Wheaton LA (2012) One hand, two objects: emergence of affordance in contexts. Brain Cogn 80:64–73. https://doi.org/10.1016/j.bandc.2012.04.007
Breveglieri R, Galletti C, Bosco A, Gamberini M, Fattori P (2015) Object affordance modulates visual responses in the macaque medial posterior parietal cortex. J Cogn Neurosci 27:1447–1455. https://doi.org/10.1162/jocn_a_00793
Bröhl C, Theis S, Rasche P, Wille M, Mertens A, Schlick CM (2017) Neuroergonomic analysis of perihand space: effects of hand proximity on eye-tracking measures and performance in a visual search task. Behav Inform Technol 36:737–744. https://doi.org/10.1080/0144929X.2016.1278561
Brown LE, Kroliczak G, Demonet J-F, Goodale MA (2008) A hand in blindsight: hand placement near target improves size perception in the blind visual field. Neuropsychologia 46:786–802. https://doi.org/10.1016/j.neuropsychologia.2007.10.006
Brozzoli C, Ehrsson HH, Farnè A (2012) Multisensory representation of the space near the hand: from perception to action and interindividual interactions. Neuroscientist 20:122–135. https://doi.org/10.1177/1073858413511153
Bush WS, Vecera SP (2014) Differential effect of one versus two hands on visual processing. Cognition 133:232–237. https://doi.org/10.1016/j.cognition.2014.06.014
Chainay H, Naouri L, Pavec A (2011) Orientation priming of grasping decision for drawings of objects and blocks, and words. Mem Cogn 39:614–624. https://doi.org/10.3758/s13421-010-0049-9
Chan D, Peterson MA, Barense MD, Pratt J (2013) How action influences object perception. Front Psychol 4:1–6. https://doi.org/10.3389/fpsyg.2013.00462
Chao LL, Martin A (2000) Representation of manipulable man-made objects in the dorsal stream. NeuroImage 12:478–484. https://doi.org/10.1006/nimg.2000.0635
Colman HA, Remington RW, Kritikos A (2017) Handedness and graspability modify shifts of visuospatial attention to near-hand objects. PLoS ONE 12:1–19. https://doi.org/10.1371/journal.pone.0170542
di Pellegrino G, Frassinetti F (2000) Direct evidence from parietal extinction of enhancement of visual attention near a visible hand. Curr Biol 10:1475–1477. https://doi.org/10.1016/S0960-9822(00)00809-5
Dosso JA, Kingstone A (2018) The fragility of the near-hand effect. Collabra Psychol 4:1–16. https://doi.org/10.1525/collabra.167
Gallivan JP, Goodale MA (2018) The dorsal “action” pathway. Handb Clin Neurol 151:449–466. https://doi.org/10.1016/B978-0-444-63622-5.00023-1
Gallivan JP, Chapman CS, Gale DJ, Flanagan JR, Culham JC (2019) Selective modulation of early visual cortical activity by movement intention. Cereb Cortex 29:4662–4678. https://doi.org/10.1093/cercor/bhy345
Gibson JJ (1979) The ecological approach to visual perception. Houghton Mifflin, Boston
Goodhew SC, Edwards M, Ferber S, Pratt J (2015) Altered visual perception near the hands: a critical review of attentional and neurophysiological models. Neurosci Biobehav Rev 55:223–233. https://doi.org/10.1016/j.neubiorev.2015.05.006
Gozli DG, West GL, Pratt J (2012) Hand position alters vision by biasing processing through different visual pathways. Cognition 124:244–250. https://doi.org/10.1016/j.cognition.2012.04.008
Greenberg AS, Verstynen T, Chui Y-C, Yantis S, Schneider W, Behrmann M (2012) Visuotopic cortical connectivity underlying attention revealed with white-matter tractography. J Neurosci 32:2773–2782. https://doi.org/10.1523/jneurosci.5419-11.2012
Grèzes J, Tucker M, Armony J, Ellis R, Passingham RE (2003) Objects automatically potentiate action: an fMRI study of implicit processing. Eur J Neurosci 17:2735–2740. https://doi.org/10.1046/j.1460-9568.2003.02695.x
Grivaz P, Blanke O, Serino A (2017) Common and distinct brain regions processing multisensory bodily signals for peripersonal space and body ownership. NeuroImage 147:602–618. https://doi.org/10.1016/j.neuroimage.2016.12.052
Gutteling TP, Kenemans JL, Neggers SFW (2011) Grasping preparation enhances orientation change detection. PLoS ONE 6:1–8. https://doi.org/10.1371/journal.pone.0017675
Hannus A, Cornelissen FW, Lindemann O, Bekkering H (2005) Selection-for-action in visual search. Acta Psychol 118:171–191. https://doi.org/10.1016/j.actpsy.2004.10.010
Jacquet PO, Chambon V, Borghi AM, Tessari A (2012) Object affordances tune observers’ prior expectations about tool-use behaviors. PLoS ONE 7:1–11. https://doi.org/10.1371/journal.pone.0039629
Kastner S, Chen Q, Jeong SK, Mruczek REB (2017) A brief comparative review of primate posterior parietal cortex: a novel hypothesis on the human toolmaker. Neuropsychologia 105:123–134. https://doi.org/10.1016/j.neuropsychologia.2017.01.034
Kelly SP, Brockmole JR (2014) Hand proximity differentially affects visual working memory for color and orientation in a binding task. Front Psychol 5:1–5. https://doi.org/10.3389/fpsyg.2014.00318
Kveraga K, Boshyan J, Bar M (2007) Magnocellular projections as the trigger of top-down facilitation in recognition. J Neurosci 27:13232–13240. https://doi.org/10.1523/jneurosci.3481-07.2007
Liu L, Wang F, Zhou K, Ding N, Luo H (2017) Perceptual integration rapidly activates dorsal visual pathway to guide local processing in early visual areas. PLoS Biol 15:1–15. https://doi.org/10.1371/journal.pbio.2003646
Magosso E, Ursino M, di Pellegrino G, Ládavas E, Serino A (2010) Neural bases of peri-hand space plasticity through tool-use: insights from a combined computational-experimental approach. Neuropsychologia 48:812–830. https://doi.org/10.1016/j.neuropsychologia.2009.09.037
Makin TR, Holmes NP, Brozzoli C, Farnè A (2012) Keeping the world at hand: rapid visuomotor processing for hand-object interactions. Exp Brain Res 219:421–428. https://doi.org/10.1007/s00221-012-3089-5
Milner AD, Goodale MA (2006) The visual brain in action, 2nd edn. Oxford University Press, Oxford
Perry CJ, Fallah M (2017) Effector-based attention systems. Ann N Y Acad Sci 1396:56–69. https://doi.org/10.1111/nyas.13354
Perry CJ, Serigo LE, Crawford JD, Fallah M (2015) Hand placement near the visual stimulus improves orientation selectivity in V2 neurons. J Neurophysiol 113:2859–2870. https://doi.org/10.1152/jn.00919.2013
Perry CJ, Amarasooriya P, Fallah M (2016) An eye in the palm of your hand: alterations in visual processing near the hand, a mini-review. Front Comput Neurosci 10:1–8. https://doi.org/10.3389/fncom.2016.00037
Reed CL, Grubb JD, Steele C (2006) Hands up: attentional prioritization of space near the hand. J Exp Psychol Hum Percept Perform 32:166–177. https://doi.org/10.1037/0096-1523.32.1.166
Reed CL, Betz R, Garza JP, Roberts RJ Jr (2010) Grab it! Biased attention in functional hand and tool space. Atten Percept Psychophys 72:236–245. https://doi.org/10.3758/app.72.1.236
Rice NJ, Valyear KF, Goodale MA, Milner AD, Culham JC (2007) Orientation sensitivity to graspable objects: an fMRI adaptation study. NeuroImage 36:87–93. https://doi.org/10.1016/j.neuroimage.2007.03.032
Schendel K, Robertson LC (2004) Reaching out to see: arm position can attenuate human visual loss. J Cogn Sci 16:935–943. https://doi.org/10.1162/0898929041502698
Serino A, Noel J-P, Galli G, Canzoneri E, Marmaroli P, Lissek H, Blanke O (2015) Body part-centered and full body-centered peripersonal space representations. Sci Rep 5:1–14. https://doi.org/10.1038/srep18603
Snow JC, Skiba RM, Coleman TL, Berryhill ME (2014) Real-world objects are more memorable than photographs of objects. Front Hum Neurosci 8:1–11. https://doi.org/10.3389/fnhum.2014.00837
Thomas T, Sunny MM (2017) Slower attentional disengagement but faster perceptual processing near the hand. Acta Psychol 147:40–47. https://doi.org/10.1016/j.actpsy.2017.01.005
Thomas T, Sunny MM (2019) Situational determinants of hand-proximity effects. Collabra Psychol 5:1–11. https://doi.org/10.1525/collabra.198
Tseng P, Bridgeman B (2011) Improved change detection with nearby hands. Exp Brain Res 209:257–269. https://doi.org/10.1007/s00221-011-2544-z
Valyear KF, Culham JC, Sharif N, Westwood D, Goodale MA (2006) A double dissociation between sensitivity to changes in object identity and object orientation in the ventral and dorsal visual streams: a human fMRI study. Neuropsychologia 44:218–228. https://doi.org/10.1016/j.neuropsychologia.2005.05.004
Yamani Y, Ariga A, Yamada Y (2016) Object affordances potentiate responses but do not guide attentional prioritization. Front Integr Neurosc 9:1–6. https://doi.org/10.3389/fnint.2015.00074
Acknowledgements
The authors would like to thank Alex Touchet for programming the visual search task. This research was supported by the Natural Sciences and Engineering Research Council of Canada (JMK) (Grant no. RGPIN-2017-05995) and the Thompson Rivers University Undergraduate Research Experience Award Program (UREAP).
Communicated by Melvyn A. Goodale.
Bamford, L.E., Klassen, N.R. & Karl, J.M. Faster recognition of graspable targets defined by orientation in a visual search task. Exp Brain Res 238, 905–916 (2020). https://doi.org/10.1007/s00221-020-05769-z