Introduction

Perception and action are linked through a shared computational code, according to common coding theory (Prinz 1990; Hommel et al. 2001). As a consequence, the initiation of a stimulus-cued action depends on how stimulus-related and action-related information interact (Prinz 1990). According to the dimensional overlap model (DOM; Kornblum 1992), three dimensions must be taken into account in this interaction: task-relevant stimulus dimensions, task-irrelevant stimulus dimensions, and response dimensions. Dimensional overlap is defined as the degree to which elements of the stimulus and response sets are perceptually, structurally, or conceptually similar (Kornblum 1992). This similarity may produce different stimulus–response (S–R) compatibility effects, which influence both the speed and the accuracy of performance (reviewed in Umiltà and Nicoletti 1990). Kornblum et al. (1990) proposed a taxonomy based on dimensional overlap that classifies S–R ensembles into eight types according to whether or not there is an overlap between (1) the relevant and irrelevant stimulus dimensions, (2) the relevant stimulus dimension and the response dimension, and (3) the irrelevant stimulus dimension and the response dimension.
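To make the combinatorics of the taxonomy explicit, the sketch below enumerates the 2³ = 8 possible overlap patterns. It is an illustration only: it does not reproduce Kornblum's own type numbering beyond noting that eight patterns exist.

```python
# Illustrative sketch: Kornblum's taxonomy rests on three binary overlap
# relations, yielding 2**3 = 8 ensemble types. The enumeration order here
# is arbitrary and does not assert Kornblum's own type numbering.
from itertools import product

overlap_relations = ("relevant-S/irrelevant-S", "relevant-S/R", "irrelevant-S/R")
for combo in product((False, True), repeat=3):
    present = [rel for rel, has in zip(overlap_relations, combo) if has]
    print("overlap in:", ", ".join(present) if present else "none")
```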

In choice reaction time paradigms, spatial location is an intrinsic property of the stimulus that cannot be ignored (Tsal and Lavie 1993); it affects performance even when it is irrelevant to the task, as shown by the Simon effect (Simon 1990) and the spatial Stroop effect (MacLeod 1991). In the Simon task, the relevant stimulus dimension is a nonspatial physical feature, such as color or shape, which is explicitly associated with a lateralized manual response (e.g., red color—right key response, green color—left key response). Even though stimulus location (left or right) is irrelevant, responses are faster when it coincides with the location of the assigned response (Umiltà and Nicoletti 1990, 1992; Lu and Proctor 1995; Hommel 1995, 1997). In a spatial Stroop task, stimulus location is likewise irrelevant per se, but the relevant stimulus dimension is a word or symbolic feature that conveys spatial information. Reaction times are faster when the semantic meaning (relevant stimulus dimension) and the stimulus location (irrelevant stimulus dimension) are congruent (Lu and Proctor 1995).

The DOM attempts to provide a unified theoretical framework for understanding all compatibility effects between stimulus and response sets (Kornblum et al. 1990; Kornblum 1992). In the taxonomy expressed in the model (Kornblum 1992, 1994), the Simon and spatial Stroop tasks belong to different categories (Types 3 and 8, respectively) because the relevant and irrelevant stimulus dimensions are dissimilar in the Simon task but highly similar in the spatial Stroop task. Thus, the irrelevant stimulus dimension interferes directly with the response dimension only in the Simon task (S–R overlap), whereas in the spatial Stroop task the conflicting overlap is with the relevant stimulus dimension (S–S overlap) (Lu and Proctor 1995). Although the Simon and spatial Stroop tasks are useful for investigating how stimulus properties influence action selection (Lu and Proctor 1995; Hommel 2011), the stimuli used in most studies are very simple (Fitts and Seeger 1953; Umiltà and Nicoletti 1990, 1992; Tsal and Lavie 1993; Kornblum and Lee 1995; Wühr 2006; Pecher et al. 2010; Hommel 2011; Li et al. 2014). There has been no attempt to investigate whether similar effects occur with more complex stimuli, such as body parts.

The sight of body parts conveys important social and linguistic information (Allison et al. 2000). Accordingly, substantial neurophysiological evidence indicates that specialized neural networks in the brain are dedicated to processing visual stimuli representing body parts (Peelen and Downing 2007), especially faces (LaBar et al. 2003) and hands (Grosbras and Paus 2006). However, most work on this subject has centered on questions of stimulus recognition, and less is known about S–R compatibility (SRC) effects, i.e., how the spatial location of the stimulus affects directed action. It is possible that handedness recognition (right or left hand) endows hands with intrinsic spatial information, which could interact with the spatial location of the stimulus (right or left hemifield) and be revealed as a spatial Stroop effect (S–S overlap). In the present work, we used a handedness judgment task to study SRC effects when responding to hand stimuli. We compared the results with those of a similar experiment using arrows, an abstract stimulus endowed with intrinsic spatial information. The main purpose of this work was to evaluate how body parts fit into Kornblum's taxonomy, using an SRC task based on handedness judgment, and to determine whether the use of body parts (hands) triggers a spatial Stroop effect.

Methods

Participants

Two groups of 16 right-handed volunteers (eight males and eight females; age 18–32 years, mean 22 years) and 12 right-handed volunteers (eight males and four females; age 18–27 years, mean 19 years) participated in Experiment 1 (handedness recognition task) and Experiment 2 (arrow task), respectively. All participants had normal visual acuity and were naïve to the purpose of the experiment. We obtained written informed consent from all participants, and the study was approved by the research ethics committee of the Universidade Federal Fluminense (185/2005). The experiment was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki.

Stimuli

In Experiment 1, the stimuli were drawings of the right and left hand in both dorsal and palm views (Fig. 1a). In Experiment 2, the stimuli were arrows pointing either to the right or to the left (Fig. 1b). The stimuli were presented in random order on a 20-in VGA monitor, in either the right or the left hemifield.

Fig. 1 Stimulus sets used in the handedness recognition (a) and arrow direction (b) tasks

Experimental apparatus

The experiment was conducted in a quiet, dimly lit room. A desktop computer was used both to present the stimuli and to record the participants' responses. Each participant's head was positioned on an adjustable forehead and chin rest so that the distance between the eyes and the screen was about 57 cm. The Micro Experimental Laboratory software (MEL, version 2.0) was used to manage the experiment and to record response latencies. The stimuli were presented 7.5° to the left or to the right of the central fixation point. Responses were executed by pressing one of two microswitches, one located to the left and the other to the right of the participant's midline. We used an Eye Track System (Model 210, Applied Science Laboratories) to monitor fixation during testing.
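As a consistency check (not stated in the original), the on-screen extent corresponding to a 7.5° eccentricity at a 57-cm viewing distance follows from the standard visual-angle formula; 57 cm is a common choice precisely because 1 cm on the screen then subtends approximately 1° of visual angle:

$$ s = 2d \tan\left(\frac{\theta}{2}\right) = 2 \times 57\,\mathrm{cm} \times \tan(3.75^{\circ}) \approx 7.5\,\mathrm{cm}. $$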

Task

In Experiment 1 (handedness recognition task), the participant was instructed to press one of two keys (right or left) when a stimulus (a drawing of a left or right hand) was presented either to the left or to the right of the fixation point. There were two conditions: in the normal condition, the volunteer was instructed to press the left key (using the left index finger) when a left-hand drawing appeared on the display and the right key (using the right index finger) for a right-hand drawing; in the inverse condition, the instruction was reversed, and the volunteer had to press the left key for right-hand drawings and the right key for left-hand drawings.

In Experiment 2 (arrow task), the stimuli were arrows pointing either to the right or to the left, and the task was analogous to that of Experiment 1: the participant had to press the right or left key depending on the direction in which the arrowhead pointed, again with a normal and an inverse condition.

Experimental procedure

Participants performed the normal and inverse sessions on different days.

In Experiment 1, each session was divided into four blocks of 75 trials, resulting in 300 trials per session. In two blocks, we presented drawings of the hands in dorsal view, and in the other two, in palm view. In Experiment 2, each session was divided into four blocks of 72 trials, resulting in 288 trials per session.

Experimental conditions were counterbalanced across participants. During the session, the volunteers were instructed to (1) maintain their gaze on the central fixation point, (2) avoid looking directly at the stimulus, and (3) respond as fast as possible. The average of correct manual reaction times (MRTs) was calculated and used for subsequent analyses.
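As an illustration of this preprocessing step, here is a minimal sketch that averages correct-trial MRTs per participant and condition; the data layout and column names (subject, condition, correct, rt_ms) are assumptions for the example, not the study's actual data format.

```python
# Hedged sketch of the preprocessing described above: discard error trials,
# then average manual reaction time (MRT) per subject and condition.
# The toy data below are illustrative, not the authors' measurements.
import pandas as pd

trials = pd.DataFrame({
    "subject":   [1, 1, 1, 1, 2, 2, 2, 2],
    "condition": ["normal", "normal", "inverse", "inverse"] * 2,
    "correct":   [True, True, False, True, True, False, True, True],
    "rt_ms":     [412, 430, 515, 447, 401, 620, 455, 462],
})

mean_mrt = (trials[trials["correct"]]               # correct trials only
            .groupby(["subject", "condition"])["rt_ms"]
            .mean())
print(mean_mrt)
```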

Analysis of variance

Mean correct MRTs were entered, separately for each experiment, into an ANOVA with the following factors:

Experiment 1: hand (drawing of the right or left hand), hemifield (right or left), and finger (associated with the right or left response key).

Experiment 2: arrow (arrow pointing to the right or left), hemifield (right or left), and finger (associated with the right or left response key).

We used the Newman–Keuls method for post hoc comparisons. Statistical significance was set at p < 0.05.
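The software used for the statistical analysis is not specified; as a sketch only, a comparable 2 × 2 × 2 repeated-measures ANOVA could be run with statsmodels' AnovaRM (the synthetic data and column names below are illustrative assumptions):

```python
# Hedged sketch of a repeated-measures ANOVA on mean correct MRTs with
# within-subject factors hand, hemifield, and finger (Experiment 1 design).
# AnovaRM requires exactly one observation per subject per cell, as built here.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
levels = ("right", "left")
rows = [{"subject": s, "hand": h, "hemifield": hf, "finger": f,
         "mrt_ms": 420 + rng.normal(0, 15)}
        for s in range(1, 17) for h in levels for hf in levels for f in levels]
data = pd.DataFrame(rows)

res = AnovaRM(data, depvar="mrt_ms", subject="subject",
              within=["hand", "hemifield", "finger"]).fit()
print(res)  # F and p values for main effects and interactions
```

Note that statsmodels does not implement the Newman–Keuls post hoc test used in the study; that step would require other software.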

Results

Experiment 1

Errors

Overall errors amounted to 4.42 %, of which 4.06 % were judgment errors and 0.36 % were omission errors. Errors were entered into an ANOVA with the following factors: hand (drawing of the right or left hand), hemifield (right or left), and finger (right or left). No source of variance was statistically significant.

Central tendency measures

In Experiment 1 (Fig. 2), there was an interaction between hemifield and finger (F(1,15) = 19.31; p < 0.001; η² = 0.27), which indicates the occurrence of a Simon effect. The MRT for the left finger was faster when the stimulus appeared in the left hemifield (425 ± 8 ms) than when it appeared in the right hemifield (441 ± 8.4 ms), and the MRT for the right finger was faster when the stimulus appeared in the right hemifield (405 ± 7.1 ms) than when it appeared in the left hemifield (423 ± 6.8 ms).

Fig. 2 Interaction between hemifield and response key (irrelevant S–R overlap), expressed as a Simon effect in the handedness recognition task

The finger factor was significant (F(1,15) = 7.59; p < 0.014; η² = 0.32). The MRT for the right finger (414 ± 7 ms) was 19 ms faster than that for the left finger (433 ± 8.1 ms), independent of the stimulus.

There was an interaction between hand and finger (F(1,15) = 5.04; p < 0.04; η² = 0.38). The MRT of the right finger in response to the right-hand stimulus (403 ± 6 ms) was faster than in response to the left-hand stimulus (442 ± 8 ms). There was no difference in the MRT of the left finger in response to the right-hand (423 ± 8 ms) or left-hand (425 ± 7 ms) stimuli.

There was no interaction between the hemifield and hand factors (F(1,15) = 0.17; p > 0.68; η² = 0.001) that would indicate a spatial Stroop effect. No other factor or interaction was significant.

Experiment 2

Errors

Overall, errors amounted to 4.42 %, of which 4.24 % were judgment errors and 0.18 % were anticipation errors. Errors were entered into an ANOVA with the following factors: arrow (pointing to the right or left), hemifield (right or left), and finger (right or left). No source of variance was statistically significant.

Central tendency measures

In Experiment 2 (Fig. 3), there was an interaction between hemifield and arrow (F(1,11) = 7.966; p < 0.001; η² = 0.13), which indicates the occurrence of a spatial Stroop effect. The MRT to the left arrow was faster when it appeared in the left hemifield (409 ± 8 ms) than when it appeared in the right hemifield (419 ± 8 ms), and the MRT to the right arrow was faster when it appeared in the right hemifield (413 ± 9 ms) than when it appeared in the left hemifield (428 ± 11 ms).

Fig. 3 In the arrow task, a classical spatial Stroop effect is revealed by the interaction between hemifield and arrow direction (irrelevant S–S overlap)

The finger factor was significant (F(1,11) = 13.32; p < 0.03; η² = 0.40). The MRT of the right finger (406 ± 7 ms) was 23 ms faster than that of the left finger (429 ± 8.1 ms). As in Experiment 1, responses with the right finger were faster than with the left finger, independent of the arrow's direction.

There was no interaction between hemifield and finger (F(1,11) = 0.44; p > 0.51; η² = 0.01), which indicates the absence of a Simon effect. No other factor or interaction was significant.

Discussion

At variance with previous studies that employed orthogonal mappings to differentiate between Simon and spatial Stroop effects (e.g., Luo and Proctor 2013; Luo et al. 2010), we used a horizontal mapping in which left and right stimuli were paired with left and right responses in both a normal and an inverse condition, to ensure a strong conceptual, perceptual, and structural dimensional overlap between the stimulus and response sets (Kornblum 1992; Proctor et al. 2002). In particular, the inverse condition allowed us to separate the Simon and spatial Stroop effects by comparing MRTs in conditions where the irrelevant stimulus dimension (location) overlapped with either the relevant stimulus dimension (meaning—irrelevant S–S overlap) or the response dimension (response key position—irrelevant S–R overlap). For instance, in the inverse condition (right hand mapped to the left key), a spatial Stroop effect (attributable to an S–S overlap) is indicated whenever the left-key MRT is faster when a right hand appears in the right hemifield, whereas a Simon effect (attributable to an S–R overlap) is indicated whenever the left-key MRT is faster when the right hand appears in the left hemifield.
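To make this logic concrete, the sketch below labels a trial for both kinds of congruence under a given S–R mapping; the function and mapping dictionary are illustrative, not the authors' code:

```python
# Hedged sketch: a trial is Stroop-congruent when the stimulus meaning matches
# its hemifield (S-S overlap) and Simon-congruent when the required response
# side matches the hemifield (S-R overlap).
def label_trial(meaning, hemifield, mapping):
    response = mapping[meaning]        # response key assigned to this stimulus
    return {"stroop_congruent": meaning == hemifield,
            "simon_congruent": response == hemifield}

inverse = {"right": "left", "left": "right"}   # inverse-condition mapping
# A right hand shown in the right hemifield under the inverse mapping is
# Stroop-congruent (meaning = location) but Simon-incongruent (left key).
print(label_trial("right", "right", inverse))
# -> {'stroop_congruent': True, 'simon_congruent': False}
```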

Our results show that even though hand stimuli carry implicit location information (right or left), this information does not interact with the irrelevant stimulus dimension (right or left hemifield), and a spatial Stroop effect does not occur. In contrast, a spatial Stroop effect was elicited when we used arrows as stimuli. Not surprisingly, our results also show that the hand drawings evoke a Simon effect due to the interaction between the irrelevant stimulus dimension (right or left hemifield) and response location (right or left key). In both experiments, responses with the right finger were faster than with the left finger. This was independent of stimulus category and indicates a facilitation of motor responses controlled by the left hemisphere, as expected for our sample of right-handed volunteers (Serrien et al. 2006). Even though performance can be affected in mixed verbal/manual tasks (Wickens et al. 1983), this cannot fully explain the different results obtained in Experiments 1 and 2, since the same arrangement applied to both: the command was verbal and the response was manual.

The crucial difference between hands and arrows as stimuli is the frame of reference in which they are coded. In our study, the hand stimuli seem to be encoded according to their position relative to the body. According to Parsons (1987, 1994), judging the handedness of a visually presented hand stimulus involves a pre-attentive handedness recognition process, followed by a mental simulation of one's own hand moving toward the stimulus for confirmatory matching (Parsons 1987, 1994; Parsons and Fox 1998; Parsons et al. 1998). Even though they may share some common neural substrates, the two phases are distinct in that the first relies on stored visual representations (Wolpert et al. 1995), while the second depends on motor imagery that follows the same rules as real movement, including compliance with physical constraints (De Lange et al. 2006; Gentilucci et al. 1998; Parsons 1994; Vargas et al. 2004; Lameira et al. 2009). Ottoboni et al. (2005) used a modified Simon task to evaluate the automatic recognition of hands and reached the same conclusion: handedness is implicitly encoded with reference to one's own body and influences responses even when it is irrelevant to the task. Arrows, on the other hand, convey a spatial code (left and right) that is independent of the body.

Taken together, these results suggest that the interaction between the irrelevant and relevant attributes of the stimulus (S–S overlap) causes a spatial Stroop effect only when the two belong to the same reference frame. In the case of arrows, the irrelevant attribute of the stimulus is its location in space (the environment), which overlaps with the relevant feature precisely because the two share the same allothetic frame of reference. In the case of hands, the irrelevant feature (spatial location) also generates left and right codes, but these do not overlap with the relevant dimension, because handedness is tied to an idiothetic frame of reference, i.e., the body: its relevant feature codes are embodied and are not intrinsically associated with any direction in the environment. Thus, we propose that S–R compatibility tasks using body parts fit into Kornblum's taxonomy as Type 3 ensembles, because they do not elicit a spatial Stroop effect.