Abstract
Previous studies showed that motor information related to tool use (i.e., functional actions) can affect processing of objects' semantic properties, whereas motor information related to grasping or moving a tool (i.e., structural actions) does not. However, little is known about the neural correlates mediating such interaction between motor and semantic information. Here, healthy participants performed a semantic judgment task requiring identification of semantic relations among objects, after observing a functional, a structural or a pointing action prime. In a within-subject design, during prime presentation the participants underwent repetitive transcranial magnetic stimulation (rTMS) over the left supramarginal gyrus (SMG) or the left posterior middle temporal gyrus (pMTG), or received sham stimulation. Results showed that in the sham condition observing functional actions (vs. structural and pointing actions) favoured processing of semantic relations based on function similarity (i.e., taxonomic relations), but not of relations based on co-occurrence within an event schema (i.e., thematic relations). Moreover, stimulation of both left SMG and pMTG abolished the effect of functional action primes, worsening subsequent judgments about taxonomic relations, and this effect was greater after pMTG stimulation. rTMS did not affect processing of thematic semantic relations. We suggest that action observation triggers activation of functional motor information within left inferior parietal cortex, and that integration between functional motor and conceptual information in left temporal cortex could impact high-level semantic processing of tools.
Introduction
Sensory-motor, action-related information seems to be crucial for categorization and identification of manipulable artefact objects (Beauchamp & Martin, 2007; Buxbaum & Saffran, 2002; Cree & McRae, 2003; Farah & McClelland, 1991; McRae, Hare, Elman, & Ferretti, 2005; Warrington & Shallice, 1984). Indeed, neuroimaging studies show that manipulable artefact objects (hereafter, tools) and their related actions share a large part of neural correlates within a wide left fronto-parietal and occipito-temporal network (Beauchamp & Martin, 2007; Boronat et al., 2005; Canessa et al., 2007; Chao & Martin, 2000; Creem-Regehr & Lee, 2005; Grafton, Fadiga, Arbib, & Rizzolatti, 1997; Ishibashi, Pobric, Saito, & Lambon Ralph, 2016; Kellenbach, Brett, & Patterson, 2003; Lewis, 2006).
Within this network, several features of actions and tools seem to be processed (Binkofski & Buxbaum, 2013; Buxbaum & Kalénine, 2010; Hoeren et al., 2013, 2014; Rizzolatti & Matelli, 2003): (i) tools’ and actions’ semantic features (Binkofski & Buxbaum, 2013; Kalénine, Buxbaum, & Coslett, 2010; Lewis, 2006); (ii) configuration of prehensile movements intended to grasp and move objects (i.e., structural actions; e.g., grasping a pen with a whole-hand prehension to put it in a drawer), and skilled actions required to use objects according to their typical function (i.e., functional actions; e.g., holding a pen between index and thumb fingers with the learned hand posture suited for writing; Bub, Masson, & Cree, 2008; Bub & Masson, 2010; Buxbaum & Kalénine, 2010).
Sensory-motor information related to actions and lexical-semantic information about manipulable objects (including tools) affect each other in a variety of motor or semantic tasks. For instance, visual presentation of manipulable objects and object-related names and verbs can prime congruent functional or structural grasping actions during motor tasks (Bub et al., 2008; Bub & Masson, 2006, 2010; Creem & Proffitt, 2001; Glover, 2004; Tucker & Ellis, 1998, 2001). Conversely, activation of motor representations linked to functional actions can prime visual and lexical processing of tools (Helbig, Graf, & Kiefer, 2006; Helbig, Steinwender, Graf, & Kiefer, 2010; Jax & Buxbaum, 2010; Lee, Mirman, & Buxbaum, 2014; Myung, Blumstein, & Sedivy, 2006; Myung et al., 2010).
Recently, some studies (Borghi, Flumini, Natraj, & Wheaton, 2012; De Bellis et al., 2016; Pluciennicka, Wamain, Coello, & Kalénine, 2016a) explored the effects of motor activation in high-level semantic tasks tapping two kinds of semantic relations among objects: thematic relations (i.e., relations based on co-occurrence of objects within the same event schema: e.g., “hammer-nail”; Estes, Golonka, & Jones, 2011; Lin & Murphy, 2001; Nelson, 1983; Kalénine et al., 2009; Kalénine & Bonthoux, 2008; Mirman, Landrigan, & Britt, 2017; Mirman & Graziano, 2012) and taxonomic relations (i.e., relations among objects belonging to the same category; e.g., “cat-dog”, “apple-orange”, “chair-couch”). Since tools can be categorized on the basis of their typical function (e.g., “for cutting”, “for cleaning” and so on; Garcea & Mahon, 2012), taxonomic relations among tools usually imply similarity of function (e.g., “cup-glass”; “saw-axe”, “pen-pencil”; Buxbaum & Saffran, 2002; Kalénine, Mirman, Middleton, & Buxbaum, 2012; Mirman et al., 2017; Yee, Drucker, & Thompson-Schill, 2010).
Borghi et al. (2012) observed that elaboration of thematic tool-object pairs evoking the functional use of the tool (e.g., knife-bread) was slowed down when the visual scene included a hand performing a structural grasp of the tool (e.g., grabbing the blade of the knife); conversely, elaboration of unrelated tool-object pairs not evoking tool use (e.g., knife-nail) was slowed down when observing a functional grasp of the tool (e.g., grabbing the handle of the knife). According to the authors, these findings could be accounted for by interference between observed and evoked actions during semantic processing. A subsequent study employing the same task and stimuli showed that the interference of structural grasps on processing of thematic pairs was associated with an eye-fixation bias towards the tool–hand interaction, likely in relation to decoding of the intention of observed structural gestures (Natraj, Pella, Borghi, & Wheaton, 2015).
In another study (Pluciennicka et al., 2016a), participants observed prime videos displaying functional or meaningless actions and then performed a visual search paradigm in which they had to identify a target (i.e., a tool) presented along with other semantically related or unrelated items. It was found that prior observations of functional gestures (compared with meaningless gestures) increased eye-fixations towards objects associated with the target for a thematic (but not for a taxonomic) relation. Such findings showed that observing others’ functional actions can facilitate elaboration of thematic relations among objects. According to the authors, these results also supported the idea that thematic associations between tools and related objects are largely based on neural representation of action, since action provides the most common context linking tools and related objects (Pluciennicka et al., 2016a). Partially divergent findings have been reported in a study (De Bellis et al., 2016) in which participants observed prime images depicting hands grabbing a tool with a functional or a structural grasping, and then were required to decide whether pairs of tools (including the one previously shown in the prime stimulus) shared common function (Experiment 1), or whether tools–objects pairings were thematically related (Experiment 2). The results showed that observing functional (compared to structural) actions speeded up semantic judgments about taxonomic relations in Experiment 1 but not in Experiment 2, where judgments concerned thematic relations. These findings suggested that representation of functional (but not of structural) actions necessarily implies access to semantic information and is tightly linked to representation of tool function, in agreement with other studies (Bub et al., 2008; Bub & Masson, 2006; Creem & Proffitt, 2001). 
It is important to underline that the above-mentioned studies are not directly comparable due to important differences in aims and methods. For example, Pluciennicka et al. (2016a) inferred activation of semantic knowledge from analysing eye movements in a visual search paradigm, whereas De Bellis et al. (2016) and Borghi et al. (2012) required participants to produce explicit semantic judgments on relatedness of object pairs. Moreover, Borghi et al. (2012) assessed the effect of functional versus structural actions on elaboration of thematic associations only, whereas De Bellis et al. (2016) and Pluciennicka et al. (2016a) assessed both thematic and taxonomic relations. Notwithstanding their divergences, these studies demonstrated that observation of functional actions could prime semantic relations among objects, thus supporting the notion that motor information can affect high-order semantic processing.
Some cues about the neural bases of such ‘motor-to-semantic’ priming effects have been provided by Natraj et al. (2013) in an EEG study employing the same behavioural task as Borghi et al. (2012). The authors found that observing hands approaching a tool with a functional grasp was associated with an early activation (100–400 ms after stimulus onset) of left parieto-frontal regions involved in processing of familiar tool–hand interactions, whereas observing structural grasps in the presence of thematically related tool–object pairs elicited a late increase of activation (400–600 ms after stimulus onset) in homologous right parieto-frontal areas. Thus, a set of parieto-frontal regions seems to be involved in the interplay between tool-related motor and semantic information, but the causal role of these regions has not been addressed yet.
In the present study, we used repetitive transcranial magnetic stimulation (rTMS) to investigate the possible neural substrates mediating the priming effect of action observation on processing of taxonomic and/or thematic relations among objects. We focused on the left supramarginal gyrus (SMG) and on the left posterior middle temporal gyrus (pMTG), which are considered two core regions within the fronto-temporo-parietal network for action representation. The left SMG is deemed to be involved in storage of acquired gestures for functional use of tools (Binkofski & Buxbaum, 2013; Buxbaum, Kyle, & Menon, 2005; Buxbaum, Kyle, Tang, & Detre, 2006; Buxbaum, Kyle, Grossman, & Coslett, 2007; Frey, 2007; Johnson & Grafton, 2003; Johnson-Frey, Newman-Norlund, & Grafton, 2004; Kalénine et al., 2010; Lewis, 2006; Osiurak et al., 2008; Vingerhoets, 2014; Vingerhoets, Acke, Vandemaele, & Achten, 2009) and is activated in tasks requiring both execution and recognition of motor aspects of tool actions (Caspers et al., 2011; Oosterhof et al., 2010). It is possible to hypothesize that the differential effect of functional versus structural action primes is related to activation of motor representations in this area. The left pMTG has been linked to representation of semantic aspects of functional actions (Binkofski & Buxbaum, 2013; Buxbaum et al., 2005; Buxbaum & Kalénine, 2010; Kalénine et al., 2010; Lewis, 2006) as well as to representation of higher semantic features of tools (Andres, Pelgrims, & Olivier, 2013; Canessa et al., 2007; Ishibashi et al., 2016; Yee et al., 2010). It is possible to hypothesize that this area is involved in elaboration of semantic relations, regardless of prior activation of motor representations.
To test the above hypotheses, in a within-subject design, healthy participants received online stimulation over left SMG or left pMTG (or received sham stimulation) during an explicit semantic judgment task, in which they were required to identify thematic or taxonomic relations among objects after having been exposed to structural, functional or pointing action primes. All primes depicted one reference object and one acting hand, but differed in the action performed; online rTMS was delivered during prime presentation.
As regards the sham stimulation condition, we expected that observing functional actions (vs. observing pointing actions, considered as a sort of baseline) would facilitate explicit identification of taxonomic associations between objects, whereas observing structural actions was not expected to facilitate elaboration of either taxonomic or thematic relations (thus replicating De Bellis et al., 2016). Moreover, since we used a time interval between prime onset and stimulus presentation shorter than that used by De Bellis et al. (2016), we could expect to observe an effect of functional action observation on processing of thematic relations too. Indeed, it has been shown that elaboration of distinct semantic features follows specific time courses, with early activation of thematic properties, rapidly fading out (around 500–1000 ms from stimulus onset), and a late and long-lasting elaboration of object function (from 700 to 1200 ms; Kalénine et al., 2012; Mirman & Graziano, 2012; Wamain, Pluciennicka, & Kalénine, 2015). Hence, the delay in De Bellis et al.’s study (700 ms) might have been too long to detect an effect of functional action observation on processing of thematic relations.
As regards the effect of rTMS, we expected that stimulation of the left SMG, by impeding activation of functional motor representations, would selectively abolish the effect of functional action primes on taxonomic (and possibly thematic) relations. Moreover, we expected that left pMTG stimulation would not modulate the effect of action primes, but might hinder general semantic processing (i.e., unchanged facilitation by functional action primes together with an overall slowing of semantic judgments).
Last, in the sham condition and in the rTMS conditions, we expected to observe significantly faster and more accurate responses in processing thematic versus taxonomic semantic relationships. Indeed, it is well known that healthy individuals are faster and more accurate in associating objects according to a thematic rather than to a taxonomic relation (Kalénine et al., 2009; Tsagkaridis, Watson, Jax, & Buxbaum, 2014), since age 7 (Kalénine & Bonthoux, 2008; Pluciennicka, Coello, & Kalénine, 2016b).
Materials and methods
Participants
We enrolled a convenience sample including 27 undergraduate students (12 males; mean age = 24.0; SD = 1.7 years), without history of neurological or psychiatric disease. All the volunteers were right-handed healthy subjects with normal or corrected-to-normal vision. They were naïve to the purposes of the study and provided their written informed consent, after having received information about the stimulation techniques. All procedures were run in accordance with the Declaration of Helsinki and approved by the local ethics committee.
Experimental stimuli
Experimental stimuli consisted of coloured images depicting object triads including a reference object (n = 54), an item semantically related to the reference one, and an unrelated item (distractor). Images (300 × 350 pixels each, subtending a visual angle of about 5.7° at a viewing distance of 80 cm) were aligned along the horizontal axis against a 1100 × 600 pixel white frame subtending a visual angle of about 11° at the same viewing distance. The reference object was located in the centre of the frame at a distance of 100 pixels from the upper and lower borders, and at a distance of 400 pixels from the side borders. The other two items were placed on each side of the reference object, separated from it and from the vertical borders by 50 pixels. The semantically related item was on the left of the reference one in half of the triads (see Fig. 1 a).
Starting from the 54 reference objects, 108 triads were generated: in half of the triads, reference and related items had the same function (taxonomic triads, e.g., hammer and jackhammer); in the remaining triads, the related item was the recipient of the action typically carried out with the reference object (thematic triads, e.g., hammer and nail). In all the triads, neither the related item nor the distractor shared the type of grasping required for functional use with the reference object (see “Appendix” for a complete list of stimuli, Table 2).
To ascertain that objects in the triads met the above criteria, we conducted three preliminary normative studies in which all pairs containing reference and semantically related items (matched pairs, 54 taxonomic and 54 thematic), or reference and distractor items (unmatched pairs, 54 from taxonomic and 54 from thematic triads), were submitted to three different evaluations, each performed by an independent group of undergraduate students who did not participate in the main experiment. In Evaluation 1, we established that the reference objects were more associated with semantically related items than with distractors. In Evaluation 2, we ensured that related items expressed a truly taxonomic relationship in taxonomic triads and a truly thematic relationship in thematic triads. In Evaluation 3, we ascertained that the reference object in a triad did not share functional grasping with either the related or the distractor item (see Supplementary Material).
Action prime stimuli were coloured pictures portraying a hand grabbing the reference object with a functional grasp (functional prime) or a structural grasp (structural prime), or pointing to the reference object without touching it (pointing prime). Action primes (950 × 350 pixels each, subtending a visual angle of about 7.5° at a viewing distance of 80 cm) were centred on 1100 × 600 pixel white background frames subtending a visual angle of about 11° at the same viewing distance. Three prime stimuli were arranged for each reference object (see Fig. 1 b).
Procedure
Stimulus randomization and presentation, online TMS administration, and data collection (accuracy and response times, RTs) were performed using MATLAB software.
The experiment was conducted in three weekly experimental sessions; each session corresponded to a different stimulation condition (SMG, pMTG, sham stimulation). In each session, two blocks of trials were administered, one including taxonomic triads and one including thematic triads. In each block, 54 triads were presented, for a total of 108 trials per session; one-third of the trials (36) were preceded by a functional prime, one-third by a structural prime and one-third by a pointing prime. Order of stimulation conditions and of experimental blocks was counterbalanced across participants; order of trials within each block was randomized. A break of about 10 min was allowed between the experimental blocks. The total duration of a session was about 50 min.
Participants sat on a comfortable chair facing a 17″ monitor screen, holding their chin on a chinrest, their hands with palms facing down on their legs and with their feet resting on a two-pedal foot-keyboard. Each trial started with a fixation point (a cross) presented in the centre of a blank screen for 800 ms, followed by an inter-stimulus interval (ISI, blank screen) of 20 ms. Subsequently, an action prime was presented for 500 ms. A rTMS train was delivered (see below) 100 ms after prime onset. Immediately after prime offset, an object triad appeared and remained on the screen until participants’ response. Finally, there was an inter-trial interval (ITI, blank screen) lasting 4500 ms (see Fig. 2 a).
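The trial structure above can be summarized in a minimal sketch (Python here purely for illustration; the experiment itself was programmed in MATLAB). The event names and helper function are ours; the durations are those reported in the text.

```python
# Sketch of one trial's event schedule. Durations in ms, as reported in the Procedure.
FIXATION = 800   # central fixation cross
ISI = 20         # blank inter-stimulus interval
PRIME = 500      # action prime
TMS_DELAY = 100  # rTMS train onset relative to prime onset
TMS_TRAIN = 400  # 5 pulses at 10 Hz span 400 ms
ITI = 4500       # blank inter-trial interval after the response

def trial_onsets():
    """Return a dict of event onsets (ms from trial start), up to triad onset."""
    onsets, t = {}, 0
    for name, dur in (("fixation", FIXATION), ("isi", ISI), ("prime", PRIME)):
        onsets[name] = t
        t += dur
    onsets["rtms_train"] = onsets["prime"] + TMS_DELAY
    onsets["triad"] = t  # triad stays on until the response; the ITI then follows
    return onsets
```

With these values, the 400-ms train delivered 100 ms after prime onset ends exactly at prime offset, i.e., it covers the final four-fifths of the prime.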
For each object triad, the participants were asked to judge which object was more closely associated with the one placed in the centre of the screen, and to provide their responses via the foot keyboard to avoid any response compatibility effect (Tucker & Ellis, 1998). Participants responded “right” or “left” by pressing, respectively, the right or left pedal.
Before each session, participants performed a short training session with no rTMS, in which triads and primes not included in the experiment were presented.
TMS protocol and neuronavigation
Online trains of rTMS (5 pulses, 10 Hz; duration: 400 ms) were delivered by means of a 70-mm-diameter figure-of-eight coil connected to a Magstim Rapid 2 stimulator (Magstim Company) producing a maximum output of 1.2 T at the coil surface (type of output: biphasic; pulse width 400 μs). In keeping with safety recommendations (Rossi, Hallett, Rossini, & Pascual-Leone, 2009), two consecutive stimulation trains were separated by at least 5.3 s and the total number of pulses per session was 540 (270 per block). Stimulation intensity was fixed at 65% of the maximum stimulator output (after Andres et al., 2013). The brain targets (left SMG and left pMTG) and the corresponding stimulation sites on the participants’ scalp were localized by means of the Softaxic Optic (EMS) neuronavigation system. Neuronavigation was performed on estimated-MRI stereotaxic templates of participants’ brains based on a sampling of 65 scalp points digitized by means of a Polaris Vicra (Northern Digital) digitizer. Mean Talairach coordinates of the stimulated brain targets were as follows: left SMG: x = − 58, y = − 34, z = 38; left pMTG: x = − 61, y = − 49, z = − 1 (see Fig. 2b). In the sham stimulation condition, the coil was positioned at a 90° angle over the vertex, resting on the scalp with only one edge, so that the coil focus was directed away from the participant’s head. In the SMG and pMTG stimulation conditions, coil positioning on the stimulation site was checked online by means of the neuronavigation system during the entire experimental session; in all conditions, the coil was held in place by a mechanical arm attached to a tripod.
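The stimulation parameters above are internally consistent, as a quick arithmetic check shows (a sketch in Python; the variable names are ours):

```python
# Consistency check of the reported rTMS protocol parameters.
PULSES_PER_TRAIN = 5
FREQ_HZ = 10
TRIALS_PER_BLOCK = 54
BLOCKS_PER_SESSION = 2

# First-to-last pulse span: (n - 1) inter-pulse intervals of 1000 / f ms each
train_duration_ms = (PULSES_PER_TRAIN - 1) * 1000 // FREQ_HZ  # 400 ms, as stated
pulses_per_block = TRIALS_PER_BLOCK * PULSES_PER_TRAIN         # 270, as stated
pulses_per_session = pulses_per_block * BLOCKS_PER_SESSION     # 540, as stated
```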
Data analysis
On participants’ correct RTs, we ran a 2 × 3 × 3 repeated-measures ANOVA with semantic relationship (taxonomic, thematic), stimulation condition (SMG, pMTG, Sham) and action prime (functional, structural, pointing) as within-subject factors. The same analysis was performed on accuracy data after applying a Logit transformation to the accuracy rate of each experimental condition.
These analyses were followed up with Bonferroni post hoc comparisons, in which ‘raw’ p values are corrected as a function of the number of comparisons to reduce the risk of Type I errors; corrected p values are then compared with the conventional alpha threshold of 0.05.
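The correction amounts to the following (a minimal sketch in Python; the analyses themselves were run in SPSS):

```python
def bonferroni(p_values):
    """Bonferroni correction: multiply each raw p by the number of comparisons, capping at 1."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]
```

For example, a raw p of 0.011 over three pairwise comparisons becomes a corrected p of 0.033, still below the 0.05 threshold.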
Trials with RTs above the 95th or below the 5th percentile were discarded from the analyses. Data analysis was performed with SPSS v.20 (IBM).
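The trimming and transformation steps can be sketched as follows (Python for illustration; percentile conventions differ slightly across packages, so the cut-off computation here is one plausible implementation, not necessarily SPSS's):

```python
import math

def trim_rts(rts):
    """Keep only trials with RTs between the 5th and 95th percentile (inclusive)."""
    s = sorted(rts)
    lo = s[int(0.05 * (len(s) - 1))]  # one common percentile convention
    hi = s[int(0.95 * (len(s) - 1))]
    return [r for r in rts if lo <= r <= hi]

def logit(p):
    """Logit transformation applied to per-condition accuracy rates before the ANOVA."""
    return math.log(p / (1 - p))
```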
Results
ANOVA on correct RTs revealed a significant main effect of the semantic relationship [F(1,26) = 287.524; p < 0.001; η2 = 0.917], due to faster responses on thematic triads (mean = 861.874; SD = 31.296 ms) than on taxonomic triads (mean = 1102.159; SD = 39.321 ms). Main effects of stimulation condition [F(2,52) = 0.458; p = 0.635; η2 = 0.017] and action prime [F(2,52) = 0.564; p = 0.573; η2 = 0.021] were not significant. No first-order interaction reached significance (p > 0.1). There was a significant second-order interaction between semantic relationship, stimulation condition and action prime [F(4,104) = 2.607; p = 0.040; η2 = 0.091]. Bonferroni-corrected post hoc comparisons (Fig. 3) revealed that in the sham condition, participants’ responses on taxonomic triads were faster when preceded by a functional action prime (mean = 1046.367; SD = 39.320 ms) than by either structural (mean = 1096.904; SD = 45.272 ms; corrected-p = 0.033) or pointing (mean = 1093.565; SD = 45.182 ms; corrected-p = 0.040) action primes. Instead, the type of action prime did not modulate participants’ responses on thematic triads. The facilitating effect of the functional action prime on taxonomic judgements was abolished by rTMS over SMG. Moreover, participants’ responses on taxonomic triads preceded by a functional prime were slower in the pMTG stimulation condition (mean = 1143.573; SD = 52.931 ms) than in the sham condition (mean = 1119.752; SD = 46.417 ms; corrected-p = 0.038). No significant difference emerged between pMTG and SMG, or between SMG and sham stimulation conditions (see Supplementary Table 1). The proportion of participants exhibiting a difference between each active stimulation condition and the sham condition in the same direction as the whole group is reported in Supplementary Table 2. The highest percentage was associated with the pMTG condition for functional primes.
The proportion of participants’ correct responses in the different experimental conditions is reported in Table 1. ANOVA on Logit-transformed accuracy data revealed a significant main effect of semantic relationship [F(1,26) = 16.372; p < 0.001; η2 = 0.386], due to higher accuracy on thematic triads (Logit-transformed accuracy ± SEM = 1.381 ± 0.369, corresponding to an accuracy rate of 94%) than on taxonomic triads (Logit-transformed accuracy ± SEM = 0.987 ± 0.357, corresponding to an accuracy rate of 88%; odds ratio = 1.483, 95% Confidence Interval = 1.475–1.488); no other main or interaction effect was significant (all p > 0.05).
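As a consistency check, the reported odds ratio follows from the difference between the two mean Logit values, assuming natural-log Logits (a sketch in Python; the original analyses were run in SPSS):

```python
import math

# Mean Logit-transformed accuracies reported above (natural log assumed)
logit_thematic = 1.381   # thematic triads
logit_taxonomic = 0.987  # taxonomic triads

# A difference of log-odds converts back to an odds ratio via exponentiation
odds_ratio = math.exp(logit_thematic - logit_taxonomic)
print(round(odds_ratio, 3))  # 1.483, matching the reported value
```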
Discussion
We used rTMS to investigate whether left SMG and pMTG, two key regions within the neural network for tool and action representation, mediate the effect of action observation on processing of semantic (thematic or taxonomic) relations. Participants received active rTMS over pMTG or SMG (or underwent sham stimulation) during observation of action primes depicting structural, functional or pointing actions, and then performed semantic judgments on thematic or taxonomic associations. Overall, our data showed that processing of thematic relations was faster and more accurate than processing of taxonomic relations (based on function similarity), in keeping with previous findings (Kalénine et al., 2009; Kalénine & Bonthoux, 2008; Lin & Murphy, 2001; Mirman et al., 2017). Furthermore, we found that action observation as well as rTMS modulated elaboration of taxonomic relations, whereas we failed to observe any significant effect of either action primes or rTMS conditions on thematic relations. In the absence of active rTMS, we found that observing functional (vs. structural and pointing) actions favoured processing of taxonomic relations among objects. rTMS delivered over both left SMG and left pMTG abolished this facilitation, but rTMS over left pMTG had a stronger effect, making taxonomic semantic judgements after functional primes significantly slower than those observed in the sham condition. This last result did not match our expectations.
The main finding of the present study was that stimulation over left SMG and pMTG modulated the effect of functional (but not structural) action primes on elaboration of taxonomic (but not thematic) relations among objects. The abolition of the priming effect on taxonomic judgements after stimulation of left SMG, without significant slowing with respect to the sham condition, supports a role of this area in processing observed actions. A number of neuropsychological (Buxbaum et al., 2005, 2007; Buxbaum, Shapiro, & Coslett, 2014; Goldenberg & Spatt, 2009; Hoeren et al., 2014; Kalénine et al., 2010) and neuroimaging (Buxbaum et al., 2006; Frey, 2007; Hoeren et al., 2013; Johnson-Frey, 2004; Johnson-Frey et al., 2004; Randerath, Goldenberg, Spijkers, Li, & Hermsdörfer, 2010; Valyear, Cavina-Pratesi, Stiglick, & Culham, 2007) investigations point to the left inferior parietal lobule (IPL) as the brain area storing acquired motor plans for object-related actions. Within the IPL, three neural foci have been associated with elaboration and long-term storage of acquired motor schemas: (i) the anterior intraparietal sulcus (aIPS), whose activation during visual processing of familiar objects would be related to elaboration of manual configurations for object grasping (Filimon, 2010; Kellenbach et al., 2003; Schubotz, Wurm, Wittmann, & von Cramon, 2014; Valyear et al., 2007; Yee et al., 2010); (ii) the posterior supramarginal gyrus (pSMG, plus the adjacent angular gyrus, AG), which would hold learned motor programs for arm and leg movements needed to perform appropriate use-gestures (Johnson-Frey et al., 2004; Rumiati et al., 2004; Vingerhoets et al., 2009); and (iii) the anterior supramarginal gyrus (aSMG), associated with representation of the specific hand postures best suited to grasping an object in order to use it (Hermsdörfer, Terlinden, Mühlau, Goldenberg, & Wohlschläger, 2007; Johnson-Frey et al., 2004; Rumiati et al., 2004).
Although some evidence suggests that aIPS might contain a neuronal population coding for grasping-for-using actions (see, for example, Schubotz et al., 2014), it is often considered a neural substrate devoted to representing structural grasping via visual analysis of object structure (Cavina-Pratesi et al., 2010; Tunik, Frey, & Grafton, 2005; Tunik, Ortigue, Adamovich, & Grafton, 2008). Moreover, a recent study by Andres et al. (2017) showed that TMS over left SMG impaired explicit judgments about the hand posture needed to use tools, whereas stimulation over the anterior intraparietal area had no effect on the task. For this reason, we focused on SMG, which seems to be mostly involved in tasks requiring real or pantomimed use of tools, i.e., during recruitment of learned motor plans concerning functional gestures (Rumiati et al., 2004; Vingerhoets, 2014). Our stimulation site was located approximately midway between the anterior and posterior portions of SMG and had mean coordinates similar to those of the region stimulated by Andres and colleagues (Andres et al., 2017: Talairach coordinates: x = − 60, y = − 41, z = 42; Andres et al., 2013: Talairach coordinates: x = − 60, y = − 34, z = 46) for impairing elaboration of functional manipulation of tools. Furthermore, left SMG is included in the so-called action observation network and shows consistent activation during observation of others’ actions (Avenanti, Candidi, & Urgesi, 2013; Caspers et al., 2011; Gazzola & Keysers, 2008; Oosterhof et al., 2010). Finally, damage to left SMG/AG is commonly associated with limb apraxia (Buxbaum et al., 2005, 2007; Goldenberg & Spatt, 2009; Heilman, Rothi, & Valenstein, 1982; Hoeren et al., 2014; Rothi, Heilman, & Watson, 1985) as well as with impaired recognition of motor aspects of observed tool use (Buxbaum et al., 2005; Kalénine et al., 2010).
Altogether such findings are compatible with the possibility that observation of a functional action prime leads to activation of participants’ functional motor plans within the left SMG, and that such motor activation would in turn impact semantic processing of taxonomic relations among objects. Stimulation of SMG would thus hinder the effect of functional action observation on elaboration of specific semantic features of tools.
Our data are consistent with the idea that left SMG might store motor programs concerning learned tool actions that can be accessed via action observation, but the most remarkable, and somewhat unexpected, result of the present study was that rTMS over the left pMTG also disrupted the facilitating effect of the functional action prime on taxonomic processing, leading to significantly slower judgments on taxonomic relationships compared to the sham condition. The pMTG site we stimulated was close to neural foci involved in elaboration of biological and non-biological motion (placed in the superior temporal sulcus and in lateral MTG, respectively; Beauchamp, Lee, Haxby, & Martin, 2002, 2003; Beauchamp & Martin, 2007), as well as close to the medial fusiform region involved in visual elaboration of object structure (such as shape and colour; Chao, Haxby, & Martin, 1999; Chao, Weisberg, & Martin, 2002; Martin, 2007). Moreover, our pMTG site was very close to a region showing fMRI activation in response to both hand and tool images (x = − 48, y = − 66, z = − 5), thus exhibiting a special sensitivity to body–object interactions (Bracci, Cavina-Pratesi, Ietswaart, Caramazza, & Peelen, 2012; Bracci & Peelen, 2013). It has been proposed that pMTG might integrate information from different modalities into a multimodal representation of abstract properties of objects and object-related actions (Kalénine & Buxbaum, 2016; Tarhan, Watson, & Buxbaum, 2015). Consistent with this assumption, pMTG has been associated with retrieval of object function as well as with action recognition in a number of studies. For instance, Yee et al. (2010) reported fMRI adaptation in a region very close (x = − 45, y = − 54, z = − 9) to our pMTG site when participants had to judge function similarity between objects. Moreover, rTMS over a similar region (x = − 60, y = − 50, z = − 1) inhibited elaboration of objects’ context of use (Andres et al., 2013).
Finally, investigations of left brain-damaged patients associated posterior temporal cortex with comprehension of the semantic features of tool actions (Kalénine et al., 2010) and with paired deficits in the production and recognition of tool actions (Tarhan et al., 2015).
The strong and significant detrimental effect of pMTG stimulation on retrieval of taxonomic information primed by functional action observation is consistent with the above literature and suggests a specific role of pMTG in integrating functional motor information with information from other modalities during high-order semantic elaboration of tools. This view would be in line with neuropsychological evidence from brain-damaged patients and with functional neuroimaging data, suggesting that neural specificity for tools in inferior temporal cortex is partially based on sensitivity to their motor-related properties, shared by both the middle fusiform and inferior parietal tool-specific regions (Mahon et al., 2007). Thus, tool representation seems to inherently involve integration of motor information at the level of temporal brain areas, and the present results suggest extending this claim to the elaboration of higher-order semantic properties of tools, such as taxonomic semantic relationships.
Besides generating specific hypotheses about the role of SMG and pMTG in mediating action and semantic processing, our study addressed the issue of which kind of semantic information (taxonomic or thematic) is associated with motor representations of object use. In the sham condition, we observed a specific facilitating effect of observing functional (versus structural and pointing) actions on processing taxonomic semantic relations, thus replicating previous findings (De Bellis et al., 2016), although here we used an interval between prime onset and response (500 ms) considered optimal for assessing activation of thematic information (Kalénine et al., 2012; Mirman & Graziano, 2012; Wamain et al., 2015). As we carefully excluded commonalities in motor features between related objects (i.e., objects in our triads did not share the same manipulation patterns), our findings support the idea that activating a functional action representation implies access to object function.
These results would not be compatible with the idea that action representations are strictly associated with thematic, rather than taxonomic, information about tools. Indeed, several studies have linked thematic knowledge about tools with action knowledge in either healthy or left brain-damaged participants (Kalénine et al., 2009; Kalénine & Buxbaum, 2016; Pluciennicka et al., 2016a; Tsagkaridis et al., 2014). However, it must be considered that the present findings, together with a consistent body of evidence (Kalénine et al., 2009; Tsagkaridis et al., 2014), demonstrated that explicit elaboration of thematic relationships is significantly easier (in terms of RTs and accuracy) than that of taxonomic semantic associations. Therefore, it is possible that our paradigm was not sensitive enough to detect changes in explicit elaboration of thematic knowledge as a function of action primes presented at the duration used in the present experiment. For these reasons, we could not make strong claims about the role of functional motor information in processing thematic relations, whereas we could clearly demonstrate that activating a functional action representation directly leads to retrieval of object function, which can be considered the “core” semantic feature of tool concepts (Garcea & Mahon, 2012).
Our findings would not be consistent with the idea that semantic processing necessarily depends on sensory and motor activation. Rather, since we observed that sensorimotor, action-related information (when available) can enhance the efficiency of semantic processing, our results best fit a “weak” sensory-motor model claiming that concept representations imply cross-modal integration of modality-specific information (Meteyard, Cuadrado, Bahrami, & Vigliocco, 2012).
The present study has some limitations. First, as in a previous experiment (De Bellis et al., 2016), our action primes displayed both the “reference” object and the hand acting on it. Our prime stimuli thus provided two sources of motor information, because objects themselves can elicit congruent motor programs (Bub et al., 2008; Bub & Masson, 2010; Helbig et al., 2006; Jax & Buxbaum, 2010). We did not hide the objects in the prime stimuli because it is hard to prompt a structural grasp in the absence of an object; however, since objects were always displayed, any difference among the priming conditions had to be attributed to the observed hand action.
Second, we used a single stimulation timing for both the pMTG and SMG sites. For this reason, our experimental procedure did not allow us to ascertain the direction and time-course of the informational flow between parietal and temporal areas. Indeed, the pMTG site could receive motor information from surrounding temporal areas concerned with biological motion and body–object interaction (Beauchamp & Martin, 2007; Bracci & Peelen, 2013), from inferior parietal cortex (Jouen et al., 2018; Ramayya, Glasser, & Rilling, 2009), or from both cortical regions. Here we observed that both SMG and pMTG stimulation reduced the facilitating effect of functional action primes, but pMTG stimulation had a greater impact on taxonomic semantic judgments. This pattern of results might be compatible with the idea that functional motor information was first activated within SMG and then integrated within pMTG with other kinds of information. We could not address this hypothesis here, but it would be consistent with evidence from an EEG investigation demonstrating an early (100–400 ms) enhancement of left fronto-parietal activity after observation of functional grasping of tools (Natraj et al., 2013), as well as from an ERP study showing a modulation of early components (P3a, 200–300 ms) in parietal cortex after observation of object-related actions (Wamain, Pluciennicka, & Kalénine, 2014).
A third limitation is that we enrolled a convenience sample without computing the sample size on the basis of previous evidence. However, our sample size was close to that (n = 30, computed on the effect size of differences in mean RTs, with alpha = 0.05 and power = 0.8) required to replicate the significant effects of TMS over SMG or MTG (versus TMS over the vertex) in two tasks very similar to those employed in the current paper (Andres et al., 2013).
Conclusions
In the present study, we confirmed that activating functional motor information through action observation can impact retrieval of objects’ function and facilitate elaboration of taxonomic semantic relations among objects. By applying rTMS, we demonstrated that this effect is mediated by activation of the inferior parietal cortex and of middle temporal areas where integration of action and semantic information takes place. Our findings provide new insight into the dynamic interplay between parietal action-related regions and temporal regions devoted to multimodal integration, leading to the formation of high-level cognitive representations of tools.
References
Andres, M., Pelgrims, B., & Olivier, E. (2013). Distinct contribution of the parietal and temporal cortex to hand configuration and contextual judgements about tools. Cortex, 49(8), 2097–2105. https://doi.org/10.1016/j.cortex.2012.11.013.
Andres, M., Pelgrims, B., Olivier, E., & Vannuscorps, G. (2017). The left supramarginal gyrus contributes to finger positioning for object use: A neuronavigated transcranial magnetic stimulation study. The European Journal of Neuroscience, 46(12), 2835–2843. https://doi.org/10.1093/cercor/bhq180.
Avenanti, A., Candidi, M., & Urgesi, C. (2013). Vicarious motor activation during action perception: Beyond correlational evidence. Frontiers in Human Neuroscience, 7, 185. https://doi.org/10.3389/fnhum.2013.00185.
Beauchamp, M. S., Lee, K. E., Haxby, J. V., & Martin, A. (2002). Parallel visual motion processing streams for manipulable objects and human movements. Neuron, 34(1), 149–159.
Beauchamp, M. S., Lee, K. E., Haxby, J. V., & Martin, A. (2003). FMRI responses to video and point-light displays of moving humans and manipulable objects. Journal of Cognitive Neuroscience, 15(7), 991–1001. https://doi.org/10.1162/089892903770007380.
Beauchamp, M. S., & Martin, A. (2007). Grounding objects concepts in perceptions and actions: Evidence from fMRI studies of tools. Cortex, 43(3), 461–468.
Binkofski, F., & Buxbaum, L. J. (2013). Two action systems in the human brain. Brain and Language, 127(2), 222–229. https://doi.org/10.1016/j.bandl.2012.07.007.
Borghi, A. M., Flumini, A., Natraj, N., & Wheaton, L. A. (2012). One hand, two objects: Emergence of affordance in contexts. Brain and Cognition, 80(1), 64–73. https://doi.org/10.1016/j.bandc.2012.04.007.
Boronat, C. B., Buxbaum, L. J., Coslett, H. B., Tang, K., Saffran, E. M., Kimberg, D. Y., et al. (2005). Distinctions between manipulation and function knowledge of objects: Evidence from functional magnetic resonance imaging. Cognitive Brain Research, 23(2–3), 361–373. https://doi.org/10.1016/j.cogbrainres.2004.11.001.
Bracci, S., Cavina-Pratesi, C., Ietswaart, M., Caramazza, A., & Peelen, M. V. (2012). Closely overlapping responses to tools and hands in left lateral occipitotemporal cortex. Journal of Neurophysiology, 107(5), 1443–1456. https://doi.org/10.1152/jn.00619.2011.
Bracci, S., & Peelen, M. V. (2013). Body and object effectors: the organization of object representations in high-level visual cortex reflects body–object interactions. Journal of Neuroscience, 33(46), 18247–18258. https://doi.org/10.1523/JNEUROSCI.1322-13.2013.
Bub, D., & Masson, M. (2006). Gestural knowledge evoked by objects as part of conceptual representations. Aphasiology, 20(9), 1112–1124. https://doi.org/10.1080/02687030600741667.
Bub, D. N., & Masson, M. E. J. (2010). On the nature of hand-action representations evoked during written sentence comprehension. Cognition, 116, 394–408. https://doi.org/10.1016/j.cognition.2010.06.001.
Bub, D. N., Masson, M. E. J., & Cree, G. S. (2008). Evocation of functional and volumetric gestural knowledge by objects and words. Cognition, 106(1), 27–58. https://doi.org/10.1016/j.cognition.2006.12.010.
Buxbaum, L. J., & Kalénine, S. (2010). Action knowledge, visuomotor activation, and embodiment in the two action systems. Annals of the New York Academy of Sciences, 1191(1), 201–218. https://doi.org/10.1111/j.1749-6632.2010.05447.
Buxbaum, L. J., Kyle, K., Grossman, M., & Coslett, H. B. (2007). Left inferior parietal representations for skilled hand-object interactions: Evidence from stroke and corticobasal degeneration. Cortex, 43(3), 411–423. https://doi.org/10.1016/S0010-9452(08)70466-0.
Buxbaum, L. J., Kyle, K. M., & Menon, R. (2005). On beyond mirror neurons: Internal representations subserving imitation and recognition of skilled object-related actions in humans. Cognitive Brain Research, 25(1), 226–239. https://doi.org/10.1016/j.cogbrainres.2005.05.014.
Buxbaum, L. J., Kyle, K. M., Tang, K., & Detre, J. A. (2006). Neural substrates of knowledge of hand postures for object grasping and functional object use: Evidence from fMRI. Brain Research, 1117(1), 175–185. https://doi.org/10.1016/j.brainres.2006.08.010.
Buxbaum, L. J., & Saffran, E. M. (2002). Knowledge of object manipulation and object function: Dissociations in apraxic and nonapraxic subjects. Brain and Language, 82(2), 179–199. https://doi.org/10.1016/S0093-934X(02)00014-7.
Buxbaum, L. J., Shapiro, A. D., & Coslett, H. B. (2014). Critical brain regions for tool-related and imitative actions: A componential analysis. Brain, 137(7), 1971–1985. https://doi.org/10.1093/brain/awu111.
Canessa, N., Borgo, F., Cappa, S. F., Perani, D., Falini, A., Buccino, G., et al. (2007). The different neural correlates of action and functional knowledge in semantic memory: An FMRI study. Cerebral Cortex, 18(4), 740–751. https://doi.org/10.1093/cercor/bhm110.
Caspers, S., Eickhoff, S. B., Rick, T., von Kapri, A., Kuhlen, T., Huang, R., et al. (2011). Probabilistic fibre tract analysis of cytoarchitectonically defined human inferior parietal lobule areas reveals similarities to macaques. Neuroimage, 58, 362–380. https://doi.org/10.1016/j.neuroimage.2011.06.027.
Cavina-Pratesi, C., Monaco, S., Fattori, P., Galletti, C., McAdam, T. D., Quinlan, D. J., et al. (2010). Functional magnetic resonance imaging reveals the neural substrates of arm transport and grip formation in reach-to-grasp actions in humans. Journal of Neuroscience, 30(31), 10306–10323. https://doi.org/10.1523/JNEUROSCI.2023-10.2010.
Chao, L. L., Haxby, J. V., & Martin, A. (1999). Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects. Nature Neuroscience, 2(10), 913. https://doi.org/10.1038/13217.
Chao, L. L., & Martin, A. (2000). Representation of manipulable man-made objects in the dorsal stream. Neuroimage, 12, 478–484.
Chao, L. L., Weisberg, J., & Martin, A. (2002). Experience-dependent modulation of category-related cortical activity. Cerebral Cortex, 12(5), 545–551.
Cousineau, D. (2005). Confidence intervals in within-subject designs: A simpler solution to Loftus and Masson’s method. Tutorials in Quantitative Methods for Psychology, 1(1), 42–45.
Cree, G. S., & McRae, K. (2003). Analyzing the factors underlying the structure and computation of the meaning of chipmunk, cherry, chisel, cheese, and cello (and many other such concrete nouns). Journal of Experimental Psychology: General, 132(2), 163.
Creem, S. H., & Proffitt, D. R. (2001). Grasping objects by their handles: a necessary interaction between cognition and action. Journal of Experimental Psychology: Human Perception and Performance, 27(1), 218.
Creem-Regehr, S. H., & Lee, J. N. (2005). Neural representations of graspable objects: Are tools special? Cognitive Brain Research, 22(3), 457–469. https://doi.org/10.1016/j.cogbrainres.2004.10.006.
De Bellis, F., Ferrara, A., Errico, D., Panico, F., Sagliano, L., Conson, M., et al. (2016). Observing functional actions affects semantic processing of tools: Evidence of a motor-to-semantic priming. Experimental Brain Research, 234(1), 1–11. https://doi.org/10.1007/s00221-015-4432-4.
Estes, Z., Golonka, S., & Jones, L. L. (2011). Thematic thinking: the apprehension and consequences of thematic relations. In B. H. Ross (Ed.), Psychology of learning and motivation (Vol. 54, pp. 249–294). Cambridge: Academic Press.
Farah, M. J., & McClelland, J. L. (1991). A computational model of semantic memory impairment: Modality specificity and emergent category specificity. Journal of Experimental Psychology: General, 120(4), 339.
Filimon, F. (2010). Human cortical control of hand movements: Parietofrontal networks for reaching, grasping, and pointing. The Neuroscientist, 16(4), 388–407. https://doi.org/10.1177/1073858410375468.
Frey, S. H. (2007). What puts the how in where? Tool use and the divided visual streams hypothesis. Cortex, 43(3), 368–375.
Garcea, F. E., & Mahon, B. Z. (2012). What is in a tool concept? Dissociating manipulation knowledge from function knowledge. Memory & Cognition, 40(8), 1303–1313. https://doi.org/10.3758/s13421-012-0236-y.
Gazzola, V., & Keysers, C. (2008). The observation and execution of actions share motor and somatosensory voxels in all tested subjects: Single-subject analyses of unsmoothed fMRI data. Cerebral Cortex, 19(6), 1239–1255. https://doi.org/10.1093/cercor/bhn181.
Glover, S. (2004). Separate visual representations in the planning and control of action. Behavioral and Brain Sciences, 27(1), 3–24.
Goldenberg, G., & Spatt, J. (2009). The neural basis of tool use. Brain, 132(6), 1645–1655. https://doi.org/10.1093/brain/awp080.
Grafton, S. T., Fadiga, L., Arbib, M. A., & Rizzolatti, G. (1997). Premotor cortex activation during observation and naming of familiar tools. Neuroimage, 6(4), 231–236. https://doi.org/10.1006/nimg.1997.0293.
Heilman, K. M., Rothi, L. J., & Valenstein, E. (1982). Two forms of ideomotor apraxia. Neurology, 32(4), 342.
Helbig, H. B., Graf, M., & Kiefer, M. (2006). The role of action representations in visual object recognition. Experimental Brain Research, 174(2), 221–228. https://doi.org/10.1007/s00221-006-0443-5.
Helbig, H. B., Steinwender, J., Graf, M., & Kiefer, M. (2010). Action observation can prime visual object recognition. Experimental Brain Research, 200(3–4), 251–258. https://doi.org/10.1007/s00221-009-1953-8.
Hermsdörfer, J., Terlinden, G., Mühlau, M., Goldenberg, G., & Wohlschläger, A. M. (2007). Neural representations of pantomimed and actual tool use: evidence from an event-related fMRI study. Neuroimage, 36, T109–T118. https://doi.org/10.1016/j.neuroimage.2007.03.037.
Hoeren, M., Kaller, C. P., Glauche, V., Vry, M. S., Rijntjes, M., Hamzei, F., et al. (2013). Action semantics and movement characteristics engage distinct processing streams during the observation of tool use. Experimental Brain Research, 229(2), 243–260. https://doi.org/10.1007/s00221-013-3610-5.
Hoeren, M., Kümmerer, D., Bormann, T., Beume, L., Ludwig, V. M., Vry, M. S., et al. (2014). Neural bases of imitation and pantomime in acute stroke patients: distinct streams for praxis. Brain, 137(10), 2796–2810. https://doi.org/10.1093/brain/awu203.
Ishibashi, R., Pobric, G., Saito, S., & Lambon Ralph, M. A. (2016). The neural network for tool-related cognition: An activation likelihood estimation meta-analysis of 70 neuroimaging contrasts. Cognitive Neuropsychology, 33(3–4), 241–256. https://doi.org/10.1080/02643294.2016.1188798.
Jax, S. A., & Buxbaum, L. J. (2010). Response interference between functional and structural actions linked to the same familiar object. Cognition, 115(2), 350–355. https://doi.org/10.1016/j.cognition.2010.01.004.
Johnson, S. H., & Grafton, S. T. (2003). From ‘acting on’ to ‘acting with’: The functional anatomy of object-oriented action schemata. In C. Prablanc, D. Pélisson, & Y. Rosetti (Eds.), Progress in brain research (Vol. 142, pp. 127–139). Amsterdam: Elsevier.
Johnson-Frey, S. H. (2004). The neural bases of complex tool use in humans. Trends in Cognitive Sciences, 8(2), 71–78. https://doi.org/10.1016/j.tics.2003.12.002.
Johnson-Frey, S. H., Newman-Norlund, R., & Grafton, S. T. (2004). A distributed left hemisphere network active during planning of everyday tool use skills. Cerebral Cortex, 15(6), 681–695. https://doi.org/10.1093/cercor/bhh169.
Jouen, A. L., Ellmore, T. M., Madden-Lombardi, C. J., Pallier, C., Dominey, P. F., & Ventre-Dominey, J. (2018). Beyond the word and image: II-Structural and functional connectivity of a common semantic system. NeuroImage, 166, 185–197. https://doi.org/10.1016/j.neuroimage.2017.10.039.
Kalénine, S., & Bonthoux, F. (2008). Object manipulability affects children’s and adults’ conceptual processing. Psychonomic Bulletin & Review, 15(3), 667–672.
Kalénine, S., & Buxbaum, L. J. (2016). Thematic knowledge, artifact concepts, and the left posterior temporal lobe: Where action and object semantics converge. Cortex, 82, 164–178. https://doi.org/10.1016/j.cortex.2016.06.008.
Kalénine, S., Buxbaum, L. J., & Coslett, H. B. (2010). Critical brain regions for action recognition: Lesion symptom mapping in left hemisphere stroke. Brain, 133(11), 3269–3280. https://doi.org/10.1093/brain/awq210.
Kalénine, S., Mirman, D., Middleton, E. L., & Buxbaum, L. J. (2012). Temporal dynamics of activation of thematic and functional knowledge during conceptual processing of manipulable artifacts. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(5), 1274. https://doi.org/10.1037/a0027626.
Kalénine, S., Peyrin, C., Pichat, C., Segebarth, C., Bonthoux, F., & Baciu, M. (2009). The sensory-motor specificity of taxonomic and thematic conceptual relations: A behavioral and fMRI study. NeuroImage, 44(3), 1152–1162. https://doi.org/10.1016/j.neuroimage.2008.09.043.
Kellenbach, M. L., Brett, M., & Patterson, K. (2003). Actions speak louder than functions: The importance of manipulability and action in tool representation. Journal of Cognitive Neuroscience, 15(1), 30–46. https://doi.org/10.1162/089892903321107800.
Lee, C. I., Mirman, D., & Buxbaum, L. J. (2014). Abnormal dynamics of activation of object use information in apraxia: Evidence from eyetracking. Neuropsychologia, 59(1), 13–26. https://doi.org/10.1016/j.neuropsychologia.2014.04.004.
Lewis, J. W. (2006). Cortical networks related to human use of tools. The Neuroscientist, 12(3), 211–231. https://doi.org/10.1177/1073858406288327.
Lin, E. L., & Murphy, G. L. (2001). Thematic relations in adults’ concepts. Journal of experimental psychology: General, 130(1), 3.
Mahon, B. Z., Milleville, S. C., Negri, G. A. L., Rumiati, R. I., Caramazza, A., & Martin, A. (2007). Action-related properties shape object representations in the ventral stream. Neuron, 55(3), 507–520. https://doi.org/10.1016/j.neuron.2007.07.011.
Martin, A. (2007). The representation of object concepts in the brain. Annual Review of Psychology, 58, 25–45. https://doi.org/10.1146/annurev.psych.57.102904.190143.
McRae, K., Hare, M., Elman, J. L., & Ferretti, T. (2005). A basis for generating expectancies for verbs from nouns. Memory & Cognition, 33(7), 1174–1184. https://doi.org/10.3758/BF03193221.
Meteyard, L., Cuadrado, S. R., Bahrami, B., & Vigliocco, G. (2012). Coming of age: A review of embodiment and the neuroscience of semantics. Cortex, 48(7), 788–804.
Mirman, D., & Graziano, K. M. (2012). Damage to temporo-parietal cortex decreases incidental activation of thematic relations during spoken word comprehension. Neuropsychologia, 50(8), 1990–1997. https://doi.org/10.1016/j.neuropsychologia.2012.04.024.
Mirman, D., Landrigan, J.-F., & Britt, A. E. (2017). Taxonomic and thematic semantic systems. Psychological Bulletin. https://doi.org/10.1037/bul0000092 (manuscript in press).
Myung, J.Y., Blumstein, S. E., Yee, E., Sedivy, J. C., Thompson-Schill, S. L., & Buxbaum, L. J. (2010). Impaired access to manipulation features in Apraxia: Evidence from eyetracking and semantic judgment tasks. Brain and Language, 112(2), 101–112. https://doi.org/10.1016/j.bandl.2009.12.003.
Myung, J. Y., Blumstein, S. E., & Sedivy, J. C. (2006). Playing on the typewriter, typing on the piano: Manipulation knowledge of objects. Cognition, 98(3), 223–243. https://doi.org/10.1016/j.cognition.2004.11.010.
Natraj, N., Pella, Y. M., Borghi, A. M., & Wheaton, L. A. (2015). The visual encoding of tool-object affordances. Neuroscience, 310, 512–527. https://doi.org/10.1016/j.neuroscience.2015.09.060.
Natraj, N., Poole, V., Mizelle, J. C., Flumini, A., Borghi, A. M., & Wheaton, L. A. (2013). Context and hand posture modulate the neural dynamics of tool-object perception. Neuropsychologia, 51(3), 506–519. https://doi.org/10.1016/j.neuropsychologia.2012.12.003.
Nelson, K. (1983). The derivation of concepts and categories from event representations. In E. K. Scholnick (Ed.), New trends in conceptual representation: Challenges to Piaget’s theory (pp. 129–149). Hillsdale, NJ: Erlbaum.
Oosterhof, N. N., Wiggett, A. J., Diedrichsen, J., Tipper, S. P., & Downing, P. E. (2010). Surface-based information mapping reveals crossmodal vision–action representations in human parietal and occipitotemporal cortex. Journal of Neurophysiology, 104(2), 1077–1089. https://doi.org/10.1152/jn.00326.2010.
Osiurak, F., Aubin, G., Allain, P., Jarry, C., Etcharry-Bouyx, F., Richard, I., et al. (2008). Different constraints on grip selection in brain-damaged patients: Object use versus object transport. Neuropsychologia, 46(9), 2431–2434. https://doi.org/10.1016/j.neuropsychologia.2008.03.018.
Pluciennicka, E., Coello, Y., & Kalénine, S. (2016b). Development of implicit processing of thematic and functional similarity relations during manipulable artifact object identification: Evidence from eye-tracking in the Visual World Paradigm. Cognitive Development, 38, 75–88. https://doi.org/10.1016/j.cogdev.2016.02.001.
Pluciennicka, E., Wamain, Y., Coello, Y., & Kalénine, S. (2016a). Impact of action primes on implicit processing of thematic and functional similarity relations: Evidence from eye-tracking. Psychological Research, 80(4), 566–580. https://doi.org/10.1007/s00426-015-0674-9.
Ramayya, A. G., Glasser, M. F., & Rilling, J. K. (2009). A DTI investigation of neural substrates supporting tool use. Cerebral Cortex, 20(3), 507–516. https://doi.org/10.1093/cercor/bhp141.
Randerath, J., Goldenberg, G., Spijkers, W., Li, Y., & Hermsdörfer, J. (2010). Different left brain regions are essential for grasping a tool compared with its subsequent use. Neuroimage, 53(1), 171–180. https://doi.org/10.1016/j.neuroimage.2010.06.038.
Rizzolatti, G., & Matelli, M. (2003). Two different streams form the dorsal visual system: anatomy and functions. Experimental Brain Research, 153(2), 146–157. https://doi.org/10.1007/s00221-003-1588-0.
Rossi, S., Hallett, M., Rossini, P. M., & Pascual-Leone, A. (2009). Safety, ethical considerations, and application guidelines for the use of transcranial magnetic stimulation in clinical practice and research. Clinical Neurophysiology, 120(12), 2008–2039. https://doi.org/10.1016/j.clinph.2009.08.016.
Rothi, L. J., Heilman, K. M., & Watson, R. T. (1985). Pantomime comprehension and ideomotor apraxia. Journal of Neurology, Neurosurgery & Psychiatry, 48(3), 207–210.
Rumiati, R. I., Weiss, P. H., Shallice, T., Ottoboni, G., Noth, J., Zilles, K., et al. (2004). Neural basis of pantomiming the use of visually presented objects. Neuroimage, 21(4), 1224–1231. https://doi.org/10.1016/j.neuroimage.2003.11.017.
Schubotz, R. I., Wurm, M. F., Wittmann, M. K., & von Cramon, D. Y. (2014). Objects tell us what action we can expect: Dissociating brain areas for retrieval and exploitation of action knowledge during action observation in fMRI. Frontiers in Psychology, 5, 636. https://doi.org/10.3389/fpsyg.2014.00636.
Tarhan, L. Y., Watson, C. E., & Buxbaum, L. J. (2015). Shared and distinct neuroanatomic regions critical for tool-related action production and recognition: Evidence from 131 left-hemisphere stroke patients. Journal of Cognitive Neuroscience, 27(12), 2491–2511. https://doi.org/10.1162/jocn_a_00876.
Tsagkaridis, K., Watson, C. E., Jax, S. A., & Buxbaum, L. J. (2014). The role of action representations in thematic object relations. Frontiers in Human Neuroscience, 8, 140. https://doi.org/10.3389/fnhum.2014.00140.
Tucker, M., & Ellis, R. (1998). On the relations between seen objects and components of potential actions. Journal of Experimental Psychology: Human Perception and Performance, 24(3), 830.
Tucker, M., & Ellis, R. (2001). The potentiation of grasp types during visual object categorization. Visual Cognition, 8(6), 769–800.
Tunik, E., Frey, S. H., & Grafton, S. T. (2005). Virtual lesions of the anterior intraparietal area disrupt goal-dependent online adjustments of grasp. Nature Neuroscience, 8(4), 505. https://doi.org/10.1038/nn1430.
Tunik, E., Ortigue, S., Adamovich, S. V., & Grafton, S. T. (2008). Differential recruitment of anterior intraparietal sulcus and superior parietal lobule during visually guided grasping revealed by electrical neuroimaging. Journal of Neuroscience, 28(50), 13615–13620. https://doi.org/10.1523/JNEUROSCI.3303-08.2008.
Valyear, K. F., Cavina-Pratesi, C., Stiglick, A. J., & Culham, J. C. (2007). Does tool-related fMRI activity within the intraparietal sulcus reflect the plan to grasp? Neuroimage, 36, T94–T108. https://doi.org/10.1016/j.neuroimage.2007.03.031.
Vingerhoets, G. (2014). Contribution of the posterior parietal cortex in reaching, grasping, and using objects and tools. Frontiers in Psychology, 5, 1–17. https://doi.org/10.3389/fpsyg.2014.00151.
Vingerhoets, G., Acke, F., Vandemaele, P., & Achten, E. (2009). Tool responsive regions in the posterior parietal cortex: Effect of differences in motor goal and target object during imagined transitive movements. NeuroImage, 47(4), 1832–1843. https://doi.org/10.1016/j.neuroimage.2009.05.100.
Wamain, Y., Pluciennicka, E., & Kalénine, S. (2014). Temporal dynamics of action perception: Differences on ERP evoked by object-related and non-object-related actions. Neuropsychologia, 63, 249–258. https://doi.org/10.1016/j.neuropsychologia.2014.08.034.
Wamain, Y., Pluciennicka, E., & Kalénine, S. (2015). A saw is first identified as an object used on wood: ERP evidence for temporal differences between Thematic and Functional similarity relations. Neuropsychologia, 71, 28–37. https://doi.org/10.1016/j.neuropsychologia.2015.02.034.
Warrington, E. K., & Shallice, T. (1984). Category specific semantic impairments. Brain, 107, 829–854.
Yee, E., Drucker, D. M., & Thompson-Schill, S. L. (2010). fMRI-adaptation evidence of overlapping neural representations for objects related in function or manipulation. NeuroImage, 50(2), 753–763. https://doi.org/10.1016/j.neuroimage.2009.12.036.
Ethics declarations
Ethical approval
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed consent
Informed consent was obtained from all individual participants included in the study.
Conflict of interest
The authors declare that they have no conflict of interest.
Appendix
See Table 2.
De Bellis, F., Magliacano, A., Sagliano, L. et al. Left inferior parietal and posterior temporal cortices mediate the effect of action observation on semantic processing of objects: evidence from rTMS. Psychological Research 84, 1006–1019 (2020). https://doi.org/10.1007/s00426-018-1117-1