Introduction

The somatosensory system detects, identifies and recognizes sensory patterns to guide appropriate outputs. Tactile information from the fingertips is especially crucial for successfully grasping an object. Each tactile receptor encodes unique grasp events such as contact, lift, hold, lowering and release. Indeed, Johansson (1996) used microneurography to record from the median nerve and observed that specific tactile receptors signaled each unique event of a grasp. It is therefore advantageous for the central nervous system to facilitate the processing of sensory signals carrying tactile information from the hands when grasping an object is the goal. However, tactile gating complicates this apparent necessity.

What are the mechanisms responsible for tactile gating? Before any motor task, the motor plan is sent to the effector muscle groups via descending motor tracts, terminating at the alpha motor neurons. Concurrently, an efference copy of the same motor plan is sent to the primary somatosensory area of the cortex, and this mechanism is predictive in nature (Bays et al. 2006). The efference copy itself triggers the events required for tactile gating, reducing afferent signal magnitude (Chapman and Beauchamp 2006; Voss et al. 2005, 2008). Previous work in our laboratory (Colino et al. 2014) and others (Buckingham et al. 2010; Chapman et al. 1987; Milne et al. 1988; Rushton et al. 1981; Voss et al. 2008; Williams et al. 1998) reported tactile gating before and during movement. Most studies (e.g., Chapman et al. 1987; Milne et al. 1988; Williams et al. 1998) observed tactile gating in simple tasks, such as index finger abduction, but tactile gating has also been examined in a variety of tasks such as pointing (Buckingham et al. 2010), grasping (Colino et al. 2014; Juravle et al. 2011), juggling (Juravle and Spence 2011) and gait (Duysens et al. 1995; Morita et al. 1998; Staines et al. 1998). These studies commonly found that participants fail to perceive tactile events just before and during movement. Chapman (1994) argues that tactile gating has central origins for two reasons: (1) tactile gating often occurs before EMG onset, and (2) peripheral reafference does not have any effect on evoked potentials elicited by peripheral stimulation. Higher-order sensory and motor processes are also likely at work in the tactile gating observed in the literature. Indeed, the task, the velocity at which the movement is performed and the inherent sensitivity of the stimulated skin region should all influence how tactile gating manifests. But does the presence of vision modulate the tactile gating effect? Specifically, one might expect this additional sensory stream to influence the perception of simultaneous sensory events (e.g., Colavita 1974).

Indeed, humans often grasp objects when those objects are visible; we commonly use vision and touch in conjunction when using tools, grasping or manipulating objects. Visual and tactile signals ascend their respective sensory pathways and ultimately integrate in cortical multisensory areas. When participants are asked to respond to tactile stimulation, reaction times decrease when participants can see the stimulated limb, even when the tactile stimuli themselves are not visible (Tipper et al. 1998, 2001). Furthermore, Kennett et al. (2001) observed that two-point discrimination thresholds decrease when participants receive non-informative vision of the stimulated arm relative to a condition in which a neutral object is presented in the same spatial location as the tactile stimuli. Taylor-Clarke et al. (2002) likewise observed an enhanced N80 component of the sensory-evoked potential when participants had vision of the limb. These observations demonstrate the visual enhancement of touch effect. The present study addresses one key question: how does the presence of vision affect tactile gating? In particular, we hypothesize that the presence of visual feedback increases tactile sensitivity. We tested this hypothesis by asking participants to make speeded reach-to-grasp movements either with or without vision. During the task, we randomly delivered tactor pulses to specific segments of the left and right arms. Based on previous findings, we expected tactile sensitivity to decrease markedly at the right forearm (see Colino et al. 2014; Colino and Binsted 2016), while sensitivity at the other segments would remain unaffected by the presence of the motor plan. The critical question is: how does vision of the limb influence this site-specific gating?

Methods

Participants

Fourteen participants (7 females) were recruited from the local graduate and undergraduate population (median age = 22.5 years; SD = 3.8 years). All were self-reported right-handed individuals, had normal or corrected-to-normal visual acuity and reported no previous neurological conditions. Participants gave written informed consent, and the local research ethics board approved experimental procedures.

Apparatus

An Optotrak Certus (Northern Digital Inc., Waterloo, ON) tracked at 250 Hz the three-dimensional position of three infrared-emitting diodes (IREDs) affixed to the index finger, thumb and wrist of each participant’s right hand. Six custom-built tactile micromotors (tactors) were taped to the dorsal surface of the proximal phalanx of the right index finger, the dorsal surface of the proximal phalanx of the right fifth finger and the dorsal surface of the mid-forearm of both arms. Each tactor generated a stimulus consisting of a single 7.5-ms-long (onset–offset), 1 mm deformation of the skin, producing a readily detectable tap at rest (tactor dimensions: 17 mm long, 7 mm in diameter, 1 g mass, driven at 3 V DC). Participants were seated in an upright padded chair with the left arm resting on a flat grasping surface at the level of the upper abdomen (see Fig. 1). The right arm always began at the home position, 35 cm to the right of each participant’s midline, with the elbow flexed at 90°.

Fig. 1

Experiment setup. The participant sat comfortably behind the reaching surface and was instructed to reach, grasp and lift the target in response to the imperative cue (buzzer). Participants were also instructed to return to the “Home” position upon trial completion. The buzzer was placed behind the participant and was readily audible. Participants made reach-to-grasp and lift movements to one of two locations depending on where the experimenter placed the target cylinder. The participant did not observe target placement because liquid crystal goggles prevented view of the reaching surface during the pre-trial period. An Optotrak Certus camera (Northern Digital Inc., Waterloo, ON) tracked IREDs placed on the participant’s right and left forearms, right index finger and right fifth digit. The camera was 273 cm from the left edge of the reaching surface and 150 cm above it, angled downward at 29° toward the surface. The reaching surface itself sat 85 cm above the floor

Task

Trial progression matched that of previous studies conducted in our laboratory, with the exception of the visual manipulation (see Colino et al. 2014; Colino and Binsted 2016). On each trial, participants performed a speeded reaching and grasping movement to a cylindrical target object, concluding with the lifting of the object from the reaching surface. Once the grasping movement was complete, participants made a detection judgment as to whether a stimulus was felt (i.e., yes/no) and where the tactor was felt (e.g., left mid-forearm, proximal phalanx of the right index finger, proximal phalanx of the right fifth digit). Participants completed grasping trials in two sessions, one in which vision was available during the response and one in which vision was removed following the target preview.

Data collection took place inside a small sound-isolated room. Participants sat in front of the horizontal reaching surface, wearing liquid crystal display goggles (PLATO, Translucent Technologies) to occlude vision during the period between trials. All trials began with the right hand 30 cm to the right of the participant’s midline and 15 cm in front of the torso; the left hand was in the mirror-symmetric location. A computer-generated tone (2000 Hz, 300 ms duration) warned participants that a trial was imminent, and 1 s later the goggles opened. After a subsequent variable foreperiod (1000–1500 ms), the imperative cue, a piezoelectric auditory buzzer (50 ms duration), was presented. Participants reached out and grasped a PVC cylinder (2 cm in diameter, 5 cm in length) with the index finger and thumb of the right hand. During the full-vision experimental session, the goggles remained open throughout the response, while during the no vision session the goggles closed 500 ms before the auditory buzzer. The cylinder was located at one of two possible target locations that the experimenter changed randomly during the inter-trial period. The locations were 5 cm to the left or right of a position 25 cm directly anterior to the home location for the right hand. This spatial uncertainty prevented participants from predicting the target location. Movements were required to be initiated within 400 ms of the buzzer and completed in 800 ms or less. Movement initiation and completion were determined using a velocity criterion of 50 mm/s (Chua and Elliott 1993). Once a trial was successfully completed, participants made a yes/no (Y/N) judgment regarding the occurrence of a tactor stimulus. In addition, if a stimulus was detected, the participant verbally indicated where on the body the stimulus was felt (e.g., “right index finger”).
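For illustration, a minimal sketch of how such a velocity criterion can be applied to the IRED kinematic data is given below. This is a hypothetical reconstruction, not the authors’ analysis code; the 250-Hz sampling rate comes from the Apparatus section and the 50 mm/s threshold from Chua and Elliott (1993), while the function and variable names are our own.

```python
import numpy as np

def movement_onset_offset(positions_mm, fs_hz=250.0, threshold_mm_s=50.0):
    """Estimate movement onset/offset from 3D marker positions.

    positions_mm: (n_samples, 3) array of x, y, z positions in mm,
    sampled at fs_hz. Returns (onset_idx, offset_idx) sample indices,
    or (None, None) if the speed never exceeds the criterion.
    """
    # Differentiate position to get per-axis velocity (mm/s), then take
    # the resultant (tangential) speed.
    velocity = np.gradient(positions_mm, axis=0) * fs_hz
    speed = np.linalg.norm(velocity, axis=1)

    above = np.flatnonzero(speed > threshold_mm_s)
    if above.size == 0:
        return None, None

    # Onset: first sample above 50 mm/s; offset: last sample above it.
    return int(above[0]), int(above[-1])
```

Dividing the returned sample indices by the sampling rate converts onset and offset to times, from which reaction time and movement time follow.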

The tactor stimuli occurred during one of seven epochs relative to the imperative cue, from 0 ms (simultaneous with the imperative cue) to 600 ms after the imperative cue, in 100-ms steps (i.e., 0, 100, …, 600 ms). There were eight trials per epoch per tactor (i.e., 8 trials with a delay of 0 ms, 8 trials with a delay of 100 ms, and so on) in each experimental session (i.e., vision/no vision). In addition, there were 448 catch trials in each session, in which no tactile stimulus was delivered; these trials were used to assess participants’ false alarm rates. In total, each participant completed 1792 trials across two experimental sessions, with each session comprising 896 trials and lasting between 100 and 120 min. All trials within each session were presented in a randomized order, and session condition (vision/no vision) was counterbalanced across participants.

Data analysis

Due to trial-to-trial variation in reaction time, all trial data were resegmented into 100-ms time bins to achieve temporal accuracy. Specifically, to capture the time at which the stimulus was delivered relative to movement onset, we subtracted each participant’s reaction time (for each trial) from the stimulus time relative to the imperative cue (e.g., a stimulus delivered 100 ms after the cue on a trial with a 300-ms reaction time occurred at 100 − 300 = −200 ms relative to movement onset). Seven time bins were created such that they collectively spanned 299 ms before movement onset through 400 ms after movement onset, organized as follows: −299 to −200 ms, −199 to −100 ms, −99 to 0 ms, 1 to 100 ms, 101 to 200 ms, 201 to 300 ms and 301 to 400 ms.
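A sketch of this resegmentation is shown below (a hypothetical helper, with bin edges chosen so that boundary values such as −200 ms fall in the bins listed above):

```python
import numpy as np

# Bin edges spanning 299 ms before to 400 ms after movement onset,
# giving the seven 100-ms bins listed in the text.
BIN_EDGES_MS = np.array([-300, -200, -100, 0, 100, 200, 300, 400])

def rebin_stimulus_time(stim_delay_ms, reaction_time_ms):
    """Express a stimulus time (measured from the imperative cue)
    relative to movement onset and assign it to one of the seven bins.

    Returns a bin index 0-6, or None if the stimulus falls outside
    the analyzed window.
    """
    t_rel_onset = stim_delay_ms - reaction_time_ms  # e.g., 100 - 300 = -200 ms
    idx = int(np.digitize(t_rel_onset, BIN_EDGES_MS, right=True)) - 1
    return idx if 0 <= idx < len(BIN_EDGES_MS) - 1 else None

# The worked example above: a pulse 100 ms after the cue on a trial with
# a 300-ms reaction time lands in the -299 to -200 ms bin (index 0).
assert rebin_stimulus_time(100, 300) == 0
```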

Sensitivity (d′) and criterion (C) were calculated for every condition within each participant (Gescheider 1997). Sensitivity was calculated by subtracting the false alarm z-score (Z_fa) from the hit z-score (Z_h; see Gescheider 1997, p. 119). False alarm rates were pooled across all conditions and used to calculate d′. Half the sum of Z_h and Z_fa gave C. Negative C values reflect a bias toward frequent “yes” responses, whereas positive values of C reflect a bias toward frequent “no” responses (Gescheider 1997). C was chosen because its range does not depend on d′ (Gescheider 1997).
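A minimal sketch of this calculation is given below, assuming the common signal detection convention d′ = Z_h − Z_fa and C = −(Z_h + Z_fa)/2, under which negative C indicates a liberal (“yes”) bias, consistent with the interpretation above. The log-linear correction for perfect hit or false alarm rates is our assumption, not a procedure reported in the text.

```python
from scipy.stats import norm

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and criterion (C) from raw trial counts.

    A log-linear correction (add 0.5 to each cell) keeps hit and false
    alarm rates of exactly 0 or 1 z-transformable; this correction is
    an assumption on our part.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)

    z_h = norm.ppf(hit_rate)    # z-transformed hit rate
    z_fa = norm.ppf(fa_rate)    # z-transformed false alarm rate

    d_prime = z_h - z_fa                # sensitivity
    criterion = -0.5 * (z_h + z_fa)     # negative C = "yes" bias
    return d_prime, criterion

# Example: 54/56 stimulus trials detected and 4/448 catch trials yielding
# false alarms (counts are illustrative only).
d, c = dprime_and_criterion(54, 2, 4, 444)
```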

In addition to the detection variables, several movement performance variables were also monitored: reaction time, movement time, peak velocity, peak acceleration and peak grip aperture. Sensitivity data were submitted to a 2 (vision: vision, no vision) × 4 (tactor location: right index finger, right fifth digit, right forearm and left forearm) × 7 (tactor time: −299 to −200 ms, −199 to −100 ms, −99 to 0 ms, 1 to 100 ms, 101 to 200 ms, 201 to 300 ms and 301 to 400 ms) repeated measures analysis of variance (RM-ANOVA). All statistically significant interactions were decomposed using simple main effect analyses. Violations of sphericity were corrected using the Greenhouse–Geisser correction. Two-tailed t tests were used to test the largest change in sensitivity at each stimulus location showing a statistically significant simple main effect. Statistical significance was set at p < .05.
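For readers who wish to reproduce this style of analysis, a sketch using statsmodels is shown below (the file and column names are hypothetical placeholders; note that statsmodels’ AnovaRM reports uncorrected degrees of freedom, so a Greenhouse–Geisser correction would need to be applied separately, e.g., with the pingouin package).

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table: one d' value per participant x vision condition x
# tactor location x time bin (2 x 4 x 7 fully within-subjects design).
df = pd.read_csv("sensitivity_long.csv")

result = AnovaRM(
    data=df,
    depvar="dprime",        # sensitivity for the cell
    subject="participant",  # participant identifier
    within=["vision", "location", "time_bin"],
).fit()

print(result)  # F ratios, degrees of freedom and p values
```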

Results

Behavior

Velocity and acceleration profiles over time are depicted in Fig. 2. There were no statistically significant effects of condition on the movement variables.

Fig. 2

Velocity (left) and acceleration (right) profiles plotted as a function of movement completion (%). Velocity and acceleration profiles were normal, and participants behaved normally in all conditions: there were no statistically significant differences in velocity or acceleration profiles in response to tactor pulses, nor between vision and no vision

Sensory detection

The omnibus repeated measures ANOVA revealed main effects of Vision [F(1,9) = 6.03, p = .036], Stimulus location [F(1.2,11.2) = 14.07, p = .002] and Stimulus time [F(2.6,23.6) = 6.19, p = .004]. The main effect of vision indicated that full vision enhanced d′ (M = 3.23, SE = .06) compared to no vision (M = 3.09, SE = .07). There was also a statistically significant interaction between Stimulus location and Stimulus time [F(18,162) = 6.266, p < .0001], but no statistically significant interactions between Vision and Stimulus location [F(3,27) = 1.407, p = .262] or between Vision and Stimulus time [F(6,54) = 1.225, p = .308]. The three-way interaction between Vision, Stimulus location and Stimulus time failed to achieve statistical significance [F(18,162) = 1.096, p = .361]. Simple main effects analysis was subsequently performed on the statistically significant two-way interaction between Stimulus location and Stimulus time: each stimulus location was analyzed separately to determine whether sensitivity changed across stimulation times.

Simple main effects analysis at the left arm revealed a trend toward statistical significance (p = .061), but sensitivity did not reliably change across time at the left arm, replicating previous findings from our laboratory (Colino et al. 2014). Likewise, there were no changes in d′ at the right index finger (p = .241). However, sensitivity changed across stimulus times at the right fifth digit and the right forearm. There was a main effect of stimulus time at the right fifth digit [F(2.8,25.4) = 6.38, p = .003]: pairwise comparisons between the first time point and all others revealed that sensitivity decreased by 1.7 d′ units from the first stimulus time (mean d′ = 3.85) before rising steadily back toward baseline. There was also a main effect at the right forearm [F(2.8,25.7) = 9.76, p = .0001]: pairwise comparisons revealed that d′ decreased by nearly two units from the first stimulus time bin (i.e., −299 to −200 ms; mean d′ = 3.73) to the fourth stimulus bin (i.e., 1 to 100 ms; mean d′ = 1.77). Figure 3 depicts d′ observations at all four stimulus locations.

Fig. 3

Tactile sensitivity (d′) across time relative to movement onset (ms). Solid lines depict sensitivity under full vision, and dotted lines depict sensitivity under no vision; the differences between them reflect the main effect of vision, with no vision associated with decreased sensitivity. The upper left panel depicts sensitivity at the right arm under vision and no vision, and the upper right panel depicts the same at the right fifth digit. The lower left panel depicts sensitivity at the right index finger, and the lower right panel depicts sensitivity at the left arm. *p < .05; **p < .01; ***p < .001

In summary, the present study observed reduced sensitivity at the right fifth digit and right forearm, whereas no reduction was observed at the left forearm or the right index finger. The d′ reduction occurred before movement onset, and there was a moderate enhancement of sensitivity when participants had full vision of the object and reaching surface relative to no vision.

Discussion

Sensitivity

The current study examined the effect of visual availability on tactile sensitivity and observed enhanced sensitivity under full vision relative to no vision. The present study also replicates previous findings from our laboratory (Colino et al. 2014) and extends findings from others (e.g., Buckingham et al. 2010; Chapman 1994; Chapman and Beauchamp 2006; Williams et al. 1998; Morita et al. 1998; Staines et al. 1998; Voss et al. 2008). Previous investigations studied tactile gating within the context of simple finger abduction movements (e.g., Chapman et al. 1987; Milne et al. 1988; Williams et al. 1998). That task demands affect tactile gating is demonstrated by the present observations: sensitivity decreased at the right forearm and right fifth digit but did not change at the left forearm or the right index finger. Sensitivity also changed across time, decreasing as movement onset approached, with participants experiencing the largest sensitivity reduction at the right forearm. Observing tactile gating before movement onset strongly suggests the presence of predictive sensorimotor processes generated by frontal motor areas and passed to sensory areas (perhaps as an efference copy). Tactile gating therefore appears to function in a predictive manner; it is the result of movement planning, an event that clearly occurs centrally and propagates peripherally.

Task clearly has an effect on tactile gating; the present study did not observe tactile gating at the right index finger, the limb segment that contacted the target object in the present protocol. It appears that the central mechanisms associated with tactile gating can augment tactile sensitivity at a specific limb segment according to the likelihood that the segment will receive tactile information. This action is consistent with a feed-forward system that specifies the expected sensory utility throughout a movement.

However, previous studies (e.g., Williams and Chapman 2002) observed that tactile gating can occur without overt movement, which is difficult to reconcile with the present observations. Williams and Chapman (2002) passively moved each subject’s limb and still observed tactile gating; because there was no overt, voluntary movement, there could not have been a central motor command in this context and, by extension, no predictive sensorimotor planning signal. Gating can therefore be elicited without a planned movement. This observation supports a “postdictive” explanation, in which gating occurs as a result of sensory inflow in the presence of other sensory events. Indeed, Williams and Chapman proposed a backward masking effect whereby sensory information from the movement masks tactile information from processing and therefore prevents tactile events from reaching conscious perception. Present accounts of the pain gating mechanism agree with the postdictive explanation, the well-known example being the inhibitory inputs from large (Aβ) fibers to the dorsal horn of the spinal cord (Melzack and Wall 1965).

However, the present data are difficult to reconcile with a purely postdictive account of tactile gating: here, tactile gating occurred before movement onset, highlighting the strong possibility that tactile gating is largely the result of central motor planning processes. Simple movements, such as finger abduction, do not necessarily demand tactile information as a useful information source, whereas grasping is a complex movement that requires relevant sources of tactile information. In simple single-joint movements, the central nervous system would not predict that tactile information will be used later in the movement and would therefore be more likely to gate that information. By contrast, tactile gating would not occur at the specific effectors in a grasp (i.e., the fingers and thumb) because tactile information will be a relevant source of information.

Effect of vision

Indeed, humans often use vision to interact with objects, and the hands and arms are often in view. Reaction times to tactile stimulation decrease when subjects see their hand, even when the tactile stimuli themselves are not visible (Tipper et al. 1998, 2001). The present visual availability effect may be a manifestation of the visual enhancement of touch effect (Serino and Haggard 2010).

Visual enhancement of touch is not thought to be a spatial attention effect; it persists when attention is experimentally controlled. Bays et al. (2006) also concluded that attention cannot account for tactile suppression, attributing it instead to some other physiological mechanism rather than the task’s attentional demands. Juravle et al. (2011) reached a similar conclusion. Their participants performed a dual task: speeded reach and grasp movements (using a power palmar grasp) combined with speeded detection of a tactor activation on the distal phalanx of the index finger. The tactile pulse was delivered during movement preparation, before movement onset, during the movement or after the movement, and attention was manipulated by delivering the stimulation with equal likelihood to either hand or with a higher probability to one hand or the other. When participants detected a tactile pulse, they pressed down a pedal with their foot while still executing the movement, and the next trial started 2000 ms after the response. The results showed faster responses to tactile stimulation when there was a higher probability of tactile stimulation at either the moving hand or the resting hand, an effect that may be due to an attentional shift toward the hand more likely to be stimulated. However, participants’ reaction times slowed when the tactor was activated during the motor preparation phase before movement onset, indicating possible dual-task interference (i.e., a psychological refractory period; see Welford 1952). Furthermore, tactor detection thresholds did not differ between the preparation and execution phases irrespective of the probability of tactile stimulation. This absence of threshold differences indicates that preparing to move an effector does not elicit a shift in tactile perception toward that effector; rather, the effect is likely due to an inhibitory mechanism tied to the movement itself that begins before movement onset (Juravle et al. 2011). Therefore, sensory suppression due to an attention shift cannot explain the observed tactile suppression. However, the Juravle et al. study did not directly manipulate vision, nor does it provide direct evidence that visual enhancement of touch is a perceptual context effect.

However, Yamaguchi and Knight (1990) found that patients with prefrontal cortex damage demonstrated enhanced sensory-evoked potential amplitudes before movement, and prefrontal cortical damage is linked with distractibility and poor attentional capacity. How can the present results be reconciled with those of Yamaguchi and Knight (1990)? The prefrontal cortex has an overall inhibitory output to other cortical and subcortical structures (Alexander et al. 1976; Edinger et al. 1975; Yamaguchi and Knight 1990). Removing prefrontal influence on target structures would disinhibit those targets, freeing them from inhibitory input, with a concomitant attention deficit. Therefore, attention deficits would accompany unsuppressed tactile inputs rather than cause them.

Kennett et al. (2001) found that two-point discrimination thresholds decreased when participants received non-informative vision of their stimulated arm relative to a condition in which a neutral object was presented in the same spatial location as the stimulation, an observation that argues against a role for spatial attention in visual enhancement of touch. Furthermore, Taylor-Clarke et al. (2002) recorded cortical potentials and observed an enhanced N80 component of the somatosensory-evoked potential; the authors argue that these results support descending feedback from parietal areas affecting processing in primary somatosensory cortex. In a more recent study, Press et al. (2004) studied the conditions under which visual enhancement of touch occurs. They conducted four separate experiments that differed in the spatial distribution and difficulty of the tactile stimuli, measuring two-point discrimination with a staircase procedure and ensuring that visual information did not carry any informative cues regarding stimulus activation. Press and colleagues observed that when spatial discrimination difficulty was high, vision of the arm enhanced spatial discrimination, with speeded reaction times when the arm was viewed. However, participants did not commit more errors when the arm was viewed compared to viewing an object at the same spatial location, precluding the possibility that visual enhancement of touch is the result of a spatial attention effect.

The present experiment found evidence of visual enhancement of touch in a dual task. Specifically, higher d′ was observed when participants viewed the reaching surface during the reach-to-grasp compared to no vision. The present results reconcile with the data from Press and colleagues because they observed visual enhancement of touch with suprathreshold stimuli, as is presently the case. Interestingly, the present experiment observed visual enhancement of touch in a dual-task context and did so using a simple detection measure. Evidence suggests that disrupting primary somatosensory cortex activity while the stimulated arm is viewed decreases tactile acuity. Indeed, Fiorio and Haggard (2005) applied single-pulse transcranial magnetic stimulation (TMS) over primary somatosensory cortex (S1) before tactile discrimination judgments and observed reduced accuracy when the hand was viewed. However, no accuracy reduction was observed when TMS disrupted S1 while a neutral object was viewed. Additionally, no accuracy drop was observed when secondary somatosensory cortex was disrupted, and vision could not offer any information regarding the tactile stimuli because stimuli were delivered in dark conditions. The Fiorio and Haggard study offers important clues as to the mechanisms of visual enhancement of touch and a strong explanation of the present observation. A recent study identified the anterior intraparietal sulcus (aIPS) as contributing to the visual enhancement of touch effect by integrating vision and somatosensation (Konen and Haggard 2014). Konen and Haggard (2014) found that disrupting aIPS with TMS after vision of the limb abolished visual enhancement of touch, whereas applying TMS to other cortical areas had no effect. Visual enhancement of touch may therefore require a feedback loop between aIPS and primary somatosensory cortex (Konen and Haggard 2014).

Conclusion

The current study observed reduced sensitivity before movement onset and during the first moments of movement. The study also suggests that visual enhancement of touch can occur when participants are not explicitly instructed to view the limb but instead view the goal object. Mere vision of the limb may thus induce visual enhancement of touch, as observed in the present study, underscoring the putative role of the anterior intraparietal sulcus in integrating motor and visual signals, processing them and subsequently relaying signals to primary somatosensory cortex. The present study, along with the multisensory literature, underlines the interactions among somatosensation, motor processes and vision.