As a result of the proliferation of technology in modern society, it is common to observe individuals listening to music on the bus while browsing the Web or playing a game. In fact, the reader can likely attest that talking on the phone or responding to text messages while glancing at an e-mail inbox or a television program is a frequent occurrence. The concurrent consumption of multiple streams of media—a behavior known as media multitasking—has become increasingly popular in everyday life. Indeed, a recent study of media use among youths indicated that between 1999 and 2009, the proportion of time spent using two or more media concurrently increased from 16 % to 29 % (Rideout, Foehr, & Roberts, 2010). Given the considerable rise in the propensity to engage in media multitasking, it is not surprising that researchers have begun to explore whether habitual media multitasking is associated with measurable changes in the cognitive processes engaged during multitasking (e.g., Alzahabi & Becker, 2013; Cain & Mitroff, 2011; Minear, Brasher, McCurdy, Lewis, & Younggren, 2013; Ophir, Nass, & Wagner, 2009). To this end, the majority of research on media multitasking has thus far adopted an individual-differences approach whereby self-reports of media multitasking are linked to performance on laboratory tasks that are thought to capture some central characteristic of media multitasking.

One characteristic of media multitasking that has been the target of recent empirical study is the continual switching of attention among several media sources (Alzahabi & Becker, 2013; Minear et al., 2013; Ophir et al., 2009). Specifically, researchers have examined how the propensity to engage in media multitasking may relate to general task-switching ability. To date, these endeavors have yielded somewhat mixed findings. For example, Ophir and colleagues found that higher self-reports of media multitasking were indicative of poorer task-switching abilities. Subsequent studies have come to different conclusions, however, with some finding that media multitasking is associated with better task-switching performance (Alzahabi & Becker, 2013), and still others failing to find any relation at all (Minear et al., 2013).

Another key characteristic of media multitasking is the presence of ongoing distraction from concurrent media streams. Consequently, researchers have attempted to explore the link between self-reported media multitasking and performance on behavioral tasks thought to index an individual’s susceptibility to distraction. Specifically, Ophir and colleagues (2009) found that higher reports of media multitasking were linked with a decreased ability to ignore salient, but irrelevant, distractors—a finding that has been conceptually replicated in subsequent work by Cain and Mitroff (2011; but see Minear et al., 2013, for possible exceptions). On the basis of these findings, these researchers have suggested that heavy media multitaskers are less able (or less likely) to employ a “top-down” processing style to deal with distracting information (Cain & Mitroff, 2011; Ophir et al., 2009).

In the work that follows, we focused on yet another key characteristic of habitual media multitasking that has thus far been unexplored: the tendency to avoid sustaining attention on any one particular source of information. We reasoned that in order to effectively engage in media multitasking, one must either engage in the continuous switching of attention among multiple sources of information or else divide one’s attention among multiple media streams simultaneously. In either case, media multitasking appears to be the antithesis of continuously sustaining attention on a single source of information. It is therefore possible that individuals who habitually engage in media multitasking may develop deficits in sustained attention, or conversely, that individuals who have difficulty with sustained attention may engage in more frequent media multitasking. As such, we posited that media multitasking is associated with a general deficit in one’s ability to sustain the focus of attention on a single task over time.

Some evidence for a negative relation between media multitasking and sustained attention has come from a recent study by Ralph, Thomson, Cheyne, and Smilek (2014), who examined the association between self-reported media multitasking and attentional functioning in everyday life. In their study, it was found that higher reports of media multitasking were predictive of higher reports of attention lapses (i.e., being absent-minded or inattentive to current events and experiences), as well as attention-related cognitive errors (such as putting milk in the pantry). In addition, Ralph and colleagues examined participants’ self-reported propensity to mind wander, finding a weak, but significant, positive relation with media multitasking. Taken together, these findings suggest that media multitasking may be associated with a deficit in one’s ability to sustain attention on particular task goals over time. Given that this association was observed via subjective reports of attention lapses, however, one might conclude only that media multitasking is associated with rather large (and noticeable) failures of sustained attention. It remains an open question whether media multitasking would be associated with general sustained-attention ability if objective measures were used. We explored this possibility in the empirical work that follows.

The present studies

Across four studies, we investigated whether the self-reported propensity to media multitask is associated with sustained-attention performance. There is no single agreed-upon “measure” of sustained attention, and in fact, the term sustained attention may be applied to a host of behaviors. As such, we chose not to simply assess the possible relation between media multitasking and a single laboratory task, but instead employed three ostensible “sustained-attention” tasks, including the metronome response task (MRT; Seli, Cheyne, & Smilek, 2013), the sustained-attention-to-response task (SART; Robertson, Manly, Andrade, Baddeley, & Yiend, 1997), and a vigilance task (here, a modified version of the SART; Carter, Russell, & Helton, 2013).

The MRT is a recently developed sustained-attention task in which participants are presented with a tone at regular intervals (roughly one per second) and instructed to respond (via buttonpress) in synchrony with the onset of each tone. To perform well on the task and minimize response variance, one must continually attend to the temporal structure of the task so as to anticipate the arrival of each tone. Variability in response times to the tones is thus taken as an indicator of sustained-attention performance, with increased response variability reflecting poorer sustained attention (e.g., Seli et al., 2014; Seli, Cheyne, & Smilek, 2013; Seli, Jonker, Cheyne, & Smilek, 2013).

The SART is a go–no-go task in which participants are instructed to respond (via buttonpress) as quickly as possible to frequent go stimuli and to refrain from responding to infrequent no-go stimuli. The primary behavioral measures in the SART include (1) failures to refrain from responding to no-go stimuli (i.e., no-go errors) and (2) response times (RTs) to go stimuli. In the SART, poorer sustained-attention performance is typically associated with increased no-go errors and speeding of RTs (e.g., Cheyne, Carriere, & Smilek, 2006; Jonker, Seli, Cheyne, & Smilek, 2013; Robertson et al., 1997; Seli, Cheyne, Barton, & Smilek, 2012; Seli, Jonker, Cheyne, & Smilek, 2013; Smilek, Carriere, & Cheyne, 2010). When attention lapses, individuals often fail to inhibit their responding, and this is typically accompanied by a speeding of RTs to go stimuli prior to making such errors.

Finally, we employed a vigilance task, which is perhaps the most well-studied form of task for indexing sustained attention (see, e.g., Giambra, 1989, 1995; J. F. Mackworth, 1964; N. H. Mackworth, 1948, 1950). Vigilance tasks are go–no-go tasks (much like the SART) designed to replicate the attentional demands of real-world situations in which human operators must monitor automated systems for rare, but critical events (such as a radar operator monitoring for the characteristic “blip” of an enemy combatant). Although vigilance tasks have taken many forms, in all such tasks, participants are instructed to respond to the presentation of infrequent go stimuli (i.e., targets) and to withhold responses to frequent no-go stimuli (i.e., nontargets). Thus, the critical difference between the SART and a vigilance task is response frequency—in the SART, participants respond to frequent distractors while withholding their response to relatively rare targets, whereas in a vigilance task, participants remain nonresponsive for the majority of trials and respond only to the occurrence of relatively rare targets (for a comparison of standard and vigilance forms of the SART, see Carter et al., 2013; McVay & Kane, 2012; McVay, Meier, Touron, & Kane, 2013). A common finding in the vigilance literature is that one’s ability to sustain attention, or remain vigilant, deteriorates as a function of time on task (see, e.g., J. F. Mackworth, 1964; N. H. Mackworth, 1948, 1950). This typically manifests in the form of decreasing response sensitivity and longer RTs to targets (see, e.g., Helton & Russell, 2012; N. H. Mackworth, 1948; McCormack, 1958; see also Hancock, 2013, for a recent review). Thus, the magnitude of the observed performance decrement can be taken as an index of sustained attention.

Given the previously documented link between media multitasking and self-reported failures of attention (Ralph, Thomson, Cheyne, & Smilek, 2014), we hypothesized that media multitasking would be associated with poor performance on the sustained-attention tasks employed here (i.e., the MRT, SART, and vigilance task). To assess habitual media multitasking behavior, participants completed the Media Use Questionnaire (Ophir et al., 2009), which assesses media use and media multitasking across a variety of different media. From responses on the Media Use Questionnaire, a media multitasking index (MMI) was calculated as per Ophir et al., which indicated the degree of media multitasking that a participant engaged in during a typical hour of media consumption. We began our investigation of media multitasking and sustained attention with the MRT in Study 1. In Study 2, we attempted to extend our findings to the SART. Studies 3a and 3b replicated findings from Studies 1 and 2 (respectively) using two large online samples from Amazon’s Mechanical Turk, and in Study 4 we implemented a vigilance task, again using a large online sample from the Mechanical Turk.

Study 1

In Study 1, we examined the relation between self-reported media multitasking and performance on the MRT. The dependent measure of interest in the MRT is response variability. As such, we hypothesized that higher reports of media multitasking would be associated with greater response variability on the MRT. Of secondary interest, we also explored whether media multitasking predicts two other correlates of sustained attention: mind wandering (indexed by responses to thought probes; Smallwood, McSpadden, & Schooler, 2007) and fidgeting behavior (Seli et al., 2014; see also Carriere, Seli, & Smilek, 2013). We included these measures because it has been shown that as sustained attention fails, reports of off-task thought increase (e.g., McVay & Kane, 2012; Seli et al., 2014; Smallwood, Beach, Schooler, & Handy, 2008), as does the amount of superfluous body movement, colloquially referred to as “fidgeting” (Seli et al., 2014).

Method

Participants

A group of 77 undergraduate students (43 male, 34 female) from the University of Waterloo participated in exchange for course credit. Three of the participants were excluded for failing to complete the Media Use Questionnaire, and one for having greater than 10 % omissions on the MRT (a standard exclusion criterion; see Seli, Cheyne, & Smilek, 2013), resulting in the inclusion of 73 participants (40 male, 33 female) for subsequent analyses.

Stimuli and procedure

First, sustained attention was indexed by performance on the MRT (Seli, Cheyne, & Smilek, 2013). In the MRT, participants are instructed to respond via buttonpress in synchrony with the presentation of an auditory tone (see Fig. 1). Participants held a computer mouse in their lap and responded to the tones via mouse buttonpresses. Each trial lasted 1,300 ms, and began with 650 ms of silence, followed by the onset of a tone that lasted 75 ms, and finally another 575 ms of silence. Participants completed 18 practice trials, followed by 900 experimental trials. Rhythmic response times (RRTs) were calculated as the relative time between the tone onset and the participant’s response, with a response made prior to tone onset yielding a negative RRT and a response made after tone onset yielding a positive RRT. The variance in RRTs was computed across a moving five-trial window, to limit the influence of outlier responses on the overall RRT variability (as per Seli, Cheyne, & Smilek, 2013).
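The RRT scoring just described can be sketched as follows. This is our reconstruction of the scheme, not the authors' code; the function and variable names are hypothetical, and the single summary score (the mean of the per-window variances) is our assumption about how the windowed values are aggregated.

```python
import numpy as np

def rrt_variability(response_times_ms, tone_onset_ms=650.0, window=5):
    """Summarize rhythmic response time (RRT) variability on the MRT.

    response_times_ms: one response time per trial, measured from trial
    onset; a response before the tone yields a negative RRT, and a
    response after the tone yields a positive RRT.
    """
    rrts = np.asarray(response_times_ms, dtype=float) - tone_onset_ms
    # Variance is computed across a moving five-trial window to limit
    # the influence of outlier responses on overall RRT variability.
    windows = np.lib.stride_tricks.sliding_window_view(rrts, window)
    per_window_var = windows.var(axis=1, ddof=1)
    # One summary score per participant: the mean per-window variance.
    return float(per_window_var.mean())
```

A participant who responds exactly at tone onset on every trial would score 0; any drift in anticipation of the tone inflates the score.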

Fig. 1

Metronome response task (MRT) trial sequence. Participants were instructed to respond in synchrony with the presentation of each tone (separated by 1,300 ms)

Following the MRT, participants completed the Media Use Questionnaire (Ophir et al., 2009). This questionnaire addresses ten groupings of activities: (1) using print media; (2) texting, instant messaging, or e-mailing; (3) using social sites; (4) using nonsocial sites; (5) talking on the phone or video chatting; (6) listening to music; (7) watching TV, movies, or YouTube; (8) playing video or online games; (9) doing homework, studying, or writing papers; and (10) face-to-face communication. For each type of activity, participants report (1) on an average day, how many hours they spend engaging in the activity, and (2) while engaging in the activity, the percentage of time that they are also doing each of the other activities listed. Responses to the latter question were selected from a drop-down menu with the options Most of the time, Some of the time, A little of the time, or Never. These responses were assigned values of 1.0, 0.67, 0.33, and 0 (respectively). MMI scores were then computed according to the formula outlined by Ophir et al., and are taken to reflect the degree of media multitasking in a typical hour of media use.
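As we understand the Ophir et al. (2009) formula, the MMI sums, across media, the total concurrency weight reported for each primary medium, with each medium's contribution weighted by its share of total media hours. A minimal sketch of that computation follows; the input format and names are hypothetical, and this is a reconstruction rather than the original scoring script.

```python
def compute_mmi(hours, concurrency):
    """Media multitasking index (MMI), following Ophir et al. (2009).

    hours: maps each primary medium to hours of use on an average day.
    concurrency: maps each primary medium to a dict of weights (0, 0.33,
    0.67, or 1.0) for every *other* medium used while engaging in it.
    """
    total_hours = sum(hours.values())
    mmi = 0.0
    for medium, h in hours.items():
        # m_i: mean number of other media used concurrently with this
        # medium, i.e., the sum of its reported concurrency weights.
        m_i = sum(concurrency[medium].values())
        mmi += m_i * h / total_hours
    return mmi
```

For example, a respondent who always watches TV while listening to music (and vice versa), with two hours of each, would score an MMI of 1.0: one additional medium in a typical hour of media use.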

We also measured responses to mind-wandering thought probes and fidgeting behavior. A total of 18 thought probes were presented pseudorandomly throughout the MRT, with one thought probe presented in each block of 50 trials. Upon presentation of each thought probe, the MRT was stopped, and participants were asked to indicate whether, just prior to the probe, they had been (a) “on task” or (b) “mind wandering.” Prior to beginning the MRT, participants were instructed on how to respond to each of the thought probes; specifically, they were instructed to report that they were “on task” if they were thinking only about things related to the task (e.g., about their performance), and to report that they were “mind wandering” if they were thinking about things unrelated to the task (e.g., about what to eat for dinner). After participants had made their response, they were presented with a screen prompting them to click the mouse to resume the MRT.

Fidgeting behavior was measured by having participants sit on a Wii Balance Board while completing the MRT. The balance board was placed on top of a flat bench approximately 18 cm from the ground (roughly chair height). Fidgeting was defined as the total amount of movement during each trial, measured using four sensors (one in each of the four feet of the Wii Balance Board) that detected vertically applied force and updated at a rate of approximately 60 Hz. Movement profiles were constructed using the same criteria outlined by Seli and colleagues (2014), such that if the sensor values from two successive readings were more than 1.96 standard deviations away from the mean of a resting noise profile (constructed at the beginning of the study session with no weight or movement on the sensors), then a movement was deemed to have occurred, and these logged movements were then summed for each trial. As was the case with RRT variance, the mean movement behavior was calculated across a moving five-trial window.
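A rough sketch of this movement-detection rule is given below. It reflects our reading of the criterion; in particular, treating a deviant reading on any of the four sensors across two successive samples as one logged movement is our assumption, and the names are hypothetical.

```python
import numpy as np

def count_movements(sensor_readings, noise_mean, noise_sd, z=1.96):
    """Count logged movements in one trial from balance-board sensors.

    sensor_readings: array of shape (samples, 4), one column per sensor
    foot. noise_mean and noise_sd describe the resting noise profile
    recorded at the start of the session (no weight or movement).
    """
    readings = np.asarray(sensor_readings, dtype=float)
    # A reading is deviant if it falls more than z (1.96) standard
    # deviations from the mean of the resting noise profile.
    deviant = np.abs(readings - noise_mean) > z * noise_sd
    # A movement is logged when two successive readings are both deviant.
    successive = deviant[:-1] & deviant[1:]
    return int(successive.any(axis=1).sum())
```

As with RRT variance, the resulting per-trial counts would then be averaged over a moving five-trial window.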

Apparatus

The MRT program was created using E-Prime 1.2 software (Psychology Software Tools Inc., Pittsburgh, PA) and run on an Acer Aspire AX1930-ES10P desktop computer. The metronome tone was presented through Bose QuietComfort 15 Noise-Cancelling Headphones, and movement data were collected using a separate program running under Python 2.6 (Python Software Foundation, www.python.org) and using Brian Peek’s WiimoteLib 1.7 (Brian Peek, http://channel9.msdn.com/coding4fun/articles/Managed-Library-for-Nintendos-Wiimote). The movement data were synchronized using time-stamp data at the start of every MRT trial. Stimuli were presented on a 19-in. ViewSonic monitor at a resolution of 1,440 by 900, and participants were seated approximately 57 cm from the display screen.

Results and discussion

Of primary interest was the relation between media multitasking and response variability. The response variance data from the MRT were highly positively skewed, and thus were normalized using a natural logarithm transformation (as per Seli, Cheyne, & Smilek, 2013), resulting in a mean transformed RRT variance of 8.15 (SD = 0.67) on the MRT. The mean MMI score was 3.94 (SD = 1.24). Critically, the Pearson correlation between scores on the MMI and transformed RRT variability revealed a significant positive correlation, r(71) = .27, p = .02 (see Fig. 2). Thus, consistent with our hypothesis, higher reports of media multitasking were associated with greater response variability on the MRT.

Fig. 2

Scatterplot of the relation between scores on the media multitasking index (MMI) and transformed rhythmic response time (RRT) variance in Study 1. The dashed line represents the best linear fit to the data

Of secondary interest were the relations between MMI scores and mind-wandering rates, and between MMI scores and fidgeting. Overall mind-wandering rates were calculated as the proportions of thought probes for which participants indicated that they had been mind wandering (M = 54.11 %, SD = 23.33). A correlational analysis failed to show a significant relation between MMI scores and overall mind-wandering rates, r(71) = .08, p = .53. The mean movement (fidgeting) data were normalized using a natural logarithm transformation (as per Seli et al., 2014), resulting in a mean transformed movement of 4.04 (SD = 0.45). The Pearson correlation between MMI scores and transformed mean movement revealed no significant association, r(71) = .09, p = .43. Finally, consistent with prior findings (Seli et al., 2014), no significant correlation between response variance and movement behavior (i.e., fidgeting) was observed, r(71) = .03, p = .83.

In summary, media multitasking predicted performance on the MRT, such that higher levels of media multitasking were associated with greater response variability. However, no significant relation was found between media multitasking and probe-caught mind wandering or superfluous body movements (i.e., fidgeting) while completing the MRT.

Study 2

In Study 2, we evaluated whether the relation between media multitasking and sustained attention observed in Study 1 would generalize to another well-studied task that has been used to index sustained attention. To this end, in Study 2 we measured sustained-attention performance using the SART (Robertson et al., 1997). The primary indices of sustained-attention failures in the SART are no-go errors and RTs to go trials. We hypothesized that higher reports of media multitasking would be associated with a greater frequency of no-go errors and faster RTs on go trials.

Method

Participants

A group of 83 undergraduate students (63 female, 20 male) from the University of Waterloo participated in exchange for course credit. One participant was removed for not completing the MMI, and six participants were removed for having greater than 10 % omissions on the SART (Seli, Cheyne, & Smilek, 2013; Seli, Jonker, Cheyne, & Smilek, 2013). This resulted in the data from 76 participants (59 female, 17 male) being submitted for subsequent analyses.

Stimuli and procedure

Sustained attention in Study 2 was indexed by performance on the SART (see Fig. 3). Each trial of the SART involves the presentation of a single digit (1–9) in the center of the screen for 250 ms, followed by a double-circle mask for 900 ms, resulting in a total trial duration of 1,150 ms. For each block of nine trials, a single digit was chosen without replacement and presented in white on a black background. Each digit’s size was randomly selected to be of font size 48, 72, 94, 100, or 120, with equal sampling of the five possible font sizes. Participants were asked to place equal emphasis on both speed and accuracy while completing the task. Furthermore, participants were instructed to make a response (via pressing the space bar) whenever the digit was not a 3 (i.e., a go digit), and to withhold their response when the digit was a 3 (i.e., the no-go digit). Following 18 practice trials (containing two no-go digits), participants completed 900 experimental trials, 100 of which were no-go trials. After completing the SART, participants completed the Media Use Questionnaire (Ophir et al., 2009) in the same fashion as in Study 1.
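The two primary SART measures can be computed from per-trial records along these lines. This is a hypothetical sketch: the trial-record format and names are our own, not part of the original task software.

```python
def score_sart(trials):
    """Summarize SART performance from per-trial records.

    trials: list of dicts with keys 'digit' (1-9), 'responded' (bool),
    and 'rt_ms' (response time in ms, or None when no response was made).
    Returns (proportion of no-go errors, mean RT on responded go trials).
    """
    nogo = [t for t in trials if t["digit"] == 3]
    go = [t for t in trials if t["digit"] != 3]
    # A no-go error is a failure to withhold the response to a 3.
    nogo_errors = sum(t["responded"] for t in nogo) / len(nogo)
    go_rts = [t["rt_ms"] for t in go if t["responded"]]
    mean_go_rt = sum(go_rts) / len(go_rts)
    return nogo_errors, mean_go_rt
```

With 900 trials of which 100 are no-go, the error proportion is based on those 100 withhold trials, and omissions on go trials are excluded from the RT average.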

Fig. 3

Example of four possible sustained-attention-to-response task (SART) trials. Participants were instructed to respond to each digit, except when that digit was a 3

Apparatus

The SART program was constructed using Python 2.6 (Python Software Foundation, www.python.org) using Pygame 1.9.1 (http://pygame.org/news.html) and run on an Apple Mini with OS X Version 10.6.6 and a 2.4-GHz Intel Core 2 Duo processor. The stimuli were presented on a 24-in. Philips 244E monitor at a resolution of 1,920 by 1,080, and participants were seated approximately 57 cm from the display screen.

Results and discussion

The mean MMI score obtained from the Media Use Questionnaire was 3.46 (SD = 1.24), and the mean proportion of no-go errors and mean RT on go trials were .49 (SD = .24) and 406.57 ms (SD = 92.40), respectively. A typical finding in the SART is that faster responding results in more no-go errors (a speed–accuracy trade-off; Seli, Cheyne, & Smilek, 2012; Seli, Jonker, Cheyne, & Smilek, 2013; Seli, Jonker, Solman, Cheyne, & Smilek, 2013). Consistent with this previous work, here we observed a significant negative correlation between no-go errors and RTs, r(74) = –.67, p < .001, indicating the presence of a speed–accuracy trade-off. Importantly, however, we found no significant correlation between scores on the MMI and no-go errors, r(74) = .03, p = .79 (Fig. 4a), nor was there a significant correlation between MMI scores and RTs, r(74) = .08, p = .47 (Fig. 4b). Given the covariation of speed and accuracy within individuals across the task, we conducted a regression analysis to control for possible speed–accuracy trade-offs, seeking to determine whether no-go errors and/or RTs uniquely predicted scores on the MMI. As can be seen in Table 1, neither no-go errors nor RTs were found to significantly predict scores on the MMI. However, when controlling for RT, the partial correlation between MMI and no-go errors increased from .03 to .12 (similarly, when controlling for errors, the partial correlation between MMI and RT increased from .08 to .14). Nonetheless, unlike in Study 1, here we found no evidence of an association between media multitasking and sustained-attention performance.
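The partial correlations reported in this paragraph follow directly from the zero-order correlations via the standard first-order partial-correlation formula. Recomputing them from the rounded correlations given in the text reproduces the reported values up to rounding (the function and variable names here are our own):

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Zero-order correlations from Study 2 (rounded to two decimals):
r_mmi_err, r_mmi_rt, r_err_rt = .03, .08, -.67

# MMI and no-go errors, controlling for RT: about .11 with these rounded
# inputs (the text reports .12 from the unrounded correlations).
mmi_err_given_rt = partial_corr(r_mmi_err, r_mmi_rt, r_err_rt)

# MMI and RT, controlling for no-go errors: about .13 (reported: .14).
mmi_rt_given_err = partial_corr(r_mmi_rt, r_mmi_err, r_err_rt)
```

Because errors and RTs correlate strongly and negatively with each other, partialling one out of the other can only inflate the small MMI correlations, which is exactly the pattern reported.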

Fig. 4

Scatterplots depicting the relation between MMI scores and no-go errors (A), as well as response times to go trials (B) on the SART. The dashed lines represent the best linear fits to the data

Table 1 Regression of no-go errors and RT predicting MMI

Studies 3a and 3b

The finding that MMI scores negatively predicted performance on one sustained-attention task (the MRT in Study 1) but not another (the SART in Study 2) was unexpected. That is, although the primary measures of sustained attention in these two tasks differ, one might imagine that if both tasks index the same general cognitive processes (the ability to “sustain attention” to a single input source), then relations between media multitasking and performance should be observed either for both tasks or for neither task. The fact that neither of these outcomes was observed is noteworthy and deserves further comment. But first, we sought to replicate the findings of both Studies 1 and 2. Accordingly, in Study 3 we gathered two large online samples and tested the hypothesis that media multitasking is associated with increased response variability on the MRT (Study 3a) but is not associated with performance on the SART (Study 3b). Furthermore, we considered the possibility that because our assessments of trait-level media multitasking and our measures of sustained attention were to be gathered online, participants might actually engage in media multitasking during the experimental session. We therefore also included a questionnaire to determine whether participants were media multitasking while completing the sustained-attention tasks. We did this not to obtain a “state”-level metric of media multitasking among individuals, but rather, to provide the opportunity to control for the potentially detrimental effects of media multitasking while participants completed the online sustained-attention tasks.

Method

Participants

In Study 3, we aimed to collect large online samples with roughly double the sample sizes of Studies 1 and 2. In Study 3a, 174 participants (94 female, 80 male) took part in an online study conducted through the Amazon Mechanical Turk, and they received $1.00 as compensation for their time. Participants with greater than 10 % omissions were removed from subsequent analyses (as per Seli, Cheyne, & Smilek, 2013), resulting in the inclusion of 146 participants (77 female, 69 male), with an age range of 18 to 67 years old (M = 37.5, SD = 13.0).

Study 3b included 152 participants (77 female, 75 male) who registered for the study via the Amazon Mechanical Turk and received $1.00 as compensation for their time. Participants with greater than 10 % omissions were removed from subsequent analyses (Seli, Cheyne, & Smilek, 2013), resulting in the inclusion of 143 participants (74 female, 69 male), with an age range of 18 to 68 years old (M = 35, SD = 12).

Stimuli and procedure

There were a few minor differences between the tasks used in this study and those used in the previous studies. In both Studies 3a and 3b, the total number of trials in each task was reduced to facilitate affordable online data collection through Amazon Mechanical Turk. As such, the participants in Study 3a completed 600 trials (and 18 practice trials). The SART used in Study 3b was similar to that of Study 2, except that participants completed 315 trials (and 18 practice trials), as per Smilek and colleagues (2010). This version of the SART included 35 no-go trials and 280 go trials. For both Studies 3a and 3b, following the sustained-attention task, participants completed our in-the-moment media-multitasking questionnaire (described below), followed by the Media Use Questionnaire used to compute MMI scores (in the same fashion as in Studies 1 and 2).

To assess in-the-moment media multitasking (since the study was conducted online), after they finished the sustained-attention task (MRT in Study 3a, SART in Study 3b), but before they completed the Media Use Questionnaire, we presented participants with a short questionnaire stating: “We are also interested in whether you were media multitasking while you completed this study. Please be honest, as your response will not affect your compensation or qualification for the study.” This allowed us to determine whether participants were media multitasking specifically during the sustained-attention task. Participants were asked to indicate whether they were engaged in any of the activities presented in a list, choosing as many as applied, by clicking a box next to each activity. The choices were: using print media (including print books, print newspapers, etc.); texting, instant messaging, or e-mailing; using social sites (e.g., Facebook, Twitter, etc., except games); using nonsocial text-oriented sites (e.g., online news, blogs) or e-books; talking on the telephone or video chatting (e.g., Skype, iPhone video chat); listening to music; watching TV and/or movies (online or offline) or YouTube; playing videogames; doing homework/studying/writing papers/other work; or other (in which case, they were asked to specify the activity). Selecting even one of these options qualified as media multitasking, since the study itself constituted a form of media consumption. Participants were also able to indicate that they did not engage in media multitasking while completing the study.

Results and discussion

Media multitasking and MRT performance (Study 3a)

In Study 3a, we observed that MMI scores (M = 2.36, SD = 1.24) were significantly positively correlated with transformed RRT variance (M = 8.23, SD = 0.75), r(144) = .21, p = .01. Age was found to significantly and negatively correlate with both MMI scores, r(144) = –.34, p < .001, and transformed RRT variability, r(144) = –.19, p = .02. When controlling for the influence of age, the partial correlation between MMI and transformed RRT variance was marginal, rp(143) = .16, p = .06 (two-tailed).

As was noted above, we asked participants to indicate whether they were media multitasking while completing the MRT. Of the 146 participants, 33 (22.6 %) reported that they engaged with some other form of media while completing our online sustained-attention task. The mean MMI score for this group of multitasking participants was 2.82 (SD = 1.03). Since the present study was not intended to address how media multitasking affects performance during the MRT, the data from participants who reported multitasking during the MRT were subsequently excluded, and the analyses above were reconducted for the remaining 113 participants. MMI scores (M = 2.23, SD = 1.27) remained significantly positively correlated with transformed RRT variance (M = 8.21, SD = 0.80), r(111) = .24, p = .01. Although age remained negatively correlated with MMI scores, r(111) = –.29, p = .01, it was only marginally correlated with transformed RRT variance, r(111) = –.16, p = .09. Importantly, after we (1) removed participants who were multitasking during the MRT and (2) controlled for age, the partial correlation between MMI and transformed RRT variance was significant, rp(110) = .21, p = .03 (see Fig. 5).

Fig. 5

Scatterplot depicting the correlation between MMI scores and transformed rhythmic response time (RRT) variance for participants who reported not multitasking while completing the MRT (Study 3a). The dashed line depicts the best linear fit to the data

Media multitasking and SART performance (Study 3b)

In Study 3b, we examined the relation between MMI scores (M = 2.28, SD = 1.39), proportions of no-go errors on the SART (M = .38, SD = .21), and RTs on SART go trials (M = 408.60 ms, SD = 89.22). The expected speed–accuracy trade-off was again observed between no-go errors and RT, r(141) = –.61, p < .001. This time, with a sample size that was almost double that of Study 2, the correlation between MMI and no-go errors bordered on significance, r(141) = .16, p = .05, although the correlation between MMI and RT remained nonsignificant, r(141) = –.11, p = .20. Age was found to significantly and negatively correlate with MMI, r(141) = –.35, p < .001, and no-go errors, r(141) = –.19, p = .02, and to positively correlate with RT, r(141) = .25, p = .003. To control for the influence of age and the speed–accuracy trade-off, a regression was conducted to determine the unique contributions of age, no-go errors, and RTs when predicting MMI scores (see Table 2). Although age continued to significantly (negatively) and uniquely predict MMI scores (consistent with the findings from Study 3a), SART no-go errors and RTs did not.

Table 2 Regression of age, no-go errors, and RT predicting MMI

As in Study 3a, in Study 3b we again asked participants to report whether they were media multitasking while completing the SART. Seventeen of the original 143 participants (approximately 12 %) reported that they were indeed engaging in another form of media while completing the SART. These participants had a mean MMI score of 2.84 (SD = 1.48). The 17 participants who reported multitasking while completing the SART were excluded, and the analyses above were reconducted for the remaining 126 participants. MMI scores (M = 2.21, SD = 1.36) were not found to correlate significantly with the proportion of no-go errors (M = .37, SD = .22), r(124) = .13, p = .16 (see Fig. 6a), nor did they correlate significantly with RT (M = 406.13 ms, SD = 87.62), r(124) = –.14, p = .11 (see Fig. 6b and Footnote 5). Furthermore, age remained significantly negatively correlated with MMI, r(124) = –.33, p < .001, and no-go errors, r(124) = –.22, p = .01, and positively correlated with RT, r(124) = .22, p = .01. A regression analysis was conducted to assess the unique contributions of age, no-go errors, and RTs in predicting MMI after the multitasking participants were removed (Table 3). No significant relations between MMI and SART performance (i.e., no-go errors and RTs) were observed, although, consistent with Study 3a, age negatively and uniquely predicted MMI scores.

Fig. 6

Scatterplots depicting the relation of MMI scores with performance in the SART (Study 3b). (A) Relation of MMI scores with proportion of no-go errors on the SART. (B) Relation between MMI scores and response times on go trials. The dashed lines represent the best linear fits to the data

Table 3 Regression of age, no-go errors, and RT predicting MMI, after removing multitasking participants

To recap, the purpose of Study 3 was to replicate the findings of Studies 1 and 2, in which we found self-reports of media multitasking to be negatively associated with performance on one sustained-attention task (the MRT, Study 1), but not another (the SART, Study 2). These findings were replicated: In Study 3a, media multitasking was significantly associated with response variability in the MRT, whereas in Study 3b, no association was apparent between MMI and either no-go errors or RTs in the SART. Given that both tasks are held to measure "sustained attention," an important question is why the MMI might be associated with performance on one task but not the other. The answer may lie in the modest correlations between the MRT and SART measures (see Seli, Jonker, Cheyne, & Smilek, 2013). Indeed, previous research has shown that response variance in the MRT correlates .31 with SART no-go errors and .29 with SART RTs. Thus, although the behavioral indices from the two tasks overlap to some degree, the tasks are largely independent in their measurements of sustained attention. It is therefore possible that scores on the MMI are associated with a task-specific component of the MRT, rather than with a general ability to sustain attention.
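
The claim of "largely independent" measures can be made concrete: squaring the cross-task correlations gives the proportion of variance that the two tasks' measures share. A quick arithmetic check (variable names are ours):

```python
# Cross-task correlations reported by Seli, Jonker, Cheyne, & Smilek (2013):
r_var_with_nogo = 0.31  # MRT response variance with SART no-go errors
r_var_with_rt = 0.29    # MRT response variance with SART RTs

shared_nogo = r_var_with_nogo ** 2  # 0.0961 -> roughly 10 % shared variance
shared_rt = r_var_with_rt ** 2      # 0.0841 -> roughly 8 % shared variance
```

With roughly 90 % of the variance in each measure unshared, a correlate of one task need not be a correlate of the other.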

Study 4

To reiterate, the purpose of the present series of studies was to investigate whether media multitasking is related to a general ability to sustain attention on a single task. Given that in Studies 1–3, media multitasking was found to predict performance on one sustained-attention task (the MRT) but not another (the SART), we examined performance on yet another sustained-attention task, to determine whether, on balance, media multitasking predicts the behavioral measures often used to index sustained attention. Thus, in Study 4 we employed perhaps the best-studied test of sustained attention: a vigilance task. Generally, an observer's ability to sustain attention, or remain vigilant, decreases over time. This vigilance decrement typically manifests as decreasing sensitivity to the critical targets and/or prolonged RTs on target detections (e.g., Helton & Russell, 2012; N. H. Mackworth, 1948; McCormack, 1958). Thus, in Study 4, in addition to examining overall performance on the vigilance task (i.e., overall sensitivity and RTs), we tested whether scores on the MMI were associated with the size of the vigilance decrement, in terms of both decreasing sensitivity and increasing RTs as a function of time on task.

Method

Participants

This study included 130 participants (81 male, 49 female) who signed up via Amazon Mechanical Turk. In appreciation for their time, the participants each received $1.00. One participant was removed from subsequent data analysis for having greater than 25 % false alarms (interpreted as a misunderstanding of the task instructions), and 20 participants were removed for indicating that they were media multitasking during the vigilance task (see Footnote 6). Accordingly, data were analyzed for the remaining 109 participants, who ranged in age from 20 to 82 years (M = 40, SD = 13).

Stimuli and procedure

The vigilance task in Study 4 had the same stimuli and trial sequence as the SART in Studies 2 and 3b (see Fig. 3). Importantly, however, in Study 4 participants were instructed to respond to an infrequent go digit (i.e., when the digit was a 3), but to withhold responses to the frequent no-go digits (i.e., the digits 1, 2, 4, 5, 6, 7, 8, and 9; Carter et al., 2013). As such, participants received a total of 810 trials, 90 of which were go trials and 720 of which were no-go trials. Trials were divided into five periods of watch, each of which lasted approximately 3 min and contained 162 trials, 18 of them go trials and 144 no-go trials. At the end of the task, participants were asked to complete the same in-the-moment media-multitasking question as in Studies 3a and 3b, followed by the MMI.
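
The trial counts described above are internally consistent; a quick arithmetic check (variable names are ours):

```python
# Structure of the Study 4 vigilance task as described above.
periods = 5
go_per_period, nogo_per_period = 18, 144

trials_per_period = go_per_period + nogo_per_period  # 162
total_go = periods * go_per_period                   # 90
total_nogo = periods * nogo_per_period               # 720
total_trials = total_go + total_nogo                 # 810

# With nine possible digits (1-9) and a single go digit (3),
# go trials make up exactly one-ninth of all trials.
assert total_go * 9 == total_trials
```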

Results and discussion

Percentages of hits and false alarms for each participant in each period of watch were used to compute A' (as per Macmillan & Creelman, 2005), which is an appropriate measure of sensitivity when data contain hit rates of 100 % and/or false alarm rates of zero. The mean A' and RT (for hits) during each period of watch are plotted in Figs. 7 and 8, respectively.
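
The text does not reproduce the A' formula, so as a concrete illustration, here is one standard formulation of the nonparametric sensitivity index; whether it matches the authors' exact computation from Macmillan and Creelman (2005) is an assumption on our part:

```python
def a_prime(h, f):
    """Nonparametric sensitivity index A' from hit rate h and
    false-alarm rate f.  Unlike d', it remains defined when
    h = 1.0 or f = 0.0, which is why it suits these data."""
    if h == f:
        return 0.5  # chance-level sensitivity
    if h > f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

a_prime(1.0, 0.0)    # 1.0: perfect sensitivity
a_prime(0.95, 0.05)  # ~0.97, in the range of the mean A' plotted in Fig. 7
```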

Fig. 7

Sensitivity (A') in each watch period, averaged across participants (Study 4). Error bars represent standard errors of the means

Fig. 8

Response times to go trials, averaged across participants for each period of watch (Study 4). Error bars represent standard errors of the means

To determine whether performance decreased as a function of time on task, A' and the correct RTs for each participant were submitted to a repeated measures analysis of variance (ANOVA) in which Period of Watch (1, 2, 3, 4, 5) was entered as a within-subjects factor. For A', Mauchly's test indicated a violation of sphericity, χ²(9) = 547, p < .001, and a Greenhouse–Geisser correction (ε = .31) was applied. As is depicted in Fig. 7, we found a significant main effect of period of watch, such that A' decreased as a function of time on task, F(1.24, 133.86) = 4.95, MSE = .01, p = .02, η_p² = .04. Similarly, for RTs, Mauchly's test indicated a violation of sphericity, χ²(9) = 98.1, p < .001, and so a Greenhouse–Geisser correction was applied (ε = .73). As is shown in Fig. 8, the ANOVA revealed a significant main effect of period of watch, such that RTs became longer as a function of time on task, F(2.92, 315.01) = 43.66, MSE = 1726.45, p < .001, η_p² = .61.
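
The corrected degrees of freedom follow from multiplying the uncorrected dfs by ε. A quick check against the values reported above (variable names are ours):

```python
# Uncorrected dfs for a one-way repeated measures ANOVA:
# numerator k - 1, denominator (k - 1) * (n - 1).
k, n = 5, 109  # five periods of watch, 109 participants

eps_a = 0.31                        # epsilon reported for A'
df1_a = eps_a * (k - 1)             # 1.24, as reported
df2_a = eps_a * (k - 1) * (n - 1)   # ~133.9, vs. the reported 133.86

eps_rt = 0.73                       # epsilon reported for RT
df1_rt = eps_rt * (k - 1)           # 2.92, as reported
df2_rt = eps_rt * (k - 1) * (n - 1) # ~315.4, vs. the reported 315.01
```

The small discrepancies in the denominator dfs reflect the rounding of ε to two decimals in the text.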

Having demonstrated a vigilance decrement in both A' and RT, we next tested whether subjective reports of media multitasking were related to overall performance on the task and/or to the size of the decrements (i.e., slopes) in both A' and RT. A weak yet significant negative correlation of MMI (M = 2.10, SD = 1.20) with overall sensitivity (A'; M = .98, SD = .04) was found, r(107) = –.19, p = .045; however, this correlation became nonsignificant when controlling for age, r(104) = –.14, p = .14. Furthermore, no association was observed between MMI and average RT (M = 515 ms, SD = 73), r(107) = .001, p = .995. When we examined the slope of A' across the five periods of watch (i.e., the change in A' over time), no relation with MMI was found, r(107) = –.05, p = .62 (see Fig. 9a). Similarly, MMI was not related to the slope of RTs, r(107) = .11, p = .26 (see Fig. 9b and Footnote 7). However, A' slopes and RT slopes were significantly and negatively correlated, r(107) = –.30, p = .001, indicating that decreased sensitivity was accompanied by a slowing of RTs. This correlation between the A' and RT slopes also indicates that the lack of association of either measure with the MMI was not due to a lack of variability (or a restriction of range) in either measure. Taken together, the results provide evidence that media multitasking is not associated with one's ability to remain vigilant over time.
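
The decrement "slopes" correlated above can be obtained per participant with an ordinary least-squares fit across the five periods of watch. A minimal Python sketch (function name and example values are hypothetical):

```python
def slope(values):
    """Least-squares slope of a measure across equally spaced
    periods of watch (period numbers 1..n serve as the x-axis)."""
    n = len(values)
    xs = range(1, n + 1)
    x_mean = (n + 1) / 2
    y_mean = sum(values) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

# A hypothetical participant whose A' declines steadily over the watch:
slope([0.99, 0.98, 0.97, 0.96, 0.95])  # ~-0.01 per period
```

A negative A' slope indexes a sensitivity decrement, and a positive RT slope indexes slowing, so the reported negative correlation between the two slopes means the two decrements tended to co-occur.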

Fig. 9

Scatterplots depicting the relation of MMI scores with performance on the vigilance task in Study 4. (A) Relation of MMI scores with the slope of A' across the five periods of watch. (B) Relation of MMI scores with the slope of response times across the five periods of watch. The dashed lines represent the best linear fits to the data

General discussion

The purpose of the work reported here was to examine the possible link between habitual engagement in media multitasking and one’s ability to sustain attention on a single task. We hypothesized that individuals who frequently divide their attention among several streams of media may exhibit difficulties in sustaining their attention on any one particular source of information (as suggested by subjective reports in Ralph et al., 2014). The results were clear: Whereas the tendency to engage in media multitasking was associated with increased response variability (i.e., poor performance) on the MRT (Studies 1 and 3a), we found no relation between media multitasking and performance on the SART, in terms of either no-go errors or RTs (Studies 2 and 3b). Similarly, using a vigilance task in Study 4 (quite possibly the most venerable of all sustained-attention tasks), we found no association between media multitasking and overall performance or the size of the vigilance decrement, in terms of either sensitivity or RTs. We therefore concluded that habitual media multitasking is not related to general sustained-attention ability. That is, the positive relation that we observed between self-reported media multitasking and response variance in the MRT seems highly specific to the paradigm and measure employed, and is likely not subserved by a global deficiency in sustained attention.

Although it was not our primary focus, perhaps one of the most interesting findings to emerge from the present work was that, in our online samples, approximately 23 % of participants in Study 3a, 12 % in Study 3b, and 16 % in Study 4 reported that they were media multitasking while they were supposed to be completing the sustained-attention tasks. Moreover, in one of our samples (Study 3a), the inclusion or exclusion of these participants influenced whether the correlation between media multitasking and the behavioral measure of interest reached statistical significance, albeit marginally. The finding that nearly one-quarter of the individuals in one of our online samples were doing something other than the instructed task was quite surprising and perhaps somewhat troubling, given the apparent prevalence of media multitasking (Rideout et al., 2010) and the increasing use of online samples in psychological research (e.g., Paolacci, Chandler, & Ipeirotis, 2010; Riva, Teruzzi, & Anolli, 2003). Asking whether participants are actually media multitasking while completing online tasks might be a useful way to identify (and perhaps exclude) those who concurrently do something other than the experimental task. Although we concluded that trait measures of media multitasking do not predict underlying deficiencies in sustained attention, in-the-moment media multitasking is likely to impair one's ability to perform the primary task.

Returning to the primary issue addressed by the present studies, one might ask why individual differences in media multitasking are associated with self-reported attention lapses (Ralph et al., 2014), but not with sustained-attention ability as measured in the laboratory. One clear hypothesis is that media multitaskers differ in how they approach tasks, rather than in their underlying ability to sustain attention to any given task. In the real world, attention failures may manifest more in heavy than in light multitaskers because heavy multitaskers may simply surround themselves with more distractions and "allow" themselves to be more distracted. This may be reflected in Ralph and colleagues' (2014) finding that media multitasking is nominally more strongly tied to the general tendency to deliberately mind wander (i.e., to "allow" one's attention to drift off task) than to the tendency to spontaneously (or unintentionally) mind wander. Furthermore, Ralph and colleagues also noted that heavy and light media multitaskers do not differ in their perceived ability to control their attention and ignore distracting information, despite heavy multitaskers' experiencing more attention failures. Indeed, these perceptions may be quite accurate, and they are supported by the present data: When required in the laboratory to maintain attention on a single task, heavy and light media multitaskers showed no compelling differences in their general ability to sustain attention on the task. On a positive note, these findings present "good news" for the ever-growing proportion of society that habitually media multitasks.