Introduction

Today’s students are exposed to a wide variety of media types. Given the accessibility of these media technologies, the ways in which students use media have changed dramatically. With the ubiquity of media technologies (e.g., computers, smartphones, TV), students increasingly pack greater amounts of media content (e.g., Internet searching, music, gaming) into the same amount of time. They accomplish this by using numerous media types concurrently, thereby taking part in “media multitasking.” Media multitasking is generally defined as dual tasking (doing two or more things at the same time) or task switching (rapidly alternating between different tasks) in learning contexts (Wood and Zivcakova 2015).

One of the most cited works on media multitasking was conducted by Ophir et al. (2009), who asked: “Are chronic multitaskers more attentive to irrelevant stimuli in the external environment and irrelevant representations in memory?” (p. 15583). They developed a self-report media use survey that queries participants about their use of twelve forms of media. For each of the twelve media, participants reported how often they concurrently took part in any of the other eleven forms. From these responses, a media multitasking index (MMI) was computed to discriminate between heavy and light media multitaskers at the tails of the MMI distribution: a score of one standard deviation above the mean categorizes the participant as a heavy media multitasker (HMM), while a score of one standard deviation below the mean categorizes the participant as a light media multitasker (LMM).
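Concretely, the MMI weights the number of media typically used at the same time as a given primary medium by the proportion of total media hours spent with that primary medium, summed over the twelve media. A minimal sketch of this computation is given below; the three media categories and the numeric values are purely illustrative, and the classification comment simply restates the one-standard-deviation criterion described above.

```python
def media_multitasking_index(hours, concurrent_media):
    """Compute an MMI in the manner of Ophir et al. (2009).

    hours: weekly hours of use for each primary medium.
    concurrent_media: mean number of other media typically used at the same
        time as each primary medium (derived from the survey responses).
    """
    total_hours = sum(hours.values())
    if total_hours == 0:
        return 0.0
    # Each medium's concurrency is weighted by its share of total media time.
    return sum(concurrent_media[m] * hours[m] / total_hours for m in hours)


# Illustrative values for three of the twelve media categories (hypothetical).
hours = {"tv": 10.0, "music": 14.0, "web": 20.0}
concurrent = {"tv": 1.5, "music": 2.0, "web": 2.5}
mmi = media_multitasking_index(hours, concurrent)
# Participants scoring more than one standard deviation above the sample mean
# would be classified as HMMs; more than one standard deviation below, as LMMs.
```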

While a handful of studies are beginning to report negative impacts of heavy media multitasking on laboratory-based attentional assessments that present static stimuli (Ophir et al. 2009), others are finding that heavy media multitaskers may perform better on multisensory integration tasks (Lui and Wong 2012). We therefore offer a synthesis of the literature suggesting that these inconsistencies may reflect an emphasis upon a depth-biased conceptualization of multitasking in some studies and a breadth-biased attentional style in others. A related issue for media multitasking research is the type of media used to present cognitive stimuli and to log data for the interpretation of behavioral performance.

Researchers are just beginning to investigate the impacts of media multitasking upon learning (Uncapher et al. 2017), although the multitasking phenomenon has been studied in different disciplines, from different angles, since the early 1900s (Lin et al. 2015). The phenomenon has been studied using different terms (e.g., dual task, multitasking, polychronicity, task switching, and parallel processing) and methods (e.g., lab experiments, questionnaires, diaries, observations, and interviews) (e.g., Bluedorn et al. 1999; Foerde et al. 2006; Just et al. 2001; Meyer and Kieras 1997; Monsell 2003; Rosen et al. 2013). These studies all provide insights; however, they tell stories confined to specific disciplines, resulting in limited findings and potential misinterpretations (Lin 2009; Lin et al. 2015). For instance, anthropologists have found that people in some cultures perceive time as linear or chronological, interpreting anything different from the chosen task as an interruption, while people in other cultures perceive time as more open-ended, welcoming simultaneous or frequent back-and-forth engagements (Bluedorn 2002; Hall 1959). Psychologists and neuroscientists, on the other hand, have focused on the brain, executive control, and cognitive processes, using terms such as dual task or task switching (Just et al. 2001; Meyer and Kieras 1997). These scholars were looking at similar behaviors or activities but used different terms, resulting in different findings and implications. As media usage becomes increasingly ubiquitous, learning continues to break the boundaries of time and space. We need new, cross-disciplinary methods and findings to understand media multitasking and to develop strategies for addressing it in learning and in everyday life.

In this paper, we incorporate research findings and implications from across disciplines with the intention of examining the impact of media multitasking on attention and learning in real-life contexts. We suggest the need to move from static single-stimulus presentations to dynamic multi-stimulus presentations to better assess the relationships among media multitasking, attention, and learning engagement.

Executive Control Processing of Heavy and Light Media Multitaskers

An important question for research into media multitasking is the extent to which it impacts attention and cognitive processes. Ophir and his colleagues (2009) investigated cognitive control performance that they believed to be indicative of attention allocation to static stimuli. A filter task was employed to measure the filtering of distractions. An AX-continuous performance task required participants to observe cue-probe pairs of letters and to respond “yes” when they observed the target cue-probe pair. Two- and three-back tasks were used to examine the monitoring and updating of multiple representations in working memory. Finally, a task-cued stimulus-classification task was used to assess task-set switching abilities. Although it could be hypothesized that regularly performing multiple tasks at the same time and frequently switching between tasks (or media) would allow HMMs to outperform LMMs on tasks measuring multitasking, results revealed that LMMs performed better than HMMs on all tasks. That said, the lower performance of HMMs was not global in nature; instead, it was specific to situations involving distractors. For example, on the AX-continuous performance task, participants in both groups performed equally well in the condition without distractors, whereas the HMM group had slower response times when distractors were present.
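To make the AX-continuous performance logic concrete, the sketch below generates cue-probe trials with and without task-irrelevant flanking letters. The letter sets, trial counts, and condition structure are illustrative assumptions rather than the exact parameters used by Ophir et al. (2009).

```python
import random

CUES = ["A", "B"]               # "A" is the target cue; "B" stands in for non-target cues
PROBES = ["X", "Y"]             # "X" is the target probe; "Y" stands in for non-target probes
DISTRACTORS = ["K", "M", "R"]   # task-irrelevant letters shown alongside the cue and probe


def make_trial(with_distractors):
    """Build one cue-probe trial; the correct 'yes' response is an A-cue followed by an X-probe."""
    cue, probe = random.choice(CUES), random.choice(PROBES)
    trial = {"cue": cue, "probe": probe, "is_target": cue == "A" and probe == "X"}
    if with_distractors:
        # Irrelevant letters surround the relevant stimuli and must be filtered out.
        trial["cue_flankers"] = random.sample(DISTRACTORS, 2)
        trial["probe_flankers"] = random.sample(DISTRACTORS, 2)
    return trial


block_without_distractors = [make_trial(False) for _ in range(20)]
block_with_distractors = [make_trial(True) for _ in range(20)]
# The reported pattern: both groups respond comparably on the no-distractor block,
# while HMMs show slower responses when the distractor letters are present.
```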

While the authors concluded that media multitasking may negatively impact executive control, a later study by Minear et al. (2013) failed to replicate the findings of Ophir and colleagues. Minear et al. (2013) used the same media multitasking index, along with the participants’ self-reported impulsivity and self-control, and the participants took part in measures of attention, working memory, task switching, and fluid intelligence. Findings revealed no evidence that HMMs had inferior multitasking abilities or any deficits in processing irrelevant or distracting stimuli. Interestingly, Minear et al. (2013) did find that even though there was no difference in actual performance, the HMM participants self-reported greater levels of impulsivity. Because this was a cross-sectional study, the direction of any relationship cannot be determined: participants who were less resistant to distractors might engage more frequently in media multitasking, and participants with lower attentional control might be more drawn to multitasking as a working heuristic.

Other studies have found variability in the impact of media multitasking on cognitive processes. Ralph et al. (2014) found that media multitasking was negatively related to some measures of sustained attention (e.g., the metronome response task) but not significantly related to others (e.g., the sustained-attention-to-response task). In the same study, participants in the HMM group were less resistant to distractors (Ralph et al. 2014).

What contributed to these inconsistent findings? Baumgartner et al. (2014) investigated the relationship between media multitasking and executive control in 523 early adolescents. They used self-reports and cognitive tasks with static stimulus presentations (Digit Span, Dots–Triangles task, Eriksen Flankers task) to assess three aspects of executive functioning: working memory, shifting, and inhibition. Results revealed that subjective self-reports of deficits in everyday activities were related to frequent media multitasking. Results from the cognitive tasks revealed that media multitasking was not related to performance on the Digit Span or the Dots–Triangles task, nor to the assessment of inhibition (i.e., the Eriksen Flankers task). In fact, more frequent engagement in media multitasking was related to a greater capacity for ignoring irrelevant distractions. Contrary to the findings of Ophir et al. (2009), these results suggest a potentially positive impact of media multitasking on the ability to ignore distractions.

Greater Attentional Breadth

The findings of Ophir and others may have limited ecological validity due to their emphasis upon cognitive tasks that emphasize attentional depth. Lin (2009) contended that heavy media multitaskers may not have an attentional style that emphasizes attending to the information relevant to one task at a time. Instead, HMMs may have an attentional approach with ‘greater breadth of attention,’ that is, they are inclined to pay attention to a larger scope of information instead of focusing on a particular piece of information. This could suggest enhanced performance on tasks that include some unanticipated information that is relevant to the task at hand. For example, while reading, the media multitasker may detect a ringtone more readily from a mobile phone, even though the ringtone does not carry information useful for the primary task of reading. Such ecologically valid stimulus-response paradigms may in fact be more representative of what happens in activities of daily living. Cain and Mitroff (2011) assessed breadth-biased attention using a singleton search task, in which an array of shapes (e.g., squares and circles) was presented and participants had to attend to the target shape (e.g., circle) while ignoring the non-target shapes (squares). Two experimental conditions were included: in the “Sometimes” condition, the uniquely colored item could sometimes be the target, whereas in the “Never” condition it was never the target. Performance across conditions in the HMM group revealed a lack of response modulation between these two conditions, suggesting that HMMs maintained a broader attentional scope despite the explicit task instructions.
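The contrast between the two conditions can be illustrated with a simple trial generator. The shape set, colors, display size, and trial counts below are illustrative assumptions, not the exact parameters of Cain and Mitroff (2011).

```python
import random

TARGET_SHAPE = "circle"
DISTRACTOR_SHAPES = ["square", "diamond"]
BASE_COLOR, SINGLETON_COLOR = "green", "red"   # illustrative colors


def make_singleton_trial(condition):
    """Build one singleton-search trial.

    condition: "sometimes" -> the uniquely colored item can itself be the target;
               "never"     -> the uniquely colored item is always a distractor.
    """
    items = [{"shape": random.choice(DISTRACTOR_SHAPES), "color": BASE_COLOR}
             for _ in range(7)]
    items.append({"shape": TARGET_SHAPE, "color": BASE_COLOR})
    if condition == "sometimes" and random.random() < 0.5:
        singleton_index = len(items) - 1                     # singleton is the target
    else:
        singleton_index = random.randrange(len(items) - 1)   # singleton is a distractor
    items[singleton_index]["color"] = SINGLETON_COLOR
    random.shuffle(items)
    return items


# If participants exploit the "never" instruction to filter the singleton out,
# search performance should differ between blocks; a lack of modulation in the
# HMM group is consistent with a broader attentional scope.
sometimes_block = [make_singleton_trial("sometimes") for _ in range(20)]
never_block = [make_singleton_trial("never") for _ in range(20)]
```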

In another study investigating a breadth-biased attentional style, Lui and Wong (2012) instructed participants to search for a vertical or horizontal target line among an array of red and green distractor lines of multiple orientations. Within each trial, line orientations were held constant while the colors of the target and distractor lines changed intermittently at variable frequencies. In some conditions, a tone was presented in synchrony with the color change of the target line. Participants were not explicitly informed about the meaning of the tones. Results revealed increased target detection in the presence of the tones, and this benefit correlated positively with scores on the MMI. This suggests that a breadth-biased attentional style allows HMMs to better integrate multisensory information. Questions remain as to how executive functions relevant to media multitasking can be studied to better understand human brain capacities and the extent to which human activities in a technology-saturated world may be affecting the plasticity of our brains.
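A sketch of this tone-target synchrony is given below: on tone-present trials, tones are scheduled at exactly the moments the target line changes color. The trial duration, display size, and number of color changes are illustrative assumptions rather than the parameters reported by Lui and Wong (2012).

```python
import random


def make_search_trial(tone_present, n_distractors=15, trial_duration=5.0):
    """Build one search trial: find the horizontal or vertical target line among
    tilted distractor lines whose colors change intermittently during the trial."""
    target = {
        "orientation": random.choice(["horizontal", "vertical"]),
        "color_change_times": sorted(random.uniform(0, trial_duration) for _ in range(3)),
    }
    distractors = [
        {
            "orientation": random.choice(["tilted_left", "tilted_right"]),
            "color_change_times": sorted(random.uniform(0, trial_duration) for _ in range(3)),
        }
        for _ in range(n_distractors)
    ]
    # On tone-present trials the tones coincide exactly with the target's color changes,
    # so the auditory stream carries information about the visual target.
    tone_times = target["color_change_times"] if tone_present else []
    return {"target": target, "distractors": distractors, "tone_times": tone_times}


# Alternate tone-present and tone-absent trials; better detection on tone-present
# trials that scales with MMI scores would suggest stronger multisensory integration.
trials = [make_search_trial(tone_present=bool(i % 2)) for i in range(40)]
```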

Media Multitasking in Learning Contexts

The study of media multitasking and its impact on learning is closely linked to several important learning concepts, theories, and frameworks, including attention, time on task, expertise, self-regulation, and multimedia learning, to name a few (Chase and Simon 1973; Ericsson et al. 1980; Foerde et al. 2006; Konig et al. 2005; Just et al. 2001; Lin 2013; Rothbart and Posner 2015). Foerde et al. (2006) found that learning new things depends on working memory (i.e., the system of the brain that permits the storage and processing of information needed in the execution of tasks), while learning based on habit or conditioning (e.g., driving in the same neighborhood for many years) is not as sensitive to working memory. Konig et al. (2005) found that attention, fluid intelligence (i.e., the ability to reason and to solve novel problems), and working memory were the most important predictors of multitasking performance. A study by Sanbonmatsu et al. (2013) showed that undergraduates who self-reported as high rather than low real-world multitaskers had lower working memory capacity and were more impulsive and sensation-seeking, even though they were highly confident in their ability to multitask effectively.

Rothbart and Posner (2015) discussed the brain’s attention networks and their plasticity. They speculated that certain brain circuits could be modified by exposure to new media and by a person’s constant need to switch between tasks and to deal with the interruptions inherent in media multitasking. According to Rothbart and Posner (2015), the brain may change with habitual multitasking and multimedia experience, while meditation and other techniques may moderate such effects and improve self-regulation.

Multitasking activities are directly related to the amount of time needed to develop expertise. In their seminal work on expertise in chess, Chase and Simon (1973) concluded that one would have to spend 10,000 to 50,000 h contemplating chess positions and strategies to become a chess master. Since then, many expertise researchers have reached similar conclusions and coined phrases such as “10 years of silence” (Hayes 1989) and the “ten-thousand-hour rule” (Ericsson et al. 1980; Gladwell 2008). What counts is not only the amount of time, but also “deliberate practice” in acquiring expertise (Ericsson et al. 1993) and “grit” – the perseverance and passion for long-term goals (Duckworth 2016). Multitaskers, however, usually skim the surface of the information and move on to the next stream – they pay attention, but only partially (Jackson 2008). Effortful control, the degree to which people can voluntarily control their own behavior and emotions, is related to the control of impulses and the ability to carry out long-term goals, and it reflects self-regulative abilities (Rothbart and Posner 2015).

Self-regulative abilities, as well as engaged time and learning, are, however, complicated by factors in the learning context. For instance, a study on memory and note-taking abilities in different media environments revealed significant interactions between media environments and note-taking options (Lin and Bigenho 2011). A study on instant-messaging distractions during lectures showed complex dynamics of students’ on-task and off-task activities (Schellen et al. 2017). A series of studies examining the impacts of different sound backgrounds on people’s cognitive task performance discovered interactions between age, gender, task, and environment, and showed across the studies that a silent, quiet environment might not be the best learning environment for many people (Cockerham et al. 2017). A study on virtual collaborative learning showed complex relationships between collaboration, multitasking, and problem-solving abilities (Lin et al. 2016). These studies contributed new findings and opened windows for new areas of inquiry; at the same time, they would be much strengthened by neurobiological, virtual reality, and other methods that reveal students’ natural and habitual behaviors and activities. Much needed are novel designs with innovative assessments of media multitasking that are ecologically valid and better reflect real-world activities (Lin 2009; Parsons et al. 2017a, b).

Novel Approaches to Assessing Media Multitasking

With the development of technological innovations, researchers, including neuroscientists, have begun to combine sophisticated experimental paradigms from cognitive psychology with new brain imaging techniques. Electroencephalography (EEG) and virtual reality settings may provide better, more accurate data on temporal dynamics, visual paths, cognitive states, and workload (Posner 2017). The ability of these newer techniques to measure precisely localized activity has generated renewed interest from a wider community of researchers. Scott et al. (2011) used a novel approach to study multitasking, in which participants balanced the demands of four interconnected performance-based functional tasks (i.e., cooking, financial management, medication management, and telephone communication). For them, multitasking was operationalized as the participant’s ability to plan and carry out multiple, distinct tasks within a specific timeframe in which the participant must switch between tasks. This definition of multitasking includes multiple cognitive processes that must be performed for successful execution: 1) organization and strategies related to the temporal and conditional relations among behaviors; and 2) maintenance of these relations, along with information about currently presenting environmental stimuli, goals, and sub-goals, in working memory (Burgess et al. 2006).

Evidence supporting the move to novel approaches to assessing media multitasking can be found in clinical studies that have found decreased multitasking abilities after brain injuries. Although the patients tended to perform normally on traditional executive functioning measures, their performance of everyday activities revealed marked deficits (Alderman et al. 2003). Alderman et al. (2003) concluded that the cognitive demands assessed by traditional neuropsychological assessments of multitasking (i.e., tests of executive functioning) may be different from the sorts of cognitive processes involved in performing everyday multitasking activities. Neurocognitive researchers are increasingly emphasizing the importance of ecological validity (Burgess et al. 2006; Chaytor et al. 2006; Manchester et al. 2004; Parsons et al. 2017a, b). Burgess et al. (2006) discuss neuropsychology’s adherence to outmoded conceptual and experimental frameworks that emphasize construct-driven assessments and fail to represent the actual functional capacities inherent in cognitive (e.g., executive) functions. Construct-driven measures like the Digit Span, Dots–Triangles task, and Eriksen Flankers task may be useful tools for the constrained assessment of specific cognitive constructs. However, there is a need for multitasking assessments that reflect the everyday activities captured by ecologically valid assessments of functional capacity.

Examples of ecologically valid assessments of multitasking can be found in recent virtual reality developments (Bohil et al. 2011; Parsons et al. 2017a, b). On the one hand, there have been attempts to place construct-driven assessments (e.g., Stroop and/or continuous performance tests) into simulations of real-world environments. For instance, Parsons et al. (2007) demonstrated the validity of the Virtual Classroom in a study in which performance on the construct-driven continuous performance task differentiated children with attention deficit hyperactivity disorder (ADHD) from controls on numerous measures of attention and activity. Individual differences in Virtual Classroom attention performance were associated with parent reports of ADHD symptoms (Parsons et al. 2007). These results have been replicated in a number of studies (Gilboa et al. 2015; Neguț et al. 2017; Nolin et al. 2016). Furthermore, the Virtual Classroom environment has evolved to include cognitive control measures such as the Stroop task (Parsons and Carlew 2016; Lalonde et al. 2013). Parsons and Carlew (2016) extended the Virtual Classroom assessments to persons with autism using a construct-driven Stroop task embedded in the simulated classroom. Findings supported the idea that a Virtual Classroom can be used to distinguish between pre-potent response inhibition (non-distraction condition) and resistance to distractor inhibition (distraction condition) in persons with high-functioning autism.

While these results are interesting, Parsons and colleagues (2017a, b) have recently questioned the generalization of construct-driven findings in virtual environments to predictions of real-world behaviors. They argue for a new approach to test development that starts with everyday multitasking behaviors and moves backwards to assess how a succession of actions precedes a behavior in everyday activities. One attempt at this has been the development of a multitasking assessment using the Edinburgh Virtual Errands Task (EVET). Logie et al. (2011) theorized that “everyday multitasking” consists of multiple distinct errands with sub-goals. Participants were immersed in the EVET and directed to complete tasks in a particular order. Their operationalization of everyday multitasking differs from construct-driven task-switching approaches in that multiple tasks with apparent end points are included and the time scales for tasks in the EVET are much longer (Logie et al. 2010).

The EVET comprises a four-story building with five rooms along the left and right ends of each floor. These rooms are in close proximity to a central stairwell with two sets of stairs (one right and one left) and a central elevator. Participants were instructed that they had eight minutes to complete eight errands. Three of the tasks had two stages, in which users had to gather and transport objects; for five of the tasks, only one action was required. Further, time limits were involved for the completion of two of these tasks (e.g., turn off the cinema at 5:30). One open-ended task involved folder sorting at any time during the eight-minute test period. An important aspect of the investigation was to assess the cognitive factors contributing to everyday multitasking using a virtual environment. Of particular interest here is that this approach moves beyond many depth-biased and construct-driven approaches to assessing multitasking in that it focuses on multiple cognitive functions employed in a coordinated manner. Results revealed separate multitasking components for memory, preplanning, and plan implementation. Furthermore, they found significant and distinct contributions from measures of retrospective memory, visuospatial working memory, and online planning. This approach needs replication and further development, but it supports a growing interest in approaching media multitasking from a dynamic rather than a static perspective.
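The errand structure described above can be expressed as a simple task list. In the sketch below, the errand names, locations, and all times other than the 5:30 cinema deadline and the eight-minute session limit are hypothetical placeholders rather than the actual EVET task set.

```python
# Sketch of an EVET-style errand list. Apart from the eight-minute session and the
# 5:30 cinema deadline noted above, all names, stages, and times are hypothetical.
SESSION_LIMIT_MIN = 8.0

errands = [
    # Two-stage errands: gather an object, then transport it elsewhere.
    {"name": "deliver package",  "stages": ["pick up package", "drop it at the office"],     "deadline_min": None},
    {"name": "return book",      "stages": ["collect the book", "leave it in the study room"], "deadline_min": None},
    {"name": "move keys",        "stages": ["find the keys", "bring them to reception"],     "deadline_min": None},
    # Single-action errands; two carry explicit time constraints.
    {"name": "turn off cinema",  "stages": ["press the cinema switch"], "deadline_min": 5.5},  # "at 5:30"
    {"name": "meet contact",     "stages": ["reach the meeting room"],  "deadline_min": 3.0},  # hypothetical deadline
    {"name": "close window",     "stages": ["close the window"],        "deadline_min": None},
    {"name": "collect printout", "stages": ["take the printout"],       "deadline_min": None},
    # Open-ended task that can be carried out at any point during the session.
    {"name": "sort folders",     "stages": ["sort the folders"],        "deadline_min": None, "open_ended": True},
]


def deadline_pressure(errand, elapsed_min, margin_min=0.5):
    """True when a deadline-bearing errand must be started soon to meet its time constraint;
    tracking this while juggling the other errands is part of the multitasking load."""
    return errand["deadline_min"] is not None and elapsed_min >= errand["deadline_min"] - margin_min
```

Scoring performance over such a task list can then separate what was planned in advance, how well the plan was carried out, and whether deadlines were met, mirroring the memory, preplanning, and plan-implementation components reported above.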

Other virtual reality-based Multiple Errands Tasks are beginning to emerge. For example, the Virtual Environment Grocery Store (VEGS) uses a simulated shopping scenario to measure the ways in which participants plan, organize, and accomplish a sequence of shopping errands (Parsons and Barnett 2017). At the beginning of the task, participants drop off a prescription at the pharmacy and receive a number that they are told to listen for while shopping. The VEGS also allows the assessor to make systematic modifications to a participant’s cognitive workload, which impacts goal maintenance (Parsons and McMahan 2017). These virtual reality-based multiple errands tasks offer the examiner a way to assess the heavy media multitasker in a manner that more closely reflects real-world multitasking.

We believe that these more ecologically valid assessments may help resolve the inconsistencies apparent in the literature. Instead of the depth-biased simple stimulus presentations used in many laboratory studies, more dynamic and ecologically valid assessments may reflect the breadth-biased approach of heavy media multitaskers. For example, while completing multiple tasks in a simulated shopping scenario, a media multitasker may detect their prescription number more readily over a public-address system, even though the prescription number does not carry information useful for the primary task of shopping. Such ecologically valid stimulus-response paradigms may in fact be more representative of what happens in activities of daily living.

Conclusions

Our world has become increasingly screen-saturated as digital technologies become ubiquitous and pervasive in daily life. Under these conditions, students’ daily and social lives mingle with their academic lives. Research shows that young people are using technologies to enhance learning; at the same time, research also draws attention to the negative, distracting, and disruptive side of technologies. It remains unclear to what extent and how digital technologies become vehicles for learning or sources of distraction.

In this paper, we discussed the relationships between media multitasking, attention, and learning engagement. Based on the literature, we contemplated ecologically valid ways to study these relationships from the angles of breadth-biased and depth-biased cognition. As researchers break boundaries and become more interdisciplinary, it is clear that the study and understanding of this phenomenon involve more than laboratory studies of brain capacities or executive functions; they also concern changing cultures, values, and learning environments. Consequently, the methods used to study this phenomenon are changing and need to become more ecologically valid.