Abstract
This study investigated the belief that student attention declines after the first 10 to 15 min of class by analyzing vigilance decrement in a guided inquiry physical science course. We used Tobii Glasses, a portable eye tracker, to record student gaze during class sessions. Undergraduate students (n = 17) representative of course demographics (14 female, 3 male) wore the eye tracker during 70-min classes (n = 84) or 50-min classes (n = 26). From the gaze point and fixation data, we coded participant attention as either on-task or off-task for every second of data. This analysis yielded a percentage of on-task vigilance time for each minute as well as the amount of time that participants spent looking at various locations during the class sessions. On-task vigilance percentages started at 67% at the beginning of class and rose to an average above 90% by the 7 to 9-min mark, with only minor fluctuation thereafter. Contrary to the belief that attention declines rapidly during a class, the participants' on-task spans were longer and more numerous than their off-task spans. These results support the conclusion that well-structured classes punctuated by student-student and instructor-student interactions can maintain student attention vigilance for entire class sessions, not just the first 10 min.
It is not difficult to find literature showing that student attention in classes declines after 10 to 15 or 20 min of a lecture (Macmanaway 1970; Davis 1993; Naveh-Benjamin 2002; Wankat 2002; Lucas et al. 2004; McKeachie 1986, 1999). Indeed, we have heard this repeated as truth in meetings and faculty development sessions; it seems to have become a part of higher-education teaching lore. However, others have argued that this lore is not accurate overall, that variability in student attention is often based on the instructors themselves instead of the lecture teaching format, and that many of the studies suffer from methodological flaws and subjectivity in data collection (Wilson and Korn 2007; Bradbury 2016).
Sustained attention is believed to be a critical component fundamental to learning. Theories of sustained attention, or “vigilance,” posit that voluntary attention—attention to relevant or expected events (i.e., “targets”) guided by internal goals or external instruction—is limited in capacity and cannot be sustained indefinitely (Craig 1984; Parasuraman 1986; Parasuraman et al. 1987). As the length of time required to sustain attention increases, ability to detect relevant information decreases, termed “vigilance decrement.” According to the attentional resource model of sustained attention, vigilance decrement occurs when our limited information processing resources are depleted due to high demands and cannot be replenished within the time available (Warm et al. 2008). High event rates presented at unpredictable intervals, such as seen in a classroom environment where relevant material is embedded in the continuous monolog of a lecture, are known to enhance demands on sustained attention performance and are a critical variable in vigilance decrement (Sarter et al. 2001).
Within this framework, variability in student attention during task performance will depend on the rate of individual vigilance decrement. Traditional vigilance research shows that most of the decrement typically occurs within the first 15 min of a sustained attention task but can rapidly occur within the first 5 min when task demand conditions are high (see Warm et al. 2008, for review). However, this research usually involves vigilance tasks that are more simplistic compared with a classroom lecture, with computer presentation of infrequent visual or auditory target stimuli that are embedded in a sequence of frequent non-target stimuli and have little variation in stimulus complexity or presentation context.
Previous studies of student attention within the classroom have utilized methods such as student note taking (Hartley and Davis 1978; Maddox and Hoole 1975), which provided contradictory results. Others have been done by instructors observing their students and plotting their perceived attention (Lloyd 1968; Johnstone and Percival 1976), which showed great variability among classes and instructors. Self-report studies have been conducted in which students reported their level of concentration (Stuart and Rutherford 1978) or mental workload (Young et al. 2009), again showing great variability among instructors and lecture formats. Clicker studies have asked students to self-report their level of attention at regular intervals (Bunce et al. 2010), resulting in low self-reported attention. Finally, others studied attention by administering recall tests immediately after class (McLeish 1968) to compare two different methods of course delivery, live and recorded instruction, with no difference in retention found.
One major limitation of prior research is a lack of ecological validity in measurements of student attention or vigilance. Measures based on subjective report of instructor or student self-report can be confounded with reporter biases, and the anticipation and act of self-reflection may even impact attention allocation itself. Insight into the time course of sustained attention in the classroom may instead necessitate more ecologically valid tools for measuring attention. We attempted to address some of the issues of prior research through an exploration of a possible new objective method for the scholarship of attention in teaching and learning utilizing an eye tracker. Eye fixation or gaze duration is a way to quantify vigilance during naturalistic viewing such as in a classroom, and eye tracking provides a virtually continuous measure of gaze duration (see Fig. 1 for sample data and the eye tracker used). While recording behavior with eye tracking is still a step removed from the naturalistic behavior that occurs within a classroom environment, it is a much closer approximation than previously used subjective measures.
To investigate student sustained attention and vigilance decrement in lecture classes, we designed this study to answer these questions:
- Research question 1: What percent of the time does a student maintain gaze duration on on-task stimuli and what do students focus on during class?
- Research question 2: What is the rate of vigilance decrement during an inquiry-based class?
- Research question 3: Is there a difference in gaze focus between students who wore an eye tracker for an entire semester versus students who wore the eye tracker for a limited number of classes during the semester?
We used the attentional resource model of sustained attention to track student vigilance and vigilance decrement via gaze focus and duration during classroom lectures where the task was to learn the material presented by the lecturer. We created categories of on- and off-task to code the gaze focus and coded subsequent duration of vigilance as high, medium, and low spans. We used changes in gaze focus across time to determine the presence of vigilance decrement. We also varied the number of times students wore the eye tracking glasses across a semester, with the intent to determine whether participants' eye gaze focus and duration differed with limited use compared with those who participated for entire semesters, for instance due to a heightened awareness of being observed in a research study (McCambridge et al. 2014) that may diminish over time.
Method
Context
We conducted this study at a large public university in the state of Georgia. We ran this study for five semesters in an integrated physical science course for pre-service elementary education majors called ISCI 2002. ISCI is one of two required science content courses for all elementary education majors in the state. The majority of students who take this course are female and within the traditional ages of 19–24. The number of students ranged from 24 to 72 in each section; however, the instructor did not modify instruction based upon class size.
The instructor had taught this course several times before the start of this study and did not modify his teaching in any way for the study during the five semesters. During the physics portions of the course, the instructor followed the Investigative Science Learning Environment (ISLE) format (Etkina and Van Heuvelen 2007), a guided inquiry learning system that engages students in the active construction of knowledge, mirroring the processes physicists use to acquire knowledge. The goal is for students to construct the knowledge themselves with help from the instructor, which involves significant group discussions. A significant part of this method is the use of multiple representations to aid in the problem solving and conceptual development process (Kohl et al. 2007; Rosengrant 2007; Rosengrant 2011). The first two thirds of the semester were focused on physics topics, while the remaining class sessions were split between chemistry and astronomy.
Participants
We solicited participants from all members of the classes over a five-semester period. A total of 17 students participated in this study across all semesters (see Table 1). We paid participants $10 per class if they were part of the first two semesters of the study or $250 if they participated for an entire semester during the final three semesters. We used every dataset generated by the eye tracking device worn by each participant in the study; none were deleted. None of the participants were in more than one semester of class sessions (meaning that nobody failed and retook the class with the instructor at a later time).
Participant demographics were representative of the program of study, being primarily female (69%) and Caucasian (77%). The participants' age range (20–51 years, M = 26.65) was also consistent with typical classroom demographics.
Apparatus
We used a pair of Tobii Glasses 1, purchased in 2011. This is an eye tracker that looks similar to a normal pair of eyeglasses, except it has a forward-looking video camera and microphone on one arm as well as an infrared illuminator built into each lens. The illuminator lights each pupil and, along with a camera, tracks the participant's gaze point (see Fig. 1). The glasses use a pupil-centered corneal reflection, dark pupil eye tracking technique, tracking the right eye and capturing video at a rate of 30 Hz (30 samples per second). The camera can capture video with a resolution of 640 × 480 pixels, at an angle of 56° horizontal and 40° vertical. The glasses weigh only 0.17 pounds but are connected by a wire to the recording device, which is slightly larger than a cell phone. Prior to each use, the researcher calibrated the glasses with the participant using a nine-point calibration procedure. We used Tobii Studio (version 3.2.3) to overlay the participant's gaze point onto the video and audio recording for the entire class meeting. Since the environment was dynamic, it was not possible to use any automatic coding functions found within the software; thus, all data were manually coded second by second based upon the eye tracking video output from the software. It is important to note that while recording the data, we kept all factory settings at their defaults, including the 30 Hz event detection setting.
Research Design
This study utilized a descriptive, longitudinal design. During the first two semesters, between six and eight participants (see Table 1) wore the eye tracking glasses. The original intent was to have each participant wear the Tobii glasses three times over the course of the semester, but this did not work out due to the personal issues of the participants and because one student withdrew from the course (labeled W in Table 1). During the third and fourth semesters of data collection, one student per semester wore the glasses during each 70-min class session, twice a week. The participants wore the glasses whenever class met, except for the first session and exam days. During the fifth and final semester of data collection, one student wore the glasses during each 50-min class, three times a week, except for the first class and exams. At her request, and in compliance with IRB procedures, she also did not wear the glasses during the brief weekly quizzes at the beginning of class.
This difference in number of times wearing the glasses was by design (see Table 1 for participant wear times and class length) to identify any differences between participant gaze patterns if the glasses were worn a few times or over an entire semester.
Data Coding
The primary source of data in this study was the audio and video recording of the gaze of each participant as seen through the glasses. Whatever a participant looked at and whatever sounds could be heard by the participant were recorded by the glasses (see Fig. 1). We manually coded these eye tracking videos on a second-by-second basis for on-task and off-task attention behavior categories based on what the participant looked at within the specific subcategories. Within a semester, the percentages of gaze duration are the total number of seconds a participant looked in a subcategory focus area divided by the total time of data collected for the semester.
For each 70-min class, there were 60 data points per minute, multiplied by 70 min, for a maximum of 4200 data points. For each minute of video, we averaged on- and off-task codes to determine the overall task status for that minute. For example, if a participant was on-task 60 s out of 60 s, then they had a 100% on-task minute. If they were on-task for 53 out of 60 s, then they had an on-task percentage of 88.3% (53/60 = .883) for that minute. If during that minute part of the data included down time or there was no data, then that time was removed from the 60 s and we averaged the remainder. Thus, if a participant was on-task for 42 s, off-task for 13 s, and had 5 s of no data, we coded that minute as 76.4% on-task (42/55 = .764). Since this approach was completely new and there are more ways we needed to analyze the data for specific questions, we describe the full data coding process in detail in the Appendix.
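The minute-averaging arithmetic above can be expressed in a few lines. This is an illustrative reconstruction, not the study's actual coding script; the function and label names are our own.

```python
def minute_on_task_pct(codes):
    """Average one minute of second-by-second codes into an on-task proportion.

    codes: list of 60 per-second labels: "on", "off", or None
    (None = down time / no data, removed from the denominator).
    Returns None if the entire minute is missing data.
    """
    valid = [c for c in codes if c is not None]
    if not valid:
        return None
    return sum(c == "on" for c in valid) / len(valid)

# The two worked examples from the text:
full_minute = ["on"] * 53 + ["off"] * 7
print(round(minute_on_task_pct(full_minute), 3))     # 53/60 -> 0.883

partial_minute = ["on"] * 42 + ["off"] * 13 + [None] * 5
print(round(minute_on_task_pct(partial_minute), 3))  # 42/55 -> 0.764
```

Excluding the missing seconds from the denominator, rather than counting them as off-task, keeps minutes with brief tracker dropouts comparable to fully coded minutes.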
Procedure
Prior to the start of each class, the participant arrived early enough to be properly fitted to the glasses. Once the participant put the glasses on and the system was running, a researcher checked to make sure the participant was properly wearing the glasses and that they were fully functional. The researcher then did a nine-point calibration with the participant wearing the glasses. We did this procedure with each participant each class session. We did not save and reuse any calibration settings.
We instructed participants to behave normally during class and to pay attention to that which they normally would for the duration of the class. After the class was over, the instructor went to the participant to properly turn off the glasses and remove the apparatus for storage and data download. After each lecture, we downloaded the video and made a backup copy to avoid any data loss. We did not begin to analyze any data from a class session until that specific semester was over. Thus, Fall 2011 data was not viewed or analyzed (after the download) until 2012.
Results
How Long and What Do Students Look at during Class
To answer the question, “What percent of the time does a student maintain gaze duration on on-task stimuli and what do students focus on during class?,” we followed the coding procedure described in the Appendix. Gaze duration results (see Tables 2 and 3) show that students generally spend the largest percentage of their time paying attention to the white board at the front of the class (M = 32.92%, SD = 3.66%) and the largest percentage of their attention was spent on-task (M = 88.69%, SD = 4.04%). When off-task, students generally spend the largest percentage of their time looking around the classroom (M = 3.49%, SD = 2.00%). However, it was only a small percentage of their overall gaze duration that was off-task (M = 7.43%, SD = 3.10%). Down time and no data also accounted for very small percentages of total gaze duration (M = 2.76%, SD = 1.30% and M = 1.13%, SD = 1.08%, respectively). This last data point suggests that our apparatus was highly reliable.
Students’ Sustained Attention During Class Meetings
Answering the question "What is the rate of vigilance decrement during an inquiry-based class?" was an opportunity to see variations in on-task percentages over the span of class sessions. During the first semester of the study, percentage of time on-task started at 67%, rose to nearly 100% at the 8-min mark, and stayed above 80% (typically above 90%) for the remainder of the class meeting (see Fig. 2).
The second semester of the study showed similar patterns: on-task percentages were slightly lower at the start of class meetings but increased shortly afterwards (see Fig. 3).
During the third through fifth semesters of the study, when there was only one participant per semester, the pattern of on-task time was not entirely consistent with the first two semesters. Although these three participants showed high on-task percentages for the duration of these class meetings (generally above 85% at all times), their data fluctuated more over the course of a session than in the first two semesters, and they did not show the same initial vigilance curve starting around 65% and climbing to 95 or 100% after 9 or 10 min into the class meeting (see Fig. 4). Comparing on-task time across these semesters in a between-subjects ANOVA revealed a main effect of semester (independent variable) on on-task time (dependent measure), F(2, 187) = 63.40, p < .001. On-task time in the Spring 2013 (M = .93, SD = .03) and Spring 2015 (M = .92, SD = .03) semesters did not differ significantly, p = .740, while both were significantly lower than Spring 2014 (M = .97, SD = .02), ps < .001. When we average all five semesters of data into one timeline, we see the initial vigilance decrement within the first 6 min but a relatively constant rate of on-task time between 90 and 95% afterwards (see Fig. 5).
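For readers unfamiliar with the between-subjects comparison used here, a one-way ANOVA F statistic can be sketched in pure Python. The per-minute proportions below are synthetic placeholders (the real analysis had df = 2, 187); only the F computation itself is standard.

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic for a list of independent sample lists."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k = len(groups)           # number of groups
    n = len(all_vals)         # total observations
    # Between-groups sum of squares: weighted deviations of group means
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: deviations from each group's own mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Synthetic per-minute on-task proportions standing in for three semesters:
s2013 = [0.93, 0.92, 0.94, 0.91]
s2014 = [0.97, 0.96, 0.98, 0.97]
s2015 = [0.92, 0.91, 0.93, 0.92]
F = one_way_anova_F([s2013, s2014, s2015])
print(F)  # large F here because the middle group's mean is clearly higher
```

In practice one would use a statistics package (e.g., `scipy.stats.f_oneway`) rather than hand-rolled code; the explicit version just makes the variance partitioning visible.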
In addition to analyzing average overall vigilance for entire class sessions, we analyzed discrete on-task vigilance spans (see Appendix for complete data coding methodology). Table 4 shows, for each vigilance level (high, medium, low) and each semester, the average length of time spent in each level of vigilance (average time span), the total length of time spent in each level (total number of minutes), the number of times they maintained each level (number of durations), the duration of the longest time span held at each level (longest span), and the number of time spans held at each level that were greater than 5 min (number of spans > 5 min). Across all five semesters, students’ average time spans, total number of minutes, number of durations, longest spans, and number of spans > 5 min were all largest for high on-task vigilance spans (on-task focus 90 to 100% of the coded minute) compared with medium and low on-task spans. The last two columns, longest span and number of spans > 5 min, are arguably the most important. The longest span in the high categories varies from 32 to 67 min, whereas low-level spans never exceed a 6-min duration (which only happened 3 times during 5 semesters). Summing up the total number of spans greater than 5 min across all five semesters, the high level contains 264 spans compared with 3 medium and 3 low spans.
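The span analysis amounts to run-length encoding of the minute-by-minute series. The sketch below illustrates the idea; the high band (on-task focus 90 to 100% of the coded minute) comes from the text, but the medium and low cutoffs here are our own illustrative assumptions, not the study's actual thresholds (which are in their Appendix).

```python
def vigilance_level(pct):
    """Map a minute's on-task proportion to a vigilance band."""
    if pct >= 0.90:
        return "high"        # band stated in the text
    if pct >= 0.70:          # assumed medium cutoff (illustrative only)
        return "medium"
    return "low"

def spans(minute_pcts):
    """Collapse a minute-by-minute series into (level, length-in-minutes) runs."""
    runs = []
    for pct in minute_pcts:
        level = vigilance_level(pct)
        if runs and runs[-1][0] == level:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([level, 1])   # start a new run
    return [(lvl, n) for lvl, n in runs]

series = [0.95, 0.92, 0.97, 0.80, 0.75, 0.96, 0.60, 0.93]
print(spans(series))
# [('high', 3), ('medium', 2), ('high', 1), ('low', 1), ('high', 1)]
```

From such runs, the quantities in Table 4 (longest span, number of spans over 5 min, total minutes per level) follow by simple filtering and aggregation.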
Comparison Between Participants Who Wore Eye Tracker Frequently and Occasionally
To determine if there was a difference in gaze focus patterns between the first two semesters of the study, in which participants wore the eye tracker for only one class session each, and the final three semesters of the study, in which each participant wore the eye tracker for the entire semester, we used the percentages from Tables 2 and 3 to rank order the subcategories on which the students focused for each group. We then performed a Wilcoxon signed-rank test for paired samples because the data were not normally distributed (Shapiro-Wilk p < .001). Results indicated that there was no significant difference between the attention priorities (dependent variable) of participants in the first two semesters of the study (mean rank = 5.00) and students in the final three semesters of the study (mean rank = 6.38), Z = − 1.60, p = .123.
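To make the rank-based comparison concrete, the Wilcoxon signed-rank statistic can be computed explicitly. The paired values below are invented placeholders, not the study's percentages; a real analysis would use a statistics package (e.g., `scipy.stats.wilcoxon`), which also supplies the p value.

```python
def wilcoxon_W(x, y):
    """Wilcoxon signed-rank statistic W for paired samples.

    Zero differences are dropped and tied absolute differences receive
    average ranks, per the standard procedure. Returns min(W+, W-).
    """
    diffs = [b - a for a, b in zip(x, y) if b - a != 0]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a block of tied absolute differences
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical paired rank-order data for two groups of subcategories:
print(wilcoxon_W([10, 8, 6, 4], [12, 7, 9, 5]))  # -> 1.5
```

Integer inputs are used in the demo to keep tie detection exact; with floating-point data, tied differences should be compared with a tolerance.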
Discussion
We conducted this study to better understand both participant sustained attention patterns and the use of eye trackers as a tool in the scholarship of teaching and learning. In this study, the instructor created a guided inquiry environment that focused on lecture and student participation as a way of building knowledge. Contrary to the attentional resource model of sustained attention, which would predict a vigilance decrement during the lectures, Table 2 shows that participants demonstrated on-task focus nearly 89% of the time. We believe that the high levels of on-task sustained attention in these courses support the conclusion that it is possible to maintain students' attention throughout an entire class session. In these particular courses, there is interactivity between students and the instructor via the guided inquiry format.
Not only is sustained attention consistent after the beginning of class, it also has a high on-task percentage. Focusing specifically on participant vigilance, we recorded multiple individual participants exhibiting high-level, on-task vigilance, with spans of up to 67 min in length (Table 4). While individual mean on-task vigilance spans ranged from a low of 4.49 min to a high of 11.19 min (Table 4), breaks in high-level, on-task sustained attention were brief. Even when students went off-task during a lesson, they did so for short periods of time and infrequently. Individual participants spent an average of 78.53% of class time on high-level vigilance behaviors, 14.62% on medium-level vigilance, and 6.86% on low-level on-task vigilance.
The noticeable vigilance decrements were typically in the first 5 min of class. One possible explanation is that the calibration procedure was a distraction because this dip went away in later classes. Another is that the participants were still getting situated while mentally preparing themselves for class. Since the beginning of class was typically a short review of the previous lecture’s material, they may have not paid attention because they felt that they already knew the material, thus nothing new was happening in the class.
Aside from vigilance questions, we were also interested in what participants would focus their attention on during class. Table 2 shows the coded categories of attention during classes. Participants attended to the board (which included PowerPoint projections) more than any other category. In four of the five semesters, the board was the object in the classroom on which students spent the most time focusing during class. The overall averages show that the professor and notes received about the same amount of gaze duration. However, the individual semesters show distinct differences between the categories. For example, the participant in the Spring 2013 semester focused her gaze on the professor 36% of the time and on her notes only about 13% of the time. In the Spring 2014 semester, the numbers were nearly reversed, with the participant focusing on the instructor only 17% of the time but spending just over 35% of the time on her notes. The participant in the Spring 2015 semester appears to be a hybrid of the other two. Overall, these three categories comprise just under 81% of where participants focus their gaze during a class. Fourth was classmate discussion while answering questions, at just above 5%. All of these values were higher than any off-task category we coded.
The off-task sustained attention categories showed less variance than the on-task categories. Off-task computer activity (students looking at non-course websites such as Facebook on their own computers, or looking at what was displayed on the screens of their classmates' laptops) received a larger share of gaze focus from students in the first two semesters than from students in the later semesters. This could be explained by the room layout. In the first two semesters, the students were situated in a tiered, semi-circular room (meaning that one must walk up steps to reach the back of the room), which allowed students in the back of the room to see everyone in front of them. The three later semester courses, however, were held in a traditional single-level classroom. Furthermore, the students in the first two semesters of the study were seated in the front row, so no students with computers sat between them and the front of the room.
We believe that the Wilcoxon signed-rank test supports our conclusion that there were no statistically significant differences in sustained attention due to wearing the eye tracking glasses for varying times across a semester. There was no significant difference between the attention patterns exhibited by the participants in the first two semesters, when there were multiple participants each semester, and the participants in the final three semesters, where there was only one participant each semester.
The current study does not support the prediction from the attentional resource model of sustained attention that students suffer from vigilance decrements during a classroom lecture. Vigilance decrements are predicted to occur when students experience a depletion in their limited processing resources due to high demands, such as high event rates presented at unpredictable intervals, and cannot rapidly replenish their resources. We believe that the guided inquiry format of the classes in the current study was effective in allowing students to maintain vigilance. To support this, Wilson and Korn (2007) have stated that the research does not support attention declines after the first 10 to 15 min, especially when visual aids and active learning techniques are used.
This study also supports this conclusion while employing eye tracking in a way not previously used. Previous work shows that perceived attention varies greatly by course and instructor (Lloyd 1968; Johnstone and Percival 1976). Our work expands on that by showing how individual participants also vary their attention greatly in the same type of course with the same instructor. Our overall results show that participants can and do maintain sustained attention in classroom settings.
Limitations
There are two categories of limitations: those with the eye tracker and those of the study. With regard to limitations in the equipment itself, the Tobii glasses can only record data for 70 min, an issue during four of the five semesters when the lectures were 75 min long. Additionally, at extreme angles, the dot indicating exact point of gaze is lost, making it impossible to determine attention category, and we reported "no data" in these situations. This is shown in Table 3 and averaged 1.13% of the time; thus, we do not feel that this was a major limitation to the study. Another equipment limitation was its cost, which allowed only one participant per class. Having only one participant per class does not allow for comparisons of exact teaching moments. Though there is an extensive number of hours of data collected in the same class over different semesters in which the same material was taught, each semester is different. These small sample sizes limited the power of our statistical tests.
The last equipment limitation, which is also partly a study limitation, is that the eye tracker only shows us the point in space at which the participant is focusing their gaze. However, gaze focus does not always equate to sustained attention. Without observing a corresponding action, such as taking notes about the class, we had to assume on- or off-task attention. A participant could be looking at the instructor but thinking about their plans for the weekend ahead. In this study, such attention would be coded as on-task when in reality the participant was not paying attention. Conversely, we used gaze focus as a reflection of overt sustained attention to objects; however, attention may also be allocated covertly, with shifts in attentional focus occurring without a change of fixation via eye movements. Thus, it was also possible that gaze focus coded as off-task included participant on-task attention without corresponding gaze focus.
A limitation of the design was the lack of a comparison group, which limited the experimental design possibilities. We were unable to establish a comparison group because no instructor teaching with traditional methods was willing to be recorded in this manner for comparison with the instructional methods used with the participants in this study.
Even with these study limitations, this work makes a contribution by establishing eye tracking as a method of analyzing in-class attention and as a tool for improving teaching and learning, both in how the study is conducted and as a baseline of comparison for others. Though the class was taught using research-based practices, best teaching practices such as active student engagement extend beyond the physical science classroom into all disciplines.
Future Research
This study compiles nearly 120 h of footage capturing the in-class attention of participants with varying backgrounds, and it is the foundation for future work. We would like to conduct future studies comparing different methods of instruction, different designs of learning materials, and different instructors, and their effects on attention and learning, much like Bradbury (2016) suggested in his work. Furthermore, would results be different with the same instructor if it were a graduate-level course, or a course focused on teaching pedagogy rather than on science content? Finally, this methodology could be applied to a similar study in a laboratory setting, or in more classrooms with differing modes of instruction. Studying gaze focus and durations of sustained attention could also contribute to the study of effective design of instructional materials, such as slides, notes, or other visual aids. We believe this method could also be used to inform the types and timing of student-student and instructor-student interactions during various types of instruction.
There are also two areas in this study that need empirical validation. First, we mentioned the idea of "zoning out." Though this concept, which appeared infrequently in the data, has high face validity, it needs empirical validation. Second, the high, medium, and low vigilance levels shown in Table 4 also need to be validated.
Conclusion
We believe that this study provides answers to our first two questions by suggesting that student attention during lecture classes can be sustained for class durations of up to 70 min if the instructor uses a variety of research-proven instructional techniques which engage the learner, such as guided inquiry strategies. Students spend more time on-task than off-task. They focus mostly on work and presentations on the board, followed by the instructor and their own notes. Furthermore, not only were there more intervals during which students were highly on-task, but those intervals were also longer.
By using an eye tracker, we significantly reduced the subjectivity and variability of previous studies of student attention. This methodology and its coding schemes can also be used in many other scenarios. We also believe that this method can be used for a single-participant case study in a course or with multiple participants throughout a course, which answers our final research question. This work provides a new method to study student attention in a classroom setting, helping to advance the scholarship of teaching and learning.
References
Bradbury, N. (2016). Attention span during lectures: 8 seconds, 10 minutes, or more? Advances in Physiology Education, 40(4), 509–513. https://doi.org/10.1152/advan.00109.2016.
Bunce, D. M., Flens, E. A., & Neiles, K. Y. (2010). How long can students pay attention in a class? A study of student attention decline using clickers. Journal of Chemical Education, 87(12), 1438–1443. https://doi.org/10.1021/ed100409p.
Craig, A. (1984). Human engineering: the control of vigilance. In J. S. Warm (Ed.), Sustained attention in human performance (pp. 247–291). Chichester: Wiley. ISBN 978-0471103226.
Davis, B. (1993). Tools for teaching. Jossey-Bass. ISBN 9780787965679.
Etkina, E., & Van Heuvelen, A. (2007). Investigative science learning environment—a science process approach to learning physics. Research-based reform of university physics, 1(1), 1–48.
Hartley, J., & Davis, J. (1978). Note taking: a critical review. Programmed Learning and Educational Technology, 15(3), 207–224. https://doi.org/10.1080/0033039780150305.
Johnstone, A., & Percival, F. (1976). Attention breaks in lectures. Education in Chemistry, 13(2), 49–50.
Kohl, P., Rosengrant, D., & Finkelstein, N. (2007). Comparing explicit and implicit teaching of multiple representation use in physics problem solving. In AIP Conference Proceedings (Vol. 883, No. 1, pp. 145–148). College Park: American Institute of Physics.
Lloyd, D. (1968). A concept of improvement of learning responses in the taught lesson. Visual Education, 23–25.
Lucas, S. G., Bernstein, D. A., & Goss-Lucas, S. (2004). Teaching psychology: a step by step guide. Psychology Press. ISBN 978-1138790346.
Macmanaway, L. (1970). Teaching methods in higher education, innovation and research. Universities Quarterly, 24(3), 321–329. https://doi.org/10.1111/j.1468-2273.1970.tb00346.x.
Maddox, H., & Hoole, E. (1975). Performance decrement in the lecture. Educational Review, 28(1), 17–30. https://doi.org/10.1080/0013191750280102.
McCambridge, J., Witton, J., & Elbourne, D. (2014). Systematic review of the Hawthorne effect: new concepts are needed to study research participation effects. Journal of Clinical Epidemiology, 67(3), 267–277. https://doi.org/10.1016/j.jclinepi.2013.08.015.
McKeachie, W. (1986). Teaching tips: a guidebook for beginning college teachers (8th ed.). Heath. https://doi.org/10.1016/0307-4412(88)90090-8.
McKeachie, W. (1999). Teaching tips: strategies, research, and theory for college and university teachers (10th ed.). Heath. https://doi.org/10.2307/328598.
McLeish, J. (1968). The lecture method. Cambridge: Cambridge Institute of Education.
Naveh-Benjamin, M. (2002). The effects of divided attention on encoding processes: underlying mechanisms. In Perspectives on human memory and cognitive aging: essays in honor of Fergus Craik (pp. 193–207). ISBN 9781134949694.
Parasuraman, R. (1986). Vigilance, monitoring, and search. In K. R. Boff, L. Kaufman, & J. P. Thomas (Eds.), Handbook of perception and human performance, Vol. 2. Cognitive processes and performance (pp. 1–39). John Wiley & Sons. ISBN 0471829560.
Parasuraman, R., Warm, J. S., & Dember, W. N. (1987). Vigilance: taxonomy and utility. In L. S. Mark, J. S. Warm, & R. L. Huston (Eds.), Ergonomics and human factors (pp. 11–32). New York: Springer-Verlag. https://doi.org/10.1007/978-1-4612-4756-2_2.
Rosengrant, D. (2007). Multiple representations and free-body diagrams: do students benefit from using them? Rutgers University: ProQuest Dissertations Publishing.
Rosengrant, D. (2011). Impulse-momentum diagrams. The Physics Teacher, 49(1), 36–39.
Rosengrant, D., Hearrington, D., Alvarado, K., & Keeble, D. (2011). Following student gaze patterns in physical science lectures. Physics Education Research Conference Proceedings, 1413, 323–326.
Sarter, M., Givens, B., & Bruno, J. P. (2001). The cognitive neuroscience of sustained attention: where top-down meets bottom-up. Brain Research Reviews, 35(2), 146–160. https://doi.org/10.1016/s0165-0173(01)00044-3.
Stuart, J., & Rutherford, R. (1978). Medical student concentration during lectures. The Lancet, 312(8088), 514–516. https://doi.org/10.1016/s0140-6736(78)92233-x.
Wankat, P. (2002). The effective, efficient professor: teaching, scholarship and service. Allyn & Bacon. https://doi.org/10.1080/1937156x.2003.11949512.
Warm, J. S., Parasuraman, R., & Matthews, G. (2008). Vigilance requires hard mental work and is stressful. Human Factors, 50(3), 433–441. https://doi.org/10.1518/001872008X312152.
Wilson, K., & Korn, J. (2007). Attention during lectures: beyond ten minutes. Teaching of Psychology, 34(2), 85–89. https://doi.org/10.1080/00986280701291291.
Young, M. S., Robinson, S., & Alberts, P. (2009). Students pay attention! Combating the vigilance decrement to improve learning during lectures. Active Learning in Higher Education, 10(1), 41–55. https://doi.org/10.1177/1469787408100194.
Appendix. Data coding methodology
On-task gaze focus included the following subcategories: professor; board (whiteboard with notes and PowerPoint projections); notes taken by the students; classmate (discussion pertaining to class); assessment (quizzes); and other (an activity relevant to class but not one of the previous categories, such as participating in a demo). Off-task gaze focus included the following subcategories: classmate (activity not pertaining to class), computer/electronic device, room, doodling, and other. We developed the subcategory codes during a pilot study (Rosengrant et al. 2011) that contained eight eye tracking videos (its results are not part of this study). Each researcher developed a list of objects of attention, including whether that attention was on- or off-task, and we compared codes across researchers until 100% agreement was reached. As the 2011 pilot study progressed, the initial coding scheme was re-examined against each new participant video. We found no need to add to or modify the coding scheme as the full study progressed.
To ensure that the videos were coded properly, each coder underwent training: the coder watched a video that had already been coded and compared their work with the accepted codes. Once any disagreements (usually minor) were resolved, they coded new videos on their own. Author 1 then coded random minutes of the new videos and compared that work with the coder's to ensure agreement with the already accepted codes.
At extreme angles, the eye tracker tended to lose track of where the eye was looking. For example, a student taking notes would typically look downward with their eyes without moving their entire head. Although there were no eye tracking data in these cases, this gaze focus was still coded as on-task because the video showed participants actively taking notes. Similarly, if participants were looking downward to send or receive text messages on their cell phones, we labeled that as off-task. Whenever a participant's gaze was focused and eye tracking data were recorded, there were noticeable changes in the eye movements; for instance, a participant might be watching the instructor while the red dot showing their gaze moved constantly. There were very few times when the eye tracking data showed little to no movement. These rare cases of no eye motion exceeding 2 s were labeled as off-task, since we considered them zoning-out instances regardless of where the dot was located.
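As an illustration only (this is our own sketch, not the authors' software), the zoning-out rule described above could be flagged from raw gaze samples roughly as follows; the sampling rate, pixel tolerance, and function name are all assumptions for the example:

```python
def zoned_out_spans(gaze_points, hz=30, tolerance=5.0, min_seconds=2.0):
    """Return (start_index, end_index) spans where successive gaze points
    stay within `tolerance` pixels of the span's first point for longer
    than `min_seconds`. gaze_points: list of (x, y) pixel coordinates
    assumed to be sampled at `hz` Hz."""
    spans, start = [], 0
    for i in range(1, len(gaze_points) + 1):
        moved = (
            i == len(gaze_points)  # end of data closes the current span
            or abs(gaze_points[i][0] - gaze_points[start][0]) > tolerance
            or abs(gaze_points[i][1] - gaze_points[start][1]) > tolerance
        )
        if moved:
            if (i - start) / hz > min_seconds:
                spans.append((start, i - 1))
            start = i
    return spans
```

Three seconds of a motionless gaze dot at an assumed 30 Hz would be flagged as one span, while a normally moving gaze produces no spans at all.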
We labeled situations in which we had no eye tracking data and could not tell what the participant was doing as no data. If there were video data but no audio data (which happened rarely), we also coded this as no data, because the verbal discussions helped us determine on- and off-task codes. We also created a code called “down time” for any situation in which students were in class but no instruction was happening, or when the class took a short break. This typically occurred after a quiz while the instructor was collecting papers, or while the instructor was distributing papers such as exams.
As stated in the body of the paper, our method allowed a minute-by-minute determination of on- or off-task gaze focus for each participant within the specific subcategories. Within a semester, the percentage of gaze duration for a subcategory is the total number of seconds a participant looked in that focus area divided by the total time of data collected for the semester. Each 70-min class yielded 60 data points per minute, multiplied by 70 min, for a maximum of 4200 data points. For each minute of video, the on- and off-task codes were averaged to determine the overall task status for that minute. For example, if a participant was on task for 60 s out of 60 s, they were 100% on task for that minute; if they were on task for 53 s out of 60, their on-task percentage was 88.3% (53/60 = .883). If part of a minute included down time or no data, that time was removed from the 60 s and we averaged the remainder. Thus, if a participant was on task for 42 s out of 60, off task for 13 s, and had 5 s of no data, we coded that minute as 76.4% on task (42/55 = .764). Our overall equation for each minute therefore divided the number of seconds spent on task by the number of seconds spent either on or off task. If the bulk of a minute (more than 75%) was categorized as no data, that minute was excluded from the on-task percentage; for example, if 55 s were down time and the remaining 5 s were on task, we did not code that minute as 100% on task. Reported averages were based on the sum of seconds per code each semester, not on an average of daily values.
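The per-minute scoring rule above can be sketched in a few lines (a minimal illustration of our own; the second-level code labels and function name are assumptions, not the authors' software):

```python
# Each second of a minute carries one code: "on", "off",
# "down" (down time), or "none" (no data).

def minute_on_task_pct(second_codes):
    """Return the on-task percentage for one minute of second-by-second
    codes, or None if more than 75% of the minute is down time/no data."""
    excluded = sum(1 for c in second_codes if c in ("down", "none"))
    if excluded > 0.75 * len(second_codes):
        return None  # minute dropped from the on-task analysis
    on = sum(1 for c in second_codes if c == "on")
    off = sum(1 for c in second_codes if c == "off")
    # Denominator counts only seconds coded on- or off-task.
    return 100.0 * on / (on + off)
```

Run on the paper's worked example (42 s on task, 13 s off task, 5 s no data), this returns 42/55 ≈ 76.4%.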
Once we had percentages for each minute, we extended the analysis to the semester as a whole in two ways. Both began by displaying the mean percentage of time on task on a minute-by-minute basis for all participants during each semester of the study, averaging each minute of class meeting time into a percent of time on task as described previously. For the first semester-level analysis, we averaged each minute of a class session with the corresponding minute of the other class sessions in the same semester, unless there was no data for that particular minute. For example, if minute 23 of class 4 was no data (which could include down time), it was not averaged in with minute 23 of the other classes; every remaining minute 23 of a class session was averaged together to form the timelines presented in the results section.
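Sketched in code (our illustration only; the data layout is assumed), the across-session averaging that skips no-data minutes looks like this:

```python
def average_minute_timeline(sessions):
    """Average corresponding minutes across class sessions.
    sessions: list of per-session lists of on-task percentages,
    with None marking a minute dropped as no data/down time."""
    n_minutes = max(len(s) for s in sessions)
    timeline = []
    for m in range(n_minutes):
        # Only sessions with usable data for minute m enter the average.
        vals = [s[m] for s in sessions if m < len(s) and s[m] is not None]
        timeline.append(sum(vals) / len(vals) if vals else None)
    return timeline
```

A minute that is no data in one session simply drops out of that minute's average rather than pulling it down.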
For the second semester-level analysis, we operationalized vigilance at three levels: high (on-task focus 90 to 100% of the coded minute), medium (70 to 89.99%), and low (less than 70%). We chose 90% as the high cutoff because it allows for small variations within a minute in which a person may have quickly glanced at something coded as off-task while still maintaining on-task activity for nearly all of the minute. We chose 70% as the low cutoff because we would rather be overly critical in this area: even though a student may have paid attention for the majority of such a minute, they typically had larger chunks of off-task behavior. Next, we calculated durations of vigilance at each level. If a student was in the high category for seven consecutive minutes and then exhibited another level of vigilance or down time in the next minute, we counted that as one high-level vigilance span with a 7-min duration. We then counted the number and length of all vigilance durations at all three levels in every class session each semester. Finally, we calculated, per level per semester, the average duration of vigilance (with SDs), the total number of minutes, the longest single duration, and the number of durations longer than 5 min. We chose 5 min because we felt it was the minimum duration demonstrating sustained attention, and it was relatively close to the averages we found in most of the semesters.
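The level thresholds and run counting described above can be expressed as follows (an illustrative reimplementation of the stated rules, not the authors' code):

```python
def vigilance_level(pct):
    """Map a minute's on-task percentage to a vigilance level.
    None (down time/no data) stays None and breaks a run."""
    if pct is None:
        return None
    if pct >= 90:
        return "high"
    if pct >= 70:
        return "medium"
    return "low"

def vigilance_runs(minute_pcts):
    """Collapse a minute-by-minute timeline into (level, duration) runs;
    a change of level or a None minute ends the current run."""
    runs = []
    current, length = None, 0
    for pct in minute_pcts:
        level = vigilance_level(pct)
        if level == current:
            length += 1
        else:
            if current is not None:
                runs.append((current, length))
            current, length = level, 1
    if current is not None:
        runs.append((current, length))
    return runs
```

Seven consecutive high minutes followed by down time yield a single ("high", 7) run, matching the counting rule in the text.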
Cite this article
Rosengrant, D., Hearrington, D. & O’Brien, J. Investigating Student Sustained Attention in a Guided Inquiry Lecture Course Using an Eye Tracker. Educ Psychol Rev 33, 11–26 (2021). https://doi.org/10.1007/s10648-020-09540-2