Out-of-school environments, such as museums, are important learning venues because they support rich, sensory-filled, and authentic learning experiences (Bell et al. 2009). Out-of-school environments also support learners' interactions with technical resources, including museum exhibits, computers, and multimedia tools, and with parents, peers, siblings, and teachers (Land and Zimmerman 2015; Manches 2013). Thus, understanding the interactions that happen in out-of-school environments is critical to designing and enhancing the quality of educational experiences. However, the majority of studies in instructional design and technology (IDT) use traditional methods that measure outcomes after the fact (e.g., survey questionnaires) rather than methods that can describe learning processes—as evidenced by Stauffer's (2017) content analysis of recent papers in TechTrends. Therefore, the goal of this paper is to serve as a methodological case study illustrating mobile eye-tracking as a tool for IDT researchers and practitioners to gain a fuller understanding of learning processes in situ.

Mobile Eye-Tracking as a Tool for IDT Research and Practice

Based on our research, we suggest that mobile eye-tracking is useful for IDT research and practice, particularly in out-of-school settings, because it captures detailed and situated information on learners' engagement. Stationary eye-tracking has been used in the IDT field, particularly for computer-based simulated or gaming environments (e.g., Kiili and Ketamo 2010; Ozcelik et al. 2010; Romero-Hall et al. 2016). However, stationary eye-tracking is not appropriate for research in museums, afterschool clubs, nature centers, and the like because stationary devices cannot follow individuals through the environment. Stationary eye-tracking is also unable to assess learners' social interactions because it cannot capture a person's fixation on other people. In contrast, mobile eye-tracking can trace learners' gaze and attention as they move around, supporting an understanding of learner interaction with social and technical resources (Bulling and Gellersen 2010) in an ecologically valid manner (in other words, a manner reflecting the authentic experiences of the learner).

In this paper, we illustrate how mobile eye-tracking can be used to capture learner interaction with physical and social resources in an out-of-school setting. We demonstrate how we collected and analyzed mobile eye-tracking data from a hands-on science museum by describing a single case of a child who visited the museum with his mother. We also discuss the advantages and pitfalls of using mobile eye-tracking in IDT research and practice.

Literature Review of Mobile Eye-Tracking in Learning and Media Studies

Eye-Tracking in IDT Research: From Stationary to Mobile

Eye-tracking collects information about a person's eye movements (saccades) and gaze (fixations) that is presumed to reflect the person's perception and visual attention (Rayner 1998). Eye-tracking can capture precise, moment-by-moment information about what a person was looking at, and for how long, in real time (Armstrong and Olatunji 2012; Morales et al. 2016). For example, eye-tracking has been used to identify which words people focused on when reading particular texts (e.g., Rayner 1998). Eye-tracking data have also been employed to detect psychological phenomena, such as fear, in response to stimuli varying in threat (e.g., Morales et al. 2017).

For research in IDT, such information about gaze directions and movements can evidence learners' attention and engagement with diverse educational resources and media (Hyönä 2010; van Gog and Scheiter 2010). For instance, Romero-Hall et al. (2016) used eye-tracking to measure nursing students' perceptions and emotional responses in a simulation environment with animated agents. Kiili and Ketamo (2010) used eye-tracking to investigate how cognitive feedback supported children's engagement with problem-based learning games related to mathematics and geography.

The above studies show the potential of stationary eye-tracking to investigate how learners interact with provided educational tools and games during learning processes. However, stationary eye-tracking uses a geographically fixed monitor screen to capture gaze parameters, so it cannot collect eye-movement data when learners walk around, move their heads, or talk to other people. Stationary eye-tracking also often requires lab-based settings where learners may look only at provided (controlled) visual stimuli, which differ from authentic out-of-school learning environments where learners interact with multiple resources as they choose (Bulling and Gellersen 2010; Isaacowitz et al. 2015).

The development of mobile eye-tracking (i.e., ambulatory head-mounted eye-tracking) addresses the drawbacks of stationary systems, which are inadequate for capturing information in authentic educational environments (Eghbal-Azar and Widlock 2012). Mobile eye-tracking uses eyeglasses-mounted video recorders to document person-centered patterns of gaze while the wearer moves through a particular space (Bulling and Gellersen 2010; Mayr et al. 2009). In this way, it obtains both (a) eye-movement parameters and (b) point-of-view video recordings. Thus, learners' eye fixations and movements can be superimposed on video recordings of their view, which helps capture the real-world contexts surrounding their visual attention (Foulsham et al. 2011).

Mobile Eye-Tracking for Research in Out-of-School Settings

Mobility and Context-Relatedness of Mobile Eye-Tracking Data

Because of its mobility and context-relatedness, mobile eye-tracking has expanded research boundaries in IDT. With this technology, eye-movement data can be tracked beyond a fixed monitor screen. Accordingly, scholars have recently utilized mobile eye-tracking for research in out-of-school settings (i.e., museums). For example, Mayr et al. (2009) invited three adult visitors to wear mobile eye-trackers in a science museum. They found that the visitors' gaze tended to be fixated on exhibits grouped by similar concepts. Eghbal-Azar and Widlock (2012) used mobile eye-tracking with sixteen adult visitors across two exhibitions, one in an ethnological museum and one in a literature museum. They identified several patterns in how the visitors scanned the exhibits (e.g., changing visual perspectives quickly by moving their heads, or diving directly into one exhibit).

These studies illuminate the potential of mobile eye-tracking for researchers and practitioners in IDT. Mobile eye-tracking can be beneficial for in-depth research on place-based learning, which allows learners to explore a particular space and supports meaning-making through engagement with community-based resources (Zimmerman and Land 2014). Mobile eye-tracking enables researchers to capture authentic contextual information about the place as well as precise eye fixations, which can serve as an indicator of learner engagement with educational resources. Because researchers cannot know definitively whether there is actual engagement solely by observing learners' body and head movements (Eghbal-Azar and Widlock 2012; Mayr et al. 2009), information about visual attention can benefit the understanding of engagement in place-based education settings.

Mobile Eye-tracking’s Potential for Investigating Children’s Socio-Technical Interactions

Previous studies have shown that mobile eye-tracking can help investigate adult learners' interaction with physical resources in place-based education settings. However, they have focused on how learners explore exhibits broadly, without describing detailed eye movements across the multiple subparts of each exhibit. Context-related gaze information from mobile eye-tracking can guide researchers and practitioners to understand the affordances of various hands-on resources, which has implications for the better design of educational tools. Affordances are people's culturally attributed perceptions of how specific objects can be used (Norman 1988). Designers need to understand how learners perceive affordances in hands-on, sensory museum exhibits because affordances influence how people engage and interact with particular objects (e.g., Yoon and Wang 2014). Therefore, examining the affordances of multiple hands-on exhibits helps researchers understand how learners interact with specific educational resources and how these resources can better enhance learners' engagement.

Moreover, previous studies have not thoroughly discussed mobile eye-tracking's applicability for uncovering social interaction in museums, although learners engage in not only technical but also social interactions (Land and Zimmerman 2015). Mobile eye-tracking can provide detailed information on learners' social interactions and collaboration. For example, it can capture whether learners visually interact with other people (e.g., family members, peers, other visitors) while exploring museum exhibits. Especially because gaze is associated with both cognitive and affective status (i.e., interest) (Morales et al. 2016; van Gog and Scheiter 2010), mobile eye-tracking helps determine whether learners in a group had the same or different areas of interest.

In addition, previous mobile eye-tracking studies have addressed adult learners visiting museums, but younger learners' cases may differ from adults'. Children may interact with physical and social resources more actively than adults do, and children's museums often provide more interactive resources in diverse ways. Considering these gaps, we used mobile eye-tracking to investigate children's interactions with exhibits and family members in a museum.

Given that our goal is to illustrate the utility of mobile eye-tracking as a methodology for examining learning, we provide a case study focused on one child as he explored a science museum with his mother. We illustrate how mobile eye-tracking can address research questions such as: How did a child interact with museum exhibits and with his mother while exploring the science museum? After demonstrating how mobile eye-tracking can show how the child learned in this out-of-school environment, we discuss implications for utilizing mobile eye-tracking in IDT research and practice.

Methodological Case Study: Qualitative Analysis with Mobile Eye-Tracking

Setting and Participants: Ian’s Family in a Science Museum

This study comes from a larger investigation of informal learning at the Discovery Space of Central Pennsylvania, a children's science museum. Seven families volunteered to participate; this case study includes one family: a 10-year-old boy, Ian, and his mother, Judy (pseudonyms). Ian's case was selected because, unlike the other cases, his eye-tracking footage maintained adequate accuracy without technical issues for the entire exploration period. Thus, Ian's data can methodologically illustrate the role of mobile eye-tracking in supporting research about learning in the museum.

When Ian visited the museum, other families who did not wear mobile eye-trackers were present. Although all activities were performed independently by each family, they could interact with other families if they wished. Ian had visited this museum once before, and he was interested in science, particularly in weather and planets.

During the study period, the museum displayed approximately 50 exhibits organized into multiple exhibitions covering various science topics. These exhibits ranged from low- to high-tech to support science-related learning experiences. Exhibits included (a) highly interactive technologies, such as an exhibit using augmented reality to encourage learners to connect their current experiences with virtual situations presented on a monitor, (b) reactive displays, with which learners could provide input or experiment, and (c) simple presentations that visualized a phenomenon, such as precipitation.

For the broader study, in which our case is embedded, the research team interacted with learners in two-part family learning sessions: (1) exploring exhibits and (2) making artifacts with clay. For the exploration, the participants were asked to explore the exhibits in the museum freely for 30–45 min (about 30 min for Ian). As previous studies show that visitors tend to spend less than 1 min on average at each exhibit in a science museum (Sandifer 2003), having 30–45 min to explore 50 exhibits was deemed appropriate. This case study does not include the second part of the session because it did not involve mobility, as the participants mainly sat at a table for crafting. Thus, our methodological case study focuses on the period when Ian and Judy moved around and explored various types of museum exhibits.

Procedures for Collecting and Processing Mobile Eye-Tracking Data

In our research, we used mobile eye-trackers (hardware) designed for youths and an open-source mobile eye-tracking platform developed by Pupil Labs (Kassner et al. 2014). The hardware consisted of eyeglasses with two cameras—(a) a world camera facing the outer world and (b) an eye camera aimed inward to detect eye movement—and a tablet computer connected to the eye-tracker. Pupil Labs' software was installed on the tablet so that it could capture, collect, and save eye-tracking information. The world camera captured images at a resolution of 1280 × 720 pixels, and the eye camera at 640 × 480 pixels. An audio recorder was attached to record Ian's vocalizations in sync.

Pupil Labs' software could also combine the eye-tracking footage (e.g., fixations, saccades) with the audio and video recordings after data collection. The resulting data stream combined the point-of-view video from the world camera with the eye-tracking information from the eye camera. The final video was thus the point-of-view footage with dots and lines superimposed, indicating where the learner was looking at each moment. A dot represents the learner's fixation point, that is, the object or person the learner's gaze is fixated on, while a line shows how the learner's gaze moved over time. An IDT researcher can analyze these dots and lines to understand which specific item(s) in a field of view a person fixated on. For instance, in Fig. 1a, b, the learner had the same field of view; however, in 1a, the learner was scanning multiple exhibits, while in 1b, the learner was reading a specific sign. Being able to differentiate which objects a learner is using can inform an IDT researcher about resource utilization and the influence of educational objects in a learning environment.
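To make this concrete, below is a minimal sketch of how fixation dots and gaze-path lines can be drawn onto point-of-view video, assuming gaze data have been exported as one normalized (x, y) coordinate per frame. The function name and the one-gaze-point-per-frame simplification are ours for illustration; this is not Pupil Labs' actual rendering pipeline, which handles timestamps and coordinate conventions in more detail.

```python
import cv2  # OpenCV, used here for reading and writing video frames


def overlay_gaze(world_video_path, gaze_points, out_path, trail=5):
    """Draw a fixation dot and a short gaze trail on each video frame.

    gaze_points: one (x, y) tuple per frame, normalized to [0, 1],
    with (0, 0) at the top-left (a simplifying assumption).
    """
    cap = cv2.VideoCapture(world_video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height))

    for i, (x, y) in enumerate(gaze_points):
        ok, frame = cap.read()
        if not ok:
            break
        px, py = int(x * width), int(y * height)
        # Connect the last few gaze points to show the path of movement.
        recent = [(int(gx * width), int(gy * height))
                  for gx, gy in gaze_points[max(0, i - trail):i + 1]]
        for p, q in zip(recent, recent[1:]):
            cv2.line(frame, p, q, (0, 0, 255), 2)  # red line (BGR)
        # Mark the current fixation point with a filled red dot.
        cv2.circle(frame, (px, py), 10, (0, 0, 255), -1)
        writer.write(frame)

    cap.release()
    writer.release()
```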

Fig. 1

Examples of the processed eye-tracking data from Ian's exploration: red dots indicate where Ian was looking, and red lines indicate the path of his gaze movement. Despite the same world-camera angle, Ian's gaze was fixated differently (in (a), his attention was on museum exhibits at a distance; in (b), it was on an advertisement posted on the wall)

We collected data through the following procedures: (1) preparing the device, (2) calibrating, (3) recording data during the exploration, and (4) supplementing eye-tracking footage with the world camera data after recording.

Preparing the Device

On site, before the participants arrived, we set up all mobile eye-tracking devices. When fully charged, the tablet computer could run the video-intensive eye-tracking software for two hours. The storage was cleared between users because mobile eye-tracking data requires more storage capacity than normal video files, as two video streams are generated—one of the learner's pupils and the other from the learner's point of view. The eyeglasses were connected to the tablet computer and the audio recorder and tested to ensure that the software on the tablet worked properly.
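As a rough illustration of why storage fills quickly, the back-of-envelope calculation below estimates the footprint of one session from the two video streams. The bitrates are assumptions chosen for illustration; actual sizes depend on the codec and recording settings.

```python
# Back-of-envelope storage estimate for one recording session.
WORLD_MBPS = 4.0  # assumed compressed bitrate of the 1280x720 world camera (Mbit/s)
EYE_MBPS = 1.5    # assumed compressed bitrate of the 640x480 eye camera (Mbit/s)
SESSION_MIN = 45  # upper end of the exploration period

total_mbit = (WORLD_MBPS + EYE_MBPS) * SESSION_MIN * 60
total_gb = total_mbit / 8 / 1024  # Mbit -> MB -> GB
print(f"Two streams for {SESSION_MIN} min: roughly {total_gb:.1f} GB")
```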

Calibrating

Once the participants arrived, we calibrated the mobile eye-trackers (hardware). Using Pupil Labs' software installed on the tablet, calibration was performed to capture the participant's pupil positions and map them onto the standard scale of eye-tracking data. We used a screen-marker calibration method (Kassner et al. 2014) that displayed nine animated points and tracked the participant's eye movements toward them. Calibrating took 10–15 min per person; however, current methods allow for faster calibration (e.g., Fu 2018). After calibrating, we started recording, with the tablet computer placed in a backpack that the participant wore to remain mobile (Fig. 2).
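Conceptually, calibration fits a mapping from pupil-camera coordinates to scene coordinates using the known positions of the nine points. The sketch below fits a simple affine mapping by least squares; it is a simplified stand-in for the software's actual calibration model, which is richer, and all names are illustrative.

```python
import numpy as np


def fit_calibration(pupil_xy, target_xy):
    """Fit an affine map from pupil coordinates to scene coordinates.

    pupil_xy, target_xy: (9, 2) arrays, one row per calibration point.
    Returns a (3, 2) matrix M such that [x, y, 1] @ M approximates the
    scene-coordinate gaze position.
    """
    A = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])
    M, *_ = np.linalg.lstsq(A, target_xy, rcond=None)
    return M


def map_gaze(pupil_xy, M):
    """Apply the fitted calibration to new pupil positions."""
    A = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])
    return A @ M
```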

Fig. 2

Ian wearing a mobile eye-tracker on his face and a backpack holding the tablet computer in the museum

Recording Data during the Exploration

Once calibration was done, the participants were asked to explore the museum exhibits freely for the first part of the session. To support their exploration, they were given a paper map describing the exhibitions' scientific themes (e.g., physics, animals).

Supplementing Data after the Exploration

After collecting the mobile eye-tracking data on site, we used Pupil Labs' software to merge the two data streams—the eye camera data (i.e., eye-tracking footage) and the world camera data (i.e., video recordings from the front-facing camera)—with the audio recordings into one video file. When merging the eye-tracking footage into the world camera recordings, manual gaze correction was performed by checking the participant's point of gaze (x and y coordinates) and adjusting it toward the targets they were asked to look at during calibration. The resulting data were presented as video recordings overlaid with circles and lines indicating where the participant's gaze was fixated. (Note: When calibration is not reliable enough, manual gaze correction cannot restore validity, so eye-tracking footage with poor calibration cannot be used for analysis. Calibration is extremely important in this method.)
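In its simplest form, this correction shifts every gaze point by the offset between where the recorded gaze landed and where the known calibration target actually was. The sketch below implements that constant-offset version; it is a simplification of the manual correction step, and the names are illustrative.

```python
import numpy as np


def manual_gaze_correction(gaze_xy, observed_xy, target_xy):
    """Shift all gaze points by a constant offset.

    observed_xy: where the recorded gaze landed on a known target.
    target_xy: where the calibration target actually was.
    gaze_xy: (n, 2) array of gaze points for the whole session.
    """
    offset = np.asarray(target_xy) - np.asarray(observed_xy)
    return np.asarray(gaze_xy) + offset  # broadcasts over all rows


# Example: gaze landed at (0.52, 0.47) on a target known to be at (0.50, 0.50).
corrected = manual_gaze_correction(
    np.array([[0.30, 0.60], [0.71, 0.22]]), (0.52, 0.47), (0.50, 0.50))
```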

Qualitative Interaction Analysis with Mobile Eye-Tracking Data

The advantages of eye-tracking have been largely overlooked by qualitative researchers, as most eye-tracking studies have utilized quantitative approaches (e.g., Romero-Hall et al. 2016). However, unlike stationary eye-tracking, mobile eye-tracking can provide detailed contextual information, which is critical for qualitative research. Thus, we employed qualitative interaction analysis (Jordan and Henderson 1995) for this case study. We used video-analysis software, V-Note, to repeatedly view the mobile eye-tracking data that had been integrated with the video and audio recordings into a single video file. We analyzed the data via the following procedures: (1) content-logging, (2) coding the fixated targets of Ian's gaze, (3) documenting detailed information about the fixated targets, (4) identifying what interactions—bodily and verbal—happened when Ian's gaze was fixated, and (5) deducing thematic patterns of Ian's attention and interactions during his museum exploration. Content-logging was performed to summarize the flow of his exploration (e.g., which exhibits he stopped by). Then, for every second, we coded the fixated target into one of several categories: parent (Ian's mother), other people (other visiting families), the specific exhibit he was engaging with, surrounding exhibits being scanned, and the provided paper map. After coding each second, we revisited the data to document closer, detailed information about the fixated targets. For example, when he looked at an exhibit, we documented whether his gaze was fixated on the label, text information, a knob, a light bulb, or some other part of the exhibit. Finally, we reviewed the data to identify what interactions Ian made with the museum resources or people (e.g., his mother).
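Because each second carries exactly one code, summarizing durations per category (as reported in the findings below) reduces to counting codes. Below is a minimal sketch, assuming the per-second codes have been exported from V-Note into a plain list; the category labels are ours, mirroring the coding scheme above.

```python
from collections import Counter

# One code per second of the session, e.g., exported from V-Note.
codes = [
    "engaged_exhibit", "engaged_exhibit", "scanning",
    "map", "parent", "engaged_exhibit", "other_people",
]  # illustrative values only

durations = Counter(codes)  # seconds per category (one code == one second)
for category, seconds in durations.most_common():
    print(f"{category}: {seconds // 60} min {seconds % 60} s")
```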

What Findings from Mobile Eye-Tracking can Tell an IDT Researcher

Our mobile eye-tracking data captured precise information about Ian's attention and interactions during his museum exploration. Overall, the mobile eye-tracking provided direct evidence of the objects with which Ian interacted. Using our mobile eye-tracking data, we summarize duration times for each coding category (fixated targets: the objects on which the eye-tracking data show Ian's gaze was fixated) and then show how Ian made visual, bodily, and verbal interactions with the museum exhibits and with his mother.

Distribution of Ian’s Gaze

The total eye-tracking data collected from Ian's museum exploration (the first part of the session) lasted 30 min. During his exploration, he played with 15 exhibits, for an average of 1 min and 18 s each. Ian's attention was mostly fixated on the exhibits he was interacting with (16 min and 28 s in total), rather than on scanning the scene more broadly. Between exhibits, his eye-tracking data showed that he either read the map (29 s in total) or scanned the exhibits (1 min and 55 s in total) before choosing which one to visit next. In the beginning, he tended to read the map to choose which exhibit to explore; as time passed, however, he tended to scan the exhibits and decide where to move without looking back at the map.

Our eye-tracking data also show how often he looked at other people, such as his mother or other visitors in the museum. Although many of the exhibits required collaborative behaviors (e.g., building blocks together), his fixation was rarely on his mother (43 s). His fixation on all other visitors combined was just over one minute (1 min and 5 s). Figure 3 presents the distribution of Ian's gaze during the museum exploration.

Fig. 3

Distribution of Ian’s gaze during museum exploration and duration (minutes) of each category

Mobile Eye-Tracking Revealed Ian’s Interaction with Museum Resources

Mobile Eye-Tracking to Identify Use of Educational Texts

The eye-tracking data allowed us to identify an interaction pattern in which Ian used his body first and read the exhibit label second. While one might assume that the museum label would be most helpful to read before interacting with an exhibit, our mobile eye-tracking data indicated that Ian tended to read the text later—usually right before he left for another exhibit. For example, when he played with Bowling Ball Pulleys—consisting of three different knobs connected to identical weighted bowling balls (15 pounds each) with varying numbers of pulleys—he first gazed at the balls. Then, he grabbed and pulled down one of the knobs without reading the instructions. He did not read the text label until his final attempt at pulling (see Fig. 4 for details). Even when he had to take some time to figure out how to play with an exhibit called The Gravity Well, a big rubber funnel into which people can put a ball to watch it orbit and fall into the pit, he observed the exhibit itself rather than reading the text instructions (see Fig. 5 for details).

Fig. 4

Ian’s gaze flow while playing with Bowling Ball Pulleys in chronological order. Red dots and lines indicate where his gaze was fixated. In (a) his gaze was fixated on the balls and pulleys. In (b) his gaze moved to one of the knobs and the ropes. While he pulled down this knob, in (c) his gaze was fixated on the balls and pulleys again as they moved up and down. After multiple attempts, in (d) he read the text label

Fig. 5

Ian's gaze flow while playing with The Gravity Well, in chronological order. Red dots and lines indicate where his gaze was fixated. He first found the exhibit (a) and kept looking at the pieces of the exhibit itself (b) rather than reading the text instructions

For IDT researchers, elucidating such patterns across multiple visitors can provide designers with valuable information about how educational texts are used, even if not in the intended way. Mobile eye-tracking gives a level of detail that camcorders or point-of-view cameras alone cannot provide. This perspective on intended versus actual uses of texts and other educational resources aligns with the importance of affordance (Norman 1988) in designing educational tools. As shown in Figs. 4 and 5, mobile eye-tracking was helpful for observing where Ian looked and in what order, which might not have been captured by other types of data. Mobile eye-tracking can thus tell an IDT researcher if, when, and for how long learners are using educational texts.

Mobile Eye-Tracking to Identify Educational Resources that might Not Work for all Learners

Some resources in the museum were designed so that learners could engage without reading the instructions, but others were not. Mobile eye-tracking can be used to identify when learners struggle and how, or whether, they overcome those struggles. For example, unlike with the Bowling Ball Pulleys above, Ian found it difficult to start the Magnet Maze. This exhibit was designed to allow learners to move a piece of metal through a giant wooden maze using a magnet stick; however, at first, Ian could not find the appropriate tool (i.e., the magnet stick). Our mobile eye-tracking data show that when Ian was not successful, his attention moved to other visitors playing with another exhibit, as shown in Fig. 6. This episode shows that eye-tracking can tell an IDT researcher or practitioner how easy or challenging tools are to use at the beginning of engagement with an exhibit.

Fig. 6

Ian's gaze focused on other children having fun with another exhibit while he was playing with the Magnet Maze. Red dots and lines indicate where his gaze was fixated

Mobile Eye-Tracking Revealed Ian’s Interactions with his Mother

Mobile Eye-Tracking Shows the Balance between People-Focus and Object-Focus

Because this session was a family learning program, Ian performed most of the activities collaboratively with Judy. However, our mobile eye-tracking data revealed that while they worked together, he rarely looked at Judy, even during conversations or when they used resources collaboratively; his attention was mostly on his own activities or tools. For example, when they were working with the Magnet Maze, our mobile eye-tracking data (Fig. 7) showed that Ian's attention mainly followed the metal ball he was navigating. Sometimes, Judy verbally called Ian's attention to an exhibit she wanted to explore, such as Musical Materials, which consists of four sets of xylophones made from different materials (e.g., plastic, copper) that produce different sounds when struck. Ian at first passed this exhibit, but Judy, who followed behind, brought him back by saying, "Music. Ian." Our mobile eye-tracking data (Fig. 8) captured Ian's attention moving from looking around various exhibits to Musical Materials after hearing Judy's prompt. Even when Judy called him, he glanced at her for just a second before his attention returned to the exhibit. These episodes imply that this youth tended to collaborate behaviorally rather than interact visually with his mother. For IDT researchers and practitioners, collecting this type of information about who and what a child was looking at helps in understanding collaboration patterns with certain exhibits, which is beneficial for further developing educational resources for diverse educational activities.

Fig. 7

Ian’s gaze focused on the magnet stick and the metal ball when he watched his mother working (a) and when they both were working together (b). Red dots and lines indicate where his gaze was fixated

Fig. 8

Ian's gaze flow while playing with Musical Materials, in chronological order. He at first passed the exhibit and scanned other exhibits (a). Then, Judy called him, so he turned around and approached the exhibit (b) without looking at his mother, focusing instead on the exhibit. Red dots and lines indicate where his gaze was fixated

Discussion

Implications for out-of-School Learning Research

In our research, mobile eye-tracking allowed us to develop findings that would have been unavailable with other types of data (e.g., video recordings) that cannot capture precise gaze information. By using eye-tracking, we were able to capture Ian's interactions with physical and social resources while he explored exhibits in the museum. When Ian interacted with exhibits, patterns in his use of text labels and in the accessibility of exhibits emerged. Investigating these types of behavioral patterns is critical to IDT research and practice because the field needs to shape educational tools (such as exhibits) that are easy to use and navigate (see Pea 2004). By using mobile eye-tracking to determine learners' attention and engagement, the design of educational resources can be strengthened.

Mobile eye-tracking was also beneficial for understanding how learners engaged with others. For instance, while Ian and his mother explored diverse exhibits together and had conversations, Ian's visual attention was barely on his mother during the session. Instead, his attention tended to remain with the tools or exhibits he was using. Findings such as Ian collaborating with his mother through his hands without often looking at her can be used to develop or refine theory, in this case by applying the concept of body engagement (Smith 2014) to collaborative exploration in museums. Also, Ian sometimes observed other visitors in the middle of his exploration, although he did not have any direct interaction with them. His attention to other people indicates a potential relationship between children's museum experiences and their observation of other visitors.

Trade-Offs and Progress of Mobile Eye-Tracking

Generally, mobile eye-tracking systems can be quite costly, limiting their widespread use. The system from Pupil Labs used here is relatively affordable, which could support more expansive research. Of course, one drawback of affordable systems is that they do not always support the wide array of functions seen in more expensive commercial applications. For example, at the time of testing, this system did not support the light, portable, personal digital assistant (PDA)-type applications available with more expensive units. Therefore, at the time we collected and analyzed the data, we experienced some technical issues that limited our research. For example, the system constrained the duration of video capture. Given the heavy demands of the software, the tablet computer ran out of battery within two hours or less, which may not be enough for longer educational activities. Once the battery was drained, a tablet also needed time to recharge, so data could not be collected in succession without multiple tablet computers, making mobile eye-tracking costlier.

Also, because the eye-tracking devices consisted of multiple parts and were not fully wireless (i.e., tablet computers, eyeglasses, etc.), the connections between the parts were sometimes unstable, and cords were accidentally unplugged during the activities. The museum offered many exhibits that required body movement, so we often faced unplugged wires when participants were actively engaged in the activities. This disconnection could affect the accuracy of calibrated eye-tracking footage and cause data loss; indeed, we lost some portion of the data from other children in our study. However, mobile eye-tracking technology is developing rapidly, so many technical issues can be mitigated in the foreseeable future. Since our data collection, Pupil Labs has introduced a wireless, PDA-type application. Thus, future work would allow for greater mobility, longer testing times, and less concern about calibration loss.

Beyond these technical issues, we noted some practical drawbacks of using mobile eye-tracking. A mobile eye-tracking system cannot be used with a large number of participants at once, because each participant needs his or her own device, and each device requires extra time for calibration. Furthermore, for children, particularly younger ones, carrying a mobile eye-tracker during their visit could be uncomfortable. During data collection, we found that children sometimes touched and moved their eyeglasses, which decreased the accuracy of the eye footage or even caused the loss of data from the eye camera. Although recent versions can resolve some of this discomfort, researchers still need to consider and be aware of this issue when working with children.

Our mobile eye-tracking data provided a very specific, idiosyncratic viewpoint into the setting and the learning contexts surrounding the objects and people the child was looking at. On the other hand, the mobile eye-tracker could not capture contextual information beyond the world camera, such as Ian's facial expressions. Thus, in line with previous researchers (e.g., Hyönä 2010), we encourage researchers to use other types of data collection (e.g., camcorders video-recording participants' activities from a distance) alongside mobile eye-tracking to capture the full details of learners' experiences.

Conclusion

This article highlights the methodological advantages and trade-offs of mobile eye-tracking for studying children's learning during their out-of-school time. To illustrate how we collected and analyzed mobile eye-tracking data, we presented the methodological case study of Ian and his mother. By showing the kinds of research questions, data, findings, and implications for IDT research and practice, we illustrated how mobile eye-tracking offers meaningful, precise insights into children's interactions in a museum environment. Mobile eye-tracking allowed us to better understand where a learner engaged visually among the multiple subparts of educational exhibits, which would be very difficult to capture with other types of data. The detailed, precise, in-depth information from mobile eye-tracking can improve our understanding of how learners interact with certain educational tools and other people. Such details on learner interaction can contribute to enhancing the quality of educational tools and resources.