Introduction

Mild cognitive impairment (MCI) refers to the intermediate period between the typical cognitive decline of normal aging and more severe decline associated with dementia. It is typically characterized by a subjective cognitive complaint, objective decline in at least one cognitive domain, and preserved independence in everyday life. However, the extent to which functional independence is preserved in MCI is highly controversial, and a growing body of research has suggested that everyday functioning is, indeed, negatively impacted at this stage. Moreover, functional difficulties in MCI have been associated with specific cognitive deficits, suggesting direct links between the cognitive decline in MCI and functional changes that have become increasingly evident. Our understanding of functional limitations in MCI is hindered by a scarcity of adequate methods, as subjective reports may be inaccurate, and objective measures do not always capture subtle changes in everyday function. The first half of this paper will review the literature on everyday functioning in MCI and will conclude that the development of sensitive, objective measures is crucial to improve understanding of the breakdown of functioning in MCI, to aid with early detection of risk for decline, and to provide meaningful outcome measures for treatment studies.

The second half of this paper focuses on eye tracking as a potential solution to the challenges associated with functional assessment in MCI. This section begins with a review of recent studies that distinguish healthy individuals from those with MCI with eye tracking methods using a range of paradigms. Next, the review turns to findings from the literature on eye movements in healthy individuals performing everyday tasks; this section highlights consistent patterns of eye movements across individuals and their implications for understanding the mechanisms underlying complex everyday task completion. Then, the few studies that have used eye-tracking methods to examine everyday functioning in individuals with severe cognitive impairment will be reviewed. In conclusion, we will propose that the identification of divergent visual behaviors during everyday tasks and their cognitive underpinnings constitutes a promising research direction to better characterize the changes in MCI that impede efficient completion of daily activities and may herald future decline.

Everyday Functioning in Mild Cognitive Impairment

Controversy of Functional Ability

Individuals with MCI exhibit minor deficits across a spectrum of cognitive domains and are at increased risk of progression to dementia disorders (Petersen et al. 1999). The current diagnostic criteria for MCI stipulate the presence of a cognitive complaint, objective decline in at least one cognitive domain, and generally preserved functional independence/complex activities of daily living (Albert et al. 2011; American Psychiatric Association 2013). Initial criteria proposed for the diagnosis of MCI suggested that functional impairments are not observable at this stage, and went so far as to maintain that the presence of functional deficits differentiates between individuals with Alzheimer’s disease (AD) and MCI (Petersen et al. 1999). Over time this distinction has been called into question. Nygard (2003) highlighted the importance of considering informant and self-report of changes in daily functioning when diagnosing MCI, as subtle everyday difficulties may be the earliest signs of illness. Further, Winblad et al. (2004) have suggested that basic activities of daily living (ADL; e.g., bathing, eating) are preserved in MCI, but that complex or instrumental activities of daily living (IADL; e.g., meal preparation, medication management) may be either intact or minimally impaired. Applying these criteria, which incorporate a change in activity level, was shown to better identify individuals who went on to convert to dementia (Artero et al. 2006). However, a recent revision to the MCI criteria by Petersen (2011) only acknowledged the possibility of mild functional inefficiencies while maintaining that MCI “does not compromise the ability to function” (Petersen 2011, p. 2227). Recently, Morris (2012) has suggested that allowing functional deficits as a criterion for MCI is misleading and premature. Table 1 lists several conflicting quotes on this topic from the MCI literature.
Clearly, a high degree of inconsistency and ambiguity continues to surround the inclusion of functional impairment in the diagnosis of MCI (see Gold 2012 for a review).

Table 1 Quotes from the MCI literature demonstrating confusion and controversy regarding everyday functional difficulties

Evidence of Functional Difficulties in Mild Cognitive Impairment

Recent research has emphasized the importance of recognizing and characterizing functional deficits in the initial phases of cognitive decline, and many have thus argued for the incorporation of IADL deficits into the MCI diagnostic criteria (Albert et al. 2002; Artero et al. 2001; Brown et al. 2011; Goldberg et al. 2010; Griffith et al. 2003; Nygard 2003; Perneczky et al. 2006b; Tam et al. 2007). Self and informant reports of independence in IADLs have revealed impairment in individuals with MCI compared to healthy older adults (Ahn et al. 2009; Aretouli and Brandt 2010; Bombin et al. 2012; Perneczky et al. 2006a, b; Reppermund et al. 2011; Tam et al. 2007; Tuokko et al. 2005). Moreover, faster decline in self-reported everyday function has been shown in MCI compared to healthy older adults over a 3-year period (Wadley et al. 2007). In one study, IADL items with high versus low cognitive demand determined by factor analysis showed differential impact in MCI as reported by informants, with differences between MCI and healthy older adults emerging for high, but not low, cognitive demand items (Reppermund et al. 2011). This finding emphasizes the potentially subtle nature of functional difficulties and the need for detailed and sensitive evaluation to capture meaningful changes in MCI. Jefferson et al. (2008) addressed this measurement issue by examining several different informant-rated assessments of functioning in MCI. They found that an error-based informant report measure (Functional Capacities for Activities of Daily Living, FC-ADL) identified functional differences between MCI and healthy older adults, whereas no differences emerged using a more global measure of functioning (Lawton and Brody Instrumental Activities of Daily Living and Physical Self-Maintenance Scale). Discrepancies in results across studies using different functional measures may account, in part, for inconsistent conclusions regarding functional status in MCI.

Functional impairments in MCI have also been demonstrated on performance-based measures of functioning, which can provide more objective information on the degree of functional impairment. An early study in this literature demonstrated poorer performance in MCI on an objective financial capacity measure (Financial Capacity Instrument, FCI) relative to healthy older controls, with less impaired performance than that of a mild AD group (Griffith et al. 2003). Schmitter-Edgecombe et al. (2012) also found that MCI participants performed more poorly than healthy older adults on the Day-Out Task (DOT), a naturalistic test of functional ability that requires participants to complete multiple tasks and plan for an outing. Functional difficulties in MCI have also been revealed using “smart-home” technology in which participants were videotaped while performing daily activities in a room equipped with everyday objects (Sacco et al. 2012). In one study, although individuals with MCI completed a performance-based test of IADLs with comparable accuracy to healthy older adults, their performance was significantly slower (Wadley et al. 2008), suggesting reduced efficiency during everyday task completion.

In several studies using performance-based measures (e.g., UCSD Performance-Based Skills Assessment, UPSA; Everyday Cognition Battery, ECB), those with MCI displayed greater impairment than healthy older adults even in the absence of self-reported (Allaire et al. 2009) or informant-reported functional problems (Goldberg et al. 2010). Giovannetti et al. (2008) also found that despite caregiver reports of intact functional abilities, individuals with MCI performed worse than healthy participants on even simple tasks of everyday action as measured by the Naturalistic Action Test (NAT). This suggests that difficulties may not be limited to higher-order functioning and that the capacity to perform basic abilities may also be impacted in MCI. Similarly, a recent study showed that individuals with MCI made more overt errors on a test of everyday action than healthy controls. However, MCI and control participants showed similar error profiles, which these authors interpreted as evidence that the hierarchical structure for naturalistic action (i.e., nested sub-goals within higher-level task goals) remains intact in MCI, despite greater overall impairment (Gold et al. 2014).

Everyday Functioning and Conversion to Dementia

Several studies have demonstrated the relevance of functional difficulties in MCI with regard to risk for conversion to dementia, highlighting the importance of targeting these early changes. In one study, cognitively impaired individuals who went on to develop dementia over a 3-year period demonstrated higher rates of baseline disability as measured by a sensitive informant report than those who did not develop dementia (Artero et al. 2001). These authors thus stressed the danger of denying home help to individuals who may be deemed functionally adept because they do not carry a diagnosis of dementia. In a striking longitudinal examination of rates of conversion to dementia, Purser et al. (2005) found no differences in conversion rates between healthy adults and individuals with MCI, but significant conversion differences did emerge between MCI participants with and without self-reported IADL impairments (i.e., difficulty or dependence). This is consistent with the finding in another study that MCI associated with self-reported difficulties in one of nine IADLs (e.g., transportation, managing finances, etc.) conferred greater risk of conversion to dementia than MCI without functional difficulties (Luck et al. 2011). Relatedly, self-reported restrictions in two of four IADLs (i.e., telephone use, transportation, finances, medications) have been shown to predict conversion to dementia over 2 years, even among individuals with normal cognitive profiles (Peres et al. 2008). In one study, change scores representing rates of decline in informant reports of everyday functioning were shown to be larger than those of cognitive or biological measures, indicating that conversion to dementia may be driven more strongly by functional decline than by changes in the neurobiological disease trajectory (Gomar et al. 2011).
These findings suggest that functional status is compromised in MCI and may be more relevant in predicting future decline than the MCI diagnosis itself, which challenges the clinical utility of this diagnosis in its current form and necessitates further investigation of the processes underlying functional decline.

Cognitive Processes and Everyday Functioning

Parsing out the cognitive underpinnings of specific everyday skills constitutes an important step in characterizing functional difficulties in MCI and informing prevention and intervention strategies. This is also important with regard to determining the extent to which IADL dysfunction is driven by cognitive deficits as opposed to physical limitations that are common in older age. In support of the relation between cognition and functional ability in MCI, specific subscales of the Dementia Rating Scale (DRS-2) related to memory and executive functions have been shown to be significantly associated with informant-rated IADLs (Greenaway et al. 2012), and significant relations have been found between semantic dysfunction and performance on the UPSA, a performance-based measure of functional ability (Kirchberg et al. 2012). Artero et al. (2001) also found relations between informant-reported competence in everyday activities and several cognitive domains, specifically attention, memory, language, and visuospatial ability. Further, changes in episodic memory and executive functions independently predicted rates of change in informant-rated IADLs (Tomaszewski Farias et al. 2009), and measures of cognitive function and performance-based everyday function were shown to change together over time (Tucker-Drob 2011), highlighting the relevance of cognitive changes to functional decline in MCI.

Several studies on functional difficulties in MCI have suggested that cognitive deficits, particularly episodic memory difficulties and executive dysfunction, are not only broadly associated with functional ability but are also associated with unique functional deficits. Studies have linked episodic memory impairment with specific deficiencies in managing money (Bangen et al. 2010; Barberger-Gateau et al. 1999; Teng et al. 2010), whereas executive dysfunction has been shown to impact health and safety, and more specifically management of medication (Bangen et al. 2010; Mariani et al. 2008). In contrast, one study found executive function to be significantly associated with financial management (Okonkwo et al. 2006), although methodological differences may account for these disparate findings. In the study by Schmitter-Edgecombe et al. (2012), different cognitive functions predicted specific functional difficulties, with retrospective (i.e., episodic) memory deficits relating to incomplete and inaccurate subtask completion, and prospective memory (i.e., planning) problems specifically predicting sequencing difficulties. Moreover, Bangen et al. (2010) found that functional difficulties were even more powerful in differentiating between MCI subtypes (with and without episodic memory impairment) than between MCI and healthy participants, emphasizing the importance of considering the distinct functional profiles associated with MCI subtypes.

The severity of functional deficits has also been associated with specific cognitive domains in MCI. For example, individuals with the amnestic subtype of MCI (aMCI; episodic memory impairment present) have been shown to display greater IADL deficits than non-aMCI participants (Teng et al. 2010; Wadley et al. 2007). Based on informant report, Farias et al. (2006) found the most severe functional impairment in skills that rely heavily on memory and, to a lesser extent, on executive functions. Memory, along with processing speed, also emerged in one study as the most frequent unique predictor of specific everyday task ability in MCI (Tuokko et al. 2005). These relations are supported by neuroimaging findings showing that within MCI, those with intact functioning as measured by informant ratings (Pfeffer Functional Activities Questionnaire, FAQ) had greater hippocampal volumes as well as better performance on tests of auditory verbal memory and processing speed compared to those with moderate or severe FAQ ratings (Brown et al. 2011). Consistent with this, episodic memory impairment in MCI has been associated with elevated risk for conversion to AD compared to other cognitive dysfunction (Aggarwal et al. 2005; Luck et al. 2011), although this is in contrast to findings by Kohler and colleagues showing non-amnestic MCI to have the highest risk of dementia, above that of amnestic subtypes (Kohler et al. 2013).

When evaluating cognitive contributions to functional ability in MCI, it is important to acknowledge variable inclusion criteria for MCI samples across the literature. For example, in studies including solely aMCI participants (Brown et al. 2011; Gold et al. 2014; Griffith et al. 2003), findings of greater dysfunction in MCI than controls may reflect aMCI as a more severe subtype of MCI than non-aMCI, and conclusions may not capture the full range of functional status in MCI. Interestingly, however, Gold et al. (2014) showed a prominent role for episodic memory in action errors in both their MCI and healthy control groups, suggesting that episodic memory may be a particularly important cognitive function for everyday action regardless of severity of impairment or diagnosis. This is consistent with a recent study from our group, in which subtle action inefficiencies, termed “micro-errors,” were significantly predicted by episodic memory in a single group composed of both healthy older adults and individuals with MCI (Seligman et al. 2014).

Functional status may thus be linked to specific cognitive profiles of mild impairment, and may effectively differentiate between groups of individuals in the early phases of decline. However, the combination of executive and episodic memory impairments in MCI is common (Libon et al. 2010), and multi-domain MCI (i.e., multiple cognitive functions impaired) has been associated with greater functional impairment than single-domain MCI (Alexopoulos et al. 2006; Aretouli and Brandt 2010; Bombin et al. 2012; Seelye et al. 2013; Tam et al. 2007). Further, a combination of cognitive domains, including episodic memory, semantic ability, executive function, and processing speed/attention, together have been shown to explain the majority of variance in a performance-based measure of functioning (Goldberg et al. 2010). Thus, it is important to consider the influence of different cognitive processes along a continuum of functioning, eliminating the arbitrary cut-off scores that determine group placement. Increased understanding of the independent roles of specific cognitive domains in the early phases of decline, along with their potential interaction in a prevalent subtype of MCI, may provide valuable insight into the mechanisms of functional decline and optimal intervention strategies early in the progression of illness.

Challenges in Assessing Everyday Functioning in MCI

Despite the importance of addressing initial signs of functional impairment in older adults, methods to assess and meaningfully characterize functional abilities are limited. In evaluating functional ability, researchers and clinicians use a variety of measures that can produce inconsistent impressions of an individual’s level of functioning. Specifically, self reports tend to inflate ability (Tabert et al. 2002) and caregiver/informant reports can be unstable and influenced in either direction by emotions or motives (Arguelles et al. 2001; Wadley et al. 2003; Zanetti et al. 1999). Variability in older adults’ daily activities (e.g., working full time vs. attending a senior center once per week) and variability in the complexity of daily activities (e.g., managing one medication vs. ten medications) also present difficulties for systematic measurement of IADL and comparisons across individuals and groups. Further, as previously mentioned, the use of disparate methods to measure functional status likely contributes to the wide variability in reports of relations between cognitive and functional abilities (Burton et al. 2009; Gold 2012; Royall et al. 2007) as well as in reports of the degree to which functional ability is compromised in MCI (Goldberg et al. 2010). Indeed, the use of different formats of self-reported functioning (i.e., subjective judgment of disability vs. reported rates of completing IADLs) was shown to yield variable sensitivity regarding functional status in MCI (Bombin et al. 2012). Winblad et al. (2004) point out the additional challenges posed by a lack of longitudinal age-specific and decline-rate norms, given the goal of characterizing cognitive and functional decline, which suggests that the use of current test score cut-offs to diagnose individuals with MCI may be premature.

Attempts to address these problems are also inconsistent across the literature, with some advocating for a combination of self and informant report. For example, Rueda et al. (2014) have suggested that self report should be used early in the course of decline with later precedence given to informant ratings, whereas Tabert et al. (2002) have suggested that a discrepancy score between self and informant reports may be more informative than either report interpreted in isolation. Other investigators have emphasized the need for objective assessment of functional ability that does not rely on reports by individuals impacted by these difficulties (Pereira et al. 2010). Pereira et al. (2010) provided evidence that an informant questionnaire was, in fact, less accurate than an objective assessment (Direct Assessment of Functional Status; DAFS) in distinguishing between control participants, those with MCI, and those with AD. In a study by Tucker-Drob (2011), changes in self reports of everyday function among older adults over 5 years were shown to be weakly correlated with changes in cognitive function, compared to strong relations between changes in performance-based tasks and cognition. Although limitations regarding the ecological validity of a laboratory setting for assessment of daily functioning should be acknowledged, the shift towards objective assessment of functioning may be crucial in establishing a consistent, accurate classification of individuals at greatest risk for dementia.

A challenge posed by objective assessment of early decline is the development of reliable and valid measures that can detect subtle impairment in daily functioning. The Naturalistic Action Test (NAT; Schwartz et al. 2003; Schwartz et al. 2002) requires participants to complete goal-directed tasks (e.g., make toast and coffee) at varying levels of complexity and has been used primarily to assess everyday action abilities in patient populations with moderate to severe impairment. Bettcher et al. (2008) used the NAT to examine error-monitoring abilities in individuals with dementia by coding “microslips,” or rapid corrections prior to full execution of an error, demonstrating the utility of the NAT in evaluating subtle aspects of everyday action. Building on the development of sensitive NAT variables, the reliability and validity of a measure of subtle action disruption termed “micro-errors” was recently established to quantify inefficient but not overtly erroneous task execution that may reflect the early stages of decline (Seligman et al. 2014). Examination of micro-error rates can provide useful information regarding the manifestation of cognitive decline in everyday task completion, but the mechanisms underlying micro-errors remain unclear and are important to our understanding of the functional impact of degenerative processes. The following section of this paper will focus on the literature describing eye movements in MCI and what is currently known about visual behaviors during everyday activities. This review will conclude with the proposal to use eye tracking during everyday tasks as a method to detect and study early functional difficulties in individuals with MCI.

Eye Movements as Sensitive Indicators of Early Disruption of Everyday Function

Human eye movements during cognitive tasks have been studied since the 1800s. In early studies an examiner directly observed and documented movements of the eyes while reading (Wade and Tatler 2005). In 1901, Dodge and Cline were the first to record eye movements photographically, and from these measurements Dodge began to lay the foundation for our understanding of oculomotor processes in perception (Dodge and Cline 1901; Wade et al. 2003). As eye-tracking technology evolved, stationary devices were developed in which the head was held in place while participants completed tasks such as reading (Buswell 1920), typing (Butsch 1932), or viewing pictures (Buswell 1935). Today, eye-movement data are collected using non-invasive eye-tracking technologies that may be worn on the head while the participant moves freely in the environment and performs everyday activities. Eye trackers record the location of gaze within a scene (i.e., fixation location/duration) and the distance moved by the eyes from one location to another (i.e., saccade magnitude/sequence). Studies have shown that saccades and fixations recruit the same neuroanatomical circuitry as “covert” shifts of attention that do not include eye movements, suggesting that eye movements may be useful indicators of the focus and direction of attention (Corbetta et al. 1998; Findlay and Gilchrist 2003; Jovancevic et al. 2006; Shinoda et al. 2001).
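In practice, fixation and saccade measures of this kind are derived from a stream of raw gaze samples. As a minimal illustration of one common approach (a velocity-threshold classifier, not the algorithm of any particular tracker discussed here), the sketch below labels each sample by point-to-point gaze velocity; the 30°/s threshold and sampling rate are illustrative values.

```python
# Hedged sketch of velocity-threshold fixation/saccade segmentation.
# Gaze coordinates are assumed to be in degrees of visual angle;
# the 30 deg/s threshold is a common but illustrative choice.

def classify_samples(x, y, hz, threshold_deg_s=30.0):
    """Label each gaze sample as 'fixation' or 'saccade'."""
    labels = ["fixation"]  # first sample has no preceding velocity
    for i in range(1, len(x)):
        dist = ((x[i] - x[i - 1]) ** 2 + (y[i] - y[i - 1]) ** 2) ** 0.5
        velocity = dist * hz  # degrees per second
        labels.append("saccade" if velocity > threshold_deg_s else "fixation")
    return labels

def fixation_durations(labels, hz):
    """Collapse consecutive 'fixation' samples into durations (ms)."""
    durations, run = [], 0
    for label in labels + ["saccade"]:  # sentinel flushes the final run
        if label == "fixation":
            run += 1
        elif run:
            durations.append(run / hz * 1000.0)
            run = 0
    return durations
```

Real pipelines add smoothing, blink handling, and minimum-duration criteria, but the core logic of separating stable fixations from rapid gaze shifts is as above.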

While attention and eye movements may be captured by compelling and salient perceptual stimuli (i.e., bottom-up influences), eye movements during a range of tasks, including the viewing of pictures and scenes, have also been shown to be strongly influenced by participants’ goals and knowledge (i.e., top-down influences; Oliva et al. 2003; Yarbus 1967). One fascinating and important advantage of studying eye movements is that they do not rely on explicit verbal report and may offer insight that cannot be gained from examination of overt behaviors alone (Heine et al. 2010; Norbury et al. 2009). Thus, the examination of both eye movements and overt behaviors may offer an especially rich analysis of cognition (Mogg et al. 2003). Examining eye movements during completion of everyday tasks rather than inferring cognitive mechanisms from stationary scene-viewing or laboratory cognitive tasks is critical for many reasons, including the active and dynamic nature of real-world task completion (Tatler et al. 2011), and this will therefore be the focus of this review.

Despite the seeming simplicity of eye movements during everyday tasks, visual behaviors may reveal complex cognitive processes related to object identification, place memory, task execution, and monitoring. The analysis of eye movements has led investigators to reconsider the notion that routine, everyday tasks can be performed automatically, without monitoring or active cognitive processing (Land 2006; Land and Hayhoe 2001). This is also consistent with the findings of Tucker-Drob (2011), who described direct links between declines in cognition with age and declining everyday function on a performance-based task in healthy older adults and concluded that this link dispelled the myth that daily activities could be performed “automatically” and were impervious to the subtle cognitive decline associated with normal aging. That these early functional changes are not always reported may thus be due to lack of insight rather than preserved functioning in the face of cognitive dysfunction (i.e., in that study there was no association between self-reported IADL functioning and cognition). In support of this conceptualization, sensitive measures such as fixation duration have been used to characterize visual routines that are not normally accessible to consciousness and can therefore yield important insights into task execution that may be lost in self report and gross behavioral observation (Hayhoe 2000; Pelz and Canosa 2001). Further, eye movements may reflect error detection and reformulation of the plan (i.e., preemptive error correction) before detectable (erroneous) actions of the limbs are launched. This form of preemptive error correction may occur implicitly or without conscious awareness. Thus, exploration of saccade sequences and fixation patterns during task completion may reveal subtle but crucial information regarding early changes in everyday function that may not be detected through analysis of manual behaviors or self report.

Eye Movements in Mild Cognitive Impairment

Given this potential utility of eye movements as sensitive markers of early action disruption, eye tracking may be a highly informative tool to address the initial phases of decline (i.e., MCI), when intervention may be most effective. However, to date no studies have examined eye movements in MCI during completion of everyday tasks. Thus far, eye-tracking research in MCI has focused on eye movements during specific visual tasks to address the goal of identifying potential early indicators of future decline. These findings demonstrate the potential to obtain information from eye movements that may reflect mild or even latent cognitive dysfunction, and they provide support for the utility of eye tracking as a sensitive measure of subtle behavioral differences. In a recent review, Pereira et al. (2014) outlined the range of cognitive dysfunction observed in AD that may be detectable as early as MCI with the aid of eye-tracking technology, and they similarly argued for the use of eye-tracking methods to address early cognitive difficulties. We further maintain that the examination of visual behaviors in MCI during everyday task performance is a critical future step in characterizing the changes during this stage that may be most relevant for prognosis and intervention. Findings from studies thus far exploring specific visual behaviors in MCI will be described in this section (see also Molitor et al. 2015; Pereira et al. 2014).

One specific visual task that has been used to examine subtle deviations in MCI, the visual paired comparison (VPC) task, evaluates memory function by determining whether individuals exhibit a preference for a novel as opposed to a previously presented picture; more time spent viewing the novel picture indicates intact memory of viewing the other image. No overt behavioral response to the stimuli is required; participants are simply asked to look at the stimuli on the screen as if they were “watching television.” Performance on the VPC task has been shown to differentiate between healthy individuals and those with aMCI, demonstrating the sensitivity of eye movement patterns to subtle cognitive decline (Crutcher et al. 2009; Lagun et al. 2011). Further, Zola et al. (2013) showed that performance on this task predicted conversion from healthy aging to aMCI and from aMCI to dementia up to 3 years prior to diagnosis. Thus, results obtained using the VPC task may capture the episodic memory deficits characteristic of aMCI not only in their early phases, but also prior to their manifestation on standard neuropsychological tests of cognitive functioning. The success of these studies has led to the development of commercial products such as Neurotrack (http://neurotrack.com) that aim to implement such tasks as a predictive tool in clinical practice.
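The logic of the VPC task reduces to a simple looking-time proportion: the fraction of total viewing time spent on the novel image, with chance at 0.5 and reliably higher values suggesting intact recognition memory. The sketch below is a hedged illustration of such a novelty-preference score, not the exact scoring procedure used in the cited studies.

```python
# Hedged sketch of a visual paired comparison (VPC) novelty-preference
# score. Inputs are total looking times (ms) on the novel and the
# previously viewed (familiar) image for a trial; 0.5 is chance.

def novelty_preference(time_on_novel_ms, time_on_familiar_ms):
    total = time_on_novel_ms + time_on_familiar_ms
    if total == 0:
        raise ValueError("no looking time recorded for this trial")
    return time_on_novel_ms / total
```

A participant who spends 3 s on the novel image and 1 s on the familiar one would score 0.75, consistent with remembering the earlier exposure; scores near 0.5 suggest the familiar image is being treated as if it were new.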

Another task that has been used to examine eye movements in MCI is the antisaccade (AS) task, in which individuals must inhibit a prepotent response towards a stimulus and voluntarily shift their eyes in the opposite direction when a stimulus is presented. The AS task has been shown to correlate most strongly with measures of executive function in combined groups of healthy older adults, MCI, and dementia participants (Hellmuth et al. 2012; Heuer et al. 2013). It has been demonstrated that performance on this task is similarly impaired in individuals with aMCI and AD compared to controls (Peltsch et al. 2014). In this study, errors on the AS task were negatively correlated with performance on the Stroop task, a cognitive measure of inhibition, in both control and aMCI groups but not in the AD group, suggesting that AS performance may reflect subtle executive difficulties in the early phases of cognitive decline. Interestingly, Heuer et al. (2013) found that although individuals with MCI performed comparably to healthy older adults in their sample on the AS task, cortical thickness in frontoparietal regions typically associated with AD was significantly related to AS performance in the MCI group, but not the healthy older adult group. Similarly, in a study by Alichniewicz et al. (2013) combining eye tracking and fMRI methodology, aMCI participants showed decreased activation of the frontal eye fields compared to healthy older adults during AS performance. Together, these findings suggest that features of eye movements in MCI may uniquely reflect disease burden associated with dementia progression, and specifically executive/inhibitory dysfunction, at this early stage.
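In paradigms of this kind, an antisaccade error is typically scored when the first saccade after stimulus onset is launched toward the stimulus (a reflexive prosaccade) rather than away from it. The minimal sketch below assumes trials coded simply by stimulus side and first-saccade direction; it illustrates the scoring logic rather than any cited study's pipeline.

```python
# Hedged sketch of antisaccade (AS) error scoring. Each trial is a
# (stimulus_side, first_saccade_side) pair with sides 'left'/'right'.
# A directional error is a first saccade toward the stimulus.

def antisaccade_error_rate(trials):
    if not trials:
        raise ValueError("no trials to score")
    errors = sum(1 for stim, saccade in trials if stim == saccade)
    return errors / len(trials)
```

Latency of correct antisaccades and of corrected errors is usually analyzed alongside this error rate, since both speed and accuracy carry information about inhibitory control.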

Several studies have also investigated more basic visual behaviors in MCI. One study examining fixation properties in aMCI, AD, and healthy older adult groups found that individuals with both aMCI and AD differed from healthy controls in the direction of their “microsaccades,” or small, involuntary movements when instructed to fixate. Individuals with aMCI and AD did not differ from each other, but both made more oblique eye movements in comparison to the typical horizontal microsaccades of their healthy counterparts. Interestingly, global cognitive impairment as measured by the Mini-Mental State Examination (MMSE) correlated with microsaccade direction only in the aMCI group but not within the AD or healthy control groups, suggesting a specific relation between microsaccades and cognitive impairment in MCI, at least when memory impairment is present (i.e., in individuals with aMCI). Microsaccade direction was also correlated with a subjective measure of independence in activities of daily living across the groups, providing support for the relevance of visual behaviors to functional ability (Kapoula et al. 2014).
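To make the horizontal-versus-oblique distinction concrete, a microsaccade's direction can be classified from its start and end gaze coordinates. The sketch below is illustrative only; the 30-degree tolerance is an assumption, not the criterion used by Kapoula et al. (2014):

```python
import math


def microsaccade_direction(x0, y0, x1, y1, tol_deg=30.0):
    """Classify one microsaccade as 'horizontal', 'vertical', or 'oblique'.

    Direction is folded into [0, 180) degrees so that leftward and
    rightward movements along the same axis are treated identically.
    """
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 180.0
    if angle <= tol_deg or angle >= 180.0 - tol_deg:
        return "horizontal"
    if abs(angle - 90.0) <= tol_deg:
        return "vertical"
    return "oblique"


microsaccade_direction(0, 0, 1.0, 0.05)  # near-horizontal movement
microsaccade_direction(0, 0, 1.0, 1.0)   # 45-degree movement -> oblique
```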

In contrast, examination of prosaccades (i.e., eye movements towards a stimulus presentation) revealed no differences between healthy and aMCI groups, whereas individuals with AD had longer prosaccade latencies as well as greater variability in speed and accuracy of prosaccades than both healthy older adults and those with aMCI (Yang et al. 2011, 2013). Prosaccades have been described as involving rapid, automatic responding that does not require higher-order executive processing (Peltsch et al. 2014), so it may be the case that this more basic function is not yet disrupted at the level of MCI. In support of this conceptualization, a follow-up analysis by Yang et al. (2013) revealed that when task demands increased to require shifting of attention and decision-making, the aMCI group did show longer prosaccade latencies than healthy control participants. This explanation is also consistent with results of a study by Salek et al. (2011) that examined visuomotor integration abilities in MCI. Task parameters in this study varied the degree to which visual feedback was congruent with the required action to provide increasing levels of visuomotor complexity. Individuals with MCI demonstrated intermediate performance on this task between that of healthy older adults and individuals with AD, with no differences between the healthy and MCI groups on the most basic, congruent condition. The authors suggested that these findings might be due to a subclinical decline in the parietofrontal motor network in MCI.

In light of the current research examining visual behaviors in MCI, it seems reasonable to conclude that basic eye-tracking tasks can detect subtle cognitive deficits indicative of prodromal dementia. Consistent with this, a recent review on eye movements in AD concluded that some of the basic oculomotor changes associated with AD are present in MCI and that eye movements may be useful in detecting the early stages of memory impairment (Molitor et al. 2015). However, these studies predominantly addressed a single domain of cognitive dysfunction (e.g., episodic memory or executive function), most included an exclusively amnestic MCI population (i.e., aMCI), and none examined these processes during completion of everyday tasks, which are subtly compromised in MCI but have been shown to be strongly associated with conversion to dementia. With regards to naturalistic action, the cognitive resources required are much more widespread and complex, and a more nuanced description of eye movements during these tasks is crucial to our understanding of the breakdown of everyday functioning in MCI and dementia. For example, the number of look-ahead fixations made by individuals with MCI during everyday action tasks has greater potential to capture coordinated aspects of planning, memory, and task knowledge than isolated paradigms such as the VPC and AS tasks. Further, findings of deviant eye movements in MCI across several studies (Crutcher et al. 2009; Lagun et al. 2011; Peltsch et al. 2014; Yang et al. 2013; Zola et al. 2013) suggest that eye movement data may reveal marked compensation in MCI relative to healthy populations during everyday tasks. Variability in this compensation across individuals with MCI may explain the controversy surrounding whether impaired everyday functioning is a feature of this syndrome (see Table 1).
There is clearly a need for more sensitive measures of early dysfunction that can aid with both early detection as well as increased understanding of the mechanisms associated with early action disruption, and we propose that eye-tracking methods are well suited to serve this need.

Naturalistic Eye Movements in Healthy Individuals

Consistency Across Individuals

Although not yet explored in MCI, characteristics of eye movements during completion of naturalistic tasks such as tea and sandwich making have been examined in healthy individuals. This body of work has revealed several important features of visual processes during these tasks to which eye movements of impaired individuals can be compared. Durations of fixations vary widely within individuals (Land et al. 1999), but both fixation position and duration as well as saccade magnitude show strikingly little variation in their distribution across individuals (Hayhoe et al. 2003; Land et al. 1999; Rothkopf et al. 2007). This suggests the potential to reveal a meaningful, inherent mechanism underlying visual processes during successful task completion, as fixation durations do not appear random or unique to an individual (Tatler et al. 2011). The duration of short fixations across individuals in a sandwich-making task was shown to be particularly reliable, and more importantly was shown to be even briefer than the time typically required to program a saccade. This suggests that a series of multiple very brief fixations reflects a pre-programmed sequence of saccades, indicating the implementation of a planning mechanism as opposed to independent reactions to stimuli (Becker and Jurgens 1979; Hayhoe et al. 2003). Most fixations during everyday tasks appear to constitute these shorter events, with longer fixations relating to prolonged actions that require continuous guidance such as pouring water into a cup (Hayhoe et al. 2003).

Interestingly, within individuals, saccade magnitude has been shown to vary significantly across different actions, and these may reflect the respective cognitive mechanisms at play. For example, saccades occurring during manipulation of a single object or within a single task step (e.g., transferring bread to the plate; within-action saccades) demonstrated a clear peak and tended to be relatively short, whereas saccades between task steps, during transfer of gaze between objects, or during search (e.g., between transferring bread to the plate and retrieving the peanut butter; between-action saccades) were larger and more variable. These differences likely represent distinct cognitive mechanisms that are associated with different types of saccades: between-action saccades are likely driven by stored task knowledge (i.e., schema) that inform the next object to be used in a sequence, whereas within-action saccades may be more strongly associated with direct visual cues or an “itch-like need” for the eyes to move small distances (Land and Hayhoe 2001, p. 3563). Interestingly, differences in within- and between-action saccade magnitude were similar across different everyday tasks (e.g., tea making and sandwich making), again suggestive of meaningful consistency of eye movements in naturalistic action (Land and Hayhoe 2001; Tatler et al. 2011).
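Assuming fixation data have already been annotated with the task step each saccade serves (an annotation that in published work is done by hand from video), the within-/between-action split described above can be sketched as a simple partition of saccade amplitudes:

```python
def split_saccades(saccades):
    """Partition saccade amplitudes by whether the task step changes.

    `saccades` is a list of (step_before, step_after, amplitude_deg)
    records. Saccades that stay within one step form the "within-action"
    set; saccades that cross steps form the "between-action" set.
    """
    within = [a for s0, s1, a in saccades if s0 == s1]
    between = [a for s0, s1, a in saccades if s0 != s1]
    return within, between


# Hypothetical sandwich-making records: step labels and amplitudes assumed
data = [
    ("spread_butter", "spread_butter", 3.1),  # small, within-action
    ("spread_butter", "get_jar", 18.5),       # large, between-action
    ("get_jar", "get_jar", 4.0),
]
within, between = split_saccades(data)
```

Comparing the two resulting amplitude distributions (e.g., their peaks and spread) is what reveals the short, peaked within-action saccades versus the larger, more variable between-action saccades reported by Land and Hayhoe (2001).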

Consistency in the relation between eye and hand movements during everyday tasks has also been observed across studies. Most hand movements during naturalistic action are accompanied by fixations to objects involved in the task step, with the exception of placing an object on a stable surface such as a table, which can likely be achieved using proprioceptive and peripheral detection or visual memory (Hayhoe et al. 2003). Closely timed eye-hand relations are more likely to occur within a single task step and have been described as “local” processes relying on dorsal visual regions that facilitate immediate reach and grasp actions. By contrast, eye-hand movements that occur between task steps may be governed by stored knowledge (i.e., schema) or goals that rely on the prefrontal cortex or the ventral visual stream (Morady and Humphreys 2011a). This point will be discussed again later in more detail. With regards to the sequence of events involving eye and hand movements, particularly between actions, it has been reported that in healthy individuals the eyes typically lead the hands, rather than reacting to motor manipulation (Land et al. 1999). Land and colleagues used the term “object-related action” (ORA), which they defined as “the sets of acts (including eye movements) associated with the current object of fixation,” (Land et al. 1999, p. 1315), to characterize temporal relations between visual and motor actions in tea making. They found that the order of ORA components typically involved an initial gross body movement, suggesting that the trunk is first to receive action information. Next, the first saccade to the intended object occurred, followed by initial signs of hand/arm movement.

Interestingly, Land et al. (1999) found “remarkable consistency” in the timing of the body, eyes, and hands (i.e., ORAs) during everyday tasks, again supporting the presence of a common program across individuals that may be disrupted in impairment. They reported that gaze shifted to the next object to be manipulated in the sequence an average of 0.61 s prior to completion of the previous motor act, which suggests the presence of a buffer that can store visual information for up to a second before its use and can inform hand movements for up to a second even in the absence of visual input (Land 2006; Land and Furneaux 1997; Land and Hayhoe 2001; Land et al. 1999). However, it is important to note that in both tea- and sandwich-making tasks, standard deviations of eye-hand latencies within individuals were quite large, with a number of instances in which the hands led the eyes or hand and eye movements occurred virtually simultaneously, particularly in the sandwich-making task (Hayhoe et al. 2003; Land and Hayhoe 2001). Hayhoe et al. (2003) interpreted this variability as a function of the opportunity to plan for motor action afforded by a continuously available scene along with the need to alternate control between hands. More work is necessary to explain differences in eye-hand latencies.
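The eye-hand latency measure just described can be operationalized as the interval between the first fixation on an object and the hand's arrival at it; the timestamps and record structure below are assumptions for illustration:

```python
def eye_lead_times(oras):
    """Per-ORA eye lead: hand contact time minus first-fixation time.

    `oras` is a list of (first_fixation_s, hand_contact_s) pairs.
    Positive values mean the eyes led the hands (the typical pattern);
    negative values mean the hands led the eyes.
    """
    return [hand_t - eye_t for eye_t, hand_t in oras]


# Hypothetical timestamps (seconds) for three object-related actions
oras = [(10.2, 10.8), (14.0, 14.5), (20.3, 20.1)]
leads = eye_lead_times(oras)  # approximately [0.6, 0.5, -0.2]
mean_lead = sum(leads) / len(leads)
```

Averaging such leads across ORAs yields values like the 0.61 s figure reported by Land et al. (1999), while the sign and spread of the individual values capture the within-individual variability noted by Hayhoe et al. (2003).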

In addition to specifying consistency in the properties of eye movements and the timing and order of eye and hand movements in naturalistic action, eye-tracking research has aimed to describe the typical specific roles served by the eyes during completion of everyday tasks. The primary functions of eye movements in healthy individuals have been described with relative consistency (Hayhoe 2000; Land 2006; Land and Hayhoe 2001; Land et al. 1999) and have been labeled by Land et al. (1999) as “locating,” “directing,” “guiding,” and “checking” (see Table 2 for a description of each). They found that a third of all fixations fell explicitly into these categories and many others appeared loosely associated with them. The classification outlined does not apply to other, non-naturalistic tasks such as copying of blocks (Ballard et al. 1992) or sketches or to continuous tasks such as reading text or music (Land 2006). This underscores the importance of examining eye movements in naturalistic tasks to understand everyday behavior and how it may be disrupted in impaired individuals rather than attempting to apply conclusions from vision research in other domains.

Table 2 The functions of eye movements in everyday activities (Land et al. 1999; Pelz and Canosa 2001)

Figure 1 represents a theoretical model for the flow of information during an ORA that incorporates the eye movement characteristics just described. Specifically, when a task step is activated, the semantic (i.e., schema) and spatial representations of task objects and actions inform the body and eyes to orient towards the relevant object. Actions are then performed under supervision of the eyes, which determine when a given criterion has been met and the step is complete. The spatial representation of objects is updated if objects in the scene have been moved. Once the step is complete, the next step is selected, and the process repeats. Directing, guiding, and checking/monitoring eye movements occur during the execution of a task step (i.e., within-action saccades), whereas locating/look-ahead fixations may occur at any point during the task but typically occur prior to or between task steps (i.e., between-action saccades) to maintain the spatial representation of objects (see Fig. 1).
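The flow through the model can also be rendered schematically as a control loop. The sketch below is only a restatement of the stages in Fig. 1, with each stage reduced to a line of the loop; the stubs and example task are invented for illustration:

```python
def run_task(schema, spatial_map, steps):
    """Walk the six-stage ORA cycle of Fig. 1 for each step in `steps`."""
    log = []
    for step in steps:                    # stage 1: task step activated
        obj = schema[step]                # schema supplies the target object
        loc = spatial_map[obj]            # LOCATING: spatial map consulted
        log.append(("orient", obj, loc))  # stage 2: DIRECTING gaze/body to object
        log.append(("act", step))         # stage 3: GUIDING the action visually
        log.append(("check", step))       # stage 4: CHECKING the completion criterion
        # stage 5: the spatial map would be revised here if objects were moved
    return log                            # stage 6: next step selected; loop repeats


schema = {"boil_water": "kettle", "pour": "cup"}
trace = run_task(schema, {"kettle": "counter", "cup": "shelf"},
                 ["boil_water", "pour"])
```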

Fig. 1
figure 1

Six stages of an object-related action (ORA) are numbered and depicted in ovals. Representations that influence performance are depicted in boxes and include “Schema” (i.e., long-term semantic knowledge of the task that informs the objects, actions, and goals/outcomes associated with each task step) and “Spatial Representations” (i.e., temporary representations of the locations of task objects in the environment that are constructed and modified during performance). The four types of eye movements are shown in bold capital letters adjacent to the stage in the task during which they are likely to occur. Eye movements within brackets (i.e., []) reflect within-action saccades; eye movements without brackets reflect between-action saccades. Dotted lines depict the flow of information during an ORA. The solid line from “Schema” to “Spatial Representation of Objects in Environment” depicts a process that may occur at any time, but typically occurs before or between ORAs

Top-Down Versus Bottom-Up Visual Processing

An important debate in vision research, particularly with regards to naturalistic action, is whether the eyes are driven by inherent salience of objects in the environment in a “bottom-up” approach (Itti and Koch 2000) or a targeted search is employed using planned, “top-down” mechanisms (for a review, see Tatler et al. 2011). Naturalistic vision is unique in that the observer actively initiates complex behaviors, in contrast to the predominant research paradigms in which a single visual or motor operation is repeated across trials; as such, more complex cognitive processes may drive eye movements during everyday tasks. This also provides further support for the notion that visual behaviors during naturalistic action may yield insight into the subtle cognitive dysfunction that occurs in MCI. The “locating” (Land et al. 1999) or “look-ahead” (Pelz and Canosa 2001) fixation category of eye movements reveals the influence of knowledge and planning mechanisms on visual behaviors and has received a great deal of attention because of its relevance to this important debate. Thus, the research on locating/look-ahead fixations will be the focus of our review on the categories of eye movements.

The term look-ahead fixation has been used to refer not only to initial search fixations that establish the location and identity of target objects at the start of the task, but also to fixations made during task execution to objects that are used later in the task. Through active location of objects necessary for future use, look-ahead fixations may serve as a mechanism to “stitch together” visual stimuli during the preparatory phase of everyday task execution, facilitating the presence of a seemingly continuous environment in both time and space that is not captured on the retina (Pelz and Canosa 2001). This representation is continuously updated throughout task execution (Hayhoe et al. 2003; Pelz and Canosa 2001). The brief fixations of shorter duration than the time required to program a saccade (previously described as a pre-programmed sequence) are likely based on this spatial memory representation, as well as the task goals (Becker and Jurgens 1979; Hayhoe et al. 2003). The notion that saccadic eye movements work to piece together a mental representation of a visual scene has been proposed by multiple investigators (Irwin 1991; O’Regan and Levy-Schoen 1983).
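One simple operationalization, assuming each fixation record carries the index of the task step in progress at that moment (the record format and step labels below are illustrative), flags look-ahead fixations as those landing on objects belonging to later steps:

```python
def lookahead_fixations(fixations, step_of_object):
    """Return fixations on objects whose task step lies in the future.

    `fixations`: list of (time_s, object, current_step_index) records.
    `step_of_object`: maps each task object to the index of the step in
    which it is used; unknown objects (index -1) are never look-aheads.
    """
    return [f for f in fixations
            if step_of_object.get(f[1], -1) > f[2]]


# Hypothetical sandwich-making data: object-to-step mapping is assumed
steps = {"bread": 0, "knife": 1, "peanut_butter": 2}
fixes = [(1.0, "bread", 0),          # fixation on the current step's object
         (1.4, "peanut_butter", 0),  # look-ahead to a step-2 object
         (5.0, "knife", 1)]          # again the current step's object
ahead = lookahead_fixations(fixes, steps)  # [(1.4, 'peanut_butter', 0)]
```

Counting such fixations per task, as suggested earlier in this review, would provide a candidate index of intact planning and task knowledge during naturalistic action.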

Additional features of look-ahead fixations support an influence of task knowledge and goals. Pelz and Canosa (2001) highlighted the findings that targeted objects are rarely fixated after their use, objects irrelevant to the task are rarely fixated, and the specific objects fixated vary based on task instructions. Further, look-ahead fixations comprise specific, targeted saccades and fixations to objects needed in the future rather than indiscriminate sweeping saccades to the full array of objects in a scene, strongly suggesting that they constitute a targeted search process using “top-down” mechanisms. Top-down influences also help explain the observation of eye movements described as “in path” fixations by Hayhoe et al. (2003), as these occurred primarily to task-relevant objects for future steps along the path of a saccade towards an object for an immediate task step. This suggests that the spatial representation that is being built by the visual system is highly relevant to the task plan and future goals. Further, it seems that this representation is continuously updated by the visual system during everyday task execution. Pelz and Canosa (2001) suggested that building a representation of needed objects may help to ease the processing demands on the visual system during complex activities.

Rothkopf et al. (2007) specifically examined the question of whether a top-down, task-driven or bottom-up, salience approach best characterized vision during everyday, goal-directed tasks. Using a virtual reality environment, they tracked individuals’ eye movements while they completed a series of approach and avoidance tasks involving target objects and obstacles. They demonstrated that the task goals consistently influenced participants’ behavior, with fixations on identical objects showing striking variability depending on whether the goal was to approach or avoid the object. Consistent with other studies (Ballard et al. 1995; Hayhoe et al. 2003; Johansson et al. 2001; Land 2006), Rothkopf et al. (2007) also found that the majority of fixations during task execution were made to task-relevant objects rather than those irrelevant to the task, implying the influence of task goals on visual search and gaze allocation. They also noted that their findings could not be accounted for by a saliency model proposed by Itti and Koch (2000), in which visual behaviors are influenced by salient visual features (i.e., bottom-up processes).

Another line of research that supports the notion that internal spatial representations are influenced by knowledge and goals describes the phenomenon of “change blindness,” or the failure of individuals to notice changes to the visual scene that occur during saccades (Hayhoe 2000; Irwin 1991; Irwin et al. 1990; O’Regan 1992). The inability to detect physical changes in the environment has been interpreted as evidence that only limited and highly relevant information is included in the internal visual representation of a scene (Hayhoe 2000). Others have provided further support for this notion, demonstrating the importance of task goals in determining what visual information is detected (Folk et al. 1992; Hayhoe et al. 1998, 2003; Wallis and Bülthoff 2000). According to Hayhoe (2000), change blindness is likely facilitated by task specificity, with task-irrelevant changes to a scene going unnoticed as a means of computational efficiency that prioritizes the task at hand. Prior research has revealed effects of attention and behavioral context on activity in visual cortex, even primary visual cortex (Gilbert 1998), supporting the theory that task goals influence what is perceived even at the most basic level.

It has also been widely demonstrated that fixations are often tightly locked to the ongoing task, with eye movements directly tracking the hands during object manipulation (Hayhoe 2000; Hayhoe et al. 2003; Land and Hayhoe 2001; Land et al. 1999; Pelz and Canosa 2001). This has led to the conclusion that visual information is obtained, at least to some degree, on an “as needed” basis rather than entirely proactively. Taken together with evidence that the eyes do, in fact, locate relevant objects prior to and during task completion, it may be the case that look-ahead fixations actively piece together a gross depiction of the scene that provides general location information for future use. During task completion, the eyes must still acquire specific information that hones this representation to facilitate both efficient and accurate task execution (Land and Hayhoe 2001).

Some have proposed that task conditions may influence the extent to which the eyes are influenced by top-down as opposed to bottom-up processes. For example, in a study using both sparse and cluttered scenes, fixations to task-relevant objects during initial search were more prominent in the sparse scene, whereas equal numbers of initial search fixations to relevant and irrelevant objects were observed in the cluttered condition (Hayhoe et al. 2003). In this study, irrelevant object fixations still markedly declined during task performance in the cluttered condition, likely representing a shift from target selection based on intrinsic salience (bottom-up) to an instruction-driven (top-down) approach during task execution (Land 2006). This suggests that perhaps a salience or bottom-up strategy is only employed if the visual system is overtaxed or relevant objects are less easily targeted. The bottom-up strategy may be the default or “back-up” strategy, with task-guided visual search taking precedence when possible or when behavior must be directed by a goal. Consistent with this account, Tatler et al. (2011) suggested that the visual system “might only prioritize surprising stimuli when there is no other pressing demand” (p. 13), supporting a shift in strategy based on available cognitive resources (see also Lavie 2005). This shift is also evident in other tasks such as driving, in which both task instructions and local context appear to influence fixations to different degrees during different task segments (Shinoda et al. 2001).

Naturalistic Eye Movements in Cognitive Impairment

As suggested, understanding the contribution of visual processes to the completion of everyday tasks in healthy individuals can help characterize functional difficulties experienced by individuals with cognitive impairment. For example, the task-driven, top-down characteristics of eye movements described previously suggest that degraded task knowledge due to illness or injury may result in more bottom-up, salience-driven eye movements and fewer look-ahead fixations that rely on intact mechanisms for planning upcoming actions (Forde et al. 2010). Bottom-up driven visual search also may lead to an increase in fixations to irrelevant, off-task objects. Importantly, these perturbations may not be evident in overt actions. The evaluation of such hypotheses is critical to establish an informed and testable theory of everyday action breakdown in neurocognitive disorders. Eye movements have been examined in dementia disorders such as AD, but studies have primarily focused on basic ocular changes and complex scene viewing/visual search, which are outside of the scope of the naturalistic focus of this review (for a review, see Molitor et al. 2015). Although the literature on naturalistic eye tracking in the context of impairment is sparse, this technology has been applied in several studies of individuals with severe cognitive impairment.

One case study described a patient, FK, who suffered carbon monoxide poisoning that caused widespread brain damage and impeded his ability to complete many everyday tasks (Forde et al. 2010). On a tea-making task in this study, FK committed 19 behavioral errors consisting of perseverations, step omissions, semantic, sequencing, and spatial errors, compared to no errors made by two healthy participants and an individual with AD. FK’s eye movements were similar to those of healthy individuals in many aspects including his ability to scan the environment and the time locking of fixations and use of fixated objects, but several important differences were noted. Specifically, he fixated on more task-irrelevant objects, did not make orienting or look-ahead-type fixations, and made shorter eye movements than normal during perseverative errors. It is also notable that FK’s fixations on task-irrelevant objects were made even during correct actions (Forde et al. 2010). The authors took these differences to suggest that FK’s actions were not successfully driven by knowledge of task actions and goals, resulting in increased competition among objects for response selection (Forde et al. 2010). This account was supported by prior evidence that FK’s stored knowledge of everyday tasks was impaired (Humphreys and Forde 1998). Increased competition for response selection as suggested by FK’s saccades to irrelevant objects may also explain his subsequent behavioral errors, emphasizing the contributory value of eye movement data to action research.

Eye movements during errors were evaluated for their potential to offer insight into the cognitive mechanisms that contributed to FK’s poor performance. Forde et al. (2010) interpreted the brief eye movements to both the object being used and to neutral locations during perseverations to imply high distractibility during these errors. Another interesting aspect of FK’s behavior was that although he omitted more actions than normal, his eye tracking data showed that he did fixate on subsequently omitted task objects. Similarly, despite committing semantic errors (i.e., substitutions), FK fixated on the correct target objects that should have been used in the task. These data argue against impairment at the level of perceptual selection associated with more posterior brain regions, which would likely have led to a reduced number of fixations on target objects. Instead, these features of FK’s eye movements suggest a breakdown at the level of action selection and resolution of object competition subserved by an anterior attentional system (Forde et al. 2010; Posner and Petersen 1990). Consistent with this dissociation, Forde et al. (2010) pointed out that fixating on the object of perseveration was not sufficient to prevent the error and concluded that response monitoring may not always occur even when visual attention is directed towards an object.

Another interesting aspect of the study by Forde et al. (2010) was the performance of the comparison patient, EC, who was diagnosed with AD. EC’s performance on the tea-making task was errorless, similar to the healthy control participants. Despite her intact behavioral performance, EC made strikingly fewer fixations overall than the healthy controls and even FK. She also had generally longer fixation durations, which may be explained by the reduced number of fixations compared to the other three participants. With regards to her eye-tracking profile, EC demonstrated a higher proportion of neutral fixations (e.g., looking out the window but not at relevant or irrelevant objects) and a lower proportion of relevant fixations compared to the healthy control participants. These data were not discussed by Forde et al. (2010), as FK was the focus of the paper. However, we suggest that EC’s data are quite fascinating and provide support for the potential to detect discrepancies in visual behavior in individuals with cognitive impairment and functional disability (i.e., a diagnostic criterion for AD) who do not commit overt behavioral errors in laboratory tasks of everyday activities.

Morady and Humphreys (2011a) described the eye movements associated with everyday task completion in another patient, BL, following a stroke affecting the left occipitotemporal cortex. BL suffered a range of neuropsychological impairments, including deficits in reading, object recognition, semantic knowledge, and executive functioning. Her perceptual processing, in contrast, was relatively preserved. Her performance of everyday tasks was shown to be severely impaired, even compared to brain-lesion comparison patients (Morady and Humphreys 2011b). On tasks of tea and sandwich making (Morady and Humphreys 2011a), BL made omission, sequencing, and spatial errors, as well as a “toying” error involving manipulation of an object that was not subsequently used, and she performed the sandwich-making task more slowly than comparison participants. With regards to her eye movements, BL showed comparable time relations between fixations and object use to control participants (two patient and two healthy controls), similar to FK. She also was comparable to controls in her number of directing, guiding, and checking eye movements (as defined by Land et al. 1999; see Table 2). However, unlike FK, BL’s number of look-ahead fixations was higher compared to controls, as was the number of “other” eye movements that did not fall into the four categories previously defined in this review (see Table 2). She also made more irrelevant fixations than all control participants. Further, BL’s eyes departed from objects earlier than control participants’ relative to action completion.

The authors suggested that these atypical eye movement patterns could be due to task knowledge deficits and resulting failures in task planning or to difficulty resolving object or action competition. The authors also noted that BL made multiple brief fixations on an object she erroneously “toyed with” and speculated that this reflected similar characteristics to FK’s eye movements during perseveration, representing a disconnect between attention-grabbing features of objects and top-down processes (Morady and Humphreys 2011a). Together, the case studies of FK and BL suggest a shift towards a more salience-based or bottom-up approach to everyday tasks when cognition is severely compromised. Although these studies included different variables, which makes direct comparison challenging, some consistencies emerged that have broad implications for visual processing during completion of everyday tasks. They suggest a dissociation between attentional processes directed towards “global” task features that govern between-step actions and those relevant to “local” processes, such as the timing of fixations and an upcoming reach/grasp. It is suggested that the former, thought to rely on prefrontal cortical networks, is disrupted in these cases with severe naturalistic action impairment whereas the latter, dependent on more dorsal visual regions, is spared (Morady and Humphreys 2011a).

Studies of eye movements in other clinical populations have suggested relative preservation of top-down influences during everyday tasks. In a study on eye movements during naturalistic action in schizophrenia, individuals with schizophrenia and healthy control participants completed a “familiar” sandwich-making task and an “unfamiliar” construction task (Delerue et al. 2013). The schizophrenia group completed both tasks more slowly than healthy participants, but they performed more poorly only on the unfamiliar task. The only group difference in scanning that emerged in the familiar condition was that schizophrenia patients displayed longer gaze durations on irrelevant objects than the control group, suggesting greater susceptibility to distraction even during a familiar everyday task. Group differences in scanning were primarily observed in the unfamiliar task such that the schizophrenia group made more fixations on distractors, or task-irrelevant objects. The authors concluded that this might reflect a susceptibility to object saliency in schizophrenia along with an inability to disengage from salient features to complete the task. The schizophrenia group also made fewer look-ahead fixations only in the unfamiliar task, suggesting a failure to establish an efficient planning strategy like that employed by healthy individuals only in a novel context. Finally, schizophrenia participants had to reference the display model to complete the unfamiliar task more frequently than healthy controls, indicating difficulty constructing a mental representation and then holding that representation in mind while performing the task (Delerue et al. 2013). 
With regard to everyday, naturalistic action, this study suggests that individuals with schizophrenia are slower but display visual behaviors similar to those of healthy individuals; only when task demands are increased by introducing a novel task with a less familiar action plan do susceptibility to feature salience, distractibility, inefficient planning, and working memory limitations appear to differentiate impaired from healthy individuals.

Together, these studies suggest that individuals with significant cognitive impairment retain some similarities to controls in their eye movements during everyday tasks, namely the time-locking of fixations and object use associated with local visual attention. However, they reveal several ways in which eye movements can differ from those of healthy individuals across disorders, including increased numbers of irrelevant fixations and differences in the number of look-ahead or orienting fixations, particularly when impairment is severe or task demands are high. A striking difference between the case studies presented was FK’s absence of look-ahead fixations relative to healthy individuals, in contrast to BL’s greater number of look-ahead fixations. One potential explanation for this discrepancy is a difference in the severity of injury between the two patients, with FK having sustained more widespread brain damage. Greater numbers of look-ahead fixations, as demonstrated by BL, may thus reflect attempts to compensate for impairment, facilitated by some preservation of task knowledge or planning capability, that diminish as impairment progresses and additional cognitive functions are lost. If this is the case, look-ahead fixations may be expected to increase in MCI and then decrease into dementia, and may therefore represent an effective target in examining visual behaviors in MCI. However, the progression of eye movement patterns may also vary by MCI subtype (i.e., with or without episodic memory impairment; single or multiple domain), which will be an important consideration for future studies.

Conclusion and Future Research Directions

MCI is a transitional period between normal aging and dementia during which functional difficulties are present and are associated with cognitive decline. Central to our understanding of these difficulties is the development of sensitive measures to characterize subtle functional changes and identify those at greatest risk for conversion to dementia. Examination of eye movements during completion of everyday tasks in healthy individuals and those with severe impairment has demonstrated the potential for this methodology to reveal the mechanisms underlying completion of complex daily tasks. Eye-tracking methodology has also been shown to capture subtle differences associated with cognitive changes in MCI. However, this has not yet been explored in naturalistic action, which involves more complex cognitive interactions than isolated visual tasks. An important initial direction for future eye-tracking research is the establishment of standardized naturalistic eye-tracking variables to promote comparison across studies. Currently, much of the literature contains qualitative descriptions of eye movements during everyday action, although early studies suggest that the number and timing of look-ahead fixations and the number of fixations to non-target objects may be the most promising indicators for initial research. As the field expands, the development of quantitative norms will be important to provide a valid and meaningful assessment standard.

Many important open questions exist, including whether changes in naturalistic eye movements can help predict risk for conversion to dementia, and whether eye tracking has the potential to reveal specific functional difficulties that can be targeted with intervention and prevention strategies. The sensitivity provided by eye-tracking technology may also allow for exploration of differences in functioning across the MCI spectrum that can aid in characterizing cognitive and functional heterogeneity within this population. Specifically, examination of the cognitive underpinnings of eye movement features will be important in bridging our understanding of neurodegenerative changes at the level of the brain with their impact on daily life. This is particularly important given the range of cognitive functions required for naturalistic action, including knowledge of task objects and goals, planning for efficient completion of everyday tasks, recollection of what has been completed previously, maintenance of broad task goals during execution of sub-goals, and facilitation of coordinated eye-hand movements in pursuit of these goals. Systematic investigation of these questions will help elucidate effective markers for early identification of those at risk for future decline. Early markers also might identify treatment targets, which could include specific cognitive processes (e.g., task knowledge) and their neural circuitry (e.g., prefrontal cortex) as well as atypical eye movements (e.g., increased saccades to irrelevant objects). Emerging research suggests that manipulating eye movements may improve performance on cognitive tasks (Thomas and Lleras 2007). Thus, investigations of visual behaviors may lead to a multitude of alternative intervention approaches to preserve independence and delay the functional disability associated with dementia.