
1 Introduction

The aim of this chapter is to examine how the concept of impairment has been applied in geriatric populations. In so doing, we will focus on impairments in cognition and in the performance of everyday behaviors as they are known to be age associated and interrelated. Moreover, impairments in cognition and everyday behavior are some of the greatest challenges faced by this population. As people live longer, more are likely to be affected by age-associated neurodegenerative diseases (e.g., Alzheimer disease, AD) resulting in a substantial number of cognitively impaired people requiring support and assistance in performing everyday behaviors (Gruenberg, 1977; Kramer, 1980). For these reasons, it is important to consider how cognitive impairment has been conceptualized, as well as factors that influence its expression.

We have chosen to examine the cognitive and functional impairments associated with later life within the disablement process, a broad conceptual framework emerging from discussions and research on disability (Verbrugge & Jette, 1994). We have chosen to do this, rather than limiting ourselves to the concept of impairment alone, because research with geriatric populations has revealed that the conceptualization and identification of impairment is heavily influenced by a myriad of factors. These factors include characteristics of the individual (e.g., biological, psychosocial, socio-demographic) and actions that may be taken to reduce or accentuate impairment. These concepts are central to the disablement process and to understanding how behavioral interventions can be used to optimize functioning and well-being, minimize the risk of disability, and prevent the development of dysfunctional family or social functioning. We will address the concept of impairment, the many influences (e.g., lifestyle, psychosocial, compensatory) that may affect the consequences of impairment for an individual, and whether or not benefits from interventions are likely to be derived within the context of the conceptual framework of the disablement process.

The disablement process, a “sociopsychobiological” model of disability (Barberger-Gateau, Fabrigoule, Amieva, Helmer, & Dartigues, 2002), describes a pathway from pathology to various kinds of functional outcomes and incorporates psychological, social, and environmental factors that modify or alter the proposed pathway. According to Verbrugge and Jette (1994), “disablement” refers to “impacts that chronic and acute conditions have on the functioning of specific body systems and on people’s abilities to act in necessary, usual, expected and personally desired ways in their society” (p. 3). The term “process” is used to acknowledge the dynamic interplay of factors that affect the direction, pace, and patterns of change over time.

The main pathway of the disablement model consists of four interrelated components: pathology, functional impairments, functional limitations, and disability (see Fig. 5.1). In this context, pathology refers to the biological and physiological abnormalities medically labeled as disease or injury. Pathology leads to functional impairments, defined as dysfunctions and significant structural abnormalities in specific body systems (e.g., neurological, cardiovascular, musculoskeletal) that have consequences for mental, physical, or social functioning. These consequences are referred to as functional limitations and are defined as restrictions in physical actions, such as mobility, discrete motions and strength, and mental actions, such as cognitive and emotional functions (Verbrugge & Jette, 1994). The final consequence of the pathway is disability, or difficulty performing everyday activities of daily living (i.e., basic and instrumental) and work-related activities.

Fig. 5.1 Extended Disablement Process model

This main pathway, then, posits the sequence of events that lead from pathology to disability when medical factors are considered, and aids in distinguishing between constructs. For example, in the context of this model, “functional impairment” refers to dysfunctions or structural abnormalities in specific body systems (e.g., metabolic, cardiovascular, neurological, renal) that are identified through clinical examinations, laboratory tests, imaging procedures, and symptom reports. The term “functional limitation” is used to refer to restrictions in physical and mental activities (e.g., trouble seeing, short-term memory problems) that are frequently identified as “impairments” outside the context of this model. For example, the International Classification of Functioning, Disability and Health (ICF; World Health Organization, 2001) refers to impairment (or significant deviation or loss) of body functions (i.e., physiologic functions of body systems including psychological functions) and structures (i.e., anatomical parts of the body). Similarly, what is described as disability in the disablement process model is often described as functional impairment or activity limitations in other contexts. To further extend the model, the social disadvantage resulting from an impairment and/or a disability has been referred to as restrictions of participation within the ICF (WHO, 2001). These distinctions begin to allow us to differentiate one set of consequences, resulting from an underlying pathology, from another.

However, it is well known that relations among pathology, impairments, limitations, and disability are not straightforward and are influenced by a myriad of other factors, many of which are psychosocial in nature. These include characteristics of the individual that affect the presence and severity of impairment, functional limitations, and disability (i.e., risk factors) (e.g., van Gool et al., 2005). In addition, actions or interventions may be taken in response to age-associated changes that mitigate or accentuate their impact. These may be internally generated (operate within a person) or may be dependent on others (external to the individual).

In practice, it is often disability and/or functional limitations that bring older adults to the attention of clinicians. The clinician’s role often is to determine the underlying impairments, abnormalities in specific body systems that give rise to these limitations and/or disabilities. For example, it may be determined that an older adult who presents with mild memory impairment (functional limitation) and difficulty handling finances (disability) is in the early stages of dementia (impairment). Medical investigations and a detailed clinical history (risk factors) would ensure that reversible forms of dementia are examined for and would help clarify the differential diagnosis. If no identifiable medical foundation for the dementia were evident, a presumptive diagnosis of AD (pathology) might be given.

At this point, the clinical focus may shift from diagnosis to interventions aimed at minimizing functional limitations and, consequently, disability. An important consideration in many chronic disease conditions, such as AD, is that these interventions are taking place within the context of progressive underlying pathology that is associated with progressive functional decline. This should not deter intervention efforts, but emphasizes the need to be mindful of expected patterns of progressive decline associated with various disorders and the factors that may reduce or accentuate the speed of decline or the manifestation of functional limitations and/or disability. Disability greater than that warranted by existing impairment and functional limitations has been referred to as “excess disability” (Brody, Kleban, Lawton, & Silverman, 1971; Rogers et al., 2000) and carries with it the implication that vigilance is required to ensure all efforts are undertaken to maximize functional capabilities.

We have chosen to structure the remainder of this chapter in accordance with this clinical process (functional limitations/disability then impairment/pathology), in contrast to the sequence typically described in association with the disablement process model (pathology through to disability). As the focus of this chapter is on the functional limitations (i.e., cognitive impairments) and disability associated with later life, we will begin by examining key issues relevant to understanding the links between functional limitations and disability arising from the literature. We will focus on selected functional impairments (i.e., medical disorders) commonly seen in geriatric populations that differ with respect to expected patterns of progressive decline, risk factors that may influence the course of the disablement process or predispose an individual to cognitive impairment, and underlying pathology. We will then discuss intraindividual and extra-individual interventions that can be used to optimize functioning and well-being, minimize disability, and/or prevent the development of dysfunctional family or social functioning.

2 The Process

2.1 Functional Limitations/Disability

A number of different approaches may be taken to the identification of impairments in cognition (functional limitations) and everyday behaviors (disability) for older adults. In general, these are the same approaches to deficit measurement identified by Lezak, Howieson, and Loring (2004) that pertain to all age groups. However, some specific caveats need to be considered that are particular to this age group and the types of disorders commonly encountered.

2.1.1 Identification of Cognitive Impairment

As is typical of clinical measurement across a number of fields and age groups, measures designed to assess relevant cognitive functions are administered and often the person’s performance during the test administration is observed to provide information about the individual’s approach to the task, tolerance levels, personal style, and coping skills. In addition, characteristics of speech and language and abnormalities in movement that may be clinically significant can be observed. Information is also gathered through interviews with the older adult and/or a person familiar with this person’s daily activities (e.g., family member or close friend). Standardized tests (i.e., tests administered and scored in a set and consistent manner) are used to gather objective data about a person’s performance that permit meaningful comparisons with others (i.e., standardization samples), to assess change over time within an individual, or in relation to a “gold standard” or specific criterion of achievement (Lezak et al., 2004).

Measures of cognitive and everyday behaviors are most commonly interpreted in relation to the performance of a standardization sample, a representative group of people administered the measure in the standardized fashion. Where the scores on the measure are normally distributed in the adult population, an individual’s performance can be evaluated in relation to norms based on the performance of the standardization sample. Many measures of cognitive functions are affected by age and education (or vocational achievement) and the effects of these variables need to be considered when generating norms, and in the interpretation of an individual’s performance in relation to the norms. Although it has often been common practice to use norms adjusted for age and education, Sliwinski, Buschke, Stewart, Masur, and Lipton (1997) and Sliwinski, Hofer, Hall, Buschke, and Lipton (2003) question this approach in the context of dementia diagnosis. Because it has been repeatedly observed that age and education are risk factors for dementia (see section below on Impairment; Bachman et al., 1993; Braak et al., 1999; Canadian Study of Health & Aging Working Group, 1994; Canadian Study of Health & Aging Working Group, 2000; Shaji, Promodu, Abraham, Roy, & Verchese, 1996), Sliwinski et al. (2003) argue that using norms corrected for these factors would compromise diagnostic accuracy by removing predictive variance. They propose, instead, the use of uncorrected raw scores from the adult population as a whole taken in conjunction with demographically based dementia base-rates when seeking information relevant to the diagnosis of dementia (diagnostic norms). On the other hand, when the purpose of the assessment is to describe the cognitive strengths and weaknesses of the older individual, Sliwinski et al. (1997, 2003) support the use of demographically corrected scores (comparative norms).
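
To make the distinction between comparative and diagnostic norms concrete, the sketch below (in Python; the normative means, SDs, and raw score are hypothetical illustrations, not values drawn from the studies cited) shows how the same raw score can appear unremarkable against demographically corrected norms yet noteworthy against norms for the adult population as a whole:

```python
def z_score(raw, mean, sd):
    """Standardize a raw test score against a normative mean and SD."""
    return (raw - mean) / sd

# Hypothetical raw memory score for an 80-year-old with limited schooling
raw = 22

# Comparative norms: corrected for age and education, used to describe
# an individual's cognitive strengths and weaknesses
z_comparative = z_score(raw, mean=24.0, sd=4.0)   # -0.50: broadly average for peers

# Diagnostic norms (in the spirit of Sliwinski et al.): uncorrected scores
# referenced to the adult population as a whole, interpreted together with
# demographically based dementia base rates rather than corrected away
z_diagnostic = z_score(raw, mean=30.0, sd=5.0)    # -1.60: flags possible decline

print(f"comparative z = {z_comparative:.2f}, diagnostic z = {z_diagnostic:.2f}")
```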

Even taking these issues into account, the use of norms to identify impairment requires the selection of a cut-off point, such as defining scores ≥1.5 or 2.0 SD below the mean of a cognitively normal sample as being impaired. This approach assumes that impaired people show quantitative differences rather than differences of kind. An advantage to this approach is that no matter how difficult a cognitive measure is, roughly the same number of people will be identified and this will largely determine the prevalence of impairment in the population. The disadvantage is that there will almost always be an overlap in scores between the normal population and the group with cognitive impairment, with a percentage of the normal population being falsely classified as impaired (e.g., approximately 7 % of a normal sample will fall below −1.5 SD). A related issue is how many measures in a particular cognitive domain must show impairment before impairment is determined. Petersen (2004a), in discussing criteria for identifying mild cognitive impairment (MCI), a classification thought by some to capture those individuals likely to develop AD, notes that “multiple more challenging memory instruments are required to detect the subtle memory deficits seen in early MCI.” Similarly, Blackford and La Rue’s (1989) definition of Late Life Forgetfulness requires performance of 1–2 SDs below the mean established for age on 50 % of memory measures administered. However, in practice, few cognitive assessment batteries have been co-normed (i.e., simultaneous attainment of data on multiple tests for the same cohort; Smith & Ivnik, 2003) and when such norms have been developed, it is common for “normal” participants to show impaired performances on one or more measures within a battery (Tuokko & Woodward, 1996).
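
The roughly 7 % figure follows directly from the normal curve: with a fixed z-score cut-off, the proportion of a cognitively normal sample expected to fall below it is simply the cumulative probability at that cut-off, whatever the test’s difficulty. A minimal sketch of this arithmetic (in Python, using scipy; not part of the original text):

```python
from scipy.stats import norm

# Proportion of a normally distributed, cognitively normal sample expected
# to score below common impairment cut-offs, irrespective of test difficulty
for cutoff in (-1.0, -1.5, -2.0):
    pct = norm.cdf(cutoff) * 100
    print(f"z <= {cutoff:+.1f} SD: {pct:.1f}% of the normal sample")

# z <= -1.5 SD captures about 6.7% of normals ("approximately 7 %");
# z <= -2.0 SD captures about 2.3%.
```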

Another approach to the interpretation of scores on measures that are normally distributed in the adult population is to examine differences between scores obtained for an individual on the same standardized measure at different points in time. This information may be particularly relevant for older adults as (1) more normative change in cognitive functions is expected in older age groups than in younger samples, (2) inherent in the diagnosis of dementia is recognition that the individual’s cognition and behavior have changed over time, and (3) being able to demonstrate that interventions may alter the rate at which cognitive functions change in specific forms of dementia (e.g., AD) is an important goal. However, as yet, there is a lack of information about the appropriateness of different change measurement methods, the validity of neuropsychological measures for studying change in older adults, and the amount of test score change that can be considered normal (or abnormal) among older adults over clinically relevant intervals (Frerichs & Tuokko, 2005). Methods for measuring change have been discussed for over 50 years (e.g., Harris, 1963; Lord, 1957, 1958; McNemar, 1958; Payne & Jones, 1957) and continue to be topics of debate (e.g., Crawford & Howell, 1998; Hageman & Arrindell, 1999; Hsu, 1989; Jacobson & Truax, 1991). Our own research suggests that normal change in older adults’ memory test performance can be accurately classified using change score methods (Frerichs & Tuokko, 2005). Moreover, diagnostic change was significantly associated with a number of different change score methods, but differed in strength of association depending on the memory measure under investigation. These findings stand in contrast to those of Ivnik et al. (2000) who concluded that reliable change in test scores did not contribute to dementia diagnosis in older adults beyond chance levels. Given that these studies differed markedly in the samples that were examined, the design of the study, and the measures used, additional research is needed to examine and validate change score methods in other samples of older adults to determine whether these methods can assist in the detection of particular neurodegenerative disorders.
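
As one concrete illustration of the change-score methods under debate, the reliable change index of Jacobson and Truax (1991), cited above, scales an observed retest difference by the standard error of that difference. The sketch below uses hypothetical scores and reliability values; it is not drawn from the studies discussed:

```python
import math

def reliable_change_index(score_t1, score_t2, sd_baseline, reliability):
    """Jacobson & Truax (1991) reliable change index (RCI).

    RCI = (x2 - x1) / SE_diff, where SE_diff = sqrt(2) * SEM and
    SEM = SD * sqrt(1 - r_xx). |RCI| > 1.96 is conventionally taken as
    change unlikely to reflect measurement error alone.
    """
    sem = sd_baseline * math.sqrt(1 - reliability)
    se_diff = math.sqrt(2) * sem
    return (score_t2 - score_t1) / se_diff

# Hypothetical example: a memory score drops from 45 to 38 on retest,
# with a normative SD of 10 and test-retest reliability of 0.85.
rci = reliable_change_index(45, 38, sd_baseline=10, reliability=0.85)
print(f"RCI = {rci:.2f}")  # about -1.28: within the range of measurement error
```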

Although many measures of cognitive functioning provide scores that are normally distributed in the adult population, this is not true for some domains of cognitive functioning. In some instances, an underlying assumption of the measure is that all persons of a certain age (e.g., adults) will manifest these capabilities as they are considered rudimentary components of behavior (e.g., following simple instructions). If the task cannot be performed, impairment is assumed. This is a form of criterion-referenced testing (Anastasi, 1988) where performance is evaluated in terms of achievement on the measure, not in relation to other people. In criterion-referenced testing, a particular score on a reference test may be selected and designated as an indication of “significant” impairment. This approach is more commonly used in the field of occupational therapy, where performance of everyday behavior is of particular concern (see below).

By definition, the identification of functional limitations (e.g., poor performance on measures of cognitive functions) and disabilities is central to criteria for cognitive disorders. For example, in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5, American Psychiatric Association, 2013), diagnoses of neurocognitive disorders are all based on changes in defined cognitive domains that impact everyday activities. Neurocognitive disorders are then further subclassified according to underlying pathology (e.g., AD, vascular). When cognitive impairment is evident but does not interfere with everyday activities, a variety of other sets of criteria may be employed. For example, the DSM-5 (American Psychiatric Association, 2013) provides categories such as “mild neurocognitive disorder” linked to underlying pathology or etiology, “multiple etiologies” or “unspecified.” The International Classification of Diseases-10 (ICD-10; World Health Organization, 1993) provides a classification for Mild Cognitive Disorder to capture objective evidence of decline in cognitive performance not attributable to other mental or behavioral disorders identified in ICD-10.

Petersen (2004b) proposed an algorithm for identifying Mild Cognitive Impairment, a hypothesized interim state between normal and abnormal cognitive functioning indicative of incipient dementia. According to Petersen (2004b), MCI is identified when: (1) an individual presents with a cognitive complaint (either subjective or by proxy), (2) a determination of abnormal cognitive function in relation to age and education is established after clinical examination, (3) the individual’s cognitive functioning represents a decline from previous function, and (4) the individual exhibits intact activities of daily living (ADLs). Once the presence of MCI has been established, the type of MCI can be further subdivided based on the presence or absence of memory impairment into amnestic MCI (aMCI) or non-amnestic MCI (naMCI). These types can be further subdivided into aMCI single domain (aMCIsd; memory impairment only), aMCI multiple domain (aMCImd; memory impairment plus other cognitive impairment), naMCI single domain (naMCIsd; impairment in a single non-memory domain), and naMCI multiple domain (naMCImd; impairments in multiple domains other than memory). MCI, then, is cognitively heterogeneous with subgroups that differ with respect to cognitive profiles. In addition, MCI appears to be etiologically heterogeneous, and there is some promising work linking etiologic subtypes to cognitive subgroups using neuroimaging techniques and genetic markers (Smith, Machulda, & Kantarci, 2006; Wilson, Aggarwal, & Bennett, 2006; Wolf & Gertz, 2006).
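
Petersen’s subtyping scheme amounts to a simple decision rule once criteria (1)–(4) for MCI itself are met. The sketch below is a simplified rendering (the domain names are illustrative, and the judgment of whether a domain is “impaired” is assumed to have been made clinically):

```python
def classify_mci_subtype(impaired_domains):
    """Classify MCI subtype from a set of impaired cognitive domains.

    Assumes the general MCI criteria (cognitive complaint, abnormal
    cognition for age/education, decline from previous function,
    intact ADLs) have already been established.
    """
    memory_impaired = "memory" in impaired_domains
    nonmemory = impaired_domains - {"memory"}

    if memory_impaired:
        return "aMCI multiple domain" if nonmemory else "aMCI single domain"
    if len(nonmemory) > 1:
        return "naMCI multiple domain"
    if len(nonmemory) == 1:
        return "naMCI single domain"
    return "no MCI subtype (no impaired domains)"

# Hypothetical examples
print(classify_mci_subtype({"memory"}))                     # aMCI single domain
print(classify_mci_subtype({"memory", "executive"}))        # aMCI multiple domain
print(classify_mci_subtype({"language"}))                   # naMCI single domain
print(classify_mci_subtype({"attention", "visuospatial"}))  # naMCI multiple domain
```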

Although these sets of criteria for cognitive disorders are useful, at least conceptually, few specify procedures for identifying cognitive impairment but instead involve the application of clinical judgment based on the overall impression (Petersen, 2004a). Criteria for Neurocognitive Disorders, as outlined in the DSM-5, give no specific direction as to the meaning of impairment beyond “modest” or “significant” cognitive decline from a previous level of cognitive performance. The National Institute on Aging-Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease (McKhann et al., 2011) specify the presence of deficits in two or more areas of cognition as established through a combination of history-taking from the affected person and a knowledgeable informant, and an objective cognitive assessment (i.e., mental status examination or neuropsychological testing). The major disadvantage of relying on clinical judgment is that a broad understanding of brain–behavior relations is required and a number of factors (e.g., risk and protective factors) need to be taken into consideration. This will affect the reliability with which cognitive impairment is identified (Tuokko, Gabriel, & The CSHA Neuropsychology Working Group, 2006).

Measures of everyday behavior vary in terms of content and method. Content refers to whether a measure is more global (i.e., fewer questions per domain, spanning a number of domains) or specific (i.e., many questions per domain, usually focusing on only one domain). Method refers to the manner in which information is collected from participants (i.e., whether data is collected in a subjective or objective manner). Most commonly employed measures of everyday behavior are subjective, relying on self-report or, when there is reason to believe participants may not be able to accurately self-report, the report of a knowledgeable informant (Diehl, 1998; Fillenbaum, 1985, 1987a, 1987b; Lawton & Brody, 1969). Moreover, most of these measures are global in nature, spanning a number of domains with few questions per domain. Typically, questions relevant to each domain are evaluated on a 3- or 4-point scale. For example, a question relevant to the ability to transport oneself outside of walking distance might read, “can you use public transportation: (a) without help, (b) with some help, (c) not at all?” (Willis, 1996). Self-report measures of everyday behavior tend to focus on what is happening rather than why. They provide minimal information on concomitants and causes of incapacities in particular domains. Asking an older adult whether they can transport themselves does not provide information as to why that may be the case. For example, the self-reported inability to transport oneself may be due to immobility, or a lack of knowledge of the local bus schedule.

Moreover, a distinction can be made between a person’s intrinsic ability (doing an activity without personal or equipment assistance) and functional ability (doing an activity with personal or equipment assistance). Many people with cognitive impairment can continue to perform many activities of daily living if provided with minimal support and assistance. For example, making use of direct deposit and automatic withdrawal banking functions can alleviate concerns about paying bills on time for people who may have memory difficulties. A final distinction can be made between a person’s ability to perform everyday tasks and her/his understanding of her/his limitations and the consequences of these limitations. This distinction is central to compensatory and adaptive processes (see Intraindividual Interventions, below).

2.1.2 Relations Among Impairments in Cognitive and Everyday Functions

As noted earlier, impairments in both cognitive and everyday functions are central to the definition of dementia, and their co-occurrence is expected in this context. However, a number of studies have shown a clear co-occurrence of cognitive impairments and disabilities in samples of older adults without dementia (Barberger-Gateau, Fabrigoule, Rouch, Letenneur, & Dartigues, 1999; Black & Rush, 2002; Njegovan, Man-Son-Hing, Mitchell, & Molnar, 2001; Steen, Sonn, Hanson, & Steen, 2001). It appears that progressive cognitive decline is associated with a natural hierarchy of loss with instrumental activities of daily living (IADLs) (e.g., shopping, banking, and cooking) being lost at higher levels of cognitive functioning than basic ADLs (e.g., eating, dressing, and walking) (Njegovan et al., 2001). In addition, strong associations have been found between measures assessing a broad range of cognitive domains and dependency in four IADLs (i.e., telephone use, use of transportation, medication intake, and handling finances) (Barberger-Gateau et al., 1999). Processing speed was associated with performance on each IADL, and specific independent associations between cognitive domains and individual IADLs were also noted. For example, transportation was also related to visuospatial perception and attention; medication intake was also associated with memory; and handling of finances was the most heavily cognitively mediated, being associated with conceptual abilities, orientation, and memory as well as processing speed.

In studies of people identified with MCI, it is clear that they experience difficulty with a number of household and other everyday activities (Albert et al., 1999; Bassett & Folstein, 1991). Artero, Touchon, and Ritchie (2001) found the overall prevalence of impairment in everyday activities for people with MCI to be 30.8 %. The domains with which people with MCI experienced the most difficulty were walking (18 %), bladder control (16.1 %), bathing (7.7 %), and use of the telephone (7.5 %).

There remains controversy in the literature concerning the temporal relations between cognitive impairment and everyday functions. Some longitudinal studies suggest that cognitive impairment occurs first (Greiner, Snowdon, & Schmitt, 1996; Moritz, Kasl, & Berkman, 1995; Steen et al., 2001), while others suggest that both cognitive impairment and disability may show roughly parallel progression (Barberger-Gateau, Dartigues, & Letenneur, 1993). For example, Artero et al. (2001) noted that, over a 3-year follow-up interval, decline in language and visuospatial skills corresponded to an overall drop in activity performance, with visuospatial deficits being most strongly related to decline on a number of specific everyday tasks (i.e., dressing, going to bed, use of telephone, mobility, toileting (bladder and bowel), bathing, dental hygiene). Our own work in this area (Tuokko, Morris, & Ebert, 2005) suggests that cognitive impairment and disability may be seen independently, but the likelihood of developing disability after cognitive impairment is high.

2.2 Functional Impairment and Pathology

In the original model of the Disablement Process put forward by Verbrugge and Jette (1994), the development of disability is initiated by pathology. Diseases and disorders affecting many different body systems (e.g., pulmonary, renal, hepatic) can adversely influence cognitive functioning (e.g., Armstrong & Morrow, 2010; Butters, Beers, Tarter, Edwards, & van Thiel, 2001; Lehman, Pilich, & Andrews, 1993; Salmon, Butters, & Heindel, 1993). However, we have chosen to limit our discussion here to the pathological processes of diseases affecting the brain (e.g., abnormal biological or biochemical changes), many of which cannot be measured directly until death (e.g., Poser et al., 1999). For instance, despite technological advances in medicine, extracellular β-amyloid senile plaques and intracellular accumulations of neurofibrillary tangles, the neuropathological markers of AD, are only identified postmortem. As such, only presumptive diagnoses of Possible and Probable AD (based on NINCDS-ADRDA criteria) may be assigned premortem (McKhann et al., 2011). Given this substantial limitation, we have elected to focus on the disease processes that affect brain function, resulting in measurable cognitive changes in the geriatric population. We have chosen to classify disorders leading to cognitive impairment in old age according to their progression (e.g., rapid deterioration, stepwise decline, maximal neurologic deficit at onset, progressive decline, reversible with intervention, variable; Tuokko & Hadjistavropoulos, 1998).

2.2.1 Rapid Deterioration

2.2.1.1 Delirium

Delirium or Acute Confusional State (ACS) is an acute condition resulting from a general medical condition, substance intoxication or withdrawal, exposure to toxins, or medication use, alone or in combination. To receive a diagnosis of ACS, a person must not meet the criteria for dementia. ACS is especially prevalent among elderly persons who are hospitalized (10–30 % point prevalence), living in nursing homes at age 75 or older (60 %), or terminally ill (80 %). In general, 20–25 % of elderly persons admitted to hospital are delirious upon arrival or develop ACS while hospitalized (Lipowski, 1994). ACS serves as a marker for serious illness in the elderly and necessitates emergent care. Although a full recovery is possible following treatment of the underlying condition, elderly persons typically continue to exhibit residual deficits. In the elderly, ACS due to a general medical condition is also associated with a high risk of mortality (15–30 % die within 30 days of hospitalization; Lipowski, 1994).

2.2.2 Maximal Neurologic Deficit at Onset

2.2.2.1 Cerebrovascular Disease and Vascular Dementia

Cerebrovascular disease (CVD) is associated with significant cognitive and physical deficits. The cognitive deficits are often the result of an acquired dementia (i.e., vascular dementia (VaD)) resulting from varied cerebrovascular incidents (e.g., stroke, cerebral hypoperfusion causing anoxia; Onyike, 2006). VaD accounts for approximately 13 % of the dementias in the Canadian population (Ebly, Parhad, Hogan, & Fung, 1994). The occurrence and development of VaD is dependent upon the type, severity, and location of the cerebral infarct. Moreover, VaD and Alzheimer’s disease (AD) pathology often coexist, resulting in a diagnosis of mixed dementia. The severity of dementia is often higher in persons with mixed dementia. For example, data from the Nun Study reveal significantly poorer cognitive performance among Sisters whose brains at autopsy met the neuropathological criteria for AD and contained infarcts (Snowdon et al., 1997). In his review of CVD and dementia, Onyike (2006) suggests that AD may be a symptom of VaD, given arguments that sporadic AD is due to cerebral hypoperfusion (de la Torre, 2004). de la Torre argues that, despite its popularity, research does not support the amyloid hypothesis (i.e., deposits of amyloid-β-peptide and neurofibrillary tangles are the cause of progressive neurodegeneration in AD). Rather, he argues that evidence supports a vascular hypothesis wherein age and vascular risk factors create a condition of cerebral hypoperfusion, thereby affecting cellular energy and resulting in cognitive impairment, neurodegeneration, and ultimately, AD (de la Torre, 2004).

2.2.3 Progressive Decline

2.2.3.1 Major Neurocognitive Disorders

Major neurocognitive disorders may be due to several neurologic diseases including Alzheimer’s disease, Parkinson’s disease, Lewy body disease, or frontotemporal lobar dementia. The prevalence of dementia varies from 1.4–1.6 % in persons aged 65–69 and increases to 16–25 % in persons 85 years and older (American Psychiatric Association, 2013). In the Canadian Study of Health and Aging (CSHA, 2000), the prevalence of dementia was shown to increase from 2.4 %, to 11.1 %, to 34.5 % in persons aged 65–74, 75–84, and 85+ years, respectively. Dementia is defined as a progressive, stable, or remitting cognitive disorder that is not better accounted for by delirium. It is characterized by cognitive deficits including memory impairment, and at least one of executive dysfunction, aphasia, apraxia, or agnosia. The symptoms must represent a decline from premorbid functioning and cause clinically significant impairment in social and/or occupational functioning.

2.2.3.1.1 Alzheimer’s Disease

AD is the most prevalent of the dementias, accounting for approximately 60 % of all dementias (Terry, 2006). The prevalence of AD increases with age (e.g., 0.6 % in males aged 65 compared to 36 % in males aged 95 years). AD is a progressive dementia with an average survival time of 8–10 years (American Psychiatric Association, 2013). The neuropathological markers of AD, as seen at autopsy, include cerebral atrophy (especially in the temporal and parietal lobes), loss of cholinergic neurons in the Nucleus Basalis of Meynert, abnormal intracellular accumulations of tau protein in the form of neurofibrillary tangles (NFTs), abnormal accumulations of cellular debris and β-amyloid protein in the form of extracellular senile plaques (SPs), and amyloid deposits in the arteries and arterioles. NFTs are typically found in the hippocampus, entorhinal cortex, and neocortex of persons with AD. SPs are found in the neocortex and mesial temporal cortex. The severity of dementia is reported to increase with the distribution of NFTs and SPs (Terry, 2006).

2.2.3.1.2 Parkinson’s Disease

Parkinson’s disease (PD) is a movement disorder characterized by bradykinesia (slowed movement), rigidity, resting tremor, and postural instability. The neuropathological underpinning of Parkinson’s disease is the degeneration of dopamine neurons in the pars compacta region of the substantia nigra. The disease is also marked by neuronal Lewy body inclusions and adrenergic and cholinergic neuronal atrophy. Over an 8-year period, 78.2 % of persons with PD developed dementia (Aarsland et al., 2001). PD is estimated to affect 2 % of persons over 65 years of age, 20–40 % of whom have comorbid depression (Lieberman, 2006).

2.2.3.1.3 Lewy Body Dementia

Lewy bodies, eosinophilic inclusions in neuronal cytoplasm, were first identified in the brains of patients with Parkinson’s disease. In contrast to Parkinson’s disease dementia, where patients are diagnosed with PD more than 1 year before the onset of dementia symptoms, Lewy body dementia (LBD) is characterized by dementia early in the course with some features of PD (McKeith et al., 2005).

The distribution of alpha-synuclein Lewy bodies determines the type of pathology: brain stem-predominant, limbic, or diffuse neocortical (McKeith et al., 2005). LBD shares several neuropathological markers with other forms of dementia. Specifically, Lewy bodies are present in the cortex and basal ganglia of both PD and LBD; cortical and subcortical dopaminergic deficits due to atrophy of substantia nigra neurons are observed in both PD and LBD; and cholinergic deficits are observed in both LBD and AD (Selwa & Gelb, 2005).

2.2.4 Variable

2.2.4.1 Frontotemporal Lobar Dementia

Frontotemporal lobar dementia (FTD) is due to the degeneration of the frontal and temporal lobes of the brain. FTD accounts for approximately 5–15 % of all dementias (Selwa & Gelb, 2005) and progresses more rapidly than AD (i.e., mean survival times post-symptom onset of 8.7 ± 1.2 years and 11.8 ± 0.6 years, respectively; Robertson et al., 2005). The average age of onset for FTD is 40–60 years (Tuokko & Hadjistavropoulos, 1998). Although FTD is a progressive dementia, it is also described as having a variable course due to the fluctuating cognitive symptoms of the disorder (Tuokko & Hadjistavropoulos, 1998). FTD may present with personality, behavior, executive, or language (i.e., primary progressive aphasia) deficits. Four variants of FTD have been isolated: behavioral/dysexecutive FTD (a frontal lobe variant), semantic FTD (a temporal lobe variant), progressive non-fluent aphasia (PNFA) (Boxer & Miller, 2005), and movement disorders (e.g., amyotrophic lateral sclerosis, parkinsonism, and other corticobasal syndromes; Boeve & Hutton, 2008; Warren, Rohrer, & Rossor, 2013).

2.2.5 Reversible with Intervention

2.2.5.1 Depression

Depressive disorders, identified as mood dysregulation (American Psychiatric Association, 2013), are common in geriatric populations. Several depressive syndromes are described in the DSM-5 (American Psychiatric Association, 2013) including major depressive disorder and persistent depressive disorder (i.e., dysthymia). A common clinical referral question addresses whether an older adult’s cognitive deficits are related to depression (i.e., pseudodementia) or dementia. Depression in the elderly is often accompanied by cognitive impairments (Lockwood, Alexopoulos, & van Gorp, 2002). Dementia and depression, however, do not necessarily occur in isolation. Rather, increasing depression is associated with the development of dementia, and it is not clear whether dementia precedes depression, or vice versa (Barberger-Gateau et al., 2002).

2.3 Relations Between Disease/Disorder and Functional Limitations

The disorders described above differ with respect to underlying pathology and in how they manifest in terms of functional limitations (i.e., cognitive impairments) and associated disability (i.e., impairment in everyday behaviors). These disorders are perhaps best conceptualized as syndromes that may or may not be linked to specific etiologies. It has been proposed that these syndromes can often be distinguished based on key features of the presenting functional limitations (i.e., patterns of cognitive deficits) and associated disability (Tuokko & Hadjistavropoulos, 1998). Table 5.1 links the disease/disorder in question to the typical presenting functional limitations.

Table 5.1 Cognitive and behavioral symptoms of common disorders affecting older adults

It is important to note that some of these disorders and their associated underlying pathology are degenerative and the cognitive or behavioral presentations may change or evolve over time. For example, in the AD literature, Reisberg et al. (1984) have proposed seven identifiable stages based on cognitive or behavioral presentation that are presumed reflective of the severity of the underlying pathological brain damage (see Table 5.2). In fact, despite differences in the initial symptoms of different forms of dementia (e.g., primary memory deficit in AD; behavioral and executive dysfunction in the frontal-variant of FTD), because of the progressive nature of most dementias, they are all characterized by severe functional limitations and disability at the end of the disease process (Schneck et al., 1984).

Table 5.2 Reisberg’s functional assessment stages (FAST) in normal aging and AD

3 Modifying Factors

The Disablement Process is described as the natural process of disease. However, it is not a fixed process. Rather, several innate and developed personal characteristics, as well as intra- and extra-individual processes, occur along the continuum of the Disablement Process and impact the rate of progression and transition from one stage to the next. These modifiers include disease/impairment-specific risk factors, protective factors, and interventions to delay the progression of the disease.

3.1 Risk Factors

According to the original model proposed by Verbrugge and Jette (1994), risk factors are those characteristics of a person that exist prior to the beginning of the disablement process. They include demographic, social, genetic/biological, environmental, educational, and recreational factors. In this chapter, we discuss risk factors that are preexisting personal characteristics associated with an increased incidence of cognitive decline. They predispose an individual to cognitive impairment or dementia and may also influence the course of the disablement process (Barberger-Gateau et al., 2004). A sample of risk factors for select diagnoses of cognitive impairment and associated supportive research follows.

3.1.1 Age

With the lengthening of the human life span, there has been increased interest in the study of aging and dementia. The most prominent risk factor associated with cognitive decline is age. As noted earlier, the prevalence of dementia was shown to increase from 2.4 %, to 11.1 %, to 34.5 % in persons aged 65–74, 75–84, and 85+ years, respectively, in the Canadian population (Canadian Study of Health and Aging, 2000). Increasing age is also a risk factor for cognitive impairment not meeting the criteria for dementia. For example, age was found to be a risk factor for Cognitive Impairment No Dementia (CIND) in the older Italian population (Di Carlo et al., 2000) and the Australian population (Low et al., 2004), and for cognitive decline in the older Canadian population (Graham et al., 1997).

Positive correlations between incidence rates of dementia (i.e., the number of new dementia cases each year) and advancing age are also reported. For example, in persons up to 90 years of age, the incidence of dementia continues to increase with advancing age without reaching a plateau (Ravaglia et al., 2005). Similar findings were reported in the European Studies of Dementia (EURODEM), a pooled examination of dementia in the Netherlands, the United Kingdom, France, and Denmark. The incidence rate for dementia in persons aged 65 years was 2.5 per 1,000 person-years, compared to 85.6 per 1,000 person-years in persons aged 90 years or older (Launer et al., 1999).

3.1.2 Gender

The role of gender as a risk factor for cognitive decline differs according to diagnosis. Specifically, female gender is associated with a greater risk for AD. In contrast, men have a higher risk of developing VaD. For example, Yamada et al. (1999) report AD prevalence rates of 3.8 % and 2.0 % for women and men, respectively, whereas VaD prevalence rates were 1.8 % for women compared to 2.0 % for men.

3.1.3 Genetic Risk

Having first-degree relatives with a history of dementia may be a risk factor for dementia. Launer et al. (1999) report a positive but nonsignificant increase in the risk of dementia in persons with two or more family members with a history of dementia. Family history of dementia occurs almost twice as frequently in persons with VaD and AD, compared to non-demented persons (Boston, Dennis, & Jagger, 1999).

Genetic risk factors associated with AD involve four genes: amyloid-precursor protein (APP), presenilin genes 1 and 2, and the apolipoprotein E (ApoE) gene. Unlike the first three genes, risk associated with the ApoE gene is not due to mutation of the gene. Rather, its presence is speculated to predispose individuals to AD (Hsiung, Sadovnick, & Feldman, 2004). ApoE is located on chromosome 19 and has three alleles: ε2, ε3, and ε4. The ε4 allele is associated with an increased risk of dementia.

Results from the CSHA (Hsiung et al., 2004) reveal the prevalence of the ApoE ε4 genotype to be significantly higher in those with AD and VaD. Similar findings were observed in persons who progressed from CIND to AD. New and non-progressing CIND cases and CIND cases who subsequently reverted to a diagnosis of No Cognitive Impairment (NCI) had distributions of ApoE ε4 similar to control subjects. Additionally, an interaction between age and ApoE ε4 genotype was noted in persons with AD. Specifically, age of onset of AD and age of progression from CIND to AD were significantly associated with the ApoE ε4 genotype. The authors suggest that these interactions may account for the earlier onset of AD and earlier conversion to AD in persons with the ApoE ε4 genotype.

A similar increase in risk was noted by Frikke-Schmidt, Nordestgaard, Thudium, Moes Grøholdt, and Tybjærg-Hansen (2001) in their sample of Danish participants. The ε44 and ε43 genotypes were associated with tenfold and threefold increases in the risk of AD, compared to persons with the ε34 genotype. The increased risk associated with the ApoE ε4 allele was not limited to diagnoses of AD. Rather, a 2.5-fold increase in the risk of “other dementia” was also noted in persons with the ε43 genotype. The authors report that, overall, the ε44 and ε43 genotypes accounted for 37 % and 20 % of AD, respectively, and the ε43 genotype accounted for 26 % of other dementias in the general population.

The risk of dementia associated with the ApoE ε4 genotype has also been linked to vascular risk factors. Baum et al. (2006) found that a significantly greater percentage of persons with VaD (23.6 %) than of controls (15.1 %) had the ApoE ε3/ε4 or ε4/ε4 genotype. The relationship between VaD and ApoE ε4 was significant only in patients with comorbid hypertension or diabetes.

3.1.4 Vascular Risk Factors

Risk of cognitive decline associated with various cerebrovascular factors differs according to the type of dementia (i.e., VaD v. AD). Hayden et al. (2006) examined the differential risk of AD and VaD associated with cerebrovascular factors, using data from the Cache County Study of Memory Health and Aging. Overall, increased risk of dementia was associated with older age, female gender, ApoE genotype, history of stroke, and history of obesity. The following disease- and gender-specific risk factors were identified: (1) history of diabetes in men with AD; (2) history of diabetes in women with VaD; (3) obesity in women with AD; (4) hypertension in women with VaD.

While hypertension has been associated with VaD, hypotension has been identified as a risk factor for AD. Verghese, Lipton, Hall, Kullansky, and Katz (2003) report that in persons over 75 years of age, ongoing low diastolic blood pressure increases the risk of developing AD. The authors hypothesize that hypotension may predispose a person to dementia and may also be an outcome of dementia.

Xu, Qiu, Wahlin, Winblad, and Fratiglioni (2004) investigated the role of diabetes as a risk factor for dementia using data from the Kungsholmen Project. Diabetes was identified as a significant risk factor for dementia, especially VaD. The risk of dementia associated with diabetes was further magnified with comorbid severe systolic hypertension and heart disease. The authors speculate that diabetes may increase the risk of dementia through both vascular and nonvascular effects. On its own, diabetes was not identified as a risk factor for AD. Hassing et al. (2002) report similar findings of significantly increased risk of VaD, but not AD, in persons with type 2 diabetes.

3.1.5 Pregnancy

Women with a higher number of pregnancies have a higher risk of dementia than women with fewer pregnancies. In a study of 204 AD and 201 control Italian older women, Colucci et al. (2006) found that women with three or more pregnancies had an earlier age of onset of AD (71.7 ± 7 years), compared to women with less than three pregnancies (75.6 ± 6.7 years). Moreover, the risk of dementia was three times greater in women with three or more pregnancies. The authors hypothesize that the greater prevalence and earlier onset of AD in women with three or more pregnancies may be due to increased exposure to estrogen and progesterone.

3.1.6 Head Trauma

There are mixed results in the literature regarding the role of head trauma as a risk factor for the development of dementia. For example, in the Rotterdam Study, none of the following were significant risk factors for dementia: head trauma with loss of consciousness (LOC), multiple head traumas, time since head trauma, or length of LOC (Mehta et al., 1999). Similar results were observed in the European population-based study of dementia (EURODEM; Launer et al., 1999). In contrast, in a study examining the risk of dementia among war veterans with and without early closed head injury, Plassman et al. (2000) found moderate and severe early head trauma to be significant risk factors for the development of AD. In a recent review of 15 case-control studies, Fleminger, Oliver, Lovestone, Rabe-Hesketh, and Giora (2003) confirm that head injury is a significant risk factor for AD in males. These studies highlight the disparity of results regarding the risk of dementia among persons with head injury.

3.2 Protective Factors

Theoretically, protective factors modify the disablement process by delaying or preventing the onset and/or progression of cognitive decline. It can be difficult to identify the specific variables that serve to protect against cognitive decline. As discussed below, easily researched variables, such as education, may serve as a proxy for more remote variables, such as lifestyle, quality of education, access to healthcare, or socioeconomic status (McDowell, Xi, Lindsay, & Tuokko, 2004).

3.2.1 Education

Head circumference and education have been identified as protective factors against the development of dementia. For example, in the Nun Study (Mortimer, Snowdon, & Markesbery, 2003), smaller head circumference and low education were associated with a fourfold increase in the risk of developing dementia. These results are in concert with earlier findings that the clinical manifestation of dementia is delayed in persons with larger brains (Katzman et al., 1988). “Brain reserve capacity” (BRC) is a passive threshold model of cognitive impairment following damage to the brain (Stern, 2002). The BRC model hypothesizes that different clinical manifestations of similar brain damage are due to differences in the brain itself (e.g., number of synapses or neurons). In theory, persons with greater BRC can tolerate more damage to the brain before crossing the “threshold” for clinical expression of cognitive impairment (Satz, 1993). Thus, according to the BRC model, sisters in the Nun Study with smaller head circumferences may be described as having lower BRC and, therefore, surpassed the threshold for clinical impairment earlier than those with larger head circumferences.

Some suggest that higher levels of education serve to protect against cognitive impairment by enhancing one’s cognitive reserve, thereby delaying the onset of cognitive decline (Cummings, Vinters, Cole, & Khachaturian, 1998). Cognitive reserve is based on the theory that differences in the clinical outcome of brain damage are due to individual differences in intellectual, educational, and occupational achievements. Persons with higher cognitive reserve can theoretically withstand greater damage to the brain before exhibiting clinical symptoms of cognitive impairment because of proficient use of intact cognitive abilities. Unlike the BRC model, the cognitive reserve model is not a threshold model. It is not assumed that there is a predetermined threshold that, once surpassed, is associated with cognitive or functional impairment (Stern, 2002). Rather, the cognitive reserve model holds that individuals with the same BRC but differing levels of cognitive reserve will exhibit diverse clinical presentations following similar injury to the brain (Fig. 5.2, derived from Stern, 2002). When applied to dementia, Fig. 5.2 suggests that Person A, who has more cognitive reserve, can withstand greater synaptic degeneration before exhibiting symptoms of cognitive decline, compared to Person B, who has less cognitive reserve.

Fig. 5.2 Cognitive reserve model

Cognitive reserve is described as an “active model” wherein there is an active attempt by the brain to compensate for damage (Stern, 2002). Le Carret et al. (2003) suggest that level of education supports and increases cognitive reserve by developing and maintaining two multifaceted cognitive functions: controlled processes and conceptual skills. In a population sample of normal, healthy French elderly persons, higher education was associated with higher neuropsychological performance, especially on attention-focused tasks. Together, controlled processes and conceptual skills are hypothesized to delay the clinical expression of cognitive decline through proficient cognitive functioning.

Classifying education as a protective factor that potentially delays the onset of cognitive decline or dementia is not without controversy. Several studies suggest that the protective effects of education are limited with respect to age. For example, data from the Canadian Study of Health and Aging suggest that education protects against cognitive decline in persons younger than age 80 years (McDowell et al., 2004). Similar findings were reported in the Framingham Study, a community-based study examining the role of education in the incidence of dementia (Cobb, Wolf, Au, & D’Agostino, 1995). The authors report that education was not a risk factor for dementia when age was controlled. It has been proposed that the “protective effects” of education in delaying the onset of dementia may reflect an “ascertainment bias.” For example, McDowell et al. (2004) suggest that highly educated individuals may be more adept at and familiar with testing practices similar to those utilized in neuropsychological assessments. Alternatively, given findings that higher functioning (HF) persons with incident dementia exhibit more rapid cognitive decline than lower functioning (LF) persons with incident dementia, Tuokko, Garrett, McDowell, Silverberg, and Kristjansson (2003) propose that the “ascertainment bias” reflects the use of inappropriate normative data for the detection of dementia in HF individuals. As such, cognitive decline is not identified in these individuals until the later stages of impairment. Moreover, education may serve as a proxy for other potentially protective factors such as socioeconomic status (i.e., better lifestyle, access to better healthcare) and occupation (i.e., mental stimulation, exposure to toxins). These possibilities, however, do not invalidate the role of education in the dementia process. Rather, the protective effect of education on the dementia process may be indirect instead of linear (McDowell, Xi, Lindsay, & Tierney, 2007).

3.2.2 Physical Activity

Regular physical activity also reduces the risk of cognitive decline with aging. For example, in a longitudinal study of the relation between cognitive function and regular physical activity in women aged 71–80 years, Weuve et al. (2004) identified a 20 % reduction in the risk of cognitive decline in the most physically active women. The authors describe the observed reduction in risk as equivalent to being 3 years younger than their less active counterparts. The cognitive benefits of physical activity were not limited to extremely active women. Better cognitive functioning was observed in women who walked 90+ min per week, compared to those walking for less than 40 min per week. A meta-analysis of the effects of physical activity on risk of cognitive impairment reported that high levels of physical activity reduce the risk of Alzheimer’s disease by 45 % and, more generally, dementia by 28 % (Hamer & Chida, 2009).

An active lifestyle among aged persons serves to promote cardiovascular and nervous system health, thereby delaying the onset of cognitive decline. In particular, cardiovascular exercise promotes cognitive functions associated with the frontal and parietal regions of the brain, which are instrumental in promoting such functions as working memory and attention (Colcombe et al., 2003; Colcombe & Kramer, 2003). Research using magnetic resonance imaging (MRI) of the brain reveals significant increases in both gray and white matter volume in elderly (aged 60–79 years) persons following a 6-month aerobic exercise routine. The largest increase in gray matter is located in the frontal lobes, while white matter volume increases were largest in the anterior third of the corpus callosum (Colcombe et al., 2006). The benefits of short-term cardiovascular training appear to be restricted to specific brain regions and cognitive functions that are vulnerable to age-associated declines and, as with education, it is possible that some of the protective effects of exercise are due to associated factors such as nutrition and lifestyle (Churchill et al., 2002).

Although beneficial to promoting both physical and cognitive health, the resulting neural effects of exercise may be enhanced by cognitively stimulating experiences. Human and animal studies have each contributed to the understanding of the complementary roles of exercise and experience in preserving neural and cognitive function in late life. Overall, aerobic exercise promotes neurogenesis into late life, while exposure to cognitively stimulating environments (i.e., learning) promotes the growth of synapses within the brain (Churchill et al., 2002). These results suggest that, among persons “destined” to develop dementia, those who are physically active, well educated, and cognitively stimulated should exhibit slower rates of cognitive decline than those who are sedentary, less educated, and engaged in repetitive, non-stimulating occupations or activities.

3.3 Interventions

In contrast to risk and protective factors, intervention practices are typically introduced following the discovery or identification of specific impairments, with the aim of slowing or preventing further decline (Verbrugge & Jette, 1994). For example, following a left temporal lobe stroke, a patient may be enrolled in rehabilitative speech pathology to address aphasia. Interventions can be applied at any level of the disablement process and are classified as either intraindividual or extra-individual. Intraindividual interventions are processes that originate within the patient (e.g., self-efficacy), while extra-individual interventions are processes that are initiated or provided by sources outside the patient (e.g., cognitive rehabilitation) (Verbrugge & Jette, 1994).

Interventions have been investigated both to prevent dementia and to slow its progression. In Caplan’s (1964) classification of prevention, interventions designed to prevent the development of dementia in at-risk, but asymptomatic, persons are means of primary prevention. In the context of the current discussion of interventions implemented in response to the disablement process, the interventions of interest are secondary prevention mechanisms: interventions put into action by or for persons exhibiting symptoms of cognitive decline to prevent or slow further decline.

3.3.1 Intraindividual Interventions

There are a number of actions a person may take to reduce the demands placed on them, thereby maximizing their functional capabilities. In describing the disablement process, Verbrugge and Jette (1994) refer to activity accommodations (i.e., what people do or the activities they engage in, how they do it, for how long, and how often) and psychosocial coping strategies (i.e., adjustment of the definition of self in the face of chronic conditions and dysfunctions). In describing behavior change associated with the aging process, Baltes and colleagues (Baltes & Baltes, 1990; Baltes & Lang, 1997) refer to selective optimization with compensation (SOC), whereby an older adult selects (actively or passively reduces the overall number of goals and pursuits to conserve energy for the goals determined to be most important), optimizes (refines the means and resources necessary to reach a goal and/or to excel in a chosen domain), and compensates (searches for and makes use of alternate means to reach goals once old means are no longer available). Although neither Verbrugge and Jette (1994) nor the SOC model examines what motivates an older adult to compensate or select so as to maintain their level of everyday functioning, awareness has been identified as playing a key role in compensatory behavior: those who are more aware of their own deficits are more likely to compensate for them and to find alternative methods of completing desired tasks (e.g., Diehl, 1998). It is this awareness that promotes the compensatory or adaptive behaviors allowing people to continue to function well despite difficulties performing specific activities.

Although many people with cognitive impairments are painfully aware of their deficits early in the course of the disorder, others are not. Awareness of deficits has been linked to executive functioning (e.g., Amanzio et al., 2013; Van Wielingen, Tuokko, Cramer, Mateer, & Hultsch, 2004), and executive functions, in turn, have been implicated in self-regulation and possibly in identity (Caddell & Clare, 2013). The onset of dementia, then, poses a threat to the self, and people respond to this challenge in different ways (Clare, 2003). For some, the self-concept adjusts to incorporate the changes associated with the onset of dementia (i.e., self-adjusting), while others strive to maintain their prior sense of self to maximize continuity (i.e., self-maintaining). Those who do not adjust their behaviors to accommodate cognitive changes may not engage in compensatory behaviors, thereby placing themselves and others at risk of harm. For example, there is substantial literature to suggest that some older adults with dementia continue to drive even in the face of significant impairment (e.g., Wild & Cotrell, 2003). Conversely, there is some evidence to suggest that dementia patients with insight make significantly greater gains in intervention programs addressing cognitive and affective functioning than those without insight (Koltai, Welsh-Bohmer, & Schmechel, 2001).

3.3.2 Extra-Individual Interventions

The Functional Transitions Model (FTM) was designed to improve clinical practice with AD patients by predicting and preparing for the progressive functional decline associated with the disorder (Slaughter & Bankes, 2007). Recall that the progression of AD is reported to occur in seven stages; the goal of this staging was to allow clinicians to distinguish disease-related progression from disability due to comorbid factors (Reisberg et al., 1984). Understanding the predicted transitions and identifying impairments due to comorbid conditions gives families and caregivers the opportunity to plan for probable declines in the patient’s function (e.g., consider possible intervention strategies; establish the patient’s care wishes [e.g., living will] and power of attorney). Anticipating functional declines allows families and caregivers to cope better with them as they occur (Slaughter & Bankes, 2007).

Several interventions have been proposed as effective treatments (though not cures) for dementia. From a medical perspective, pharmacological treatments, such as cholinesterase inhibitors (ChEIs), are the most researched extra-individual interventions for slowing the progression of dementia. Despite recent findings that persons in the early stages of AD do not exhibit diminished levels of the neurotransmitter acetylcholine, cholinesterase inhibitors remain the most effective treatments for symptoms of AD (Chertkow, 2006). A meta-analysis of three approved ChEIs (donepezil, rivastigmine, and galantamine) revealed significant but modest improvements on a global assessment score, compared to placebo (Lanctôt et al., 2003). Long-term treatment with donepezil (i.e., at least 2 years) has been found to reduce the rate of annual cognitive decline in persons with AD compared to non-donepezil-treated patients (annual declines of 1.2 and 2.8 points on the MMSE, respectively; Tomita, Ootsuki, Maruyama et al., 2007).

Positive results for the treatment of AD have also been found with memantine, an NMDA receptor antagonist. This drug is approved for the treatment of severe AD in Europe and the United States and has also proven effective in the treatment of mild-to-moderate AD. In a 6-month, randomized, placebo-controlled study, mild AD patients receiving memantine exhibited significantly better cognitive functioning than mild AD participants receiving placebo, with statistically superior language and memory abilities in the memantine-treated group (Pomara, Ott, Peskind, & Resnick, 2007).

Other pharmaceutical interventions (both prescribed and over-the-counter products) have been used for the treatment of memory disorders in old age. Such products include ginkgo biloba, nootropics (“dietary supplements”), antioxidants, vitamin E, estrogen, and anti-inflammatory agents, to name a few. For a good review of existing and emerging pharmacological treatments for memory impairment, see Chertkow (2006). For a review of the pharmacological treatments available for non-AD dementias, see Arlt and Jahn (2006).

From a clinical psychology perspective, cognitive rehabilitation has been identified as an intervention for persons with Alzheimer’s disease and vascular dementia. Clare and Woods (2004) have identified three cognitive interventions with different foci for use with people with dementia. Cognitive stimulation is typically conducted in a group format and, while encompassing a cognitive element, generally places equal emphasis on social interaction. Cognitive training, designed to maintain current cognitive abilities and slow the progression of cognitive decline, is undertaken in group or individual format and consists of ongoing practice of exercises targeting specific cognitive domains (e.g., memory, attention, language, praxis); improvement on cognitive tasks is believed to generalize to activities outside the training regime. Cognitive rehabilitation programs are tailored to the individual patient and involve working with the patient and their caregiver(s) to design specific strategies (e.g., use of memory aids) to compensate for cognitive deficits; examples of cognitive rehabilitation techniques include spaced retrieval, errorless learning, and mnemonics. Cognitive stimulation and rehabilitation are reported to be effective treatments for persons diagnosed with early AD, whereas there is limited research to support the utility of cognitive training for the treatment of dementia (Woods & Clare, 2006). Similar cognitive rehabilitation approaches are used to address cognitive deficits resulting from a cerebrovascular event (e.g., stroke, anoxia due to hypoperfusion), traumatic brain injury (e.g., from a fall), or illness/disease (e.g., diabetes).

The aim of pharmaceutical treatments, cognitive stimulation, and cognitive training is to increase the patient’s cognitive capacity and, in turn, maintain or improve his or her current level of independence. Other interventions aim to decrease the environmental demands with which the patient is struggling. The implementation of memory aids in cognitive rehabilitation essentially modifies the memory demand of the task to match the abilities of the person. Relocation to a care facility reduces the environmental demands for intact instrumental activities of daily living (IADLs; e.g., grocery shopping, cooking) and/or activities of daily living (ADLs; e.g., personal hygiene) by providing the necessary supports for the patient. Different levels of care are available, depending upon the patient’s level of independence. Interventions that decrease environmental demand and those that increase personal capacity both aim to create a better fit between the patient’s environment and his or her abilities (Verbrugge & Jette, 1994).

Overall, the goal of implementing intervention programs is to slow the progression of the disablement process. However, interventions can have negative outcomes for the individual and serve to “exacerbate” existing deficits (Verbrugge & Jette, 1994). For example, relocation to an institution is associated with increased levels of disability (Barberger-Gateau et al., 2004). Woods (1999) suggests that, in many care settings, dependence is encouraged over autonomy. This observation is consistent with Baltes’ (1982, 1988) theory of learned dependency, wherein dependent behavior among elderly persons is rewarded socially, while independent behavior is frequently ignored (Horgas, Wahl, & Baltes, 1996). Coping with feelings of loss (e.g., of freedom, possessions, independence) is perhaps the biggest obstacle for persons entering a nursing home; inability to do so can result in withdrawal (e.g., from activities, meals, socializing) and depression (Harker, 1997). Depression in older adults is, in turn, associated with impairment in executive functioning (Lockwood et al., 2002). Thus, although the goal of institutionalization is to improve the fit between personal capacity and environmental demand, it is important to address and plan for the potential negative consequences associated with the transition.

4 Conclusion

The use of the disablement process clearly illustrates the complexities of identifying impairment in geriatric populations. Within the disablement process framework, functional impairments refer to abnormalities within specific body systems (here we have focused on disorders affecting brain function), whereas functional limitations refer to restrictions in physical and mental activities, often referred to as impairments outside the context of this model (e.g., cognitive impairments). In practice, it is often these functional limitations and/or the resulting disability (i.e., impairments in everyday functioning) that bring older adults to clinical attention. We discussed a number of different approaches to the identification of cognitive impairment (e.g., comparison to normative samples, assessment of change over time, comparison to specific criteria for achievement) and sets of criteria for disorders of cognitive function. In addition, we described common approaches to assessing for impairments in everyday functions (i.e., disability) and how these impairments relate to cognitive impairment. We described common underlying pathologies related to disorders of cognition in older adults, noting how differences in the patterns and presentation of cognitive and behavioral impairments are often the basis from which inferences are drawn concerning the presence of these pathologies. We also identified modifying factors that affect the emergence, rates of progression, and functional outcomes associated with the expression of these pathologies: risk factors, protective factors, and interventions. The inferences drawn about the nature of the underlying pathology are of primary importance for determining prognosis and selecting medical intervention options (e.g., pharmacologic agents to slow, arrest, or reverse the pathological process). On the other hand, it is the clarity with which functional limitations (i.e., cognitive and behavioral) are understood that lays the foundation for behavioral and psychosocial interventions intended to optimize functioning, minimize the risk of disability, and prevent dysfunctional social or family functioning (Woods & Clare, 2006). Particularly within the context of geriatric populations, where biological, psychological, and social changes are expected and highly interdependent, the disablement process framework offers a comprehensive view of the myriad factors that need to be considered when assessing for and interpreting the meaning of impairment.