Introduction

This chapter serves as an update to a previously published chapter on the “Brief History of Functional Analysis” (Dixon, Vogel, & Tarbox, 2012). This chapter will briefly outline the history of behaviorism and applied behavior analysis (ABA) as well as the development of behavioral functional analysis (FA). As ABA has developed as a discipline, so too has the field’s understanding of using functional assessments to develop comprehensive interventions to increase adaptive and decrease maladaptive behaviors.

Key people and early studies will be discussed, including the evolution from Watson’s stimulus-response theory to Skinner’s experimental analysis of behavior to Baer, Wolf, and Risley’s definition of ABA and the first publication of the Journal of Applied Behavior Analysis (JABA). This chapter will then review the first published experimental functional analysis (EFA) by Iwata and colleagues (Iwata, Dorsey, Slifer, Bauman, & Richman, 1994).

After a synopsis of the emergence of the field and the procedures for FA, more recent adaptations of FA will be discussed, tracing its expansion across methodology, populations, behaviors, and settings (e.g., school, home, telehealth). The role of analysis in developing FA procedures that address idiosyncratic variables will be reviewed. Considerations for conducting full EFAs versus abbreviated or targeted EFAs are touched upon, with an emphasis on best practices in analysis and on modifications beyond the standard structure so that FAs yield information that is useful for treatment planning. A review of methodologies for interpreting EFA results will be shared, as well as nonexperimental methods for conducting an FA. Finally, this chapter will review commonly cited barriers to FAs as well as trends in the more recent literature and potential future directions. Readers are directed to subsequent chapters for specific details on methodology, reporting, populations, topography, ethical considerations, informed consent, and treatment planning. Regarding terminology, the terms “functional analysis” (FA) and “experimental functional analysis” (EFA) will be used throughout the chapter for consistency, but it should be noted that different disciplines may use other terms to describe the same procedures; for example, in school settings, the term Functional Behavior Assessment (FBA) is common.

Historical Roots of Behavior Analysis

Prior to the advent of behaviorism, the field of psychology was primarily focused on mental processes. John B. Watson is widely credited with bringing behaviorism to the forefront of the field. He argued that the proper subject of study in psychology was observable behavior. Watson introduced stimulus-response (S-R) theory, which holds that the understanding of behavior should be based on the relationship between the environmental stimulus (S) and the observable response (Watson, 1913). This theory laid the groundwork for the three-term contingency, or the relations between the stimulus, response, and consequence (Catania, 1984).

B. F. Skinner expanded the field of behaviorism by identifying and describing operant behavior, in addition to defining respondent behavior based on Watson and Ivan Pavlov’s work (Skinner, 1938). Principles of operant behavior better explain behaviors that could not be adequately accounted for by Watson’s S-R theory by acknowledging the role of the consequences that result from the behavior itself. Skinner’s stimulus-response-stimulus (S-R-S) model, now more commonly referred to as the antecedent-behavior-consequence (ABC) three-term contingency, describes the environmental variables that increase or decrease the likelihood that a behavior will occur (Moxley, 1996). This model gave rise to the term “functional analysis,” the study of external variables that allows us to predict and change behavior based on an assumption of cause and effect between the environment and behavior (Skinner, 1953). Skinner then expanded this into the methodology of the “experimental analysis of behavior,” which uses a highly controlled experimental setting to observe emitted behavior. Experimental manipulation of aspects of the environment could demonstrate a clear relationship between these manipulations and the behavior of interest (Cooper, Heron, & Heward, 2014).

Some behavioral theories include structuralism, methodological behaviorism, and radical behaviorism. Structuralism focuses only on observable and describable behaviors and does not include experimental manipulations or attempt to draw causal claims regarding behavior. Methodological behaviorism also focuses only on operationally defined behavior and avoids private events but, unlike structuralism, investigates functional relationships through experimentation. Skinner’s radical behaviorism further acknowledged that private events such as emotions and thoughts can be considered behavior. He also noted that these private events are related to environmental events in the same way as observable behavior (Skinner, 1974). Skinner provided operational definitions for radical behaviorism and discussed private events and the psychological constructs of consciousness, will, and feeling, which he conceptualized as verbal behavior (Skinner, 1945).

Development of Applied Behavior Analysis

Early research on behavior analysis focused on animals such as pigeons and rats. For example, Skinner’s operant chambers, or the “Skinner box,” created arbitrary contingencies to study the relation between simple responses by animals and stimuli such as lights and sounds as well as primary reinforcers such as food and water (Catania, 1984). In the 1950s and 1960s, the focus of research on the experimental analysis of behavior shifted to investigate how these principles applied to humans. Early studies focused on individuals with intellectual and developmental disabilities or severe mental health concerns (e.g., schizophrenia) and were typically conducted in highly controlled settings such as laboratories, hospitals, or residential facilities (Fuller, 1949; Lindsley, 1956; Orlando & Bijou, 1960).

Both the subject matter of applying behavioral principles to human participants and the emergence of single-subject data analysis, which marked a shift from traditional large-N studies, made funding and publication challenging for this field (Cooper et al., 2014). However, as more research emerged to support the evidence for behavioral approaches, the field of ABA began to take shape, including university training programs in the 1960s–70s as well as the launch of the Journal of Applied Behavior Analysis (JABA) in 1968. The same year that JABA began publication, Baer, Wolf, and Risley (1968) published an article outlining best practices in research and practice in the field. These seminal events spurred the advancement of the field of ABA. Since that time, the field has expanded and has many applications across populations, settings, and fields. Based on current research and best practice, Cooper et al. (2014) provided the most widely used definition of ABA as “the science in which tactics derived from the principles of behavior are applied systematically to improve socially significant behavior and experimentation is used to identify the variables responsible for behavior change.”

Origin of Procedures to Determine the Function of Behaviors

The early understanding of behavior modification included a focus on punishment and reinforcement of behavior, without much attention given to the reinforcement histories of behaviors, which lent itself to the use of more extreme contingencies (Mace, 1994). However, the field then began to shift toward a greater emphasis on procedures to determine the maintaining variables of the behavior. Carr’s (1977) review of hypotheses regarding self-injurious behavior (SIB) set the stage for developing and testing hypotheses about the function of behaviors to inform intervention techniques. The review identified positive reinforcement, negative reinforcement, and sensory or automatic contingencies as hypothesized maintaining variables.

The focus on consequences of behavior within the three-term contingency generated some general principles, including that consequences can only affect subsequent behavior, that they influence response classes, and that the immediacy of the consequence influences the magnitude of its effect. Consequences determine whether the future frequency of the behavior will increase or decrease, and these operations are currently described as positive reinforcement, negative reinforcement, positive punishment, or negative punishment. The antecedent is also an important component of functional analyses, as the environmental conditions that impact the occurrence of the behavior are considered to be the discriminative stimulus (Cooper et al., 2014).

Understanding the antecedents and consequences of behaviors enables researchers and clinicians to reliably and validly demonstrate behavior change. One of the early publications on this topic, “Some Current Dimensions of Applied Behavior Analysis” by Baer et al. (1968), described the reversal and multiple baseline techniques used to demonstrate reliable control over behavior. The reversal design involves establishing a baseline of the behavior and then beginning the experimental condition to determine whether there is a subsequent change in the behavior. If so, the experimental condition is discontinued to re-establish a baseline and then applied again to determine whether it can again produce behavior change. If a reversal design is not feasible or the behavior change is not reversible (e.g., acquired skills), a multiple baseline technique may be more appropriate, which involves considering multiple behaviors so as to compare the experimental condition across behaviors rather than removing the experimental condition from one behavior to re-establish a baseline. A multielement design (also called an alternating treatments design) involves the concurrent implementation of multiple treatments (alternating treatments across sessions) to determine which treatment is most effective (Cooper et al., 2014). An experimental design such as those listed above can be used to examine the effects of different variables (e.g., social positive, social negative, and intrinsic reinforcement) on the occurrence of SIB to test hypotheses about the function of that behavior (Carr, 1977). Previous research on SIB focused on intervention strategies, with varied results (Carr, 1977). However, reviews in the late 1970s proposed that SIB, across individuals, could be maintained by multiple variables and that understanding the variables surrounding SIB could lead to better interventions (Carr, 1977). While the idea was present that SIB could be controlled by multiple variables, a systematic method of assessing those variables had not yet been demonstrated in the literature.
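To make the logic of these designs concrete, the following minimal sketch (hypothetical data and function names, not drawn from any cited study) summarizes per-session response rates by phase or condition label, which is the basic data reduction underlying visual inspection of reversal and multielement datasets.

```python
from collections import defaultdict

def mean_rate_by_label(sessions):
    """Group sessions by phase/condition label and return mean responses per minute."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for label, responses, minutes in sessions:
        sums[label] += responses / minutes
        counts[label] += 1
    return {label: sums[label] / counts[label] for label in sums}

# Hypothetical ABAB (reversal) data: (phase, response count, session length in minutes).
reversal_sessions = [
    ("baseline", 12, 10), ("baseline", 14, 10), ("baseline", 13, 10),
    ("treatment", 4, 10), ("treatment", 2, 10), ("treatment", 3, 10),
    ("baseline", 11, 10), ("baseline", 13, 10),
    ("treatment", 2, 10), ("treatment", 1, 10),
]
print(mean_rate_by_label(reversal_sessions))
# A drop in rate during treatment that reappears when baseline conditions are
# reinstated is the within-subject evidence of control a reversal design relies on.
```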

The First Comprehensive Experimental Functional Analysis

In 1982, Iwata and colleagues addressed this area of need by publishing the groundbreaking article “Toward a Functional Analysis of Self-Injury”, which remains to this day the model for implementation of EFAs (republished in 1994). Iwata and colleagues developed an assessment protocol using analogue conditions in which environmental events were manipulated in order to provide information about the function of a given challenging behavior. Assessments were conducted in an inpatient setting with nine individuals with developmental delay. Participants were each exposed to controlled conditions (eight participants were assessed with four different conditions and one participant was assessed with three conditions) using a multielement design (Iwata et al., 1994).

The social disapproval condition (often referred to as the attention condition) consisted of a room in which a variety of toys were accessible. At the start of the condition, the experimenter directed the participant to “play with the toys” and then appeared to read a book or a magazine. If the participant engaged in SIB, the experimenter provided “statements of concern and disapproval” and brief physical contact (e.g., putting a hand on the participant’s shoulder). Otherwise, the experimenter ignored all responses from the participant. This condition was designed to assess whether social disapproval maintained SIB (Iwata et al., 1994).

In the academic demand condition (often referred to as the escape or demand condition), participant-specific educational activities were presented by the experimenter at a table. A three-prompt procedure was used to present the demands. After providing the instruction and waiting 5 seconds, if the correct response was not exhibited, the instruction was repeated with a model prompt. After an additional 5 seconds, if the correct response was not exhibited, the instruction was repeated with a physical prompt. If the participant engaged in SIB, the experimenter immediately ended the trial and looked away for 30 seconds. This condition was designed to assess whether escape from demands was a maintaining variable for SIB (Iwata et al., 1994).
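Because the three-prompt sequence is essentially a small decision procedure, the sketch below expresses it in code. The structure, function names, and stand-in callables are illustrative assumptions, not part of the published protocol; only the 5-second prompt delay and the contingent escape following SIB come from the description above.

```python
def run_demand_trial(prompts, observe, deliver_escape, wait_seconds=5):
    """Illustrative sketch of the three-prompt sequence in the academic demand condition.
    `prompts` is an ordered list of callables (verbal instruction, model prompt, physical
    prompt); `observe(seconds)` watches the participant for that long and returns "sib",
    "correct", or None; `deliver_escape()` removes the task and withdraws attention for
    30 seconds. All three callables are hypothetical placeholders."""
    for deliver_prompt in prompts:
        deliver_prompt()
        outcome = observe(wait_seconds)
        if outcome == "sib":
            deliver_escape()          # contingent escape follows SIB immediately
            return "escape_delivered"
        if outcome == "correct":
            return "correct_response"
    return "no_independent_response"

# Minimal stand-ins so the sketch runs; a live session would use real observation.
prompts = [lambda: print("verbal instruction"),
           lambda: print("instruction + model prompt"),
           lambda: print("instruction + physical prompt")]
print(run_demand_trial(prompts, observe=lambda s: None, deliver_escape=lambda: None))
```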

In the unstructured play condition (usually referred to as the play condition), a variety of toys were available, but there were no educational activities. Without presenting demands, the experimenter remained close to the participant and provided toys to the participant intermittently. The experimenter ignored occurrences of SIB and instead provided social praise and brief physical contact for the absence of SIB (at minimum every 30 seconds). This condition was designed to be the control, with attention and items freely available in the absence of demands. It was expected that minimal SIB would occur during this condition due to the availability of social attention and physical items (Iwata et al., 1994). However, research since Iwata’s landmark study has indicated that there are situations when the rate of the behavior is higher in the play condition; specifically, the behavior may be maintained by automatic variables if a pattern of high, relatively stable occurrences of the behavior is observed across all conditions, including play (Hagopian et al., 1997).

In the alone condition, the room was devoid of toys or other items that might provide engagement. The participant remained in the room by themselves. This condition was designed to assess if there was an automatic function to the individual’s SIB by providing an environment without external stimulation (Iwata et al., 1994).

One condition was presented each session, and sessions were 15 minutes in length. Conditions were alternated in randomized order across successive sessions. Presentation of conditions continued until one of three criteria was met: (1) visual analysis indicated stability in the level of SIB, (2) unstable levels of SIB were observed for 5 days, or (3) 12 days passed (Iwata et al., 1994).
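The stopping rule reads as a simple decision procedure, sketched below. This is a hypothetical rendering: the `judge` callable stands in for visual analysis, and the function names, condition labels, and fabricated rates are assumptions rather than part of the original study.

```python
import random

def run_assessment(conditions, observe_session, judge, max_days=12, unstable_days=5):
    """Alternate the conditions in a freshly randomized order each day.
    `observe_session(condition)` returns a rate of SIB for one session, and
    `judge(rates)` stands in for visual analysis, returning "stable", "unstable",
    or "undetermined". The assessment stops when responding is judged stable,
    when it has been judged clearly unstable for `unstable_days` consecutive days,
    or when `max_days` have elapsed."""
    rates, consecutive_unstable = [], 0
    for day in range(1, max_days + 1):
        for condition in random.sample(conditions, len(conditions)):
            rates.append((condition, observe_session(condition)))
        verdict = judge(rates)
        if verdict == "stable":
            return rates, f"stable responding after {day} days"
        consecutive_unstable = consecutive_unstable + 1 if verdict == "unstable" else 0
        if consecutive_unstable >= unstable_days:
            return rates, f"persistently unstable responding after {day} days"
    return rates, "12-day maximum reached"

# Toy usage with fabricated rates (responses per minute).
conditions = ["social disapproval", "academic demand", "unstructured play", "alone"]
fake_rates = {"social disapproval": 3.0, "academic demand": 0.5,
              "unstructured play": 0.2, "alone": 0.4}
_, reason = run_assessment(conditions,
                           observe_session=lambda c: fake_rates[c],
                           judge=lambda r: "stable" if len(r) >= 12 else "undetermined")
print(reason)
```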

This study indicated that an individual’s learning history impacted his/her presentation of challenging behavior. In six out of nine participants, SIB consistently occurred at higher levels in one condition. However, this condition was not the same across participants. Providing empirical support for the idea that one topography of behavior may have a different function across individuals, this study demonstrated that experimental analysis of the contingencies surrounding a behavior could yield powerful information. Additionally, this landmark study demonstrated a protocol that could successfully be used to identify the function of an individual’s SIB (Iwata et al., 1994).

Expansion of Functional Analyses

Iwata’s work was groundbreaking in that it consolidated previous research into a methodology for analyzing the contingencies surrounding a challenging behavior to determine the variables maintaining that behavior (i.e., the function). Analysis was part of ABA history from its inception; Iwata identified a more precise way of identifying the function so that more effective interventions and reinforcers could be used. The long-term value of Iwata’s work lies in the framework for how this analysis could be conducted, allowing it to be applied in a multitude of situations. While his work was a major advancement for our field, it was also a stepping-stone for further refinement in the procedures of EFA. Keeping analysis at the center, researchers who followed applied the principles from Iwata’s 1982 study to numerous other situations.

Summarizing the strides taken since Iwata’s original study, in 2003, Hanley and colleagues conducted a review of EFA literature through the year 2000, encompassing a total of 277 studies, to provide information across various dimensions of EFAs included in research (e.g., population characteristics, setting characteristics, response topographies, condition types; Hanley, Iwata, & McCord, 2003). Beavers and colleagues replicated this review in 2013, including 158 EFA studies from January 2001 through May 2012, providing a picture of how EFAs changed in the field in the ensuing decade. In the following sections regarding expansion of EFAs, comparison of results between these two reviews will be included to provide information regarding trends in the research regarding how EFAs are conducted (Beavers, Iwata, & Lerman, 2013).

The methodology developed by Iwata and colleagues has been replicated numerous times with each major variable expanded and generalized. The resulting body of work has demonstrated the remarkable range of utility of EFA procedures. In the sections that follow, expansions and modifications to the original Iwata EFA are described, including expansions across methodology, populations, behaviors, and settings/individuals.

Expansion Across Methodology

Iwata’s EFA study described several conditions consistent with the current understanding within the field of the functions of behavior (i.e., attention, escape/demand, play, alone). However, a function that was not directly addressed in Iwata’s study was tangible reinforcement (where access to items/activities is the maintaining variable). The tangible condition is similar to the attention condition, except that access to toys/items/activities is provided in the absence of social attention; it was first described by Mace and West (1986). However, their analysis was complicated by a dual function of escape from demands and tangible reinforcement. The tangible function was first investigated as a discrete function in a study by Day, Rea, Schussler, Larsen, and Johnson (1988). Since these initial developments, the tangible condition has been included in EFAs more commonly. In 2003, Hanley and colleagues noted that 38.3% of studies incorporating EFAs included a tangible condition; this number increased to 54% in 2013 (Beavers et al., 2013). However, a recommendation made by Hanley et al. (2003) and continued by Beavers et al. (2013) was to include a tangible condition only when initial data (e.g., observations, interview) suggest that tangible items may be a maintaining variable; the tangible condition has been shown to be more prone to a false-positive outcome, and its increasing use may have contributed to the increase in multiply controlled outcomes observed in the 2013 review.

Iwata’s original study involved the use of a multielement design (Iwata et al., 1994); however, this is not the only experimental design that could be used to gather information on the variables maintaining a behavior. A review by Hanley et al. (2003) found that the multielement design was most widely used (81.2% of EFA studies), followed by the reversal design (15.5% of EFA studies). Ten years later, these rates varied only slightly (multielement 79.1% and reversal 12%; Beavers et al., 2013). The pairwise design is also used in EFA, but less commonly; it is often used when multielement designs do not yield clear outcomes (Hanley et al., 2003).

In contrast to Iwata’s original study, which involved the manipulation of antecedents and consequences, another EFA methodology involves manipulation of only the antecedents of the behavior. Carr and Durand (1985) first established this methodology by manipulating only the antecedents of challenging behaviors such as aggression, tantrums, and self-injurious behavior, and then teaching functional communication as a replacement behavior using differential reinforcement. These same authors then demonstrated maintenance and generalization of these results in a subsequent study (Durand & Carr, 1991). A 2003 review by Hanley and colleagues indicated that antecedent-only methodology was widely published in the research literature, appearing in 20.2% of EFA studies (Hanley et al., 2003); however, by 2013, the use of this methodology had decreased to 12% (Beavers et al., 2013). This decrease suggests that the benefits of programming both antecedents and consequences outweigh the potential extra effort in implementing consequences (Beavers et al., 2013).

Expansion Across Populations

Early in their development, EFAs were primarily conducted with individuals with intellectual and/or developmental disabilities (Beavers et al., 2013). While this remains the primary population for this method of assessment, the procedures have been demonstrated with individuals with other diagnoses, including attention deficit hyperactivity disorder, conduct disorder, dementia, Tourette syndrome, schizophrenia, and traumatic brain injury (Beavers et al., 2013; DuPaul & Ervin, 1996). The procedures have been increasingly used with individuals without disabilities as well. From 2003 to 2013, the percentage of EFAs conducted with individuals without disabilities increased from 9.0% to 21.5% (Beavers et al., 2013; Hanley et al., 2003). At the same time, studies incorporating EFAs with individuals with autism spectrum disorder (ASD) also increased, from 20.9% to 37.3% (Beavers et al., 2013; Hanley et al., 2003). Various reasons could explain this increase, including an increase in the prevalence of ASD within the population, increased public awareness of ASD, and an increased focus on research with individuals with ASD among behavior analysts.

EFAs have also been conducted with individuals of varying ages. While the majority of studies have been, and continue to be, conducted with children, adults make up a sizeable proportion of the individuals studied (24.7% in studies from 2001–2012, Beavers et al., 2013). While initially studies were conducted with school age children, Kurtz et al. (2003) implemented the techniques with very young children (10 months to 4 years 11 months of age) who engaged in SIB.

Expansion Across Behaviors

Since 1982, when Iwata and colleagues introduced functional analysis for SIB, the procedure has been applied to a multitude of topographies. In 2003, Hanley and colleagues found that SIB (64.6% of EFAs) was the most commonly assessed behavior; by 2013, Beavers and colleagues found that aggression (47.5% of EFAs) was most commonly assessed. Additional topographies include vocalizations, property destruction, disruption, elopement, noncompliance, stereotypy, tantrums, and pica. From 2003 to 2013, a general trend was observed in expanding EFA procedures to different topographies, including licking/mouthing/sniffing objects, rumination, expelling/packing food, disrobing, inappropriate sexual behavior, and nail biting. Additionally, over that decade, an increase in assessing multiple topographies in one EFA was observed, from 27.8% to 75.9% of studies (Beavers et al., 2013; Hanley et al., 2003). Recently, an emphasis has also been placed on conducting EFAs for inappropriate behaviors that occur during mealtime. In 2019, Saini and colleagues conducted a systematic review of the literature on EFAs conducted for this topography. Their findings supported the notion that EFAs could be effectively conducted on mealtime behaviors. While escape was identified as the reinforcer in the vast majority of cases (92%), the identification of multiple functions for one topography and individual was also prevalent.

One concern when conducting EFAs is the potential risk of injury to the individual, due to setting up contingencies that are designed to elicit the challenging behavior (Fritz, Iwata, Hammond, & Bloom, 2013). In order for the results of an EFA to be interpretable, observation of the challenging behavior needs to occur; however, when assessing contingencies surrounding severe challenging behavior that is unsafe to the individual (e.g., SIB) or others (e.g., aggression), a standard EFA may not be possible (Fritz et al., 2013). In 2002, Smith and Churchill identified a potential method to reduce this risk by focusing on precursor behaviors, which precede the target behavior (i.e., severe challenging behavior that is unsafe). If the precursor behavior and target behavior are members of the same response class, analysis of the precursor behavior will enable identification of the function without eliciting the target behavior. In their study, Smith and Churchill identified precursor behaviors that reliably preceded challenging behaviors and demonstrated that there was correspondence in function identified from an EFA for the precursor behavior and the challenging behavior (Smith & Churchill, 2002). In 2013, Fritz and colleagues took this concept further by identifying precursor behaviors using a checklist, identifying the function of precursor and severe challenging behavior via an EFA and then implementing an intervention based on the analysis of the precursor behavior. They found that rates of precursor and challenging behaviors decreased following this intervention (Fritz et al., 2013). In 2018, Hoffmann and colleagues replicated these results with preschool children; an intervention implemented based on the results of the precursor analysis alone (without an EFA completed on the challenging behavior) resulted in reduction of both precursor and severe challenging behavior (Hoffmann, Sellers, Halversen, & Bloom, 2018).

Challenging behaviors that occur at low rates may lead to an EFA with inconclusive results, as the behavior may occur infrequently or not at all during the assessment process. To address this challenge, Kahng, Abt, and Schonbachler (2001) conducted extended functional analyses, lasting an eight-hour day, with conditions varying across days. Using these procedures, they were able to identify the function and developed a successful intervention to reduce the challenging behavior. The authors noted concerns with this procedure, including the potential difficulty in staffing this extended assessment as well as ethical concerns over exposing the individual to the assessment for this extended duration. In 2004, Tarbox and colleagues identified an alternative procedure, involving starting the assessment when the challenging behavior occurred, that also resulted in identification of function and implementation of an effective treatment for two participants. Another approach that can aid in addressing safety concerns is a latency-based FA, which uses latency to the target behavior as the dependent variable (Davis et al., 2013; Falcomata, Muething, Roberts, Hamrick, & Shpall, 2016; Heath & Smith, 2019; Iwata & Dozier, 2008).

Expansion Across Settings and Individuals

The high degree of environmental control required to conduct an EFA may create a barrier for many clients, as it may not be possible to carry out an EFA in a clinical setting. Additionally, the more well controlled the conditions, the less potential there is for ecological validity. The assessment setting is often different from, or at least an altered version of, the one in which the behavior most often occurs in the natural environment (Hanley et al., 2003), and the EFA setting has been shown to sometimes be related to differences in EFA results (Lang et al., 2008). One recommendation to address these differences in circumstances is to include people in the assessment with whom the client has a previous learning history, such as parents, caregivers, teachers, or peers (Hanley et al., 2003). There was a shift in the research in the late twentieth and early twenty-first centuries to investigate the utility and accuracy of EFAs in other settings, particularly home and school (Iwata & Dozier, 2008). Although there may be some loss in environmental control, there is also value in conducting EFAs in the context in which challenging behaviors most often occur.

Schools

Based on a review by Anderson, Rodriguez, and Campbell (2015), the first research study on FA in the school setting was published in 1981 (Weeks & Gaylord-Ross, 1981). The number of yearly studies on school-based FAs has increased since that time, indicating more widespread interest in applying FAs to address challenging behavior in schools. Since 1997, the Individuals with Disabilities Education Act has mandated that schools use FAs to develop Behavior Intervention Plans (BIPs) (Allday, Nelson, & Russel, 2011). Anderson et al.’s (2015) review indicated that many studies reported using more than one form of FA, with over 60% using experimental analysis. Nearly half of the studies used nonexperimental methods. Although EFAs are recommended in schools, there are several barriers to conducting thorough EFAs in this setting.

As research on FAs in schools gained traction in the early 2000s, several issues were identified, including the lack of inclusion of low-rate challenging behaviors, students with disabilities as participants, and academic behaviors as the target outcome (Ervin et al., 2001). Scott, Liaupsin, Nelson, and McIntyre (2005) conducted a descriptive analysis of barriers to team-based FAs based on feedback from FA teams in schools. Of the 13 teams interviewed, 11 indicated that a referral for an FA was made due to a crisis behavior. The other two teams indicated that the referral was made for challenging behaviors that were not at a crisis level but had been occurring for more than 6 months. For all 13 teams, no proactive strategies had reportedly been tried prior to the referral; one team noted that no interventions had been attempted in the past, while the other 12 reported that only punitive strategies had been implemented. The authors advocated the need for sufficient systems to support FA teams in schools. Proactive use of FAs is recommended but, based on these results, is not being utilized adequately in schools. This and other studies continue to indicate that, although there is significant research to indicate the effectiveness of interventions based on FAs in schools, these techniques are not being applied consistently or successfully in schools due to a variety of barriers (Allday et al., 2011). Similarly, in a statewide survey of practitioners working with individuals with ASD either in schools or private programs or who held a Board Certified Behavior Analyst (BCBA) certification, two-thirds of participants endorsed a belief that FA is the best assessment tool for informing treatment, but only one-third routinely used FA for this purpose (Roscoe, Phillips, Kelly, Farber, & Dube, 2015).

The question of who is qualified to conduct a functional analysis of behavior has been addressed frequently in the literature. Some have emphasized the limitations and potential harm of attempting to conduct an FA without proper training and have strongly recommended that only those with a BCBA or a certain level of training in behavior analysis conduct FAs for severe challenging behaviors (Hanley, 2012). Anderson et al.’s (2015) review pointed out that in published studies on school-based FAs, although teachers conducted data collection, a researcher always directed how the FA was conducted. They also noted that teachers were more likely to be involved in data collection for descriptive assessments, whereas researchers were more likely to conduct experimental designs, which is consistent with two earlier reviews: one by Solnick and Ardoin (2010), which found that teachers rarely participated in data collection, and another by Allday et al. (2011), which reported that teachers often completed interviews or rating scales but were rarely involved in active data collection. This review highlighted the consideration that the school-based FA research literature may not represent current clinical practice in schools. However, there is a substantial amount of literature indicating that teachers can effectively implement EFAs when there is adequate training and consultation (Erbas, Tekin-Iftar, & Yucesoy, 2006; McKenney, Waldron, & Conroy, 2013; Rispoli et al., 2015). Erbas et al. (2006) also found that teachers rated functional analysis more positively following training.

Home and Residences

Another setting where challenging behaviors frequently occur is the home or residence of the individual. Arndorfer, Miltenberger, Woster, Rortvedt, and Gaffaney (1994) conducted descriptive and experimental FAs in the home setting, with parents included as active participants. This study included the use of the functional assessment interview (FAI), ABC data, and the motivation assessment scale (MAS). The authors noted that the brief EFA, which is discussed later in this chapter in greater detail, may be more appropriate in this setting than the standard EFA. They found that data obtained from parental interview and ABC assessment were sufficient to determine the functions of the behavior and were consistent with an EFA. Additionally, parents were able to complete the EFA with instruction from the researcher. This study emphasized the importance of future research on the feasibility and validity of FAs conducted by parents and teachers in naturalistic settings. Similarly, Thomason-Sassi, Iwata, and Fritz (2013) found that FAs conducted by caregivers who had received training, or conducted in the home setting, were sufficiently consistent with FAs conducted by trained staff in clinic settings. However, they noted that their study did not directly investigate procedural integrity, so this variable should be evaluated in the future.

The parent training literature emphasizes education on the functions of behavior and the development of behavior plans to address challenging behaviors in children. Many manualized interventions emphasize the use of behavioral principles to help parents understand their child’s behavior, including the Research Units in Behavioral Intervention (RUBI; Bearss et al., 2018), Parent-Child Interaction Therapy (PCIT), Managing the Defiant Child (Barkley, 1997), Defiant Teens (Barkley & Robin, 2013), the Incredible Years (Webster-Stratton, 2001), and the Positive Parenting Program (Triple P; Sanders, 2003), among many others. Therefore, having parents participate in and understand FAs and how they inform treatment approaches would likely benefit the fidelity of intervention implementation by caregivers.

Technology and Telehealth

Another area that holds significant implications for behavioral assessment and intervention is the use of technology, such as mobile applications and telehealth, for assessment, intervention, and training. For example, several mobile applications have been developed to collect behavioral data. For years, ABA providers have been using programs designed specifically to allow behavior technicians to collect data quickly and reliably and to enable supervisors to review clients’ progress. More recently, apps have been developed for other professionals, such as teachers, as well as for parents and caregivers. Apps include programs to prompt and reward appropriate behaviors as well as to track ABC data (7 Apps for Applied Behavior Analysis Therapy, 2017). Many research studies have focused on the effect of app-based interventions on improvement in functional communication. For example, one study by Law, Neihart, and Dutt (2018) found that training parents to implement the Map4speech app resulted in high levels of procedural integrity by parents and an increase in functional communication in children. However, few studies have evaluated the use of an app to conduct an FA of behavior. This may be an area of future research to incorporate technology-based systems within the context of parent training.

Telehealth services have been used to train caregivers, teachers, and direct care staff to successfully implement assessment and treatment based on ABA (Boisvert, Lang, Andrianopoulos, & Boscardin, 2010; Ferguson, Craig, & Dounavi, 2019; Tomlinson, Gore, & McGill, 2018). The use of telehealth by professionals also has implications for barriers to FAs, such as a lack of resources in certain areas. One study investigated the use of telehealth by behavior consultants to conduct FAs with children with ASD who lived significant distances from medical facilities that offer these types of services (Wacker et al., 2013). FAs were conducted by parents in local clinics during weekly telehealth meetings with behavior consultants who were located in a teleconsultation center. For EFAs, functions of challenging behaviors were identified with high interrater agreement in 18 out of 20 cases. These results support the use of telehealth to conduct FAs remotely (Wacker et al., 2013). Similar results were reported by another study that evaluated the use of telehealth to train parents to conduct an FA and provide behavioral intervention to address challenging behaviors and increase functional communication (Machalicek et al., 2016).

Idiosyncratic Variables

Standard test conditions for EFAs include attention, escape, tangible, play, and alone. However, there are situations in which variables not captured within those conditions impact the occurrence or nonoccurrence of a target behavior, and these situations have been examined more frequently in recent years. Idiosyncratic variables are those that are particular to a given individual or situation and impact the rate of the target behavior. Failure to identify idiosyncratic variables can result in identification of the incorrect function of the behavior, leading to an unsuccessful intervention. Carr et al. (1997) conducted EFAs with three individuals, demonstrating that idiosyncratic stimuli impacted the outcomes of the EFAs. For one of the individuals, a slightly higher frequency of challenging behavior was observed in the demand condition than in the attention condition, which indicated an escape function of the behavior. However, subsequent analysis indicated that the behavior occurred more often in situations where small objects that could be manipulated (e.g., small balls, a wristband) were absent. Additionally, Carr et al. (1997) identified guidelines to aid in determining whether idiosyncratic variables are present and thus whether further analysis is required. A review conducted by Schlichenmeyer, Roscoe, Rooker, Wheeler, and Dube (2013) identified over 30 idiosyncratic variables that impacted outcomes in EFAs. Additionally, they identified strategies utilized by researchers to identify idiosyncratic variables (i.e., informal observation, anecdotal report, descriptive assessments, manipulation and observation, indirect assessments). Overall, they noted an increase in the rigor used in analyzing idiosyncratic variables.

Taking the process of identifying idiosyncratic variables further, Roscoe et al. (2015) delineated a systematic approach for identifying idiosyncratic variables. Following inconclusive standard EFA results, the researchers were able to identify a function for the challenging behavior for five out of six participants using an indirect assessment questionnaire and/or a descriptive analysis.

In conducting EFAs, it is important to analyze the situation for the specific patient and not rely solely on standard EFA conditions. The goal of an EFA is to identify the variable(s) maintaining the challenging behavior. At times, this may involve using the standard EFA conditions; however, they may be insufficient to determine the maintaining variables. Before conducting an EFA, it is critical to gather information about the patient and their environment to enable the design of conditions that are more likely to identify the variables maintaining their challenging behavior. The standard EFA conditions should be used as a tool and should not replace analysis on the part of the behavior analyst. The procedures used to analyze the function of behavior will continue to evolve as more research is gathered on idiosyncratic variables.

Functional Analysis Duration

One potential barrier to conducting an EFA is the time required to complete the procedure. Full EFAs include at least three observations across a minimum of two conditions, while a brief EFA includes two or fewer observations in each condition (Hanley et al., 2003). The brief EFA has gained popularity and is designed to be conducted within 90 minutes (Northup et al., 1991). Wallace and Iwata (1999) investigated the reliability of data when the 15-minute conditions were retrospectively shortened by deleting data from the last 5 or 10 minutes of the session. Their results suggested that duration of each session could be shortened while still yielding informative and accurate data.
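As a rough illustration of that kind of analysis (the within-session event times below are fabricated, and the event-timestamp format is an assumption), response rates can be recomputed after truncating a 15-minute session to its first 10 or 5 minutes:

```python
def rate_in_window(event_times_s, window_minutes):
    """Responses per minute, counting only events within the first `window_minutes`."""
    cutoff = window_minutes * 60
    return sum(t < cutoff for t in event_times_s) / window_minutes

# Fabricated SIB timestamps (in seconds) from one hypothetical 15-minute session.
session_events = [12, 75, 130, 260, 310, 480, 520, 700, 810, 870]

for minutes in (15, 10, 5):
    print(f"first {minutes:>2} min: {rate_in_window(session_events, minutes):.2f} responses/min")
# If the ordering of rates across conditions is preserved when sessions are truncated,
# the shortened sessions would support the same conclusions as the full-length ones.
```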

In addition to brief EFAs, another way to conduct an EFA in a limited amount of time is to test one function through a repeated-measures analysis, which compares a single test condition to a control. This approach is most often used when indirect measures such as caregiver report or rating scales support one hypothesized function; if the EFA indicates that this condition is related to the behavior, it has implications for more immediate treatment. However, if a clear function is not established, then a more traditional EFA that evaluates several test conditions is warranted (Iwata & Dozier, 2008). This approach may expedite the EFA process and lead to faster implementation of intervention, but follow-up EFAs may be required when the hypothesized function is not clearly confirmed.

Another variation of the EFA is the alone series, which is employed when there is strong evidence that the behavior is automatically maintained; a series of alone conditions is repeated to test this hypothesis (Iwata & Dozier, 2008). Trial-based EFAs are often called for in situations where rigorous methodological control is not possible and integration into the natural context is beneficial (Larkin, Hawkins, & Collins, 2016; Rispoli, Ninci, Neely, & Zaini, 2014; Ruiz, 2017). Trial-based EFAs involve embedding short trials (e.g., 1–3 minutes) into the natural context/environment, with antecedents and consequences manipulated during the course of the shortened trial. They are therefore a useful alternative when there are limited resources or other limitations that preclude a traditional EFA (Bloom, Iwata, Fritz, Roscoe, & Carreau, 2011), as illustrated in the sketch below.
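As a rough illustration of how trial-based data are often summarized (the control/test segment structure and the numbers below are assumptions made for the example, not the procedure of any particular study), each embedded trial can be scored for whether the behavior occurred in its control and test segments, and the percentages compared by condition:

```python
from collections import defaultdict

def summarize_trial_based_fa(trials):
    """Each trial is (condition, behavior_in_control_segment, behavior_in_test_segment).
    Returns, per condition, the percentage of control and test segments with the behavior."""
    counts = defaultdict(lambda: {"control": 0, "test": 0, "n": 0})
    for condition, in_control, in_test in trials:
        counts[condition]["control"] += in_control
        counts[condition]["test"] += in_test
        counts[condition]["n"] += 1
    return {c: {"control %": 100 * v["control"] / v["n"],
                "test %": 100 * v["test"] / v["n"]} for c, v in counts.items()}

# Fabricated data: behavior occurs mostly in the test segments of attention trials.
trials = [("attention", False, True), ("attention", False, True), ("attention", True, True),
          ("escape", False, False), ("escape", False, True), ("escape", False, False),
          ("tangible", False, False), ("tangible", False, False), ("tangible", True, False)]
print(summarize_trial_based_fa(trials))
```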

Interpretation of Functional Analyses

Visual inspection, or visual analysis, is the standard method of interpreting data in single-subject design research, and it is the primary method of interpreting EFA results as well (Kazdin, 2011). Following the collection and graphing of data from an EFA, the graphs are viewed to identify “patterns of responding within and across conditions to determine, which, if any, of the variables may be responsible for behavioral maintenance” (Hagopian et al., 1997). Elements analyzed include the “number of data points within a specific condition or phase, variability of data points, level of data, and the direction and degree of trends” (Roane, Fisher, Kelley, Mevers, & Bouxsein, 2013). Following visual analysis of the graphed data, the results are categorized. Hagopian et al. (1997) identified 12 categories of results, corresponding to the variable(s) maintaining the challenging behavior: (1) undifferentiated; (2) attention; (3) escape from demands; (4) tangible; (5) automatic; (6) attention and escape; (7) attention and tangible; (8) tangible and escape; (9) automatic and escape; (10) automatic and attention; (11) automatic and tangible; and (12) attention, tangible, and escape. However, there are potential downsides to the use of visual analysis, including the subjective nature of the analysis, the lack of specified procedures, low interrater agreement, and the challenge of interpreting data that are highly variable or include minimal differences in level (Danov & Symons, 2008; Hagopian et al., 1997). Interrater agreement of at least 70% is considered necessary, at least 80% is considered adequate, and at least 90% is considered good (House, House, & Campbell, 1981). In a survey conducted by Danov and Symons (2008), in which graphs were mailed to faculty and graduate student trainees without a specific visual inspection criterion, overall mean interrater agreement was 0.63. Experts had only a slightly higher mean (0.65 compared with 0.63), and no rater reached the standard of good (over 90%). Categories accounting for multiple functions and undifferentiated results received lower agreement, while single social functions resulted in the highest interrater agreement. In 2015, Ninci et al. conducted a meta-analysis of interrater agreement on visual inspection results (19 articles were identified for inclusion, not all involving functional analysis graphs) and found an overall weighted score of 0.76 (Ninci, Vannest, Willson, & Zhang, 2015).
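For readers unfamiliar with how these agreement figures are computed, the short sketch below shows simple percent agreement between two raters’ categorizations of a set of graphs (the labels are fabricated for illustration; published studies may also report chance-corrected indices such as Cohen’s kappa):

```python
def percent_agreement(rater_a, rater_b):
    """Proportion of graphs assigned the same category by both raters."""
    assert len(rater_a) == len(rater_b)
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Fabricated categorizations of ten EFA graphs by two raters.
rater_a = ["attention", "escape", "automatic", "undifferentiated", "escape",
           "tangible", "attention+escape", "escape", "automatic", "attention"]
rater_b = ["attention", "escape", "automatic", "escape", "escape",
           "tangible", "escape", "escape", "automatic", "attention"]

agreement = percent_agreement(rater_a, rater_b)
print(f"agreement = {agreement:.2f}")   # 0.80 -> "adequate" under the cited thresholds
```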

Accurate interpretation of the function is critical to implementing an appropriate, function-based intervention (Roane et al., 2013). Over the past several decades, procedures have been developed to improve this method. Hagopian et al. (1997) developed structured criteria for interpreting EFA data that included a list of steps to follow. Without using the criteria, the interrater agreement of predoctoral interns was low (0.46); after training (didactic instruction, modeling, and practice with feedback) on the criteria, interrater agreement significantly improved (0.81). While these criteria should not replace visual inspection or be applied rigidly (as they do not encapsulate all potential situations), the researchers demonstrated that decision-making rules could be operationalized and individuals could be trained in the use of these procedures to increase interrater agreement. In 2013, Roane and colleagues extended these results by applying modified criteria that could be used with a greater range of EFAs (i.e., not requiring a specific number of data points per condition, thus allowing EFAs of varied lengths to be interpreted with these criteria). Additionally, they used training similar to that of Hagopian et al. (1997) to train master’s-level students and postbaccalaureate behavior therapists. During pretraining, when they were provided with the written criteria only, the two groups achieved 0.73 and 0.80 interrater agreement, respectively. After the training, they achieved 0.98 and 0.95, respectively, indicating that the procedures can be used to train non-experts in visual inspection.
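The sketch below conveys the general flavor of a criterion-line approach in simplified form. It is not the published structured criteria of Hagopian et al. (1997) or Roane et al. (2013); the rule, threshold values, and data are assumptions made only for illustration.

```python
import statistics

def flag_functions(data, control="play", margin_sd=1.0, proportion=0.5):
    """`data` maps each condition to its list of per-session response rates.
    A test condition is flagged when more than `proportion` of its points exceed
    the control-condition mean plus `margin_sd` standard deviations. This is an
    assumed, simplified rule, not the published structured criteria."""
    control_rates = data[control]
    upper = statistics.mean(control_rates) + margin_sd * statistics.pstdev(control_rates)
    flagged = []
    for condition, rates in data.items():
        if condition == control:
            continue
        above = sum(r > upper for r in rates) / len(rates)
        if above > proportion:
            flagged.append(condition)
    return flagged or ["undifferentiated"]

efa_rates = {  # fabricated responses per minute across sessions
    "play":      [0.2, 0.0, 0.3, 0.1, 0.2],
    "attention": [0.1, 0.3, 0.2, 0.2, 0.1],
    "escape":    [2.1, 1.8, 2.6, 2.3, 2.0],
    "tangible":  [0.2, 0.1, 0.3, 0.0, 0.2],
}
print(flag_functions(efa_rates))   # ['escape'] under these made-up numbers
```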

However, results of visual inspection are not always conclusive. When variability in the data leads to an undifferentiated outcome, the results need to be clarified. A recent review of published research indicated that EFAs yielded differentiated results, and thus identification of a function, in 94% of cases; while this is a high percentage, it is likely that publication bias resulted in a percentage higher than what is seen in general clinical settings (Beavers et al., 2013). In recent years, procedures have been developed to help clarify inconclusive results and thereby identify effective function-based interventions; the literature supports the use of a combination of approaches to identify a function in these situations (Saini, Greer, & Fisher, 2015). In a summary of 176 cases, Hagopian, Rooker, Jessel, and DeLeon (2013) found that a function was identified following the implementation of a standard EFA alone in only 47% of cases. They then implemented initial modifications to the EFA that they classified into one of three categories (or a combination): antecedent modifications (e.g., using more challenging demand condition tasks), changes to consequences (e.g., providing varied forms of attention), or design modifications (e.g., using a reversal instead of a multielement design), and were able to obtain differentiated results in 84% of total cases. Following secondary modifications, a function was identified for a total of 93% of cases. The most effective initial modification was a change to the EFA design. When an EFA resulted in inconclusive results for aggression, Saini et al. (2015) used multiple strategies to determine the function, including graphing topographies separately, conducting an EFA for one topography only, modifying the EFA procedures to aid in discrimination between conditions, and evaluating treatments matched to a proposed function.

Recently, research has been conducted on training staff to analyze undifferentiated EFA results. Chok, Shlesinger, Studer, and Bird (2012) implemented a training program (including instruction, modeling, practice, and feedback) for BCBAs that involved teaching four component skills in conducting an EFA: accurately implementing the EFA conditions, interpreting EFA graphs, identifying next steps for undifferentiated graphs, and determining function-based interventions to implement based on EFA results. All three participants demonstrated a significant increase over baseline in all four areas. Schnell, Sidener, DeBar, Vladescu, and Kahng (2018) trained graduate students in making appropriate decisions when presented with undifferentiated EFA data. Computer-based training that included multimedia modes of presentation, interaction, and quizzes was used to teach the students. For 19 out of 20 students, identification of the function (or lack of differentiation) and the next step (i.e., brief EFA, multielement EFA, extended alone condition, pairwise analysis, or referral of the client to treatment) improved over baseline following training and was maintained 2 weeks later. However, in both of the studies discussed above, training was conducted with prepared graphs rather than graphs that resulted from an EFA with a client.

Nonexperimental Methods for Functional Analysis

Although EFAs have numerous advantages over nonexperimental methods of functional analysis, a variety of limitations often make it impossible to conduct an EFA. Nonexperimental methods most often consist of direct observation or descriptive analysis, or of indirect assessments such as interviews and rating scales (Healy & Brett, 2014). Direct or descriptive assessments often consist of identifying relevant information or recording data such as frequency, duration, and antecedent-behavior-consequence (ABC) sequences. These methods do not include experimental manipulation of variables that may be related to the behavior (Herzinger & Campbell, 2007). Indirect assessments include interviews and questionnaires or scales such as the Questions About Behavioral Function (QABF; Paclawskyj, Matson, Rush, Smalls, & Vollmer, 2000), the Motivation Assessment Scale (MAS; Durand, 1989), and the Motivation Analysis Rating Scale (MARS; Wiesler, Hanzel, Chamberlain, & Thompson, 1985). The Functional Assessment Interview (FAI), adapted from O’Neill et al. (1997), is also widely used to gather information.

One study compared types of functional assessments (i.e., indirect, descriptive, and experimental) and found that descriptive assessments typically did not yield conclusive information, while indirect and experimental assessments provided what were considered conclusive findings. The authors noted that “current results suggest that indirect and experimental functional assessment procedures may be the most cost-effective and reliable options” (Tarbox et al., 2009). Additionally, Fee, Schieber, Noble, and Valdovinos (2016) compared indirect and direct assessments. The indirect assessments investigated were the QABF, the MAS, and the FAI. These measures were compared to brief EFAs (Northup et al., 1991). There were inconsistencies in results across measures, and the authors suggested using them in conjunction with one another to increase the accuracy of the results. They also noted that information gained from indirect assessments can be beneficial in understanding a parent or caregiver’s perception of the functions of behaviors, even if these are not the primary functions identified through direct assessment. This understanding of parent or caregiver perception has implications for treatment, as it may inform parent training following identification of the primary function (Fee et al., 2016).

Future Directions

Several barriers to FAs and limitations of current practice have been identified in the present chapter. Some of the most commonly cited obstacles include difficulty measuring low-rate behaviors, time commitment, risk of harm, reinforcers that change over time, multiple topographies and functions, and lack of investment from stakeholders (Hanley, 2012). The last several decades have yielded research addressing some of the primary limitations of the initial FA procedures; however, there continue to be barriers to conducting FAs in real-world settings that are not always reflected in research studies. Ecological validity continues to be a primary concern within the field. Moving forward, it is important that clinicians and researchers continue to expand the procedures for FAs to better fit the real-world needs of clients and continue to think critically about the analysis component of FAs (Dixon et al., 2012). Subsequent chapters in this volume are aimed at addressing considerations for the practical application of FAs that expand the methodologies described in the present chapter.