
Functional behavioral assessment (FBA) refers to a range of methods designed to identify the environmental variables that control problematic behaviors. Methods for revealing these variables include indirect measures, such as interviews and questionnaires, and direct methods, such as narrative recording of the antecedents that precede responses of interest and the consequences that follow them. Many behavior analysts believe that the “gold standard” of FBA is experimental functional analysis (FA) (Iwata, Dorsey, Slifer, Bauman, & Richman, 1982/1994), which systematically arranges consequences for problem behaviors to identify their functions, that is, the reinforcers that maintain those behaviors. FBA is one of several ways of collecting information about clients, and professional organizations such as the American Psychological Association (APA) and the Behavior Analyst Certification Board (BACB) have established general ethical guidelines regarding how assessments should be conducted and interpreted. For example, Standard 9 of the Ethical Principles of Psychologists and Code of Conduct promulgated by the APA (2010) is devoted entirely to assessment. The same is true of Standard 3.0 of the Behavior Analyst Certification Board Guidelines for Responsible Conduct (BACB Guidelines, BACB, 2011). That standard is presented in Table 13.1. Any practitioner who abides by the standards established there and elsewhere in the Guidelines is therefore behaving ethically, regardless of whether he or she is involved in functional assessment or another professional activity.

Table 13.1 Guidelines for responsible conduct with respect to assessing behavior
Table 13.2 Collaboration between school personnel and behavior analysts for functional analyses and behavioral interventions (adapted from Peck Peterson et al., 2002)

Because FBA can be an integral part of effective treatment, as other chapters in this book clearly illustrate, including it in treatment planning is ethical conduct. This perspective is evident in standard 3.02 of the current BACB Guidelines (BACB, 2011), which states: “The behavior analyst conducts a functional assessment, as defined below, to provide the necessary data to develop an effective behavior change program.”

It is, however, true that one can easily envision situations in which specific applications of FBA raise interesting ethical questions, as in the hypothetical case 3.02B presented by Bailey and Burch (2011) in a book devoted entirely to ethics for behavior analysts. In this case, the issue is whether an FA is necessary in situations where the person engages in self-injury, particularly when informal assessment methods have not led to an effective intervention. On the face of it, further discussion of the ethics of FBA in general, and of FA in particular, appears merited. The purpose of this chapter is to initiate such discussion and to provide some general guidelines for ethical FBA. Our goal is not to dictate the right or wrong course in any one of the many decisions that must be made throughout the FBA process. Instead, we aim to identify applications of the analysis procedures that can give rise to important questions that behavior analysts should consider carefully. We begin with what might be considered “traditional” uses of FA, that is, using FA to identify maintaining variables for the self-injurious behavior (SIB) of individuals with developmental disabilities. We then discuss additional ethical considerations that arise when expanding FBA to a broader range of populations and settings, using school-based FBA as an exemplar.

Special Considerations Regarding FA

Since the technique was first described by Iwata et al. (1982/1994), FA has been widely used to isolate controlling variables for self-injury and other challenging behaviors exhibited by people with intellectual and other developmental disabilities (e.g., Hanley, Iwata, & McCord, 2003; Hastings & Noone, 2005). Essentially, the procedure involves the systematic delivery of stimuli after the occurrence of problem behavior, whereby one or more of those stimuli are assumed to function as reinforcers. Decisions about functions of behavior are made by comparing rates of responding across different conditions. Those conditions that result in the highest rates of behavior are assumed to reveal the reinforcers for those behaviors. However, because it requires targeted behaviors to occur under controlled circumstances, FA in particular poses special ethical considerations.
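
The decision rule just described can be summarized in a brief sketch. The following Python fragment is purely illustrative: the condition names and response rates are hypothetical, and actual FA interpretation relies on visual inspection of repeated sessions rather than a single summary rule.

```python
# Minimal sketch of the FA decision rule: compare response rates across
# conditions and treat condition(s) with rates clearly elevated above the
# control (play) condition as evidence of function. All numbers are
# hypothetical means of responses per minute across repeated sessions.
rates = {
    "attention": 4.2,   # social positive reinforcement test
    "demand": 0.6,      # social negative reinforcement (escape) test
    "alone": 0.4,       # automatic reinforcement test
    "play": 0.3,        # control condition
}

control = rates["play"]
# Flag test conditions whose rates clearly exceed the control condition
elevated = {c: r for c, r in rates.items()
            if c != "play" and r > 2 * control}

print(f"Conditions suggesting a function: {sorted(elevated)}")
# -> Conditions suggesting a function: ['attention']
```
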

FA and the Primum non Nocere Principle

The so-called Hippocratic injunction to first do no harm (in Latin, primum non nocere) has long been an axiom central to the education of medical and graduate students in the helping professions (Smith, 2005). Likewise, behavior analysts have a fundamental responsibility not to harm their clients and not to allow harm to occur under their watch (Bailey & Burch, 2011). Iwata and his colleagues (1982/1994) were careful to uphold this principle in their seminal description of the FA of SIB. In brief, Iwata et al. arranged a series of test and control conditions in an analogue setting to determine the conditions under which SIB regularly occurred, hence the variables that appeared to control such responding. They pointed out that the possibility of participants seriously injuring themselves during the assessment of controlling variables was a real concern, and they were careful to arrange protections to prevent this from occurring, describing those protections clearly and in detail. In fact, their article contains a section entitled “Human Subjects Protection” that comprises 56 lines. In it, Iwata and his colleagues indicated that procedures were approved by a human subjects committee (i.e., an Institutional Review Board, IRB), that individuals who were at risk of severe physical harm were excluded from participation, and that all potential participants received a complete medical exam, with neurological, audiological, and visual evaluations as appropriate “to assess current physical status and to rule out organic factors that might be associated with or exacerbated by self-injury” (p. 199). Criteria for terminating sessions were established through consultation with a physician. The physician or a nurse observed sessions intermittently to assess whether termination criteria needed to be adjusted. If termination criteria were met, participants were immediately removed from the therapy room and evaluated by a physician or nurse, who determined whether the sessions would continue. After every fourth session, each participant was examined by a nurse. Finally, each case was reviewed at least weekly both in departmental case conferences and in interdisciplinary rounds. Using safeguards such as those arranged by Iwata et al., and limiting the number and length of sessions to the minimum required to provide useful information, minimizes harm to participants during FA.

Despite the possibility that harmful behavior will be temporarily reinforced (and thus increased) during FA sessions, it is important to point out that a properly conducted FA does not increase the risk of harm to participants relative to what they encounter in their everyday environment, a point made by Iwata et al. (1982/1994) in their seminal article. If it is ethically acceptable for a target behavior, such as SIB, to occur outside FA sessions, then the same should be true within such sessions, although safeguards to prevent serious harm might be required. Interestingly, published studies rarely mention such safeguards. Of 116 articles describing the FA of SIB recently reviewed by Weeden, Mahoney, and Poling (2010), 9 (7.7%) described session termination criteria and 23 (19.8%) described other procedural safeguards for reducing risk to participants. As Weeden et al. pointed out, it is possible, even probable, that appropriate safeguards to prevent harm to participants were in place in the other studies but were not described. Nevertheless, those implementing FA procedures should consider the importance of having structured termination criteria and other safeguards in place to protect the individuals involved.

Institutional Review Boards and Informed Consent

Having research approved by a Human Subjects Institutional Review Board is one way to ensure that procedures are ethically sound (Bailey & Burch, 2011), but only six of the articles (5%) reviewed by Weeden et al. (2010) indicated that such approval was obtained. Another safeguard is securing written informed consent. Section 3.01 of the BACB Guidelines (BACB, 2011) states:

The behavior analyst must obtain the client’s or client-surrogate’s approval in writing of the behavior assessment procedures before implementing them. As used here, client-surrogate refers to someone legally empowered to make decisions for the person(s) whose behavior the program is intended to change; examples of client-surrogates include parents of minors, guardians, and legally designated representatives.

Although Weeden et al. (2010) did not report this datum, only one of the 116 studies they evaluated (0.8%) specified that clients (or their surrogates) had provided written informed consent. Based on the information provided, one must presume that no harm was done in most of these studies. In fact, Weeden et al. assumed that protections generally were adequate and were careful not to accuse researchers of unethical conduct. They wrote:

The present findings in no way suggest that FA procedures as arranged in studies of SIB are ethically or otherwise questionable. It is for that reason that we do not cite specific studies when making the case that high levels of SIB were sometimes present across many sessions with no safeguards reported. These findings do, however, clearly suggest that important information about safeguards arranged to protect participants is not included in many articles. If safeguards, such as criteria for terminating sessions and excluding participants, are in place—and we assume that they are—describing them precisely and concisely would be easy. If safeguards are not in place, some explanation may be appropriate. We encourage authors of future articles to ensure, first and foremost, that the protections they arrange to prevent serious harm to participants are in fact adequate and to ensure as well that readers of those articles have sufficient information to evaluate and, if desired, to replicate those safeguards. We also encourage editors of relevant journals to require them to do so before their work is published. FA is an invaluable tool and these actions are suggested not to criticize what has been done in the past, but rather to improve that which is done in the future. (p. 302)

These recommendations appear prudent and we endorse them. From a methodological standpoint, however, it is important to note that the use of protective equipment could potentially alter the results of an FA. For example, Le and Smith (2002) found that FAs of the SIB of three participants yielded different results when the participants did and did not wear protective equipment. When protective equipment was worn, very little SIB occurred and no clear functions were revealed. In the absence of protective equipment, however, SIB appeared to be maintained by negative reinforcement in two participants (escape from demands for one and escape from a wheelchair for the other) and by nonsocial (i.e., automatic) reinforcement in the third. Although other studies using FA have revealed clear functions in the presence of protective equipment (e.g., Iwata, Pace, Cowdery, & Miltenberger, 1994; Mace & Knight, 1986), the findings of Le and Smith call attention to the need to consider whether inconclusive FA results might stem from protective equipment or other participant safeguards, such as very conservative session termination criteria that prevent the collection of adequate data, and, if so, whether those safeguards could be safely and ethically withdrawn. Such decisions should be made by an informed team of individuals that includes a legal representative of, and an advocate for, the participant.

This example illustrates why there are no clear answers to questions about the “right” and “wrong” course of action during an FA. While it may be considered “right” to include protective equipment to protect clients from harm during the analysis, if doing so calls into question the validity of the obtained results, then the inclusion of protective equipment may be less “right.” The question of whether, when, and how protective equipment should be included in an FA is one the behavior analyst should consider carefully before commencing the analysis.

Brief FA and Harm Reduction

Because of the potential to strengthen harmful behavior temporarily during an FA, minimizing occurrences of the target behavior to the lowest number (and intensity) adequate to reveal controlling variables is an ethically sound goal. Brief FA (Northup et al., 1991) is one way to accomplish this goal: clients participate in fewer, truncated sessions and in fewer session types than the full array commonly arranged in traditional FAs (alone, escape, control, tangible, and attention; cf. Iwata et al., 1982/1994). Studies have shown that brief FAs are robust and can provide meaningful information about the variables that control target responses. For example, Kahng and Iwata (1999) compared data from 50 traditional FAs (35 with clear response patterns and 15 undifferentiated) with data from brief FAs constructed by isolating the first session of each condition from the rest of the complete analyses. They concluded that brief FA and within-session analysis (the examination of response rates within the isolated sessions to uncover within-session trends that may be obscured by overall session averages) yielded results comparable to those of lengthier evaluations in 66% and 68% of cases, respectively. Further analysis of the data revealed correspondence between brief and traditional methods in 27 of the 35 data sets (77%) in which a function was clearly identified. However, when full FA outcomes were undifferentiated, correspondence was substantially higher for the within-session analysis than for the brief procedures (80% vs. 40%).
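
To illustrate what a within-session analysis adds, consider the sketch below. The data are hypothetical, and the simple halves comparison is our simplification of the approach; it shows only how a trend within a single session can be hidden by the session average.

```python
# Within-session analysis, sketched with hypothetical data: split one
# 15-min session into 1-min bins and compare early vs. late responding
# to expose a trend that the overall session average would hide.
def mean(xs):
    return sum(xs) / len(xs)

responses_per_min = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7]

half = len(responses_per_min) // 2
first_half, second_half = responses_per_min[:half], responses_per_min[half:]

print(f"Session mean: {mean(responses_per_min):.2f} responses/min")
print(f"First half:   {mean(first_half):.2f} responses/min")
print(f"Second half:  {mean(second_half):.2f} responses/min")
# Session mean 3.27 looks moderate, but the split reveals an increasing
# within-session trend (1.29 -> 5.00) that the average obscures.
```
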

More recently, another type of brief FA, termed trial-based FA, has gained considerable empirical support (e.g., Bloom, Iwata, Fritz, Roscoe, & Carreau, 2011; LaRue et al., 2010; Sigafoos & Saggers, 1995; Wallace & Knights, 2003; Wilder, Chen, Atwell, Pritchard, & Weinstein, 2006). Trial-based FAs involve comparing brief test conditions [motivating operation (MO) present] with control conditions (MO absent) of the same length. Trial-based FAs have the potential to yield the same results as extended FAs while reducing the time spent engaging in harmful behavior during the assessment. For example, Wallace and Knights (2003) compared the results of trial-based FA with extended FA and found that results were the same for two of the three participants. Further, they reported that the brief evaluations took an average of 36 min to complete, whereas the extended procedures took an average of 310 min (an 88.4% reduction in session time). More recently, LaRue et al. (2010) found exact correspondence between trial-based and traditional FA models for the problem behaviors (e.g., aggression, self-injury, disruption, and inappropriate vocalization) of four of five participants, with partial correspondence obtained for the fifth participant. The authors also reported that traditional procedures took an average of 208 session minutes to complete, whereas the trial-based analysis took an average of only 31.6 min (an 84.8% reduction). Bloom et al. (2011) conducted their analyses in a more naturalistic classroom setting and found correspondence in six of the ten analyses they compared (with partial correspondence in a seventh case). However, their results revealed more modest savings in assessment time (271 min for traditional FA vs. 233 min for trial-based FA, a 14% difference).
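
The time savings reported in these comparisons are simple percent reductions relative to the traditional procedure. A quick calculation (the function is ours; the minute values are those reported above) reproduces the figures:

```python
# Percent reduction in assessment time relative to the traditional FA,
# using the session-minute averages reported in the text.
def percent_reduction(traditional_min: float, brief_min: float) -> float:
    return (traditional_min - brief_min) / traditional_min * 100

comparisons = {
    "Wallace & Knights (2003)": (310, 36),
    "LaRue et al. (2010)": (208, 31.6),
    "Bloom et al. (2011)": (271, 233),
}

for study, (traditional, brief) in comparisons.items():
    print(f"{study}: {percent_reduction(traditional, brief):.1f}% reduction")
# Wallace & Knights (2003): 88.4% reduction
# LaRue et al. (2010): 84.8% reduction
# Bloom et al. (2011): 14.0% reduction
```
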

Another potentially viable brief assessment technique involves measuring latency to the first response. In this arrangement, the participant is presented with conditions that resemble a typical FA, but each session ends following the first instance of problem behavior, and the latency to that behavior is compared across conditions. Call, Pabico, and Lomas (2009) compared the results of a demand-condition-only latency FA and a standard FA with two participants who exhibited SIB and disruptive behavior. The latency FA yielded a hierarchy of demand aversiveness based on the latency to the first problem behavior. During subsequent functional analyses, the shorter-latency demands produced more differentiated outcomes. Thomason-Sassi, Iwata, Neidert, and Roscoe (2011) conducted retrospective analyses of 38 FA data sets in which data were graphed first as response rates (or, if appropriate, percentage of intervals) across sessions and second as latency to the first target response within a session. Eighty-six percent of the cases showed a high degree of correspondence between the two types of response measurement. Further, ten newly conducted FAs in which both traditional and latency analyses were performed showed correspondence in nine of ten cases. These results suggest that latency may be a viable measure of responding in situations where repeated occurrences of behavior are dangerous or where response opportunities are limited. Despite these promising results, however, research on latency FA is currently somewhat limited, and more research is needed to draw firm conclusions about the utility of this method.
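
In a latency-based analysis, the datum of interest is simply the time to the first target response in each condition, with shorter latencies (relative to control) suggesting control by that condition’s contingency. The sketch below uses entirely hypothetical latencies and a median summary of our choosing; published analyses rely on visual inspection of latencies across repeated sessions.

```python
# Hypothetical latency-FA data: seconds from session start to the first
# target response, across repeated presentations of each condition.
# Shorter latencies relative to the control condition suggest control
# by that condition's contingency.
from statistics import median

latencies_s = {
    "demand": [12, 18, 9, 15],     # escape test
    "attention": [240, 300, 280],  # attention test (300 s = cap, no response)
    "play": [300, 300, 300],       # control; capped at 300 s with no response
}

for condition, values in latencies_s.items():
    print(f"{condition}: median latency = {median(values)} s")
# demand: median latency = 13.5 s  <- short latency suggests escape function
# attention: median latency = 280 s
# play: median latency = 300 s
```
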

Regardless of the particular method, brief, trial-based, and latency-to-first-response functional analyses necessarily expose participants to fewer sessions or session types (e.g., Barretto, Wacker, Harding, Lee, & Berg, 2006; LaRue et al., 2010; Northup et al., 1991) and/or to sessions of shorter duration (e.g., Barretto et al.; Kahng & Iwata, 1999; LaRue et al.; Northup et al.; Wallace & Iwata, 1999) than conventional FA. As a result, these forms of FA may limit opportunities to engage in harmful responses, reduce the likelihood that new topographies of harmful responses will occur, and make it unlikely that delivering putative reinforcers (e.g., attention and tangible items) will significantly strengthen target responses. Moreover, these forms of FA offer the possibility of quickly ascertaining the variables that control a targeted response and using this information to develop an effective intervention. All of these considerations are positive, and a strong case can be made that, from an ethical perspective, these forms of FA are preferable to traditional FA whenever they are adequate. It may be advisable to begin with briefer forms of FA to identify behavioral function and to proceed to the more traditional model only when behavioral function cannot be identified by these methods (Vollmer, Marcus, Ringdahl, & Roane, 1995).

FA and Right to Effective Treatment

Behavior analysts strongly agree that their clients have a right to effective treatment (Van Houten et al., 1988). Inherent in this principle is the right to treatment that is both appropriate and timely. A potential ethical issue with any form of FBA, but one especially likely to emerge when traditional FA is used, is that treatment is not designed and implemented until assessment is finished, which can require many hours or even days (Vollmer & Smith, 1996). Behavior analytic practitioners, as well as researchers, should consider whether time is better spent designing and evaluating an intervention based on other forms of FBA data (e.g., brief FA or descriptive analysis) or on collecting extensive FA data in the hope of eventually developing a superior intervention. In reality, it is likely that many practitioners do not have the time or resources needed to conduct an extensive experimental analysis of the variables that control a target behavior; they will of necessity prioritize assessing treatment effects, not assessing the functions of the target behavior. In our view, this strategy is defensible from both ethical and practical perspectives. In truth, behavior analysts use a relatively small number of behavior-change strategies, and interventions formulated with and without FA data are often comparable (Schill, Kratochwill, & Elliott, 1998). FA, like FBA in general, is a useful tool, but it can easily be overused. Moreover, an FA that does not lead to an effective intervention does not benefit participants. More than a few published studies involving the FA of SIB do not even describe an intervention, but instead focus on delineating controlling variables per se. Such work is at best incomplete. In our view, the best (and most ethical) FA research delineates controlling variables, designs an intervention that takes those variables into account, and demonstrates that the intervention produces clinically significant changes in target behavior(s) in the participant’s everyday environment, not just in short experimental sessions.

Issues in Expanding the Use of FBA Across Settings and Populations

FBA and Best Practice

Since its inception, FBA has been used to identify controlling variables for a range of problem behaviors in various populations and settings (Hanley et al., 2003). In 1991, a National Institutes of Health (NIH) consensus panel identified FA as a “best practice” for designing behavioral interventions for individuals with developmental disabilities. The value assigned to FBA in more recent years is evident in federal legislation dealing with the education of students with disabilities. When the Individuals with Disabilities Education Act (IDEA, P.L. 105–17) was reauthorized in 1997, FBA was specifically mandated for certain students. IDEA states:

The [Individualized Education Program, IEP] team must address through a behavioral intervention plan any need for positive behavioral strategies and supports. In response to disciplinary actions by school personnel, the IEP team must within 10 days meet to develop a functional behavioral assessment plan [italics added] to collect information. This information should be used for developing or reviewing and revising an existing behavior intervention plan to address such behaviors. In addition, states are required to address the in-service needs of personnel (including professionals and paraprofessionals who provide special education, general education, related services, or early intervention services) as they relate to developing and implementing positive intervention strategies.

This mandate was retained in a second reauthorization, the Individuals with Disabilities Education Improvement Act of 2004 (P.L. 108–446), which states that whenever a child’s educational placement is to be changed because of a violation of a code of student conduct, the IEP team must “conduct an FBA and develop a behavioral intervention plan for such child.” Moreover, “a child with a disability who is removed from the child’s current [educational] placement [because of a code violation] shall receive, as appropriate, a functional behavior assessment, behavioral intervention services, and modifications, that are designed to address the behavior so that it does not recur.”

When FBA is applied in school settings, it is often applied to a wide range of problem behaviors and populations, reaching far beyond the population for which the methodology was originally pioneered (i.e., individuals with developmental disabilities in highly controlled settings). The mandate set forth in IDEA required that FBAs be applied to many topographies of problem behavior other than self-injury, including aggression, noncompliance, off-task behavior, bullying, and bringing weapons to school, as these behaviors may all result in a placement change for an individual with disabilities.

Moreover, in addition to its being required by federal law in some instances, there is a growing consensus that FBA is in general “best practice” for developing behavioral interventions in school settings (e.g., Gresham, Watson, & Skinner, 2001; Steege & Watson, 2008), as well as in a number of other settings, such as community mental health. For purposes of this discussion, we will focus on the application of FBA in school settings. One reason for the “best practice” view is that several authors have suggested that conducting FBAs prior to selecting school-based interventions will produce better treatment outcomes than selecting interventions with no FBA data (e.g., Asmus, Vollmer, & Borrero, 2002; Crone & Horner, 2000; Vollmer & Northup, 1996). Given that “best” practices are (or should be) evidence-based, one would expect compelling data clearly showing that interventions based on FBAs are significantly superior to alternative interventions across a range of behaviors and educational settings. A review of the existing literature on school-based FBA, however, does not support such a simple and strong conclusion.

FBA in School Settings: Best Practice?

Despite the wealth of studies that employ FBA prior to designing treatments (for reviews, see Ervin et al., 2001; Hanley et al., 2003), few school-based studies directly compare function-based interventions to those selected without the benefit of FBA data. Several studies appear to support the use of FBA prior to intervention in school settings. However, some of the studies have produced conflicting results.

Much of the data confirming the effectiveness of function-based interventions have come from studies that evaluated the relative effectiveness of a given intervention when applied to behaviors maintained by different kinds of consequences (i.e., operant responses with different functions). For example, Taylor and Miller (1997) compared the effectiveness of time-out interventions with children whose problem behaviors were maintained by attention and with children whose problem behaviors were maintained by escape. Time-out generally was effective for attention-maintained behaviors but not for escape-maintained behaviors. In a similar but more complicated study, Meyer (1999) evaluated the effects of two interventions, one that allowed children to access assistance with tasks and a second that allowed children to access praise for working. Those children identified in an initial FA phase as exhibiting higher levels of problematic behavior in the presence of difficult tasks (regardless of the frequency of praise) responded more positively to the treatment that taught them how to recruit help appropriately. In contrast, those children whose behaviors were maintained by adult attention (regardless of task difficulty) exhibited fewer problem behaviors when taught to recruit praise. In a third study, Romaniuk et al. (2002) demonstrated that children whose behaviors were maintained by attention were less likely to benefit from choice-making interventions than those whose behaviors were maintained by escape. For the latter group of children, reductions in target behaviors were not observed until the implementation of differential reinforcement for on-task behavior. These studies and others (e.g., Carr & Durand, 1985) suggest that certain interventions will be effective only if the target behaviors are maintained by specific kinds of reinforcers.

In the studies just described, the researchers implemented one or two general interventions across participants whose target behaviors were maintained by different kinds of events. Another strategy for illustrating the importance of FBA in developing effective interventions is to compare the effects of interventions based specifically on known functions with those of similar interventions without those specific components. This tack was taken by Ingram, Lewis-Palmer, and Sugai (2005) in a study in which relevant antecedents and consequences affecting the behavior of two boys in middle school initially were determined via informant and descriptive assessments. Following the FBA, two behavior intervention plans were designed for each child. One plan was designed to address specific variables identified in the FBA (e.g., task difficulty, escape from demands), whereas a second, similar plan omitted key elements related to function. Interventions were then rated by two experts not associated with the study for technical adequacy (i.e., level of research support for the intervention components) and match to hypothesis (i.e., how well the intervention addressed variables identified in the FBA). Technical adequacy was deemed to be high for both types of interventions. Match to hypothesis was rated higher for both function-based interventions as compared to their non-function-based counterparts. When the interventions were implemented, results clearly showed that problematic behaviors were less frequent under function-based interventions as compared to those that did not address relevant environmental events.

In a related study, Newcomer and Lewis (2004) compared the effects of function-based and non-function-based treatments on the behaviors of three elementary school students. Hypotheses about maintaining variables for the target behaviors were constructed using descriptive and experimental analyses. Following completion of the FBA, each child was exposed to a non-function-based intervention followed by a function-based intervention in a multiple baseline design. For all students, problematic behaviors were as high as or higher than baseline when non-function-based interventions were used. When the function-based treatment was introduced, problematic behaviors decreased immediately for two of the children and more gradually for the third. Ellingson, Miltenberger, Stricker, Galensky, and Garlinghouse (2000) also compared interventions based on hypothesized functions to those targeting a different function than that revealed by informant and descriptive assessments. For one of the three participants, results revealed that the function-based intervention was superior to the non-function-based alternative in reducing problematic behavior. Results were less compelling for the remaining participants, but suggested that the function-based interventions were more effective.

Although these direct comparison studies appear to suggest that function-based interventions produce more favorable outcomes than non-function-based treatments in school settings, a few methodological cautions are warranted. In each of the three studies just described (Ellingson et al., 2000; Ingram et al., 2005; Newcomer & Lewis, 2004), the non-function-based treatments for some of the participants included components that were contraindicated by the FBAs. Specifically, baseline and nonfunctional treatment conditions in Ellingson et al.’s study reinforced problem behavior using stimuli that the FBA suggested maintained those behaviors (i.e., teacher attention). Likewise, one of the children in Newcomer and Lewis’s study engaged in behaviors that appeared to be maintained by escape from peer interactions. During the non-function-based treatment, the child was exposed to a dependent group contingency that effectively put him in closer contact with his peers. It is, therefore, not surprising that his behavior worsened during the intervention, as the arrangement created more occasions for escape-maintained behavior. Similarly, Ingram and her colleagues used teacher ignoring as part of the non-function-based treatment package for a behavior maintained by escape from demands.

As suggested by Iwata et al. (1994), extinction interventions based on form instead of function can potentially make problems worse. If the strategies compared to FBA-informed treatments reinforce responses targeted for reduction or increase their probabilities in other ways, it is not surprising that function-based strategies would prove more effective. Granted, it is possible that these authors intentionally used interventions contraindicated by the FBAs in an attempt to approximate the relatively common error among school personnel of using interventions that are based on form, not function (Vollmer & Northup, 1996). It is unclear, however, whether the outcomes would have been the same if the comparison interventions had not reinforced problem behavior.

Most investigations within this limited literature suggest that function-based interventions produce better treatment outcomes, but the findings are not universally positive. For instance, Schill et al. (1998) compared treatments based on FBAs to standard treatment packages (i.e., those developed without a preceding assessment of relevant antecedents and consequences of behavior). Nineteen children in Head Start who displayed persistent problem behaviors were randomly assigned to one of two groups. Teachers of children in Group 1 met with trained consultants to functionally assess problem behaviors and develop interventions based on hypothesized functions (functional approach). Teachers of children in Group 2 met with trained consultants to describe the topography of problem behaviors (technological approach). Behaviors were classified as externalizing (e.g., aggression, noncompliance) or internalizing (e.g., social withdrawal), and then Group 2 teachers were given a self-help manual that described strategies for intervening with both categories of behaviors. Analysis of effect sizes between groups revealed no significant differences between function- and non-function-based treatments; both types of interventions were equally effective. It is important to note, however, that one potential reason there were no significant differences between treatments is that the interventions used in the two conditions were often identical. For example, differential reinforcement, goal-setting, and praise featured prominently as intervention components in both the functional and technological approaches. Failure to observe significant differences in treatment outcomes potentially could be accounted for by the inadvertent use of function-based treatments in the technological condition. Because function was not assessed for the children in the technological group, it is impossible to discern whether the treatments selected for those children did or did not address functions of behaviors.

Given the extant literature, drawing strong conclusions regarding the utility of conducting FBAs prior to designing school-based interventions for problem behavior is somewhat difficult. One reason for this difficulty is that the database is relatively sparse and based primarily on small-n research designs. This is not to suggest that single-case designs cannot reveal phenomena that hold widely, but only to emphasize that doing so requires sufficient replications of their results. As of yet, the data are simply too limited to draw firm conclusions. Further, and importantly, Gresham et al.’s (2004) review of 150 school-based intervention studies published in the Journal of Applied Behavior Analysis over a 9-year period (1991–1999) revealed that treatments preceded by FBAs were no more effective than those in which FBAs were absent (or at least not reported). Blakeslee, Sugai, and Gruba (1994) found a similar pattern across intervention studies reported over a wider range of settings in journals considered to be primarily or exclusively behavior analytic in nature.

Limitations within the existing literature lead to the conclusion that a good deal more research is needed to provide a firm empirical base for the use of FBAs prior to school-based treatment planning. This is not to suggest that FBAs should not be used in school settings. It is, however, a call to researchers to conduct additional studies on the utility of school-based FBA to broaden the literature base and the evidence upon which best practices can be established. Specifically, investigations that directly compare interventions indicated by, contraindicated by, and unrelated to behavioral function should be conducted to assess the relative effectiveness and efficiency of different intervention approaches. Comparisons of function-based interventions to alternative interventions commonly used in school settings and favored by teachers (e.g., token economies), often implemented without the benefit of an FBA, would be of particular practical value. In all comparisons, it is essential that a legitimate attempt be made to develop maximally effective interventions and to ensure that those interventions are implemented with sufficient integrity. Until further research is conducted, in our view there are not sufficient data to conclude with confidence that interventions tied to FBA are always, or even typically, more effective than alternative interventions for reducing undesired target behaviors in school settings.

To say this is not to disparage FBA or to deny its usefulness. It is, however, to suggest that Standard 3.02 of the current BACB Guidelines (BACB, 2011), if taken literally to imply that behavior analysts working in school settings must always conduct an FBA before developing an intervention, is inconsistent with Standard 1.01, which reads: “Behavior analysts rely on scientifically and professionally derived knowledge when making scientific or professional judgments in human service provision, or when engaging in scholarly or professional endeavors.” In fact, there may be several instances in which an FBA is simply not warranted for effective intervention, and in these cases, ethical conduct might involve behavioral interventions that are not preceded by an FBA.

Effective Intervention in the Absence of FBA

Research data and our professional experience certainly indicate that FBA can play an invaluable role in developing effective treatments for reducing undesired behavior in school settings. But they also indicate that FBA is not always needed. Consider, for example, a situation in which a behavior analyst is called in to help a special education teacher develop an intervention to reduce the disruptive behavior of students to acceptable levels. The consultant’s first visit to the classroom reveals that the teacher lacks basic behavior management skills. Clear rules for appropriate student conduct are lacking, as are meaningful consequences for appropriate or inappropriate behavior. Activities are poorly organized and the overall impression is one of chaos. In such a situation, FBA is not a pressing priority. Regardless of the variables that control the undesired behaviors of the students, establishing effective strategies for general classroom management is the obvious first step and a prerequisite to reducing disruptive behavior.

The same can occur when consulting with individuals with developmental disabilities. Consider a case we recently encountered: one of the authors was asked to provide an FA for a 26-year-old man who was reported to engage in elopement from his home and aggression toward his mother. Upon the first visit to the home, the behavior analyst learned that the young man was allowed out of his house only for therapy, 6 h per week. During these times, he displayed appropriate behavior in the community and never eloped. The rest of the week he was required to stay in his house, because no services were available and his mother did not feel she could handle him in the community. She also did not allow him in the yard, because he often eloped from it. Observations within the home revealed a rather sterile environment; for example, all of the cupboards were locked to keep him from getting into them. Rather than conducting an FA, the behavior analyst focused on identifying ways to increase the client’s access to community activities, hypothesizing that this would decrease the motivation for elopement and address an underlying problem of limited services that had restricted the client’s access to functional activities. In addition, the behavioral intervention focused on teaching him skills he could use to be even more successful in the community.

Good interventions are those that produce desired and lasting effects, and ethical professional conduct comprises actions that lead to such interventions, regardless of how the interventions are selected or their modality (Poling, 1994; Poling, Ehrhardt, Wood, & Bowerman, 2010). In our view, in interpreting Standard 3.02 of the current BACB Guidelines (“the behavior analyst conducts a functional assessment … to provide the necessary data to develop an effective behavior change program”), it is important to acknowledge that “the necessary data” sometimes means limited, if any, FBA data. FBA is a useful tool, not a panacea, for improving the behavior of school children. The same is true with respect to other populations, where studies similar to those conducted in schools suggest that treatments tied to FBA data generally are more successful than alternative treatments (Carr et al., 1999, 2009; Kurtz et al., 2003), although it is beyond our purpose to review the relevant data. Given the extant literature, in our opinion the widespread use of FBA is easily justified on both ethical and practical grounds, but it is inappropriate to elevate its use to an ethical imperative.

The Competent Use of FBA

Although FBA is not always required to develop an effective behavior-change intervention, it is often of real and significant value. For that value to be realized, however, FBA data must be collected and interpreted appropriately and interventions skillfully crafted in view of those data. Standard 1.02(a) of the BACB Guidelines (BACB, 2011) dictates that “behavior analysts provide services, teach, and conduct research only within the boundaries of their competence, based on their education, training, supervised experience, or appropriate professional experience,” and this convention obviously applies to the use of FBA. It is essential that any behavior analyst who uses FBA ensures that he or she is competent with respect to FBA in general and with respect to the specific information-gathering strategies that he or she uses. Given the recognized importance of FBA in behavior analysis, graduate training programs in the area typically provide appropriate instruction, and useful information about the topic can be obtained at workshops, such as those held at the Association for Behavior Analysis International conference, and in written works such as this book. Given these considerations, it appears that most legitimate applied behavior analysts currently possess, or could easily acquire, expertise in FBA.

The same is not true, however, for school personnel. Although the majority of educators are not trained in behavior analysis, legislative mandates may require that they conduct FBAs, despite their reservations about their skills for doing so. Pindiprolu, Peterson, and Bergloff (2007) surveyed special education teachers, administrators, support staff, and general educators and found that the vast majority reported that developing interventions for problem behavior and conducting FBAs were among the areas in which they most desired professional development. In addition, when asked specifically about their skill in conducting FBAs, special education teachers stated they felt especially weak in (1) testing hypotheses regarding the purpose of problem behaviors, (2) interviewing caregivers regarding problem behaviors, (3) devising procedures for measuring problem behaviors, and (4) developing intervention plans to decrease problem behaviors or increase desired behaviors.

If schools are to use FBAs effectively to inform treatment selection, then ensuring these assessments are done with integrity is a critical issue. Further, if school personnel are to conduct FBAs, then it may be up to behavior analysts to teach them how to assess and analyze behavioral functions appropriately. It is incumbent upon these behavior analysts not only to teach school personnel to use best practices in FBA and intervention selection, but also to use best practices in the training procedures used to teach these skills.

Ethics and FBA Training

Given the relative scarcity of behavior analysts in schools, teaching others to conduct FBAs is often necessary to attenuate resource deficits. Therefore, several researchers have endeavored to develop effective training strategies for school personnel and to evaluate the effects of those procedures. In an early study, Sasso et al. (1992) showed that, with minimal training, two special education teachers could conduct descriptive assessments and classroom-based FAs while simultaneously collecting data on behavior. Training consisted of a written description of the FBA procedures combined with approximately 2 h of instruction and practice for each procedure. Data from teacher-conducted assessments and analyses were compared to data yielded by a “conventional” FA conducted by Sasso. Results revealed a high degree of similarity between teacher- and experimenter-collected data, suggesting that teachers could accurately identify controlling variables and that descriptive assessments produced the same results as FAs. One potential limitation of this investigation was that the procedures for training teachers were not described in sufficient detail to allow for replication. Fortunately, later investigations have supplied more clearly specified protocols for teaching FA and other FBA skills to people with limited or no training in behavior analysis.

The most notable among these is Iwata et al. (2000), who provided a detailed account of procedures used to train undergraduate students to conduct the attention, demand, and play conditions of an FA (Iwata et al., 1982/1994) using a combination of written instructions, video modeling, and feedback. Consistent with the results of Sasso et al. (1992), Iwata et al. noted that training could be completed in about 2 h (assuming that the written materials had been read prior to the start of face-to-face training). Interestingly, Iwata et al. (2000) observed that their participants were fairly accurate in implementing conditions after simply reading the written descriptions and instructions. Although these results could imply that learning to conduct an FA is a relatively simple process, several factors caution against this conclusion. First, the participants in the study were upper-level undergraduate psychology majors who had completed a course in behavior analysis. The ease of training observed by Iwata et al. probably was at least partially the result of participants’ prior knowledge of behavior analytic principles, which seemingly exceeded the knowledge teachers would have garnered from their training programs. Remarkably, many teachers fail to receive even the most basic information on managing problematic behaviors, much less on identifying how classroom variables affect student responding (Latham, 2002). Second, data on accuracy of performance were collected during role-play situations with a graduate student assuming the role of a student/client. Accurate implementation in more naturalistic settings might have proved more challenging, and thus might have required additional training.

In an attempt to extend the findings of Iwata et al. (2000), Moore et al. (2002) showed that similar procedures could be used to train three general education teachers to implement attention and demand FA conditions. Consistent with the procedures of Iwata et al., the initial phase of the study required teachers to read materials pertaining to FA and answer questions with the researchers. Unlike Iwata and colleagues’ participants, however, the teachers’ accuracy during this phase was relatively low (thereby supporting the hypothesis that prior exposure to behavior analysis might bolster the effectiveness of written training materials). With the addition of individualized feedback, however, the performance of all three participants increased substantially and was maintained during classroom probes.

Other studies also have shown that teachers could be quickly trained to conduct FA sessions. For example, Wallace, Doney, Mintz-Resudek, and Tarbox (2004) demonstrated that teachers could accurately arrange conditions after a 3-h workshop that included opportunities to role play each condition and receive feedback on performance. Similarly, Moore and Fisher (2007) showed that staff at a center for treatment of severe behavior disorders could be trained to conduct attention, demand, and play conditions via written materials, lecture, and video modeling. Although exact times spent in training were not reported, Moore and Fisher speculated that successful staff training could potentially be accomplished with video models in as little as 15 min, assuming the videos showed sufficient exemplars.

Although these studies have demonstrated effective strategies for training people who are not behavior analysts to conduct the experimental conditions of an FA, they have not addressed many of the other skills that are required for carrying out school-based FBAs. The FBA process requires a much broader repertoire, including selecting the appropriate assessment/analysis strategies to match available resources and competence, correctly carrying out selected strategies, appropriately scoring and graphing data, accurately analyzing data, and effectively using data to inform intervention selection. Therefore, additional research has been undertaken to address some of these issues.

One example is Pindiprolu, Peterson, Rule, and Lignugaris/Kraft (2003), who provided web-based, experiential cases as a training tool for preservice special education teachers and then used pre- and posttests to evaluate the effects of the case study instruction on students’ knowledge and application of FBAs. Participants were taught to conduct FBA interviews and to design FAs based on those interviews. Three teaching methods were compared: reading only materials that summarized client information, reading the results of an FBA interview, and conducting one’s own interview. Students in all three groups improved significantly from pre- to posttest, but no differences in effectiveness among the teaching tactics were observed. Further, mean posttest scores for the groups did not exceed 67% for declarative knowledge or 59% for application of skills. Therefore, although the improvements were statistically significant, the scores suggest that the students still failed to master much of the basic information pertaining to FBAs and the skills required to conduct them. This study suggests that teaching the analytic skills involved in designing effective FBAs (as opposed to conducting experimental sessions) may be more challenging than it first appears.

Unlike Pindiprolu et al. (2003), who focused on teaching the assessment portion of FBA, Scott et al. (2005) examined the effects of FBA training on school staff’s abilities to identify effective interventions for problem behavior. The researchers provided FBA training to five staff members from four elementary schools. Training lasted 6 h, and included descriptions of procedures for both conducting FBAs and developing function-based interventions. Participants also practiced skills using three video case studies, both with the trainer and in small groups, and were provided feedback on their performance. Each participant subsequently was assigned the role of facilitator in their school’s intervention team, ensuring that at least one member of each team had been trained in conducting FBAs and linking interventions to FBA outcomes. The authors then reviewed the teams’ behavior plans for 31 students and compared the suggested strategies with those of experts who were asked to develop interventions based on each student’s case and the teams’ FBAs.

Both experts and teams selected a range of intervention strategies from a district-generated list (e.g., antecedent manipulation, instructional techniques, consequences for positive behavior and misbehavior), but teams were much more likely than experts to select punitive and exclusionary intervention components, regardless of the identified function. Although intervention plans developed prior to FBA training were not evaluated, these results suggest that FBA training did not necessarily produce a bias toward reinforcement-based interventions. Because Scott et al. (2005) did not assess whether the hypotheses generated by the teams were reasonable given the data or whether the strategies selected matched the hypothesized functions of the behaviors, it is impossible to assess the effectiveness of their training strategy in teaching these two very important skills.

Dukes, Rosenberg, and Brady (2007) also evaluated the effects of FBA training on special educators’ knowledge of behavioral function and subsequent intervention selection. Teachers were trained over 3 full days, with the second and third training days separated by 6 weeks. Teachers, trained in groups of 45–100, were exposed to verbal instruction, a written manual, case studies, and role plays. Training was specifically designed to teach teachers to identify functions of behaviors and then link those functions to intervention selections. Several weeks after the completion of training, participants were given an assessment comprising five scenarios. Participants were asked to identify the likely function of the behaviors described in each scenario from a list of functions and then, in an open-ended format, to describe intervention strategies that would likely result in “effective (i.e., rapid and semi-permanent) control of [the student’s] problem behavior” (p. 167). In addition, the assessment required participants to answer five multiple-choice questions about FBA strategies and purposes. Identical assessments also were sent to teachers who had not completed the training.

Although trained participants answered more questions about function correctly, they were no more likely than untrained participants to suggest interventions that matched behavioral function. It is interesting that this study employed a longer period of training than other studies (i.e., three 7-h days), yet participants still did not achieve one of the primary goals of the in-service. Although it is difficult to discern what might have accounted for these negative results (e.g., quality of training, treatment integrity, effects of the 6-week delay), they nonetheless raise concerns about the outcomes produced by the training strategies commonly employed by behavioral researchers and practitioners alike.

In addition to the often discouraging results of studies aimed at training broad FBA skill repertoires, another important issue concerns the measurement of learning outcomes. Specifically, it is unclear whether identifying functions from written scenarios and designing corresponding interventions is analogous to engaging in these behaviors in more authentic contexts. Van Acker, Boreson, Gable, and Potterton (2005) presented a compelling and disconcerting portrait of FBAs and behavior intervention plans (BIPs) in Wisconsin schools, finding that 70% of the FBAs/BIPs failed to identify or define the target behavior, 25% failed to identify a function for the behavior, and 46% proposed the use of aversive strategies as the sole means of changing behavior. Further, the results showed that school personnel with substantial training in the FBA process were no more likely to define target behaviors clearly or to design interventions to modify the physical or social context than those with no training. These findings clearly show the potential for a disconnect between training and practice. On a more positive note, the authors found that FBA/BIP teams with at least one trained member were more likely to verify the hypothesized function through some sort of testing, to incorporate behavioral function into the design of the behavior intervention plan, to use reinforcement-based strategies, and to plan for treatment monitoring. These latter findings bode well for the potential to train school personnel to identify functions and develop corresponding interventions, but there is still much left to do if we are to effectively and consistently train sufficient repertoires of FBA and intervention skills across a broad population of learners.

As noted, the results of some studies might suggest that teaching others, including teachers, to conduct FBAs is a relatively easy endeavor that takes minimal time and resources (e.g., Iwata et al., 2000; Moore et al., 2002; Moore & Fisher, 2007; Wallace et al., 2004). Perhaps this finding explains the propensity of some behavior analysts to agree to teach functional assessment and analysis to school personnel during relatively short in-services or workshops. Before making such agreements, they should recognize that these studies were designed to assess training methods for a very limited scope of FBA skills (i.e., arranging FA sessions), not for establishing broad FBA competencies. Clearly, training this relatively limited skill set does not address the skills required to collect and interpret FBA data.

Moreover, outside the realm of research, FAs (and, in particular, analogue analyses) are not likely to be recommended as viable FBA strategies in schools (Bambara & Kern, 2005; Chandler & Dahlquist, 2010). It is much more likely that FBAs will be conducted via informant and descriptive assessments (Van Acker et al., 2005), and much less is known about how to teach school personnel to collect these types of data in a valid and reliable manner (see Neef & Peterson, 2007) than is known about teaching FA. Although some researchers have attempted to evaluate a broader scope of training (e.g., Dukes et al., 2007; Pindiprolu et al., 2003; Sasso et al., 1992; Scott et al., 2005), their contributions have produced mixed results that make it impossible to establish clear training guidelines. Gaps in the existing literature also make it difficult to know whether a complex range of skills can be effectively taught and maintained and, if so, how much and what type of training is required to do so.

The relative paucity of information regarding the strategies needed to teach the full complement of FBA skills puts practitioners in somewhat of an ethical conundrum: Schools want effective training in FBA and intervention design, but our own literature makes it difficult to know exactly how to meet these needs. Further, we want our science and technology to be accessible to others, but we also want procedures to be implemented with integrity by those who are fully competent. Standard 3.0(d) of the BACB Guidelines for Responsible Conduct (BACB, 2011) states that “(b)ehavior analysts do not promote the use of behavioral assessment techniques by unqualified persons, i.e., those who are unsupervised by experienced professionals and have not demonstrated valid and reliable assessment skills.” Behavior analysts who provide training in FBA for teachers or other care providers should recognize that their efforts will not necessarily provide trainees with adequate (i.e., valid and reliable) assessment skills. Even though a conservative interpretation of Standard 3.0(d) might provide a basis for doing so, in our view it is pointless and inappropriate to accuse behavior analysts who provide such training of unethical conduct. It is, however, appropriate to call attention to the need to develop and use empirically validated training procedures that maximize the likelihood that trainees acquire the repertoire of complex and interrelated skills needed to use FBA successfully.

Potentially Effective Models for Collaboration to Increase FBA Competence

Given the current status of our training literature, practitioners should perhaps focus on training school personnel (or any other relevant stakeholders) to be good collaborators in the FBA process, rather than attempting to train a very complex skill repertoire with little evidence about which training methods are most effective. Peck Peterson, Derby, Berg, and Horner (2002) suggested a collaborative model for conducting home-based FBAs with family members who may have little background in behavior analysis. This model involves family members and behavior analysts assuming different roles during each stage of the functional behavior assessment process (i.e., problem identification and hypothesis development, hypothesis testing, design of intervention, evaluation and adjustment, and efficiency redesign).

The overall role of the behavior analyst in this process is to “improve the technology, expand the science, and make more effective the design of environments that reduce problem behavior and increase prosocial behavior” (Peck Peterson et al., 2002, p. 19). Complementing the behavior analyst’s role is that of the family, which provides “the context for the most efficient FA and ongoing intervention” (Peck Peterson et al., p. 19). This model could be adapted to describe the appropriate roles of school personnel and behavior analysts in conducting functional behavior assessments in school settings. Such an adapted model, outlined in Table 13.2, may provide the cooperative, on-site training model preferred by school personnel (Pindiprolu et al., 2007), as well as the collaboration between school districts, state departments of education, and institutes of higher education recommended by Shellady and Stichter (1999).

The preceding discussion has focused on the use of FBA in school settings, but similar considerations apply in all circumstances where people are trained to use FBA and put that knowledge to use in an attempt to improve behavior. “Benefitting others” is one of the core ethical principles that guide the practice of psychology (Koocher & Keith-Spiegel, 1998) and of applied behavior analysis (Bailey & Burch, 2011). For example, Standard 2.0 of the BACB Guidelines (BACB, 2011) reads: “The behavior analyst has a responsibility to operate in the best interest of clients.” “Pursuit of excellence” is another core ethical principle (Bailey & Burch, 2011). Our discussion of FBA in school settings is intended to illustrate that there is substantial room for improvement in how FBA is used there, and to offer strategies for increasing the likelihood that FBA eventually will be used to the maximum benefit of teachers and students, thereby approaching the excellence that all concerned individuals value.

Concluding Comments

“Doing FBA” is not, in itself, ethical conduct. “Doing FBA” in a manner that produces maximum benefit and minimal harm for the people whose behaviors are of concern is ethical conduct and should be the goal of behavior analysts. It is not, for example, ethically sufficient for members of an IEP team to conduct a poor FBA and design a weak intervention for a student with a developmental disability who is facing disciplinary action, even though doing so might meet the requirements of IDEA. In a 1994 discussion of the ethics of using psychotropic drugs to manage behavior in people with developmental disabilities, Poling wrote:

It is critical that decisions concerning [medication] use are individualized and data-based to the fullest extent possible. Because we can never know a priori how a given person will respond to medication, we must always determine what the medication is intended to do and whether this goal is accomplished. Moreover, we must take care to ensure that observed benefits are evaluated relative to real and possible costs to the patient, and that all decisions are made in her or his best interests. If this is done, treatment is rational and ethical as well. (p. 171)

To capture the essence of the ethical use of FBA, “FBA-based intervention” can simply be substituted for “medication” in the foregoing passage.

No reasonable person argues that it is fundamentally wrong, hence unethical, for applied behavior analysts or others in the helping professions to try to determine why their clients emit inappropriate behaviors or fail to emit appropriate behaviors, and then to use this knowledge to help those clients. From a conceptual perspective, FBA is perfectly acceptable as a general approach for designing behavior-change interventions; from an empirical perspective, it is a general approach of demonstrated value. As we have pointed out, however, successful interventions can be designed in the absence of FBA data, and collecting such data does not ensure that a treatment will be effective. Moreover, support for the contention that interventions based on FBA are generally more effective than alternative interventions is less than overwhelming, and further research is certainly needed. At present, there is no compelling conceptual or empirical basis for claiming that ethical or effective behavior analysis always begins with FA or another form of FBA. To date, FBA has been used primarily in the context of developing interventions to decelerate inappropriate behaviors in people with developmental disabilities. FBA has rarely been used to delineate the variables responsible for the non-occurrence of desired responses or to ascertain why low-rate, high-intensity behaviors occur (Irwin, Ehrhardt, & Poling, 2001). Moreover, the utility of FBA for understanding rule-governed behavior is unclear (Irwin et al., 2001). None of these considerations should be taken as criticisms of FBA, but they should serve as cautions against overenthusiastic and naïve endorsements. As Irwin et al. (2001) pointed out, “The logic and methods of functional assessment are evident in Skinner’s writings, and many early researchers and practitioners influenced by his ideas employed functional assessment [although it was not labeled as such] in designing interventions in school and other settings” (p. 173). Contemporary behavior analysts—including us—continue to use FBA to the great benefit of those they serve.

Certain applications of FBA, however, notably those involving FA of seriously harmful responses, raise interesting ethical issues, and we have attempted to illustrate some of these issues. Although some general guidelines were suggested, it is important to recognize that ethical treatment of clients is inevitably individualized treatment. As Johnston and Sherman (1993) emphasized in a discussion of the Least Restrictive Alternative (LRA) principle, a cornerstone for protecting people with disabilities, “to be an effective constitutional safeguard, the LRA must be a subjective and dynamic principle tailored to individual needs (Parry, 1985). Likewise, in determining the needs of [people with developmental disabilities], treatment decisions cannot be made in isolation from the individual’s personal preferences, values, and circumstances” (p. 112). This statement holds true regardless of whether FBA is being considered as part of, or is being used in, the treatment.

In closing, we should acknowledge that framing a discussion in terms of ethical issues may render emotion-laden what would otherwise be innocuous points. It is, for example, one thing to say that it is better practice to arrange a few short FA sessions than to arrange many long ones, and quite another to claim that a person who does the latter is unethical. We have attempted to avoid making ethical judgments and apologize in advance if any of our suggestions strike a reader as accusatory. Our intent is not to cause offense, but to call attention to the kinds of variables that behavior analysts and laypeople consider in determining whether a professional’s actions relevant to FBA are “ethical.” As behavior analysts, we see this as a matter of stimulus control, not morality. In other words, there often may not be a “right” or “wrong” thing to do at a given point in time. Rather, specific stimulus conditions (e.g., the type of curriculum being used with an individual, other behavioral supports and rules in place, the type and severity of problem behavior displayed) frequently interact to create a variety of interesting dilemmas for the behavior analyst. The behavior analyst must constantly evaluate these stimulus conditions to determine the best course of action for completing an FBA and to make decisions that comply with both the letter and the spirit of the ethical codes of conduct guiding our field.