Introduction to Behavioral Assessment

Assessment is an indispensable component in the treatment of child behavior disorders and can be a complex and lengthy process. Assessment results are often the basis for diagnosis and classification as well as for the selection of targets for intervention. Further, data from various assessment methods can assist in the design and evaluation of intervention efforts. The assessment process also helps the clinician draw inferences about causal variables and assists in the functional analysis of problem behavior patterns. Behavioral assessment encompasses methods and concepts derived from behavioral construct systems and is most frequently identified with an emphasis on quantification of observable and minimally inferential constructs. Its methods differ from traditional assessment methods in their structure, focus, specificity, level of inference, and underlying assumptions. A recent comprehensive volume, Diagnostic and Behavioral Assessment in Children and Adolescents by McLeod, Jensen-Doss, and Ollendick (2013), attests to the developments in the field and can be consulted for greater theoretical and methodological detail.

This chapter is an overview of behavioral assessment methods that might be useful in the development and evaluation of interventions for children in the school setting. As such, the authors review a number of methods, namely, interviewing, screening measures, other paper–pencil inventories, behavioral observation, analogue methods, archival records, and self-monitoring. The chapter then addresses developmental issues as well as issues related to reliability and validity in general. Further, suggestions are made for adapting methods for individualized use. One of the hallmarks of behavioral assessment is the idiographic nature of most methods: each can be tailored to a specific situation, specific behaviors, and a specific child.

Some additional principles of behavioral assessment that make it a unique approach are worth mentioning here as well. First, behavioral assessment is considered an ongoing process wherein data are collected at multiple points from intake and referral throughout the duration of treatment. Gathering baseline data prior to the implementation of any intervention is recommended and can serve as a way to evaluate changes across time. If continuous (i.e., daily or weekly) data cannot be obtained, repeated assessment is still encouraged. Second, we encourage data collection across settings. Rarely is a behavior pattern the same across different environments and with different people, and it is important to be thorough in this regard. Inconsistencies across settings help determine the mechanisms that elicit and maintain behavior problems. Third, we suggest that any comprehensive assessment include multiple informants (child, parents, teacher, and others) and multiple methods in order to obtain the most comprehensive picture of a child’s functioning. Informants often do not agree (parents often rate their child differently than the child rates him/herself), and there are frequently inconsistencies across methods (self-report data are often quite different from behaviors directly observed in the natural environment). Recent research has indicated that youth are more accurate informants of their internalizing symptoms whereas caregivers more accurately report externalizing behaviors (Penney & Skilling, 2012), and cross-informant correlations remain only moderate (Althoff, Rettew, Ayer, & Hudziak, 2010). However, these inconsistencies are often among the most interesting and compelling aspects of the behavioral assessment process.

Functional Analysis

It is reasonable to consider the development of a functional analysis, the conceptually based integration of data from all pre-intervention assessments, to be the foundation for an individualized intervention plan. The functional analysis (FA) is an explanation of a student’s target behaviors based on the synthesis of: (a) interacting behavioral, cognitive, and psychological causal factors; (b) associated behavioral assets and deficiencies; (c) situational sources of variance; (d) the context in which behavior occurs; and (e) any other mediating variables. The FA will help the practitioner determine whether to intervene, how best to select intervention strategies, and how to evaluate treatment impacts. Functional assessment provides accountability for treatment outcomes and increases the validity of clinical judgments. Practitioners should design a comprehensive assessment plan, based upon multiple informants, using multiple methods of assessment, and with data collection continuing across time, to test clinical hypotheses about “why” problem behavior occurs. Rather than relying on inferential or “best guess” answers to the question of “why?”, the FA helps to uncover the variables presently maintaining the target behaviors and can lead directly toward the intervention strategies most likely to be effective.

Interviewing

There are several types of clinical and diagnostic interviews that can be conducted with children and their family members. General clinical interviews are designed to obtain overall demographic and family information as well as information about social–emotional development, academic functioning, and the referral concerns. Often, each person interviewed has a different perspective on the problem behaviors and situations, and the interviewer should expect inconsistencies and even be intrigued by them. In a clinical setting, a general intake might be followed by a diagnostic interview, of which there are three types: unstructured, semi-structured, and structured. The unstructured approach allows for a more conversational style of information gathering and helps develop an initial therapeutic alliance and obtain information on possible diagnoses; however, these interviews offer limited diagnostic accuracy.

Semi-structured interviews have a specific set of questions to be included, but the exact sequence is not predetermined. This allows for greater flexibility and for the opportunity to gather other relevant information. A skilled interviewer will be able to cover all of the required sections of the Mental Status Exam, for example, but in a way that puts the student/family member at ease. Sometimes, these semi-structured interviews ask a series of questions about specific child/adolescent diagnostic categories such as ADHD or conduct disorder, and this information is then used to substantiate a diagnostic formulation. Other forms of semi-structured interviews require training to administer. These include the ISCA (Interview Schedule for Children and Adolescents), the K-SADS (Schedule for Affective Disorders and Schizophrenia for School-Age Children), and the DICA-R (Diagnostic Interview for Children and Adolescents–Revised). Phillips and Gross (2010) provide an excellent description and overview of diagnostic interviewing for children and adolescents under the auspices of structured interviewing (i.e., DISC-IV, CAPA, ChIPS), though these types of interviews are actually rarely used in clinical practice (Bruchmüller, Margraf, Suppiger, & Schneider, 2011) and most likely are not part of the assessment protocol in school contexts.

Behavioral Interviewing

In contrast to many forms of interviewing in clinical psychology, the primary objective of behavioral interviewing is to obtain accurate information that will be helpful in formulating a functional analysis of the presenting problem (Haynes & O’Brien, 2000). Although unstructured, the focus is on describing and understanding the relationships among specific antecedent triggers, problematic behaviors, and reinforcing and maintaining consequences. The treatment provider should plan for a behavioral interview with the student, the parents, and perhaps the referral source to begin the assessment process. Kratochwill (1985) suggests that behavioral interviews follow a problem-solving format such as the following:

  1. Problem identification: a specific problem is identified and explored, and procedures are selected to measure target behaviors across time.

  2. Problem analysis: conducted by assessing the student’s resources and the contexts in which the problematic behaviors are likely to occur and by gathering some historical information about problematic situations.

  3. Assessment planning: the treatment provider helps establish an assessment plan to be implemented. This plan might include a number of behavioral assessment methods (questionnaires, self-monitoring, role-play assessments) and ongoing procedures to collect data relevant to assessment and intervention. Methods are selected such that data can be gathered across time and contexts in order to evaluate treatment outcomes.

Thus, the behavioral interview is focused not only on obtaining information within the interview session, but also on making plans to gather information on behavior outside the interview, in the environment in which the behavior naturally occurs and from the perspectives of others (i.e., teachers and parents). Certainly there are different approaches to interviewing children versus their parents/teachers, as the child often does not agree that there are any difficulties. There are also developmental considerations for the child interview, as children are often unable, due to a limited verbal repertoire, or unwilling to report on their own behaviors and/or internal states. Adults are usually the powerful agents in a child’s life and therefore may provide information from their own perspective. Further, the interview situation itself allows the interviewer to “observe” the student and family members interacting and relating to the interview context.

One simple format for structuring the behavioral interview is the “ABC” format. The interview should elicit specific descriptions of Antecedents, or triggers, of the problematic behaviors. A specific Problem List should be developed and centered on problem Behaviors defined in observable and measurable ways. Hypothetical constructs (e.g., “he is an angry child”; “she is so neurotic”) are translated into overt behavioral responses (“he has pushed others while standing in line”; “she continuously asks the teacher whether her work is correct”) so that baseline measurement can be obtained. If the target behaviors do include covert phenomena such as subjective emotional states (“worry,” “shame,” etc.) or distorted thoughts and interpretations of situations (“nothing ever works out for me”; “nobody cares about me”), then self-monitoring methods can be developed. Lastly, immediate and long-term Consequences thought to be shaping and maintaining the target behaviors are identified and earmarked for inclusion in intervention strategies.

Interviewing Parents

The parent interview is a key component in assessment and treatment planning and should occur soon after the referral of the student for services. In addition to obtaining key information about the background and current picture of the presenting problems, parents can provide critical information about situational antecedents and maintaining conditions that occur outside of the school context. The parent interview also allows for an early assessment of parenting, current behavior management strategies in use, and receptivity to school interventions.

An excellent example of behavioral interviewing is found in the work of Barkley (1990), who developed extensive interview protocols for the behavioral assessment of ADHD. One portion of the interview generates information on the nature of specific parent–child interactions that are related to the defiant and oppositional child behaviors often associated with ADHD. The interviewer reviews a series of situations that are frequent sources of difficulty between children and parents and solicits detailed information about particularly problematic situations. For example, parents may report that their child has temper tantrums, during which the child cries, whines, screams, hits, and kicks. A behavioral interview will be used as a first step in determining precisely what these behaviors look like when they occur, in which situations the behaviors occur (e.g., while the parent is on the telephone, in public places, at bedtime), and in which situations they do not occur (e.g., when the child is playing alone, playing with other children, at mealtimes). Additional information is then sought regarding the sequence of events, including the behaviors of the parents and the child that unfold during a tantrum. This type of situationally focused interview provides a detailed picture of how the parent perceives the antecedents and consequences that surround the child’s problem behaviors. Often, the interviewer will ask for a “recent example” of the target behavior problem and a detailed “Critical Incident Analysis,” which includes the details leading up to the problem (i.e., everything that went on in the house that Tuesday when Jimmy refused to go to school), everything the child and other family members did in response, and what the eventual outcomes were for that particular day. These sequential details will help to flesh out the functional analysis.

It is suggested that initial behavioral interviews be held with the referral source for the student, most likely teachers and parents who have expressed concerns. Specific information and initial functional hypotheses can be generated, which should then be confirmed by interviewing the student. Although the information obtained from multiple sources often seems inconsistent, the discrepancies themselves are always of interest. The behavioral interview should result in (1) initial hypotheses about the occurrence and maintenance of the target behaviors and (2) a plan for a multi-method, multi-informant assessment process. Included in this plan will be ideas about repeated assessment to help determine treatment outcomes and guide further treatment planning.

Screening Measures

When a student has been identified for intervention services, it is difficult to know how best to assess and subsequently address their needs. It becomes imperative to gather data in the most accurate, concise, and noninvasive way possible to design and evaluate any potential treatment program. Choosing an initial screening measure that is most developmentally and diagnostically appropriate for both the student and the situation at hand can be complicated. The intent of this section is to outline some easily accessible measures that school psychologists can use to quickly and accurately obtain a diagnostically informative snapshot of an identified student.

It is often the case that, despite having been identified as in need, a student’s pending diagnostic considerations may elude even the keenest observer. In these instances, it is worth using a broader-based tool to narrow the focus. The Behavior Assessment System for Children, Second Edition (BASC-2) is a widely used and well-validated rating system for measuring behavior. As a multifaceted tool, the BASC boasts the ability to integrate the Self-Report of Personality (SRP), which is filled out by the student (or via an interview by a trained clinician); the Teacher Rating Scales (TRS), which can be completed by any educator who has more than surface-level insight regarding the individual’s behavior; and the Parent Rating Scales (PRS), which should be completed by either or both parents. Assessors should be mindful that the BASC has male and female norms for the parent rating form, reflecting that fathers typically rate their children less stringently (i.e., tend to minimize issues more) than mothers.

The ability to compare and contrast the different reports across several potential diagnostic categories, including propensity toward anxiety, depression, and a range of externalizing and internalizing behaviors, makes the BASC a strong tool for narrowing the focus for diagnostic consideration. Also compelling is its reliability and validity in capturing the symptom picture across a wide age range: the BASC can be used from ages 2:0 through 21:11 for both the TRS and the PRS, and the SRP covers ages 6:0 through college (Lane et al., 2009). It should be noted that the BASC can be used to guide treatment planning in schools but should not be considered a complete personality assessment. It is a reasonable alternative to the Achenbach System of Empirically Based Assessment (ASEBA), with solid psychometric properties, and should be used in this vein. The BASC also has the capacity to rate both adaptive and maladaptive characteristics of an individual, much like the ASEBA—a notable strength of the tool overall.

Feasibility remains an issue in assessment planning. Fortunately, within the school setting, this is likely less of an issue with the BASC. In other clinical settings, it often borders on impossible to enlist the participation of both the parent(s) and the teacher in addition to completing the inventory with the student; within the schools, this tool can give all parties involved a voice. Though the comparative nature of different perspectives on individual behavior yields more compelling diagnostic information, the SRP can still be used as an individual tool to gain insight into a student’s view of him- or herself. The question of truthfulness in responding, or even potential malingering, may arise; the BASC comes equipped with validity scales that are used to determine whether the data are valid for interpretation. If there is a question as to whether a student in need of services is able or willing to complete the questionnaire in the first place, a clinician can use the SRP as an interview tool and ask the student the questions via structured interview, though this is likely necessary only in extreme circumstances (e.g., a child simply refuses to complete it independently). One of the primary purposes in the development of this tool was for use in special education evaluations, and in this capacity it is designed to accommodate students who may not be able to adhere to its prescribed completion. Regardless of the methodology, this measure yields strong data useful in guiding direction for both clinicians and diagnosticians. It should be noted that the BASC is not a diagnostic tool; however, it yields symptom pictures that can suggest diagnostic categories consistent with DSM-IV-TR classifications. It should be used to guide and inform, not to firmly categorize, a mindfulness that applies to the use of most behavioral monitoring and screening measures.

School psychologists and other professionals may also decide that, rather than diagnostically focused screening of an identified individual, a more categorical assessment of behaviors that are both adaptive and maladaptive is indicated. The Strengths and Difficulties Questionnaire (SDQ; Goodman & Goodman, 2009) is a brief measure that can be used with multiple informants. The SDQ quantifies adaptive and maladaptive behaviors by asking about 25 different attributes—some positive, others negative. It can be completed in an average of 5 min, making it quite appealing for use with children exhibiting oppositional behaviors or attentional deficits. The measure can be used with individuals ages 4–16 and can be given as an interview if the student is unable to complete the form on his or her own. It has also been used as a predictor of the presence of potential psychiatric disorders. More information on clinical utility and questionnaire practicality can be found on the publisher’s website: http://www.sdqinfo.org/a0.html.

Another tool that can be given to parents and teachers is the Disruptive Behavior Rating Scale (Barkley, 2012). These 41- and 26-item paper-and-pencil inventories, respectively, are scored using a 0–3 Likert scale indicating whether certain behaviors have interfered with functioning in school and/or in the home within the last 6–12 months. Similar to other measures, these rating tools are appropriate for youngsters aged 6–18 years. These inventories are not particularly diagnosis specific; rather, they can be used to supplement screening measures that focus solely on the identified student. Gathering information on behavior across settings is an integral piece of the puzzle, yielding crucial data regarding the support and perspectives of significant others across settings (Barkley & Murphy, 2006).

Common referral problems exhibited by adolescents are anxiety and depression. Among the most viable and widely used screening and assessment tools are the various Beck inventories. The Beck Anxiety Inventory (BAI; Beck & Steer, 1990) is a brief paper-and-pencil screening tool designed to help differentiate among emotional, behavioral, and physiological symptoms in individuals struggling with both anxiety and depression. It is a self-report measure that can be used both as a tool for the clinician to inform diagnosis and as a potential intervention in and of itself: it brings awareness of generalized symptoms of anxiety to an individual’s attention and, in this way, can provide just enough education for the willing participant to know that some new behaviors or experiences are worth noticing. Interestingly, Leyfer et al. (2006) report that it assesses and screens for panic symptomatology; individuals with symptoms similar to panic disorder have been found to endorse significantly higher scores on the BAI than those with other anxiety disorders (Beck & Steer, 1990), though the measure can still be used to screen for and assess symptoms of generalized anxiety before and after intervention. While it cannot provide enough information to definitively diagnose an anxiety disorder, the BAI is a useful starting point for assessing any child’s behavior reflective of anxiety.

Its sister screener, the Beck Depression Inventory–II (BDI-II), is also a paper-and-pencil measure for rating self-reported symptoms of depression through a 21-question scale, with each item rated from 0 to 3 based on symptom frequency and severity. The measure can detect the dichotomous, extremely positive or extremely negative thinking seen in some individuals with severe depression. So while a tool like the BDI-II is most useful for those who are honestly self-reporting, response patterns marked by either under- or overreporting of symptoms can themselves be informative about a true depression (Beck, Steer, & Brown, 1996).
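To make the arithmetic concrete, the short sketch below sums 21 item ratings (each 0–3) into a BDI-II-style total score. This is a minimal sketch, not scoring software: the function name is illustrative, and the severity bands shown are commonly cited cutoffs that should be verified against the published manual before any clinical use.

```python
# Minimal sketch of BDI-II-style scoring: 21 items, each rated 0-3,
# summed to a 0-63 total.  The severity bands are commonly cited
# cutoffs and should be verified against the published manual.

def score_bdi_ii(item_ratings: list[int]) -> tuple[int, str]:
    if len(item_ratings) != 21:
        raise ValueError("Expected 21 item ratings")
    if any(r not in (0, 1, 2, 3) for r in item_ratings):
        raise ValueError("Each item must be rated 0-3")
    total = sum(item_ratings)
    if total <= 13:
        severity = "minimal"
    elif total <= 19:
        severity = "mild"
    elif total <= 28:
        severity = "moderate"
    else:
        severity = "severe"
    return total, severity

# Example: one administration; repeated weekly totals could be charted
# to evaluate intervention effects, as discussed later in this section.
example = [1, 0, 2, 1, 1, 0, 0, 2, 1, 1, 0, 1, 2, 0, 1, 1, 0, 1, 0, 2, 1]
print(score_bdi_ii(example))  # (18, 'mild')
```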

The Beck Youth Inventories–Second Edition (BYI-II) may be warranted in cases where impressions are not as clear as to which diagnosis the individual may exhibit. Essentially, the BYI-II has been validated as a screener for emotional and social impairment in five areas: depression, anxiety, anger, disruptive behavior, and self-concept. Each domain is assessed through its own separate inventory of 21 questions, 105 questions total (paralleling the 21-item length of the BDI and BAI). While the screening process itself will likely take longer for a student to complete, a full BYI-II will yield a symptom profile across each of these domains. The profile can then be compared to the profiles of individuals who have been diagnosed with different sets of symptoms (e.g., depression, oppositional defiant disorder, attention-deficit/hyperactivity disorder). It is important to note that this assessment tool has shown benefits as a screening measure, as an outcome measure, and, equally importantly, as a method for monitoring ongoing progress in treatment. Provided a level of buy-in exists and there is adherence to the self-monitoring needed to complete an assessment of this length (five domains of functioning, 21 questions per area), the BYI-II can be a powerful, useful tool.

The intricacies of each useful screening tool that our profession may choose to employ are beyond the scope of this chapter. Often there may be concerns about a student’s ability to effectively problem-solve and use planning skills in social and/or academic situations. When a more specific analysis of behavior is warranted, there exists an array of choices for professionals. The Behavior Rating Inventory of Executive Function (BRIEF) is a targeted and simple self-report paper-and-pencil survey of behaviors accompanying disorders of executive function, such as ADHD. While not a diagnostic tool, the BRIEF can be used to quantify difficulty with focus and attention; parents and teachers can be surveyed as well. Much like the BASC-2 in this way, the BRIEF can be used as part of a larger battery or as its own screening measure, yielding data that can help guide supports for the student.

In general, what should be gleaned here is that there are a number of broad-based tools, in addition to more narrowly focused tools, that can be used at different points throughout the screening and treatment processes and for different purposes. Consider, for example, a child who is beginning to display infrequent yet severe angry outbursts in the classroom. This behavior could be consistent with several diagnoses (e.g., attention-deficit/hyperactivity disorder, oppositional defiant disorder, Asperger’s disorder, major depressive disorder). Clinicians might use a BYI-II at the initial intervention stage to gain a clearer understanding of the breadth of the symptom picture. It may turn out that this individual has been experiencing symptoms concordant with a depressive diagnosis; without proper screening, the behavior would likely have been classified as externalizing rather than as a product of a truly internalizing disorder such as depression. Throughout what may now be a more efficient and focused treatment protocol, the clinician could ask the student to fill out a BDI-II weekly, or perhaps every other week, as a means of evaluating intervention effectiveness. The more effective the screening, the more sure a clinician can be that he or she is providing the best care for the individual. An interesting consideration here is the hesitancy that many professionals feel when asking students to engage in frequent assessments, as there may be a sense of guilt, as if the student’s time is being wasted and could otherwise be spent more effectively. In fact, frequent monitoring can be a genuinely effective tool for informing care in an environment where time is often of the essence, because it provides a basis for making adjustments to treatment in progress.

In sum, there is increased awareness in the field of the value of data collection, not only for justifying treatment strategies but also for providing concrete data to evaluate outcomes and inform future practice. Without a proper screening assessment to gather baseline information regarding symptom presentation and maintenance, data collected for these purposes may be skewed, inaccurate, or simply insufficient to provide diagnostic qualifiers. With this in mind, any of the aforementioned inventories can be used in this way, as long as they are administered and interpreted appropriately for each situation.

Parent Measures

Sometimes, it is not the child who requires intervention. It may be determined that, although a child’s behavior has warranted intervention, the parent or parents are likely at the root of an externalized presentation. Truthfully, there are many times when parents themselves may need to be referred for treatment. Parents struggling with depression, for example, tend not to seek services; they instead typically bring their children for various treatments without acknowledging their own need, and thus without acknowledging the impact their potential pathology may have on the functioning of their child. One construct worth considering is the perceived level of parental competence and confidence. The Parenting Sense of Competence Scale (PSOC; Johnston & Mash, 1989) is a 16-item paper-and-pencil survey designed to assess an individual’s perception of his or her parenting skill. This measure can yield a quick and accurate snapshot of a parent’s sense of confidence and satisfaction with parenting. Scoring simply consists of tallying Likert-style responses, which then correspond to one of three acuity ranges: low, moderate, or high. The PSOC is particularly useful for assessing parental resources for making effective or positive decisions regarding their child’s behavior, as well as for giving the clinician enough information to guide intervention strategies for use with the child. A copy can be found and reproduced here:

http://www.afterdeployment.org/sites/default/files/pdfs/assessment-tools/parenting-confidence-assessment.pdf.

If it is suspected that a high level of parental and/or familial stress is contributing to the identified child’s behavior, the school psychologist may consider administering the Parenting Stress Index, Fourth Edition (PSI-4) (Abidin, 1995). The PSI is a screening measure available in both paper-and-pencil and electronic formats. It can aid in identifying and evaluating the level of stress in a family system by focusing on parent characteristics and child characteristics, while highlighting situational and demographic aspects of life stressors. This 120-item inventory is designed for parents of children from 0 to 12 years of age and can be completed in approximately 20 min. A validity scale (Defensive Responding) is available that is designed to indicate whether the parent completing the tool is responding defensively regarding themselves or their child. Should a clinician suspect that a high level of parental stress is influencing both the parent’s and the child’s functioning, this tool may be used to guide treatment in a way that provides support for the child by acknowledging that circumstances outside the child’s control (e.g., parental behavior, financial/living situation) are contributing to the presenting problems.

Clinicians might also consider aspects of family functioning that may be intertwined with a student’s presenting problems. Pritchett and colleagues (Pritchett et al., 2011) provide a comprehensive overview of measures that target parent–child relationships, parental practices and discipline, parental beliefs, marital functioning, and general family dynamics and that might be considered in the assessment process. The Parenting Scale (see Salari, Terreros, & Sarkadi, 2012) is one of the most widely used measures of parental discipline. This 30-item self-report inventory consists of three subscales: laxness, overreactivity, and verbosity. Laxness identifies a parent’s permissive and inconsistent parenting style. Its counterpart, overreactivity, highlights aspects of parenting that are harsh or punitive. Hostile parenting, which is more extreme, examines the extent to which a parent might resort to hitting, cursing, or insulting their child. This scale can be useful in identifying gaps or lapses in discipline style that parents may inadvertently or intentionally be resorting to in the parent–child relationship. A free copy of this scale can be found here:

http://www.pti-sf.org/yahoo_site_admin/assets/docs/PS_English.242164902.pdf.

Behavioral Observation: Direct Behavior Ratings

One of the hallmark strategies of behavioral assessment has been direct observation of behavior problems and their correlates in the natural environment. This allows for precise assessment of discrete behaviors as they occur in one or more natural settings. The method requires that behaviors of interest be operationally defined in terms that can be observed, so that the frequency and duration of the behaviors, as well as where and when they occur, can be documented. Data collection can be completed by an external and objective observer or by a participant observer, someone already in the natural environment (e.g., a teacher), and should be conducted across many observations in order to gain a reliable estimate of target behaviors.

Often a good place to begin is with an ABC recording: a descriptive narrative recording of the antecedents, behaviors, and consequences occurring for the target student in the natural environment. Figure 2.1 is an example of this type of ABC narrative.

Fig. 2.1 Narrative ABC record

This recording can be rich in detail and allows a focus on multiple behavioral sequences prior to treatment planning. Although this method requires little actual observer training, it may be time consuming and is not amenable to observations of multiple students. Anecdotal recording is a briefer narrative account describing a single incident of a student’s behavior that is of interest to the observer (e.g., a physical fight with a peer). Often written after the incident, the anecdotal record describes (a) what happened, (b) how it happened, (c) when and where it happened, and (d) what was said and done in response. This narrative is less time consuming, helps focus on behaviors of interest, and requires no special training. It is suggested that a standard incident form be used and collected throughout the school year.

Once an observer has completed this preliminary observation, initial functional hypotheses can be developed relative to target behaviors and the antecedent and consequent events thought to be related to the maintenance of these behaviors across time. Home-based or school-based observations are designed to more naturalistically capture particular child behaviors of interest within the context of routine daily activities. Usually, a trained observer watches and codes any number of a student’s behaviors during predetermined structured observation sessions. This helps to establish baseline frequencies of target behaviors and correlates and to develop treatment strategies.

The two main methods of direct observation used by school personnel are Time Sampling and Event Sampling. Both first require complete behavioral definitions of each behavior to be observed in the natural environment. These definitions must be objective, clear, and reliable across observers. In Time Sampling, the observer records the frequency of the behavior’s occurrence over specific time intervals (e.g., during 15 min of snack time). This is an easy way to measure the occurrence of high-frequency, easily observable behaviors for one or more children; however, context-specific information (i.e., antecedents and consequences) is not obtained. In Event Sampling, the observer waits for and records a specific preselected behavior (e.g., a temper tantrum) along with the subsequent events and associated behaviors. This can be used to study low-frequency behaviors as well as setting events, which may have direct implications for treatment planning. The Event Sampling method furthers the functional assessment of specific target behaviors each time they occur in a far more detailed fashion than Time Sampling. However, the burdens of both of these direct observation methods, for either the participant teacher or an outside observer, are many, and they may not be easily implemented in a school setting.

Most behavioral intervention research studies have relied upon direct observation of behaviors in the natural environment to examine fluctuations in responding during various treatment conditions repeatedly across time. This is referred to as idiographic time-series measurement, rather than snapshot measurement at any single point in time, and is the hallmark of the behavioral assessment approach. It provides continuous feedback across time to aid in treatment decisions.
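As a rough illustration of the record keeping these two formats require, the sketch below tallies Time Sampling counts per interval and stores Event Sampling records along with their surrounding context. The class and field names are illustrative assumptions, not a standardized coding protocol.

```python
# Illustrative sketch of the two direct-observation formats described
# above.  Structures and field names are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class TimeSamplingRecord:
    """Frequency counts of a predefined behavior per observation interval."""
    behavior: str                       # operational definition label
    interval_minutes: int               # e.g., 15-min snack-time blocks
    counts: list[int] = field(default_factory=list)

    def add_interval(self, count: int) -> None:
        self.counts.append(count)

    def rate_per_interval(self) -> float:
        return sum(self.counts) / len(self.counts) if self.counts else 0.0

@dataclass
class EventSamplingRecord:
    """One occurrence of a preselected low-frequency behavior with context."""
    behavior: str                       # e.g., "temper tantrum"
    setting: str                        # where/when it occurred
    antecedents: str                    # events preceding the behavior
    consequences: str                   # events following the behavior

# Example use during a baseline observation:
calling_out = TimeSamplingRecord(behavior="calling out", interval_minutes=15)
for count in (4, 2, 5):                 # three observation intervals
    calling_out.add_interval(count)
print(calling_out.rate_per_interval())  # average occurrences per interval
```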

The emphasis on the response to intervention (RTI) model highlights the need for easy and frequent monitoring methods, perhaps daily, so that intervention efforts can be designed, implemented, and evaluated more efficiently. Often, in the school setting, there are limited resources for such methods of data collection across multiple observation times. DBR (Direct Behavior Rating) refers to a class of behavioral observation methodologies that can be used to document the effects of a behavioral and/or academic intervention. A recent advancement in DBR research is the analysis of the assessment potential of these methods. DBR, like systematic direct observation (SDO), may also fulfill educational accountability standards because the data provide valid and reliable information about the effects of behavioral interventions (Schlientz, Riley-Tillman, Walcott, Briesch, & Chafouleas, 2009). DBR methods require that behaviors to be observed are specified and operationally defined, as with SDO methods. Ratings or Likert-type scores are then entered at the end of some predetermined observation period (e.g., 5 min, 1 h, half day, or daily) in specific settings. The data can be easily charted and summarized to share with parents, counselors, school psychologists, or administrators.

Riley-Tillman, Kalberer, and Chafouleas (2005) provide a comprehensive overview of one such DBR, the DBRC (Daily Behavior Report Card), used to rate both academic (such as on-task behavior, hand raising, and work completion) and social (such as disruptive or aggressive) behaviors. The parameters they note for the DBRC are similar to the recommendations for conducting direct observations:

  1. The behavior of interest is operationally defined.

  2. The observations should be conducted under standardized procedures to ensure consistency in data collection.

  3. The DBRC should be used in a specific time and place with a predetermined frequency.

  4. The data can be scored and summarized in a consistent manner across raters, settings, and even across students.

Also included in their review is a conceptual flowchart model to help determine the appropriateness of using the DBRC to monitor student behaviors that are not severe or frequent enough to warrant immediate intervention. Behaviors are clearly defined, and each is given a Likert-type scale rating. For example, if hand raising were a target behavior, the assessor would rate either a “1” (0 times), “2” (1–2 times), “3” (3–4 times), “4” (5–6 times), or “5” (7+ times). The range would be based upon baseline observations and the goals of the intervention. The assessor, usually the teacher, would complete this either at a specific time point each day or at multiple predetermined data points across the day. As such, multiple behaviors can be assessed via a single DBRC. A case example that illustrates the method is available from Riley-Tillman et al. (2005). An available resource for the development of student-specific monitoring tools is the Report Card Generator at Intervention Central (http://www.jimwrightonline.com/php/tbrc/tbrc.php).
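Continuing the hand-raising example, the brief sketch below converts an observed count into the 1–5 rating described above and collects a week of ratings for charting. The function and the daily data are illustrative only; the rating ranges would normally be set from baseline observations and the goals of the intervention.

```python
# Sketch of the DBRC rating described above: an observed count of a
# target behavior (e.g., hand raising) is converted to a 1-5 rating.
# Range boundaries mirror the example in the text.

def dbrc_rating(count: int) -> int:
    if count == 0:
        return 1        # "1" = 0 times
    if count <= 2:
        return 2        # "2" = 1-2 times
    if count <= 4:
        return 3        # "3" = 3-4 times
    if count <= 6:
        return 4        # "4" = 5-6 times
    return 5            # "5" = 7+ times

# A teacher might enter one rating per predetermined period each day;
# ratings can then be charted to share with parents or the school team.
daily_counts = {"Mon": 1, "Tue": 3, "Wed": 6, "Thu": 7, "Fri": 2}
daily_ratings = {day: dbrc_rating(c) for day, c in daily_counts.items()}
print(daily_ratings)  # {'Mon': 2, 'Tue': 3, 'Wed': 4, 'Thu': 5, 'Fri': 2}
```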

DBRs have several advantages when compared to other methods of direct behavioral observation. First, a natural participant, the teacher, serves as the observer, which may result in less reactivity from those observed than the use of an outside observer, as is often required with other direct observation methods. Second, DBRs are quite socially acceptable among teachers: over 60% of teachers contacted via a national database reported using DBRs periodically (Chafouleas, Riley-Tillman, & Sassu, 2006).

Analogue Assessment

Since it is often impossible or impractical to observe students’ behavior in the natural environment, analogue assessment methods have been developed to help understand the functional relationships associated with target behaviors and to obtain baseline levels of responding prior to intervention implementation. Analogue assessment provides the opportunity to directly observe the students’ behavior in a contrived setting that approximates the natural environment (Gold & Marx, 2006). The assessment situation can be standardized and designed to elicit the behaviors of interest in a far more efficient way than naturalistic observation. The method relies on the assumption that behavior in a contrived environment approximates what would occur naturally. In addition to observation, alternative methods include the students’ responses to audio or videotaped scenarios and role-play enactments usually with a confederate child or adult. Most recently, researchers and clinicians have incorporated virtual reality simulations of feared situations to help in assessment, but few reports extend this to child samples. According to Mori and Armendariz (2001), analogue methods offer potential advantages such as tracking multiple behaviors simultaneously, accommodating variability across behavioral domains, ensuring the target behavior of interest, and being less intrusive than naturalistic observation.

Analogue methods have most commonly been used to assess fears, phobias, academic functioning, and social behavior. The Behavioral Avoidance Test (BAT) requires the student to enter a room that contains the feared object (snake, dog, the dark, etc., either real or fake) and to approach it and engage in tasks that increase the anxiety potential. Behavioral measures of avoidance (proximity to the feared object, time spent in the presence of the object, etc.) and ratings of internal distress provide the assessment data. This can also be extended to include other situations that provoke intense anxiety, such as heights, enclosures, injections, and even school situations. These situations can be either contrived (e.g., dogs) or natural (e.g., elevators) but can be standardized across assessment time points for comparative purposes. Silverman and Serafini (1998) include the use of a graphical “fear thermometer” to obtain a subjective and relative rating of fear while in the contrived situation. Reported some time ago, a unique approach was developed by Glennon and Weisz (1978) for the assessment of separation anxiety in preschoolers. Using the Preschool Observation Scale of Anxiety (POSA), observers recorded 30 indicators of anxious behavior as they watched a preschooler complete tasks from several cognitive assessments with and/or without the mother. The coding of anxious behaviors represents a detailed topography of how anxiety is exhibited in young children.

In terms of academic functioning, it is easy to see how a simulated educational or testing situation could be set up to assess a student’s frequency of attentional shifts, off-task behaviors, out-of-seat movements, and so on, all indicative of ADHD or other learning difficulties. Early assessment attempts included an analogue playroom setting with a movement grid across the floor to determine locomotor activity in hyperactive children; direct assessment of movement patterns at pre- and post-assessment points would help to determine the effects of treatment components on motor behavior. Barkley (1990) developed an ADHD coding system to be used in an analogue academic setting. A child would be placed alone in a playroom and asked to complete a packet of math problems within 15 min; further, the child was instructed not to leave the table or touch the toys. Observers recorded, via a one-way mirror, the time the child was off task or out of seat and how often s/he vocalized and played with objects. This was revised into the Restricted Academic Task (RAT) by Fischer (1998), which standardized the coding system for interval recording of the following behaviors: engaged in task, off task, fidgeting, task-relevant vocalization, task-irrelevant vocalization, and out of seat. Fischer reports the use of this assessment probe as a way to help optimize medication dose for children with ADHD and as an adjunct to parent and teacher reports of treatment outcomes. It may be more practical to videotape the child in the academic situation for later coding, as well as so that the child, treatment provider, and parent have a visual record of behavioral change across time. The coding systems described are usually time consuming and complex, requiring observer training, and thus may be impractical for the school environment.
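A brief sketch of how interval-recorded, RAT-style codes might be summarized as the percent of intervals in which each behavior was coded is shown below. The category labels come from the list above, while the single-code-per-interval format and the example session are simplifying assumptions rather than the published coding system.

```python
# Sketch of summarizing interval-recorded codes (as in the RAT) into
# percent-of-intervals per behavior category.  Assumes one code per
# interval for simplicity; the actual system may allow multiple codes.
from collections import Counter

CATEGORIES = [
    "engaged in task", "off task", "fidgeting",
    "task-relevant vocalization", "task-irrelevant vocalization", "out of seat",
]

def percent_of_intervals(coded_intervals: list[str]) -> dict[str, float]:
    if not coded_intervals:
        return {cat: 0.0 for cat in CATEGORIES}
    counts = Counter(coded_intervals)
    total = len(coded_intervals)
    return {cat: 100 * counts[cat] / total for cat in CATEGORIES}

# Example: codes entered for each 30-s interval of a 15-min session.
session = (["engaged in task"] * 18 + ["off task"] * 6
           + ["out of seat"] * 3 + ["fidgeting"] * 3)
print(percent_of_intervals(session))
```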

Children’s social behavior might best be assessed using analogue methods to capture a description of the specific behaviors of interest and their functional relations. Shyness and social withdrawal can be observed in a contrived play setting in which familiar and then unfamiliar peers and/or adults can be included or in situations similar to those encountered every day. Behavior of interest might include shared play, verbal interactions, spontaneous comments, playing by self, or removal from others. The Behavioral Assertiveness Test for Children (BAT-C; Bornstein, Bellack, & Hersen, 1977) was developed for pre–post-assessment of children’s social skills programs. The format includes social scenarios to which a child responds. The scenes requiring assertive behavior (accepting help, giving and receiving compliments and negative assertion) are introduced by a narrator, followed by a prompt from a confederate child or adult. The behaviors recorded include six categories of verbal behavior and four categories of nonverbal behavior in addition to overall assertiveness. Scenarios particular to a given child’s social difficulties could be developed and used as an assessment probe prior to intervention and then as an outcome measure. A similar format was developed in the pre–post-assessment of adolescent anger management training wherein the scenario and the prompts were typical antecedents to angry and aggressive outbursts for target youth (Feindler & Ecton, 1986). Responses to these scenarios were videotaped and later coded for behaviors specific to anger management training (Figs. 2.2 and 2.3).

Fig. 2.2 Sample role-play scripts

Fig. 2.3 Role-play coding sheet

Wakschlag and colleagues (Wakschlag et al., 2005, 2008) have developed the Disruptive Behavior Diagnostic Observation Schedule (DB-DOS). Designed as an analogue method to accompany typical parent interviews, the DB-DOS is a 50-min structured laboratory observation involving both parent and examiner along with the child. Problems in both behavior regulation and anger modulation are elicited by performance tasks, and the target child’s natural behavior is coded. Performance tasks may include simulations, tasks with the child alone, or tasks where the parental behavior is scripted to systematically elicit the full range of behaviors relevant to a particular diagnosis. The DB-DOS includes diagnostic observation, which is structured to allow a wide range of clinically relevant behaviors to be observed, and clinical judgment may be used to rate behaviors on a continuum of atypicality, ranging from normative variation to clinically concerning. The DB-DOS demonstrates good inter-rater and test–retest reliability as well as strong predictive and concurrent validity. This multi-informant, multi-method research has indicated that the DB-DOS is reliable and valid in terms of its utility for discriminating clinical levels of disruptive behavior in young children (McKinney & Morse, 2012). Other analogue assessments used in the examination of disruptive behaviors of older children include simulations, such as a computer pinball competition with an alleged peer, used to study cheating, responses to provocation, and aggressive behaviors.

Lastly, clinicians have reported on the use of an analogue paradigm to measure aspects of parent–child interaction, in particular child compliance with adult instructions. The Compliance Test (CT) was developed to determine interaction patterns between parents and their disruptive or noncompliant children. Usually, the parent and child are placed in a playroom situation with a variety of toys and containers. After a period of habituation, the parent issues a standard set of instructions related to task completion as well as cleanup, and observers note not only the child’s compliance/noncompliance but can also record the parent’s behaviors. The CT has been used to evaluate the outcomes of parent training programs and has shown good validity with other methods of noncompliance assessment (Filcheck, Berry, & McNeil, 2004). It would be easy to extend this paradigm to the assessment of teacher–child compliance issues in an educational setting. To further understand the variables that maintain children’s disruptive behavior, the school clinician could easily observe the parent and child completing a simple task together (such as a puzzle) and then cleaning up; a wealth of information about compliance and reinforcement contingencies is available in a relatively short observation of the dyad in this analogue situation. Some have extended this to include watching and coding (using the Parent–Adolescent Interaction Coding System) a verbal interaction between parents and adolescents over a “hot topic” to get a fuller understanding of the communication dynamics within a conflicted family (Robin & Foster, 2002).

Since analogue situations are developed specifically for a particular assessment target and participant, the approach is easily adapted to children of any age. For the young child who may not be able to complete a questionnaire or who is not yet able to self-reflect and accurately report on his or her experience, observation of natural behavior in a contrived setting might be the best possible assessment. When incorporating role-play scenarios or more simulated settings (e.g., the BAT), the reliability and validity of the assessment do depend on the child’s capacity for mental representation. Since students are asked to respond “as if” they were in the natural environment, their understanding of and flexibility with pretense and imagination will influence their responses.

Some have written about the methodological limitations and psychometric concerns with analogue assessment (Mori & Armendariz, 2001), and others continue to examine issues of reliability and validity (Filcheck et al., 2004; DiLorenzo & Michelson, 1983). Overall, the results for role-play assessments are mixed; the psychometric outcomes depend on how closely the behavior in the analogue situation reflects real behavior. Often what children say they would do and what they actually do in a real-world situation do not correspond. There are demand characteristics in contrived settings that may influence the child’s behavior; for example, most children know that the simulated feared environment (such as a darkened room) is made safer by the presence of the person conducting the assessment. Further, since all analogue assessments only sample actual behavior, a question of adequate sampling arises: do the chosen role-play scenarios or compliance instructions represent enough of a range for a particular child to truly provide an accurate rating of the behaviors of interest? A recent study sought to examine the representativeness of the parent–child analogue tasks typically used in the measurement of noncompliance. Rhule, McMahon, and Vando (2009) asked a community sample of mothers who were observed in four parent–child compliance tasks about their experience, and the majority rated the interaction as comparable to what would go on at home. The children, however, were young (4–6 years old) and might experience fewer behavioral constraints even in an analogue setting.

In general, analogue assessment methods have long been developed and employed in the assessment of a range of children’s difficulties in both clinical and academic endeavors. Extremely flexible and able to be individualized, the use of simulated situations and/or role-play scenarios is recommended to help directly observe specific behaviors and their functional relations, to test clinical hypotheses, and to evaluate outcomes of intervention efforts. We suggest the use of analogue measures at least at pre- and posttreatment, but perhaps midway as an assessment probe as well to supplement questionnaire data. If the student’s behavior is videotaped, a visual comparison across time will be instructive and rewarding for the student, their parents, and for the treatment provider. Lastly, these role-play techniques can easily be included in an initial interview with a youth and his/her parents to support the other information being gathered.

Self-Monitoring

This unique method of behavioral assessment involves a student’s observation of his or her own behaviors and/or experiences and the recording of information on aspects of importance for intervention planning and evaluation. According to Silverman and Ollendick (2005), self-monitoring has often been viewed as a more efficient and easier way to accomplish the same goals as direct observation—that is, to identify and quantify symptoms and behaviors, to identify and quantify controlling variables, and to evaluate and monitor treatment outcome. The method is direct in that the data are gathered at the time of occurrence; however, self-monitoring is far more subjective than other methods because the observer and the observed are one and the same person. Self-monitoring can help to identify functional relationships impacting a student’s problem behaviors in real time and across multiple contexts.

In general, the student records the occurrence of the target behavior or aspects of the behavior of interest (frequency, intensity, duration), or other variables hypothesized to influence the target behavior, on some kind of data sheet, recording card, or recording device. Often the treatment provider will develop a specific form for record keeping that is turned in on some regular basis (e.g., daily, weekly, or at each meeting). Just about any behavior that can be clearly defined can be monitored, and the format can be adjusted according to the student’s cognitive level. Data can be recorded either continuously, from baseline to the end of an intervention program, or at specified times as a sample of behavior. Most clinicians would agree that self-monitoring is a highly flexible and efficient method for gathering data on low-frequency, less observable behaviors and on internal experiences, especially thoughts and feelings. Cohen and colleagues (Cohen, Edmunds, Brodman, Benjamin, & Kendall, 2013) provide a number of specific suggestions for implementing self-monitoring so that the student and treatment provider can work together to gather data to track target behaviors and associated variables, to plan and implement interventions, and to provide feedback and outcome data. They describe this type of collaborative empiricism, characteristic of current CBT approaches, in a case illustration of self-monitoring with an anxious 11-year-old boy (Cohen et al., 2013).
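One way the kind of recording form described above could be structured, and then summarized for review at each meeting, is sketched below; the field names and the weekly summary are illustrative assumptions rather than a standardized form.

```python
# Sketch of a self-monitoring recording form: each entry captures
# frequency-, intensity-, and duration-type information for a clearly
# defined target behavior.  Field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class SelfMonitoringEntry:
    day: date
    behavior: str          # clearly defined target behavior
    occurrences: int       # frequency
    intensity: int         # e.g., 1-5 self-rating
    duration_minutes: int  # how long the episode lasted

def weekly_summary(entries: list[SelfMonitoringEntry]) -> dict[str, float]:
    """Summarize a week's records for review with the treatment provider."""
    return {
        "total_occurrences": sum(e.occurrences for e in entries),
        "mean_intensity": mean(e.intensity for e in entries),
        "total_minutes": sum(e.duration_minutes for e in entries),
    }

# Example: two entries turned in at the weekly meeting and summarized.
log = [
    SelfMonitoringEntry(date(2024, 3, 4), "worry episode", 2, 4, 10),
    SelfMonitoringEntry(date(2024, 3, 5), "worry episode", 1, 3, 5),
]
print(weekly_summary(log))
```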

Clearly there are developmental considerations when designing a self-monitoring procedure. Often synonymous with self-recording and self-observation, this assessment method does depend upon the child’s cognitive and emotional capacities. More complex recordings assume that the child has achieved a meta-cognitive level that involves the ability to think about one’s own thinking. Further, learning to monitor one’s own behavior (reactions, experiences) is a key requisite to self-regulation and affect regulation. By about age 8, children have an understanding of multiple emotions of varying intensities and opposite valences, and by about age 10 they can invoke affect regulation strategies. Often these related thoughts and feelings are at the very core of interventions for anxious, angry, or depressed students. Daly and Ranalli (2006) have developed a unique method for helping students to record data on their behaviors even if they cannot read. Their creative development of “Countoons” (see Fig. 2.4) shows the flexibility of the self-monitoring approach, and they offer detailed steps for the treatment provider to follow in developing a Countoon strategy. Generally, younger children will need a simpler recording format, while older children are able to include more information and even expand on their experiences in a diary entry format.

Fig. 2.4 A blank Countoon for hand raising versus yelling

There are a number of methodological limitations in the use of data collected via self-monitoring procedures. Self-monitoring is itself a behavior and as such is susceptible to a number of sources of reactivity and influence. Behavior change itself can result from the recording of behavior, as the student’s attention is drawn to the variables of interest. This inserts a level of inaccuracy into the data collection, but it can also inadvertently serve as a prompt for an alternative behavior or newly acquired skill. For example, when recording an anger trigger on the Hassle Log (see Fig. 2.5) often used in anger management protocols (Feindler, 2012), the student can then opt to invoke a newly learned alternative response (deep breath, walk away) and thus record a less intense anger experience. This renders self-monitoring an intervention strategy in addition to an assessment method.

Fig. 2.5 Hassle Log example

McGlynn and Rose (1998) suggest that the valence of the target behaviors to be monitored may affect accuracy and reactivity: negatively valenced behaviors might be recorded less accurately by students who are less motivated for treatment or who are concerned about the consequences of reporting. Youth may not be accurate in monitoring their own behaviors yet may still exhibit a desirable behavior change, so the choice of behaviors to self-monitor requires careful consideration.

Self-monitoring is certainly limited by the student’s ability and level of motivation to understand and comply with the instructions and the procedures. Greater compliance is usually associated with the student’s involvement in the design of the data collection process: students who pick their own target behaviors and help to make up the data sheet (and PDA options can be quite useful) participate more eagerly in data collection. Checklists (see Hassle Log: Fig. 2.5) are often easier to complete and can be designed to subtly teach the components of a CBT approach to treatment. In this data sheet, students first record setting events (i.e., “Where are you?”), then record the triggering events (“What happened?”), and then record their behavioral responses, feelings, and thoughts (“What did you do?”). This represents a written sequence of events upon which affect regulation and/or alternative responses can eventually be invoked. Pretreatment use of a self-monitoring strategy, often done to establish baseline levels of functioning, can also serve as an early compliance probe. Since many interventions for children and adolescents rely on completion of homework assignments, it is best to get an early assessment of compliance level. Radley and Ford (2013) have provided helpful guidelines for educators developing self-monitoring interventions as part of individual behavior change programs for specific youth.
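
For treatment providers who prefer an electronic version of such a checklist, a brief sketch may be helpful. The Python example below is purely illustrative and is not drawn from any published form: the field names (setting, trigger, response, feeling, intensity) are assumptions based on the sequence described above, and the sample entries are invented. It simply shows how logged entries could be stored and tallied to flag recurring triggers.

```python
# Illustrative sketch only: a minimal digital hassle-log-style record.
# Field names and sample entries are assumptions based on the sequence
# described in the text (setting, trigger, response, feelings), not a
# reproduction of any published form.
from collections import Counter
from dataclasses import dataclass
from datetime import datetime


@dataclass
class HassleLogEntry:
    timestamp: datetime
    setting: str      # "Where are you?"
    trigger: str      # "What happened?"
    response: str     # "What did you do?"
    feeling: str      # e.g., "angry", "frustrated"
    intensity: int    # e.g., 1 (calm) to 5 (furious)


def tally_triggers(entries):
    """Count how often each trigger appears, to highlight recurring antecedents."""
    return Counter(entry.trigger for entry in entries)


# Hypothetical entries a student might log during one school day.
entries = [
    HassleLogEntry(datetime(2024, 3, 1, 10, 15), "classroom",
                   "teased by peer", "walked away", "angry", 3),
    HassleLogEntry(datetime(2024, 3, 1, 12, 40), "cafeteria",
                   "teased by peer", "yelled", "angry", 5),
]

print(tally_triggers(entries))  # Counter({'teased by peer': 2})
```

Whether on paper or in a simple tool like this, the aim is the same: capturing the antecedent–behavior sequence in a form the student and treatment provider can review together.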

Overall, it is evident that self-monitoring is a valid and relevant assessment method for interventions addressing a variety of child and adolescent behavior problems. The clear advantages include:

  1. It is inexpensive, highly flexible, and completely individualized.

  2. It can be adapted for all students regardless of developmental level.

  3. It allows for continuous monitoring of fluctuations in behaviors of interest and can assist in evaluating treatment outcome.

  4. It can help to determine functional relationships and thus aid in case conceptualization and treatment formulation.

  5. It provides immediate feedback on behaviors/variables of interest to the student and the treatment provider.

  6. Data collection (and perhaps visual graphing) can provide a clear picture of improvement that will be rewarding to all.

  7. It can facilitate communication with both teachers and parents in an ongoing fashion.

Even with all the advantages of engaging a student with problem behaviors in the self-monitoring strategies outlined thus far, one rather significant obstacle to success necessitates acknowledgment: buy-in. Imperative to the success of this style of intervention is the likelihood that the student will actually engage in the process in a useful way. In this regard, there are two elements that must be considered when attempting to increase the chance that a student will buy in to a self-monitoring intervention: feasibility and utility.

Many if not all of the “traditional” self-monitoring tools that have been researched and validated are paper-and-pencil style interventions. While on the surface this may seem trivial, an adolescent who is displaying externalizing or internalizing behaviors in school, and whose primary concern is preserving social standing with a peer group, is unlikely to prioritize drawing attention to him/herself by pulling a thought record out in the middle of class. In cases such as this, the utility of more technologically based tools may bear some consideration. Now, more than ever, students are inundated with a culture of immediate access to information and an excess of that information (Osit, 2008). To ignore the potential utility of engaging children and adolescents with devices they are already using in their daily lives seems almost neglectful. Several computer- and smartphone-based apps have been developed over the last several years, some of which may be particularly useful in this capacity:

  1. CBT Pad—a free, computer-based app that mimics the traditional thought record used in several CBT-style treatments. Individuals can customize the record to include precipitating events, thoughts triggered, consequences, challenges involved, and actions that followed. Each of these sections has further qualifiers that allow the student to give as much detail as necessary about who, what, where, and when the sought-after behaviors occurred. It also generates an ongoing chart/graph to easily identify patterns over time.

  2. Notepad—this free iPhone app comes standard and is already installed on each device. For students who find that paper-and-pencil tools are intrusive and call attention to them, use of this digital writing tool may avoid arousing suspicion from peer groups and increase adherence and the likelihood that the data collected will be accurate and immediate.

  3. Panaganos—a smartphone app that includes a mood tracker with “what am I doing,” “who is with me,” and “where am I” elements and can then chart results over time by graph or by location. With GPS-enabled smartphones (which most are), triggering events recorded by location may add another element to pattern detection.

  4. DBT Diary Card—though a higher-priced app, this integrative smartphone program is modeled after the tracking card used to monitor behaviors and urges to act throughout dialectical behavior therapy (DBT). With engaging visuals, the app allows students to keep a record of any target behavior, including “urges” (the feeling preceding an impulsive or maladaptive action) and any alternative behaviors engaged in instead. Of note, if the school psychologist or counselor working directly with the student also has a smartphone with this app, the two can connect, and the counselor will automatically be sent any self-monitoring data that the individual logs. This facet of the intervention not only has the potential to increase adherence but also creates a sense of accountability that may promote continued and diligent use. The utility of this particular app should be considered strongest with the more highly motivated self-monitor.

Of course, should cell phone and computer use in school be an issue with either classroom or administrative policy, caution should be exercised before considering these options. Inviting the potential for more negative attention is not the goal. Additionally, as the world of apps is ever growing, be mindful not to use those that include “therapy” in either the title or the stated purpose. Several apps profess to be replacements for therapy or at the very least a helpful supplement to therapy. This is not the goal for the use of self-monitoring in school. The goal is simply to have easily accessible tools that increase buy-in and adherence.

Other resources: As treatment manuals continue to be developed and worksheets incorporated for children, publishing websites have made materials more and more accessible. Readers are directed to the Oxford University Press Treatments That Work companion website for downloadable tools:

www.oup.com/us/companion.websites/umbrella/treatment/…/mforms/.

Examples include:

  1. A children’s daily logbook for school refusal treatment.

  2. A worry record for treatment of youth anxiety.

  3. A feelings chart.

Practitioners can download these self-monitoring forms for particular children and/or settings or adapt them for specific interventions. Further forms are available for parent self-monitoring of their own responses to their children, and these might even be adapted for teachers as well.

Archival/Permanent Product Data

There are numerous sources of archival and continuous data available in any educational setting. These are data that are routinely collected as part of already existing programs and/or administrative policies and may easily be incorporated into any intervention effort. Baseline data are usually available for many weeks, even months or years, prior to an intervention, and data will continue to be collected following any treatment program. These typical school records may include: absences, detentions, suspensions, demerits, expulsions, rule violations, class cuts, academic data (both local and statewide), grades in classes, homework completion, error rates, test scores, teacher evaluations, yearly reports, IEP data, nurses’ notes, fines or other response-cost measures from a behavior management system, etc. Such data are easily collected by teachers as well as by treatment providers and are both time- and cost-efficient (Chafouleas, Christ, Riley-Tillman, Briesch, & Chanese, 2007). Any source of information that can be quantified can be charted across time for visual analysis of baseline stability and behavioral changes. For mental health personnel, we recommend keeping data on setting and keeping appointments, being on time, behavior during sessions (cooperation, compliance with tasks, verbal participation, eye contact, etc.), homework completion (i.e., self-monitoring or other task assignments), and contacts with family members and other professionals involved with the student. For some older students, there may also be community agencies involved with the target student that might be able to provide data already being collected, once permission has been obtained. Police and probation departments and other community organizations the student might be involved in (sports teams, scout clubs, etc.) might provide information about attendance and appropriate social behavior.
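
Because many of these archival measures reduce to simple counts per week or month, even a short script can produce the kind of time-series chart described above. The Python sketch below, using matplotlib, assumes that weekly counts (labeled here as discipline referrals) have already been pulled from school records; the numbers and the week the intervention began are hypothetical.

```python
# Minimal sketch: chart hypothetical weekly counts pulled from school records
# across baseline and intervention phases for visual analysis.
import matplotlib.pyplot as plt

weeks = list(range(1, 13))                          # school weeks 1-12
referrals = [4, 5, 4, 6, 5, 3, 2, 2, 1, 2, 1, 1]    # invented weekly counts
intervention_week = 6                               # hypothetical start of program

plt.plot(weeks, referrals, marker="o", label="Weekly referrals")
plt.axvline(intervention_week - 0.5, linestyle="--", color="gray",
            label="Intervention begins")
plt.xlabel("School week")
plt.ylabel("Number of referrals")
plt.title("Baseline vs. intervention (hypothetical data)")
plt.legend()
plt.show()
```

A plot of this kind can make baseline stability, or its absence, immediately apparent before any interpretation of treatment effects is attempted.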

In an extremely large study across 2,500 elementary schools, McIntosh, Frank, and Spaulding (2010) examined the use of standardized office discipline referrals (ODRs) to identify risk based on the time of year of the referral in order to determine response to intervention. Although their data are promising in terms of the efficient collection of archival data, the authors indicate that schools must adopt clear definitions for ODRs, use standardized ODR forms, and provide training of personnel in the use of this assessment. The eventual efficiency of such data collection must be weighed against the initial “costs” of implementation.

It might seem tempting to rely on archival data over the course of treatment, as the data already exist and require no measures to be administered and scored. However, archival data usually represent not the student’s actual behavior but rather a recording made by an adult overseeing student behavior (teacher, administrator, parent, probation officer, etc.). As such, these data reflect a complex process of student behavior (i.e., some rule infraction), observation and recording of consequent events, and notation of some outcome. Caution in the interpretation of such data sources is warranted. Student absence from school, for example, can be the result of multiple events and cannot be ascribed solely to the student’s motivations. Since the working alliance is crucial to effective treatment, the student should be made aware that data from these sources may be collected and can be used in understanding the impact of treatment. In particular, we suggest collaborating with students, especially adolescents, in obtaining data to inform and evaluate treatment.

Summary

In this chapter we have attempted to provide the practitioner with an overview of behavioral assessment principles and methods that are most suited to the assessment of student problem behaviors in the school environment. A general review of interviewing, self-report, self-monitoring, direct observation, analogue assessment, and the use of permanent product data has included examples of the methods as well as resources the practitioner can use to access the assessment tools and forms. Others have written extensively about evidence-based assessment protocols for various disorders (see, e.g., McKinney & Morse, 2012, for details about the assessment of disruptive behavior disorders) and provide step-by-step guidelines for baseline and repeated assessments. Most recently, Ebesutani and colleagues (2012) have responded to the time and cost burden of an ideal comprehensive assessment and have suggested a “real-world” protocol for the practitioner. The remainder of the chapters in this edited volume focus on school-based interventions for specific presenting problems and include a section on disorder-specific assessments.