
Chapter Highlights

  • Program evaluation can help protect clients’ and families’ well-being, justify costs, and monitor effectiveness.

  • Program evaluation is only as effective as its adherence to several critical standards/guidelines in its process: utility, feasibility, propriety, and accuracy.

  • The primary types of program evaluation are needs assessment, feasibility study, process evaluation, outcome evaluation, and cost–benefit analysis.

  • Needs assessment measures the gap between what is and what could be.

  • For family therapy in residential treatment, feasibility studies examine the areas of therapeutic, financial, and systemic feasibility.

More than 50,000 children in the United States alone are placed into residential treatment programs annually (Vaughn 2005; Warner and Pottick 2003). More recent estimates suggest even higher annual enrollment in residential care (up to 80,000; Substance Abuse and Mental Health Services Administration [SAMHSA] 2012), and the need for treatment extends beyond just these clients. Using a nationally representative sample, Merikangas et al. (2010) found that 49.5% of adolescents will be diagnosed with a mental health disorder at some point in their lifetime. Additionally, 27.6% are diagnosed with a disorder causing severe impairment (Merikangas et al. 2010, p. 984). Mental illness and behavioral disorders in children and adolescents exact a serious toll, affecting five million American children and their families and costing $10.9 billion per year (Davis 2014).

Failure to properly address these issues can compound problems, resulting in costly and long-term treatment issues. For example, in an effort to address serious and alarming behavioral health needs (e.g., substance abuse, overdose deaths, and child abuse), the State of New Mexico enacted several waves of intervention over the past 20 years (Program Evaluation Unit, Legislative Finance Committee 2014). However, many of these interventions were not effective and lacked processes that could inform decision-makers and provide appropriate oversight. The result has been several ineffective (and sometimes abusive) treatments at a cost of over half a billion dollars passed on to state taxpayers (Program Evaluation Unit, Legislative Finance Committee 2014).

Although program evaluation is typically used by internal stakeholders to examine an existing program (Williams-Reade et al. 2014), the implications of effective evaluation extend far beyond internal program functions. At a micro-level (involving the client and the client system) and macro-level (involving an entire healthcare system), program evaluation helps protect clients’ and families’ well-being, justify costs, and monitor effectiveness. Treatment providers receive accurate appraisals of their program by analyzing processes and outcomes. Funding sources ensure their money is well spent through accurate cost–benefit analyses. Consultants gain insights on program effectiveness, allowing them to better match families with programs. When done correctly, program evaluation can benefit all stakeholders in the landscape of residential treatment.

Program evaluation is “a systematic study using research methods to collect and analyze data to assess how well a program is working and why” (United States Government Accountability Office 2012, p. 3). The W. K. Kellogg Foundation’s Evaluation Handbook (2004) outlines several key components of evaluation. These are to “strengthen projects, use multiple approaches, design evaluation to address real issues, create a participatory process, allow for flexibility, and build capacity” (pp. 2–3). Program evaluation allows for a deeper understanding of the specific responses to treatment provided in an agency. For example, myriad studies demonstrate the effectiveness of substance abuse treatment. However, the modality and program often vary in effectiveness depending on the population served, the precipitating factors leading to treatment, challenges arising from dual diagnoses, etc. It is important to effectively match clientele with treatment. “The surest way to make this determination is through rigorous evaluation of treatment modalities, treatment programs, and patient outcomes” (McCaffrey 1996, para 3).

It should be noted that program evaluation is only as effective as its ability to follow several critical standards/guidelines in its process: utility, feasibility, propriety, and accuracy (Centers for Disease Control and Prevention [CDC] 1999; Williams-Reade et al. 2014). The standard of utility relates to the worth of evaluation findings. Proper utility in evaluation ensures that findings are useful to stakeholders. On the other hand, a poorly timed evaluation that produces irrelevant information demonstrates weak utility and is relatively useless to stakeholders. The standard of feasibility refers to the realistic, prudent, diplomatic, and cost-effective nature of the evaluation (CDC 1999; Williams-Reade et al. 2014). Proper feasibility ensures that program evaluation is not overly burdensome to human and financial resources, whereas weak feasibility results in evaluation that is irresponsible, tactless, or impracticable. The standard of propriety is the assumption of appropriate ethical and legal practices. Proper propriety in program evaluation entails decent, respectful, and apt evaluative measures, whereas weak propriety results in potentially offensive, irregular, or illegal evaluative measures. The standard of accuracy implies that the evaluation must “demonstrate scientific rigor and convey appropriate information” (CDC 1999; Williams-Reade et al. 2014, p. 285). Proper accuracy ensures that evaluations are objective, correct, and applicable, whereas weak accuracy results in misleading or inapplicable evaluation. Sound evaluation design, valid and reliable information, and justified conclusions and decisions all relate to the accuracy of the evaluation. When applied diligently, the standards of utility, feasibility, propriety, and accuracy guide evaluators to worthwhile results.

The purpose of this chapter is to provide a general understanding of the rationale, purposes, and methods of program evaluation. Specific attention will be given to the role of family therapy in residential treatment. The primary focus of this discussion will be on needs assessment, feasibility study, process evaluation, outcome evaluation, and cost–benefit analysis. Each evaluation type will be discussed in detail, with particular attention paid to how these types of evaluation are currently being used in the field of residential treatment. Sound program evaluation of residential treatment programs promotes practices that lead to effective treatment and maintains a program’s productivity and profitability.

Types of Program Evaluation

Program evaluation can take on several different forms, each with a unique purpose. The five major categories, into which most other types of evaluation fit, are (a) needs assessment, (b) feasibility studies, (c) process evaluation, (d) outcome evaluation, and (e) cost–benefit analyses (Rossi et al. 2003; United States Department of Health and Human Services [DHHS] 2010). There is critical overlap between a client’s progression through a program and the timing of each evaluation (see Fig. 24.1). Each type of program evaluation is utilized to answer specific questions relating to the treatment process. When used effectively, evaluation can delineate functional and dysfunctional aspects of a program, demonstrate effectiveness to funders, identify program strengths and weaknesses, and contribute insight into family therapy in residential treatment (Gass 2014). The timeline in Fig. 24.1 illustrates when each type of program evaluation is implemented and the kinds of knowledge and answers sought during that time in the program. Note the overlap between certain forms of program evaluation.

Fig. 24.1 Program evaluation timeline

The timeline above provides context for when each of the following evaluation types is most relevant. Much of this is intuitive, but preparation and planning are essential for proper evaluation (Gass 2014). What follows is a brief introduction to the five categories of evaluation, posing relevant questions for each. In essence, answering the following questions is the process of program evaluation.

Needs Assessment

  • What level of care does the client need?

  • How can we determine the right program for a specific client?

  • How can we reach the desired outcome for families?

Feasibility Study

  • Is family therapy technically feasible?

  • Is it financially feasible?

  • Can we build a system of staffing, management, and oversight that meets the program’s organizational needs?

Process Evaluation

  • Does the process meet accreditation standards? Is it evidence-based? Is risk properly managed?

  • Does the actual process align with the intended process?

  • Is the program working for the client? What alterations are necessary for the client’s success?

Outcomes Evaluation

  • Did the client experience success in the program?

  • Has the client maintained positive change over time?

  • What are the success rates?

Cost–Benefit Analysis

  • What direct and indirect costs are involved in the treatment decision?

  • What is the benefit, effect, utility, and efficiency of treatment?

  • In consideration of the above, is the program worth the cost?

Needs Assessment: What Are the Clinical Objectives?

Needs assessment is intended to measure the gap between what is and what could be. What is refers to the present state of affairs, and what could be refers to the desired target state that a family would like to reach. This type of evaluation is not always included in texts pertaining to program evaluation. However, client assessment is important in the landscape of residential treatment as it can inform the greater evaluation process (Ellis et al. 1984). An initial assessment also determines a client’s appropriate level of care. If residential placement is considered the most prudent course of action, clients and their families need help in specific program selection. Some considerations are quality, restrictiveness, and appropriateness. Another possible consideration involves past outcomes with similar clients. Results of a needs assessment are used to understand the context of the enrollment and to establish a treatment plan. In short, needs assessment determines an individual’s or family’s current level of functioning and provides direction on how to reach desired outcomes (Ellis et al. 1984).

The first step in a client’s treatment is contact with an admissions representative. There is an initial screening or determination of fit between the client and the residential program. This process informs both consumer and provider of the needs of the client, allowing for the initial evaluation to begin. After screening, there is typically a series of early evaluations of both client and family needs. A conversation with a clinician (perhaps the clinical director or the client’s individual therapist) will ensue. This may happen before or upon arrival. Once enrolled in the program, ongoing assessment guides treatment.

One example of a screening tool utilized in determining client need is the Youth Outcome Questionnaire (or Y-OQ). This is a 64-question diagnostic tool completed by parents to assess their child’s level of function/dysfunction (OQ Measures 2014). There is also a self-report form of the Y-OQ written for either adolescent or young adult clients themselves. These surveys are useful as both initial and ongoing assessment tools, collecting information from both the parents’ and the clients’ perspectives. The questionnaire presents a critical items scale, examining certain risk factors. High scores on this scale denote that a client’s needs are best met in a residential setting (OQ Measures 2014). Following the initial assessment, the Y-OQ can also be used mid-treatment or post-discharge, to evaluate client process and outcome, respectively.
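
To make the screening logic concrete, the sketch below shows one way a program might summarize a Y-OQ administration at intake. It is illustrative only: the actual Y-OQ item content, reverse-scored items, and published clinical cutoffs belong to OQ Measures and are not reproduced here, so the cutoff parameters and the `screen_yoq` helper are hypothetical.

```python
# Illustrative sketch only: real Y-OQ scoring rules, reverse-scored items, and
# clinical cutoffs are published by OQ Measures and are not reproduced here.

def screen_yoq(responses, critical_items, total_cutoff, critical_cutoff):
    """Summarize a 64-item Y-OQ administration for intake screening.

    responses       -- list of 64 integer item ratings (parent or self-report)
    critical_items  -- 0-based indices of the critical-items subscale (placeholder)
    total_cutoff    -- placeholder total-score threshold chosen by the program
    critical_cutoff -- placeholder critical-items threshold
    """
    if len(responses) != 64:
        raise ValueError("expected 64 item responses")

    total_score = sum(responses)
    critical_score = sum(responses[i] for i in critical_items)

    return {
        "total_score": total_score,
        "critical_score": critical_score,
        # Per the chapter, high critical-items scores denote needs best met
        # in a residential setting.
        "residential_indicated": (total_score >= total_cutoff
                                  or critical_score >= critical_cutoff),
    }
```

A program might run this summary on both the parent report and the adolescent self-report and compare the resulting flags, which is one simple way to surface the parent–child reporting discrepancies discussed below.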

When deciding on residential treatment, close analysis of client needs is imperative for accurate placement. There are myriad modalities and program models that cater to the wide variety of needs exhibited by former, current, and future clientele. Psychological and academic testing by trained professionals is an additional level of needs assessment. Professionals such as educational consultants (see Chap. 6), who are well versed in program options and specialties, are also a valuable resource in matching clients with the most appropriate program.

Once enrolled, needs assessment becomes a matter of treatment planning. Upon their son or daughter’s arrival at many residential treatment programs, parents complete a Y-OQ. At this juncture in the treatment process, the Y-OQ provides information to begin treatment planning. The Y-OQ can also identify parent–child reporting discrepancies, which is potentially useful in understanding the family system (OQ Measures 2014). One particular advantage of the Y-OQ is that it can be offered at intake, mid-point, discharge, and post-discharge (OQ Measures 2014). Using the Y-OQ at these intervals can inform every part of practice, providing longitudinal data to improve a client’s treatment and analyze their post-treatment experience.

Another needs assessment utilized in family therapy treatment planning is the Family Assessment Device, or FAD. This self-report survey examines family functioning levels based on six subscales: problem solving, communication, roles, affective responsiveness, affective involvement, and behavior control (Ryan et al. 2005). The FAD also provides a general functioning measurement. Administered to every family member over the age of 12, the 60-question FAD is a comprehensive assessment of the family system (Ryan et al. 2005; Yingling 2012). The FAD was built on the McMaster Model of Family Functioning, which assumes an interrelated family system lies at the core of family success or failure (Ryan et al. 2005; Yingling 2012). Clinicians using the FAD operate from the assumption that the client is the family system and that issues are best explained through familial transaction and interaction (Ryan et al. 2005; Yingling 2012). Initially designed as a screening tool, the FAD has also been utilized as a multiphasic instrument, alongside the Y-OQ, measuring process and outcome, as well as intake needs.
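
Because the FAD is scored by subscale rather than as a single total, a simple structure for handling it differs from the Y-OQ sketch above. The sketch below averages each respondent’s ratings within each subscale; the item-to-subscale mapping passed in is a placeholder, since the actual item key and clinical cutoffs are published with the instrument (Ryan et al. 2005).

```python
from statistics import mean

# Illustrative sketch: the real FAD item-to-subscale key and cutoffs come from
# the published instrument (Ryan et al. 2005); `subscale_items` is a placeholder.

def score_fad(responses, subscale_items):
    """Average each family member's item ratings within each FAD subscale.

    responses      -- {family_member: {item_number: rating}}
    subscale_items -- {subscale_name: [item_numbers]} (placeholder mapping)
    """
    return {
        member: {
            scale: mean(items[i] for i in item_ids)
            for scale, item_ids in subscale_items.items()
        }
        for member, items in responses.items()
    }
```

Laying out each family member’s subscale profile side by side in this way is one straightforward means of doing the cross-referencing of multiple reporters described below.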

As one can see, needs assessment is vital in identifying what level of care a client needs, what program will provide the best fit, and what presenting aspects to address while in treatment. The Y-OQ and FAD are two specific examples of needs assessment instruments. Although self-reported measures are subjective in nature, they can provide potent information, especially when utilizing multiple sources of data (e.g., parents/guardians, clients, siblings). Cross-referencing can uncover salient family dynamics that must be addressed in order to establish family success. Needs assessment, in general, is a critical component in evaluating family therapy, as it sets the stage for appropriate and effective treatment.

Feasibility Study: How Will It Be Facilitated, Funded, and Managed?

For family therapy in residential treatment in particular, feasibility studies examine the areas of therapeutic, financial, and systemic feasibility (Gass 2014). Special attention is often paid to the best use of resources (e.g., staff, equipment, finances, space, and time). Therapeutic feasibility examines how particular interventions or modalities will be chosen, implemented, and tracked. Financial feasibility refers to funding for staff, materials, space, and additional services. Systemic feasibility needs include professional training, management, collaboration, and oversight. In a departure from solely examining client-centered needs, feasibility studies focus on programs, looking specifically at facilitation, funding, and management. Feasibility studies consider the factors of long-term intervention, looking to determine capability and lasting effectiveness.

Client success depends upon many factors. One of them is facilitation, or the type of therapeutic modality or intervention employed in a client’s treatment. Selection, implementation, and documentation of treatment will have a profound effect on efficacy. Prescriptive treatment design is ideal, as certain forms of treatment will better match certain presenting symptoms. Correlations between therapeutic modality and symptoms are well researched. Clinicians increase efficacy by considering historical treatment effectiveness with similar clients. Evaluating the process of selection and implementation can inform stakeholders of the prescriptive nature of the program under evaluation. Both selection and implementation are reinforced through documentation. Whether approaches prove successful or not, the knowledge provided in assessment is valuable. Treatment must be documented, not only from an ethical standpoint, but also to inform associated treatment professionals and to track client progress. Having systems for selection, implementation, and documentation in place will provide structure that enhances therapeutic efficacy.

An assessment of financial feasibility is a programmatic cost evaluation. Questions pertinent to finances include the following: Who will conduct family therapy? Are they internal or external to the organization? What resources will they need? Where will they meet their clients? Are there any additional services necessary for families or their children? Will the cost of family therapy be included in tuition, or will it be an additional cost? A thorough program design will generate clarity in terms of financial feasibility.

Along with therapeutic and financial assessments, the systemic needs of family therapy must be addressed. Questions of management, certification, oversight, and collaboration all relate to the feasibility of a program. In one residential program the authors are familiar with, the family therapists primarily work remotely, flying in to attend parent workshop weekends. As licensed psychotherapists, they serve in this role as adjunct staff. Family therapists work with the client’s individual therapist to provide holistic family therapy that synchronizes the client’s process with their parents’ parallel process (see Chap. 7). Family therapists also coordinate with operations staff to schedule family workshops. Due to the specific program design, it is common that clients will be on adventure therapy experiences, so scheduling and transportation are integral in the planning process. Physical space where the workshop will take place is another consideration. To ensure quality and consistency, the Clinical Director typically oversees the work of family therapists. As evidenced above, organizational considerations are paramount in the overall treatment process and integral in the feasibility conversation.

Process Evaluation: How Does It Work?

Process evaluation measures program propriety, fidelity, and effectiveness (Gass 2014). Measuring propriety, or adherence to standards of conduct, can include accreditation assessment, risk management analyses, and the level of congruence with program models. Measuring fidelity, or the accuracy of implementation to program design, helps ensure consistency between intended and actual programming. Measuring process effectiveness entails mid-treatment assessment, informing decisions about maintenance, alteration, or termination of treatment. Findings from fidelity, propriety, and effectiveness studies are used to increase program consistency, excellence, and potency.

Stakeholders of all kinds stand to benefit from propriety assessment. Consumers look for quality programs, program directors compete for clientele, and organizations are charged with ensuring appropriate therapeutic conduct. Even evidence-based practices and adequate risk management depend on the propriety of process. Accreditation and licensure are two types of comprehensive evaluation. Depending on the type of program, different accreditation or licensure standards apply. Program directors may join an organization aligned with similar programs. Some professional organizations require members to adhere to certain standards and professional practices through a systematic accreditation process. For example, members of the Outdoor Behavioral Healthcare Council (OBHC) have an accreditation process specific to their organization. Designed with input from their own membership and with support from outside experts, members of the OBHC created industry standards that ensure a focus on best practices, the health and best interests of clients, and the promotion of quality, well-managed programs. Such a process examines the way risk is managed, the types of clients served or excluded, and the related therapy provided.

Fidelity and effectiveness are also important factors in process evaluation. One useful tool for measuring fidelity and effectiveness is a logic model (Kellogg 2004). When establishing a program model, logic models provide a clear vision to guide implementation. Once the vision is made tangible, fidelity can be assessed through comparison between a program model and actual implementation. To assess effectiveness, such models provide a road map for reaching desired outcomes. A basic logic model for a program is outlined below, followed by a brief sketch of how such a model can be used in fidelity checks:

  • Needs/Input/Resources → Strategies/Activities → Immediate Outcomes/Output → Intermediate Outcomes → Final Outcomes/Impact
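
The sketch below shows one minimal way a program might encode its logic model so that intended strategies can be compared against what staff actually deliver. The stage names follow the generic chain above; the class, its `fidelity_gaps` helper, and the example entries are illustrative assumptions, not an actual program’s model or evaluation software.

```python
# Minimal sketch of encoding a logic model so that fidelity (intended vs.
# actual activities) can be checked. The class and example entries are
# hypothetical; they are not a real program's model or software.

from dataclasses import dataclass

@dataclass
class LogicModel:
    inputs: list
    strategies: list            # intended activities/interventions
    immediate_outcomes: list
    intermediate_outcomes: list
    final_outcomes: list

    def fidelity_gaps(self, observed_strategies):
        """Return intended strategies that are not actually being delivered."""
        observed = set(observed_strategies)
        return [s for s in self.strategies if s not in observed]

model = LogicModel(
    inputs=["licensed family therapists", "field staff", "program funding"],
    strategies=["work on family roles", "parent and sibling assignments"],
    immediate_outcomes=["increased family communication"],
    intermediate_outcomes=["improved family functioning (FAD)"],
    final_outcomes=["sustained post-discharge functioning"],
)

# A hypothetical staff survey finds roles work happening consistently, but
# parent/sibling assignments not being encouraged (cf. the Soltreks example below).
print(model.fidelity_gaps(["work on family roles"]))
# -> ['parent and sibling assignments']
```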

An example of a well-developed logic model is that of the Soltreks Wilderness Therapy Program (see Fig. 24.2). Founded in 1997 by Lorri Hanna and Doug Sabo, this wilderness program offers prescriptive adventure expeditions for adolescents, young adults, and older adults who are seeking healthy growth and change (Gass et al. 2012). Note the potential for fidelity and effectiveness study in the figure recreated below.

Fig. 24.2 Logic model of the Soltreks Wilderness Therapy Program

A logic model can measure program fidelity in several ways. For example, a survey of Soltreks’ staff might uncover that they are consistently working on family roles, but are inconsistently encouraging parent and sibling participation in assignments. These are two standard categories of intervention outlined in the logic model, part of the intended process. Because the vision of the program is so clearly laid out, assessment of fidelity is easier to conduct. With knowledge of the discrepancy between intention and implementation, Soltreks’ leadership was provided with clear direction for reestablishing fidelity (Gass et al. 2012).

In another example regarding effectiveness measurement in program evaluation, a client was demonstrating a mixed degree of success and failure in meeting his established immediate outcomes. Using the logic model, the client’s therapist examined the areas contributing to these mixed results and adjusted treatment accordingly. The Y-OQ and the FAD are useful tools in process evaluation as well. As mentioned previously, using the Y-OQ and the FAD as multiphasic tools allows a therapist to reflect on the variable success of particular interventions. The purpose of evaluating process effectiveness is to guide ensuing treatment. Moving forward, therapists synthesize information gathered from the past, the client’s readiness in the present, and the desired future outcome. Evaluating the process of going from here to there helps to operationalize the therapeutic process for all involved.

Outcome Evaluation: How Well Did It Work?

Outcome evaluation gauges the extent to which clients achieve therapeutic objectives. Post-treatment functioning, recidivism statistics, and consumer satisfaction are all examples of variables used to evaluate program outcomes. Such assessment, if utilized effectively, leads to program improvements and informed policy decisions. There are certain instruments, like the Y-OQ and the FAD, that help determine both global and specific functioning in the individual and family. Recidivism statistics and associated analyses can offer a broader scope of effectiveness, helpful in assessing data from programs, modalities, or entire symptom populations. There are also measures of consumer satisfaction, like the National Association of Therapeutic Schools and Programs (NATSAP) Parent Questionnaire at Discharge (PQ-D), used to track parent satisfaction levels following their child’s treatment (NATSAP 2015).

As mentioned earlier when highlighting needs assessment and process evaluation, the Y-OQ and FAD can also be used to measure outcomes. For example, Y-OQ and FAD scores are commonly collected by many residential programs at intake, discharge, and 6 and 12 months post-discharge. Durability measurement, like 12-month post-discharge data, increases validity and improves understanding of the client’s long-term success. The NATSAP data pool allows individual programs to compare the outcomes of their clients to the national average. Figures 24.3 and 24.4 depict examples of comparative data (from an anonymized NATSAP member program) that reflect the NATSAP average. Notice the rise in dysfunctional reporting at the 6-month interval. This rise might be alarming to consumers, clients, and program staff alike. However, the levels decline once again, approaching the low exhibited at discharge. Such information is valuable to all stakeholders, helping to explain trends in outcomes reporting.

Fig. 24.3 Youth Outcome Questionnaire (Y-OQ) self-report results indicating functional change occurs at discharge and continues to show positive trends after one year of treatment

Fig. 24.4 Family Assessment Device (FAD) scores indicating significant change in treatment that is maintained for at least one year after discharge

The two previous graphs include student-reported Y-OQ and FAD scores. The clinical cutoff defines clinically significant dysfunction. As seen in the figures, post-discharge measurement provides important context, at both 6- and 12-month intervals. The client and family may have experienced the most growth while the client was enrolled in therapy, but the change appears to be lasting, as reported at 12 months post-discharge. The above graphs depict global functioning, demonstrating a decrease in overall dysfunction in the identified client (Y-OQ) and the family as a whole (FAD).
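
A short sketch of how such longitudinal scores might be summarized against a clinical cutoff follows. The interval labels mirror the intake/discharge/6-month/12-month schedule described above, while the cutoff value and the example scores are placeholders rather than actual Y-OQ norms or NATSAP data.

```python
# Sketch of summarizing repeated administrations against a clinical cutoff.
# The cutoff and example scores are placeholders, not real Y-OQ norms or
# NATSAP comparison data.

INTERVALS = ["intake", "discharge", "6mo_post", "12mo_post"]

def summarize_outcomes(scores, clinical_cutoff):
    """scores: {interval: total score} for one client or family."""
    intake = scores.get("intake")
    summary = {}
    for interval in INTERVALS:
        if interval not in scores:
            continue  # post-discharge data are often incomplete
        score = scores[interval]
        summary[interval] = {
            "score": score,
            "clinically_significant": score >= clinical_cutoff,
            "change_from_intake": score - intake if intake is not None else None,
        }
    return summary

# Hypothetical client following the pattern described above: a large drop by
# discharge, a partial rebound at 6 months, then improvement again by 12 months.
print(summarize_outcomes(
    {"intake": 95, "discharge": 40, "6mo_post": 55, "12mo_post": 44},
    clinical_cutoff=47,
))
```

Aggregating these per-client summaries is also the natural starting point for the program-versus-national-average comparisons that the NATSAP data pool makes possible.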

The Y-OQ and the FAD are useful in evaluating specific outcomes as well. Examining self-reported symptom severity in particular areas can help identify potential weak points for program staff to address. In cases of inconsistent behavioral functioning, those areas that remain below target may take priority in future program planning. For example, perhaps a client and their family, through the FAD, report better functioning in behavior control, but remain the same in affective communication (Ryan et al. 2005). These findings will empower program staff to address such discrepancies when working with future clients.

Recidivism statistics are often used as an outcome measurement of behavioral effectiveness as well, examining the extent to which clients repeat maladaptive behaviors. For example, Calley (2012) examined the risk factors for recidivism and found that offender type was a significant predictor. More specifically, outcome statistics suggest that the applied treatment is more effective for one type of offender (sex offenses) than others (violent or substance-involved). Statistics on recidivism can also aid in policy decision-making.

A comparative study of treatment programs for juvenile offenders in Georgia showed considerable outcome discrepancy depending on treatment modality. Figure 24.5 was created using data from this study and demonstrates this discrepancy. In a 3-year comparison of Behavior Modification through Adventure (BMtA), other therapeutic programs (OTP), and a youth boot camp model (YDC), the BMtA program yielded a lower recidivism rate than the other two programs (Gillis et al. 2008). In this way, recidivism statistics can help policymakers decide on funding allocation based on program effectiveness.

Fig. 24.5 A 3-year comparison of different treatment programs and their outcomes

Finally, there are outcome measurements of consumer satisfaction. Using the NATSAP Parent Questionnaire at Discharge (PQ-D) and 6 months post-discharge (PQ-PD), researchers assess parental satisfaction regarding their child’s treatment (NATSAP 2015). Parent satisfaction is largely a function of expected therapeutic outcomes (e.g., overall family growth, affective and behavioral change, and skills acquisition). With the PQ-PD, parents are asked a series of questions regarding the functioning of their child. After reviewing such topics as school GPA, involvement with the legal system, and recently prescribed medication, parents are asked how satisfied they are with their child’s treatment (NATSAP 2015). This type of outcome evaluation thus combines objective client outcomes with subjective consumer perceptions of effectiveness.

Cost–Benefit Analysis: Is It Worth the Financial Burden?

Cost analyses seek to answer three basic questions: What are the costs? What are the benefits? Is it worth the money? The costs can seem simple at first, examining tuition, travel expenses, additional testing, etc. However, there typically are hidden costs incorporated in treatment and evaluation (see Chap. 20), many of which are associated with the decision to postpone or avoid therapy or evaluation altogether. The second factor relating to cost–benefit analyses is the effect, utility, and efficiency of a program. These factors represent the process and outcomes of a program, compared through a common monetary denominator. The conclusion is whether the therapeutic outcomes are valued more than the monetary inputs. For parents and policymakers alike, selection among various treatment options is largely dependent upon the cost–benefit evaluation.

Tuition, travel expenses, academic and psychological evaluation, and secondary placement considerations account for the bulk of what families and insurance companies spend on residential treatment. Tuition alone can be financially burdensome. For example, residential treatment for a client struggling with an eating disorder costs an average of $956 per day, with an average stay costing $79,348 (Evaluating Residential 2006). Some programs have parent workshops, weekends, or visits. These are often impactful interventions, but may be cost-prohibitive for a family already stretched by the price of tuition. These direct costs are perhaps more readily apparent in a family’s treatment decision-making, and can be calculated during a program evaluation.

There are also indirect cost savings (also referred to as benefits) related to effective treatment that should be considered. Aos et al. (2006) created a system for understanding the indirect aspect of cost analyses. With regard to a cohort of non-sexual juvenile offenders, the cost of recidivism per offender, per offense, was $61,985. Examined another way, this number represents the tax revenue saved for each previous offender who did not recidivate. Aos et al. (2006) used five categories in calculating this figure: savings in police costs, criminal filing and conviction processes, prison costs, crime victims’ out-of-pocket monetary costs, and crime victims’ quality-of-life costs. For parents of at-risk youth who are not offenders, potential hospital, judicial, or post-treatment costs may factor into a current evaluation.

On the benefit side are the effects, utility, and efficiency provided by treatment. These may relate to an increase in family functioning, like that examined through the FAD. Effects or changes in parent–child relationships may provide greater understanding or openness toward one another (e.g., better affective communication). Utility relates to the practical ways in which therapy can aid a family system. This could be in the form of skill building, goal setting, consensus on placement decisions, or as simple as acceptance of one another. Efficiency relates to achieving these gains in the most productive way at the lowest cost. Improvement in functioning can then be monetized and combined with the types of data described above in the final cost–benefit calculations, as in the simple sketch that follows.
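
The sketch below illustrates how direct costs and monetized benefits might be combined into a net figure and a benefit–cost ratio. The tuition figure echoes the average stay cited above and the avoided-recidivism value echoes Aos et al. (2006); the probability weight and the monetized values for avoided hospitalization and improved family functioning are hypothetical placeholders, not findings from the literature.

```python
# Simplified cost–benefit sketch. Tuition echoes the average stay cited above
# ($79,348; Evaluating Residential 2006) and avoided recidivism echoes the
# per-offense figure from Aos et al. (2006). The probability weight and the
# other monetized values are hypothetical placeholders.

def cost_benefit(direct_costs, monetized_benefits):
    total_cost = sum(direct_costs.values())
    total_benefit = sum(monetized_benefits.values())
    return {
        "total_cost": total_cost,
        "total_benefit": total_benefit,
        "net_benefit": total_benefit - total_cost,
        "benefit_cost_ratio": round(total_benefit / total_cost, 2),
    }

direct_costs = {
    "tuition": 79_348,                 # average residential stay
    "travel": 3_000,                   # hypothetical
    "testing_and_evaluation": 2_500,   # hypothetical
}

monetized_benefits = {
    # Expected value of avoided recidivism: per-offense cost weighted by a
    # hypothetical 25% reduction in reoffense probability.
    "avoided_recidivism": 61_985 * 0.25,
    "avoided_hospitalization": 15_000,        # hypothetical
    "improved_family_functioning": 20_000,    # hypothetical monetized value
}

print(cost_benefit(direct_costs, monetized_benefits))
```

Whether such monetized values are defensible is itself an evaluative judgment; the point here is only to show how the direct costs and indirect benefits described above are combined into a single comparison.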

The final question, then, is whether the effects of treatment are worth the price. The effects of treatment outcomes on a family system, or society at large, are difficult to express in monetary terms. However, effective, comprehensive evaluation of costs and benefits will yield the most accurate assessment. Policymakers often have more time to run such comprehensive evaluations, but they also face challenging bureaucratic forces resisting change in therapeutic or penal modality.

Treatment decisions for families may also entail other challenging factors. For many families such decisions are made in a time of crisis, not allowing for comprehensive investigation. In any case, experts are incredibly helpful. Educational consultants are familiar with programs, their typical clientele, processes, and outcomes. Guidance counselors may also know what options are available for youth in their school system. Researchers are available for testimony concerning macro decision-making. The bottom line is that the tools for consumers, policymakers, and supporting professionals to make cost-wise decisions are available. Although important, a thorough description of how to conduct cost–benefit analysis is beyond the scope of this chapter. For those who are interested in learning more, Christenson and Crane (2014) provide an in-depth description of how to conduct cost–benefit analysis with family therapy.

Utilization of Findings

It is the purpose of evaluation to incorporate related findings, whether directly, conceptually, or persuasively, into the treatment decisions facing stakeholders (Williams-Reade et al. 2014). Program staff directly implement findings when they alter a client’s course of treatment based on Y-OQ or FAD mid-stay data. Policymakers conceptually utilize findings when they consider cost–benefit analyses, and marketers persuasively apply findings when creating promotional materials. These represent three ways to utilize evaluation findings. Perhaps the greatest use, however, is the increase in oversight, leading to better programs, better policy, and healthier clients.

Conclusion

Program evaluation is the intersection of research and practice. The behavioral health sector lags behind the broader health industry in its use of evidence-based practices. If done effectively, evaluation can provide evidence for program outcomes, benefiting all involved. The basic types of evaluation are needs assessment, feasibility study, process evaluation, outcome evaluation, and cost–benefit analysis. Evaluations serve providers, consumers, policymakers, funding sources, and consulting professionals. By evaluating our current and future programs, we are ensuring the healthy growth of our clients, programs, and field at large.