Introduction

A major thrust of mental health policy during the first decade of the twenty-first century has been the implementation of evidence-based practice (EBP). The National Evidence-Based Practice Implementation Project (National Project) demonstrated that high-fidelity implementation was a realistic goal for the five practices studied: assertive community treatment (ACT), supported employment (SE), integrated dual diagnosis treatment (IDDT), family psychoeducation, and illness management and recovery (McHugo et al. 2007; Drake et al. 2009; Marty et al. 2008). Although implementation was the result of a complex set of variables, the role of the front-line supervisor or team leader emerged as a particularly important factor for each of the practices except family psychoeducation (Marshall et al. 2008; Rapp et al. 2010).

Prior to 2000, supervisors were rarely included in implementation theory and research. The last decade has witnessed increased attention to this position and its link to successful implementation (Fixsen et al. 2005; Marshall et al. 2008; Moser et al. 2004; Brunette et al. 2008; Rapp et al. 2008). Some studies suggest that a lack of supervisory leadership or quality supervision is a barrier to implementation (Marshall et al. 2008; Brunette et al. 2008; Rapp et al. 2010). Areas of supervision noted to enhance or contribute to successful implementation vary widely and include: group supervision (Becker et al. 2007; Gioia and Dziadosz 2008), measuring and using client outcomes to improve performance (Marshall et al. 2008; Rapp et al. 2008; Moser et al. 2004; Drake et al. 2005; Becker et al. 2007), quality improvement techniques including process and program monitoring (Rapp et al. 2008; Moser et al. 2004; Bond et al. 2008; Sheidow et al. 2008; Drake et al. 2005), field mentoring and skill development (Rapp et al. 2008; Fixsen et al. 2005; Wieder and Kruszynski 2007; Blakely and Dziadosz 2007; Miller et al. 2006; Becker et al. 2007), and supervisor mastery of EBP skills to provide quality training and supervision (Brunette et al. 2008; Moser et al. 2004).

This body of work, while establishing the importance of the supervisor role, varies considerably in how it formulates that role and its critical components. It is also vague about the specific supervisory behaviors that contribute to successful implementation of an EBP. The purpose of this study was to identify the critical supervisory behaviors for the successful implementation of evidence-based practices in adult mental health. To ascertain the active ingredients of effective EBP supervision, this study surveyed identified experts in three evidence-based practices (ACT, IDDT, SE) who work with supervisors to support implementation.

Background

Four domains related to key supervisor roles emerged from the qualitative analysis of barriers and strategies in the National Project. First, supervisors played a pivotal role in enhancing the EBP skills of practitioners. For successful implementation of an evidence-based practice to occur, the practitioners delivering the service must master the skills of the EBP (Moser et al. 2004; Blakely and Dziadosz 2007). A common, yet ineffective, method for teaching the skills of a practice has been a training program consisting of didactic presentation and exercises for practicing and integrating the material learned (Bero et al. 1998; Davis et al. 1992). Instead, the primary mechanism for skill development within an EBP is typically expert trainer/consultants who assist agencies with implementation by training, coaching, and giving feedback on the EBP. However, consultants cannot sustain a high level of this activity over time, and without a quality supervisor who continues to provide the training, coaching, and feedback, mastery of the new skills is unattainable and unsustainable (Fixsen et al. 2005; Wieder and Kruszynski 2007). A key element identified in the EBP literature for enhancing skill development is a supervisor, proficient in the EBP, who provides staff with continuous in vivo practice of the skills through observation, modeling, and ongoing feedback (Miller et al. 2006; Fixsen et al. 2005; Marshall et al. 2008; Wieder and Kruszynski 2007; Moser et al. 2004; Blakely and Dziadosz 2007; Rapp et al. 2008; Becker et al. 2007; Gioia and Dziadosz 2008).

Second, team meetings are a critical part of ACT, IDDT, and SE. Each of these practices includes team meetings as part of the model, although none specifies the relevant supervisory behaviors. Supported employment prescribes weekly, client-based group supervision in which strategies are generated and job leads are shared (Becker and Drake 2003). In ACT, team meetings are to occur at least 4 days a week to review each client’s current status (Drake et al. 2005). IDDT requires a multidisciplinary team (Drake et al. 2005).

Third, a variety of dimensions of quality improvement were identified. These included fidelity measurement and the use of fidelity data (Moser et al. 2004; Drake et al. 2005; Rapp et al. 2008; McHugo et al. 2007), identifying structural barriers to EBP implementation and mounting efforts to alter them (Marshall et al. 2008), and realigning or replacing staff (Rapp et al. 2008). Panzano and Herman (2005) identified “sustaining the effective and faithful use of these practices by assessing fidelity to the practice models” (p. 251), along with encouraging the use of clinical quality improvement approaches in mental health agencies, as a key strategy.

Fourth, outcome monitoring is a critical part of organizational feedback and a powerful supervisory tool for improving performance (Marty et al. 2008). The mechanisms for using outcome monitoring include: clearly defining client outcomes for the evidence-based practice, maintaining a management information system that collects and aggregates the relevant outcomes in a meaningful and timely manner, disseminating the information to program leaders and staff, and understanding and interpreting the data to improve performance (Marty et al. 2008; Poertner and Rapp 2007).

These four supervisory domains became the framework used in this study. To develop specific items within each domain, three major sources were used: (1) the results of the qualitative studies from the National Project; (2) a review of the literature in each domain; and (3) the literature on the client-centered model of social administration (Rapp and Poertner 1992; Poertner and Rapp 2007) and on supervision. The latter was used in at least two states in the National Project, was particularly relevant to the mission of EBP implementation, and was especially helpful in identifying specific behaviors in each supervisory domain.

Methods

Sample

We defined an expert as a professional who has been active in the implementation of an evidence-based practice as a consultant/trainer, or as a researcher/leader of EBP implementation efforts at the state or national level. Criteria for inclusion also required that the expert be part of an implementation effort for an evidence-based practice that has a supervisory structure within the practice and that uses the practice's defined fidelity scale. Three of the evidence-based practices met this requirement: Integrated Dual Diagnosis Treatment, Supported Employment, and Assertive Community Treatment. We excluded Family Psychoeducation and Illness Management and Recovery because the use of supervisors in these practices is typically not as well established as in the other evidence-based practices.

The experts were identified through national researchers working with state consultants and trainers throughout the country to assist in implementing evidence-based practices. In addition, several of the Centers for Excellence were asked to identify further experts based on this definition. Forty-five experts were identified and comprised the sample. Attempts were made to obtain experts from diverse locations. Of the 45 experts identified, 11 were identified as primarily ACT experts, 23 as IDDT experts, and 11 as supported employment experts. Although each expert was identified with one predominant area, some had expertise in multiple evidence-based practices.

Instrument

The survey was developed by a group of evidence-based practice consultant/trainers at the University of Kansas (KU) School of Social Welfare, Office of Mental Health Research & Training, who assist programs implementing IDDT, supported employment, and Strengths Model case management. The consultant/trainers were either trained in or familiar with the client-centered management model. In addition, they drew on their years of experience working with supervisors implementing EBP programs to develop the supervisory components and activities. The identification and refinement of items included review by colleagues in other states implementing various EBPs.

The supervisory behaviors are clustered in four groupings: team meetings, staff skills, continuous quality improvement, and monitoring and use of outcomes. The survey contains a total of forty items: 30 supervisory items and 10 distracter items. Distracter items, which were interspersed within each section, are items that are not part of the model but are found in the literature or typically seen in practice. The Team Meetings cluster has 11 supervisory items and 4 distracter items, the Staff Skills cluster has 6 supervisory items and 2 distracter items, the Continuous Quality Improvement cluster has 7 supervisory items and 2 distracter items, and the Monitoring and Using Outcomes cluster has 6 supervisory items and 2 distracter items.
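
As a compact summary of the item layout described above, the cluster counts can be represented as follows (a minimal sketch; the Python representation and variable names are ours, and only the counts reported above are taken from the survey):

```python
# Item counts per cluster, as reported in the text (representation is illustrative).
CLUSTERS = {
    "Team Meetings":                  {"supervisory": 11, "distracter": 4},
    "Staff Skills":                   {"supervisory": 6,  "distracter": 2},
    "Continuous Quality Improvement": {"supervisory": 7,  "distracter": 2},
    "Monitoring and Using Outcomes":  {"supervisory": 6,  "distracter": 2},
}

# The cluster counts sum to the stated totals: 30 supervisory and 10 distracter items.
supervisory_total = sum(c["supervisory"] for c in CLUSTERS.values())  # 30
distracter_total = sum(c["distracter"] for c in CLUSTERS.values())    # 10
assert supervisory_total + distracter_total == 40
```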

Procedures

The survey was approved by the KU Human Subjects Committee. Experts were contacted by e-mail, asked to participate in the study, provided an informed consent statement, and given a link to the survey. They were asked to rate how important it is for a supervisor to perform each supervisory behavior in order to facilitate the implementation of an evidence-based practice, using a seven-point scale ranging from “not important” to “extremely important.”

Results

Sample Characteristics

Thirty-seven experts completed the survey, for a response rate of 82%. Of those who completed the survey, 17 (47%) consulted and trained in multiple states and 19 (53%) primarily consulted and trained in one specific state. The majority of experts (57%) were state consultant/trainers associated with a university or state government. Twenty-one percent were consultant/trainers employed by a mental health agency, 14% were EBP implementation researchers, and 8% identified themselves as having multiple professional roles (e.g., consultant/trainer at a mental health center and a researcher).

The majority of the experts held a master’s degree (70%), 14% held a Ph.D., and 8% held a bachelor’s degree; the remaining 8% held an M.D. or Psy.D. The mean experience as a consultant/trainer was 6 years, and the mean experience supervising others was 12 years.

Experts completing the survey identified their area of expertise: 35% identified expertise in integrated dual diagnosis treatment, 22% in supported employment, 14% in assertive community treatment, and 29% in multiple EBPs. Of the 11 experts identifying themselves as having expertise in multiple EBPs, 11 were experts in IDDT, 8 in supported employment, and 7 in ACT. Three respondents identified themselves as experts in areas that were not initially identified for the study (e.g., primary care integration, trauma treatment, Strengths Model case management, and illness management and recovery).

Experts practiced in the United States as well as the Netherlands. States represented were Ohio (12), Indiana (4), Illinois (4), Connecticut (2), Michigan (2), Hawaii, North Dakota, Iowa, Maryland, Vermont, and Oregon. Five respondents identified themselves as practicing as consultant/trainers in multiple states, and two were consultant/trainers in the Netherlands.

Expert Ratings

Mean scores and standard deviations were computed for all supervisory items and for each section of practice. Table 1 shows the means and standard deviations for all responders and for responders in each specific EBP. An expert’s ratings may be counted in multiple practice areas if that expert identified expertise in more than one practice.
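
For illustration, descriptive statistics of this kind could be computed with a short script along the following lines (a minimal sketch, assuming the ratings are stored in a long-format table with one row per expert and item; the file name, column names, and the semicolon-delimited expertise field are hypothetical):

```python
import pandas as pd

# Hypothetical long-format data: one row per expert x item.
# Assumed columns: expert_id, expertise (e.g. "IDDT;SE"), component, item, rating (1-7)
ratings = pd.read_csv("expert_ratings.csv")

# Mean and SD for each supervisory item across all responders
item_stats = ratings.groupby(["component", "item"])["rating"].agg(["mean", "std"])

# Mean and SD for each of the four supervisory components
component_stats = ratings.groupby("component")["rating"].agg(["mean", "std"])

# Per-practice statistics: an expert who reported expertise in more than one EBP
# contributes ratings to each of those practice areas, as described above.
by_practice = ratings.assign(practice=ratings["expertise"].str.split(";")).explode("practice")
practice_stats = by_practice.groupby(["practice", "component"])["rating"].agg(["mean", "std"])

print(component_stats.round(2))
```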

Table 1 Importance ratings on supervisory behaviors impacting implementation of EBP

The total mean score across all responders and all practices for The Supervisor Facilitates and Leads Team Meetings was 5.97 (SD .51), the lowest mean score of all the components. The Supervisor Builds and Enhances Skills had a total mean score of 6.20 (SD .66), The Supervisor Monitors and Uses Outcomes of the EBP had a total mean score of 6.27 (SD .83), and The Supervisor Leads Continuous Quality Improvement Activities had a total mean score of 6.25 (SD .81).

Each of the items under the four major components was rated important to very important (5.0 or greater), with the exception of two items under the supervisor facilitating and leading team meetings: “the supervisor teaches staff members to make clear and concise presentations” (4.94) and “the supervisor requires staff to distribute completed EBP assessment and goal plans for staffing” (4.94).

Expert responders in Supported Employment had the highest mean score for each supervisory activity/component, while Integrated Dual Diagnosis Treatment expert responders had the lowest mean scores for each.

Distracter Items

Table 2 lists the distracter items with mean scores and standard deviations. Each of the distracter item means fell below 6.0, and the total mean for the distracter items was 4.4 (SD .85), suggesting that the experts discriminated between items.

Table 2 Distracter items

Qualitative Question

An open-ended question at the end of the survey asked experts whether there were any other supervisory behaviors crucial to the implementation of the evidence-based practice. Twenty-three participants (62%) responded to the qualitative question, and respondents often gave multiple ideas, yielding 48 individual additional ideas about what is crucial to implementation. Fourteen responses were not supervisory behaviors (e.g., supervisors must be compensated adequately), and 11 of those 14 non-behavior responses were traits, beliefs, or attitudes of the supervisor (e.g., infectious enthusiasm, being client-centered). Five responses were elaborations of, or redundant with, supervisory behaviors already identified on the survey. Of the remaining responses, no two experts identified the same additional supervisory behavior. The additional supervisory behaviors appeared to fall into five categories: (1) behaviors of the supervisor external to their team (e.g., building relationships with treatment providers within the agency and with external stakeholders), (2) the supervisor’s own personal learning and supervision, (3) imparting a vision of the EBP to the team, (4) the supervisor’s supervisory style or practice (e.g., “creating an environment…that provides refreshing energy and team support,” using motivational strategies and stages of change with staff), and (5) helping staff with non-EBP but related activities (monitoring and preventing burnout, protecting staff time from non-EBP responsibilities, helping staff organize their workload). Several responses concerned specific monitoring of the EBP related to IDDT (e.g., monitoring stage appropriateness, monitoring intervention plans, using established checklists for measuring staff proficiency).

Discussion

There was substantial agreement among experts about the importance of the supervisory behaviors and the four components of supervisory practice that facilitate the implementation of evidence-based practices. All four components (facilitating team meetings, building skills, quality improvement activities, and monitoring and using outcomes) had a total mean of 5.97 or higher on a seven-point scale, and 24 of the 30 items (80%) had a mean of 6.0 (“very important”) or higher. A mean score of 6.0 (“very important”) or higher was defined as indicating a critical supervisory behavior for impacting the implementation of evidence-based practice. Furthermore, endorsement of the items was found across the EBPs, although IDDT experts rated the items marginally lower in all four categories.

The findings of this study provide a basis for an array of activities that can enhance the implementation of EBPs. With the critical elements of supervisory behavior defined, development can proceed on training manuals and programs and on various tools to help supervisors fulfill their roles. Examples could include protocols for operating team meetings and conducting case reviews, field mentoring, and case documentation reviews. Model job descriptions for EBP supervisors and tools for conducting performance evaluations could be written more precisely. This study also opens up possibilities for future research on the role of supervisors in EBP implementation. A particularly important study would assess the relationship between supervisory behavior and performance (i.e., fidelity scores and client outcomes).

Limitations

This study has several limitations. First, like other expert surveys (Evans and Bond 2008; Marty et al. 2001; McGrew et al. 1994; Walker and Bruns 2006), importance ratings were skewed toward the positive. The relatively small differences among item means make it difficult to discriminate the “criticalness” of elements. The noticeably lower ratings for the distracter items in all four categories demonstrate that respondents did make discriminations, so the results cannot be explained by an acquiescence response set. Research linking specific elements to outcomes would be an important next step in dismantling interventions to ascertain the effective ingredients (Scott and Sechrest 1989). Second, the use of purposeful sampling raises questions about generalizability. It is likely that we omitted other people who could be viewed as “experts” and who may hold different views, and it is also possible that there are other “experts” we were not able to identify.