Introduction

The gap between the development of evidence-based practices (EBPs) and their subsequent effective delivery in allied healthcare settings is increasingly recognized as an important focus of implementation research (Aarons et al. 2011; Proctor et al. 2009). Effective implementation and sustainment are critical if EBPs are to translate into the intended benefits for patients. Although research has identified individual provider factors related to EBP implementation success (Aarons 2004), numerous organizational factors are likely to have an even greater impact on the implementation of EBPs (e.g., Beidas et al. 2015). Such organizational factors include organizational culture, climate, and leadership (Aarons and Sommerfeld 2012). Of these factors, leadership has been repeatedly identified as an essential component of organizational context that influences organizational change, including the implementation of new innovations (Aarons et al. 2014; Bass and Avolio 1990).

Research on leadership and implementation is nascent, with the focus to date primarily on general leadership constructs (e.g., transformational leadership; Aarons and Sommerfeld 2012; Michaelis et al. 2010). However, research in other contexts has considered leadership focused on the achievement of a specific strategic outcome. One such example is the customer service literature, where strategically focused customer service leadership has been shown to create a strong customer service climate, which in turn is associated with higher customer satisfaction (Schneider et al. 2005). Furthermore, a recent meta-analysis (Hong et al. 2013) demonstrated that such “service-oriented leadership” had stronger relationships with service climate than measures of general leadership. A similar strategic leadership approach can be applied to the effective implementation of EBPs in the form of implementation leadership (Aarons et al. 2014). The complexity involved in implementing EBPs can be highly challenging for leaders and requires skill sets that differ from, and complement, the skills needed for leading clinicians in the delivery of care as usual. Leaders may be faced with specific implementation challenges such as being knowledgeable about and communicating the benefits of the new practice, allocating resources and supporting staff in EBP implementation, and being proactive and perseverant in the implementation process.

Answering the call for the identification of implementation constructs and the development of brief and pragmatic implementation measures (Martinez et al. 2014; Proctor et al. 2009), Aarons and colleagues (2014) developed the Implementation Leadership Scale (ILS) to assess specific leader behaviors that actively support effective implementation through the promotion of a strategic climate for implementing EBPs. The ILS was initially tested with a sample of mental health clinicians, working in 93 different outpatient mental health programs in Southern California, who rated their primary leader. Factor analyses supported a 12-item scale with four subscales: (1) Proactive leadership: the degree to which the leader anticipates and addresses implementation challenges; (2) Knowledgeable leadership: the degree to which the leader has a deep understanding of EBP and implementation issues; (3) Supportive leadership: the degree of the leader’s support for followers’ adoption and use of EBP; and (4) Perseverant leadership: the degree to which the leader is consistent, unwavering, and responsive to EBP implementation. Confirmatory factor analyses supported the second-order factor structure in which the subscales served as indicators of an overall implementation leadership latent construct. Convergent and discriminant validity of the ILS were also supported in the clinician sample.

The purpose of the present study was to examine the ILS factor structure and psychometric properties for self-ratings of implementation leadership by first-level leaders (i.e., those who supervise direct service providers). First-level supervisors are particularly influential in supporting new innovations because these leaders are on the frontline, working directly with EBP providers as they integrate the EBP into their daily work with clients (Priestland and Hanig 2005). Because the original ILS scale development and validation was conducted with mental health clinician data, it is important to determine whether the instrument’s psychometric characteristics hold for first-level supervisor self-reports. Such information is critical for comparing ratings across sources, whether for research or applied purposes. We hypothesized that the factor structure, reliability, and validity of the ILS would be supported with first-level supervisors’ (leaders’) self-reports.

Method

Participants

Participants were 136 mental health supervisors (i.e., leaders) from 31 different mental health programs/organizations in California (n = 87) and Pennsylvania (n = 32). Of the 136 eligible participants, 119 completed the measures used in these analyses (87.5% response rate). The average age of participants was 45.2 years, and the majority were female (75.6%). Participants reported an average of 13.9 (SD = 7.7) years of experience in mental health services and 5.9 (SD = 4.5) years of tenure with their respective agency. Of the participants, 68.9% identified as Caucasian, 7.8% as African American, 7.8% as Asian American, and 16.0% as “other”; 16.0% identified as Hispanic. The majority held a Master’s degree (84.9%); approximately 9.0% held a Ph.D., M.D., or equivalent degree, 0.8% had completed some graduate work, 1.7% were college graduates, 2.5% had some college experience, and 0.8% reported a high school diploma.

Data Collection Procedures

Data were collected in California and Pennsylvania and the studies were approved by the Institutional Review Boards of San Diego State University and the University of Pennsylvania, respectively. Participation was voluntary and informed consent was obtained from all participants. Details of the data collection for the two samples are presented below.

California Data Collection

This data collection occurred as part of a larger National Institutes of Health (NIH) study focused on implementation measure development. The research team first obtained permission from agency executive directors or their designees to recruit leaders and their followers for participation in the study. Eligible leaders were those who directly supervised staff on mental health treatment teams. Data collection was completed using online surveys or in-person (paper-and-pencil) surveys. For online surveys, each participant received a link to the web survey and a unique password via email. For in-person surveys, participants were provided the paper form of the survey, and those agreeing to participate completed it at their team meetings. The survey took approximately 20–40 min to complete. Participants received an incentive ($30 US) following survey completion.

Pennsylvania Data Collection

Measures for the present study were included in a larger study of behavioral health system change (Beidas et al. 2015). Agency executives were provided with information about the study and agreed on procedures for recruiting participants. The research team scheduled a two-hour visit at each agency, and data were collected using paper-and-pencil surveys. Research staff handed out surveys to all eligible participants and ensured completion before providing an incentive. When in-person data collection was not feasible, surveys were left with eligible staff and mailed back to the research team, or an online survey option was provided. Because these measures were part of a larger survey, the full survey took approximately 60 min to complete. Participants received an incentive ($60 US) following survey completion.

Measures

Implementation Leadership Scale (ILS) (Aarons et al. 2014a)

The ILS includes 12 items scored on a 0 (‘not at all’) to 4 (‘to a very great extent’) scale. The ILS includes four subscales of three items each: Proactive Leadership (α = 0.95), Knowledgeable Leadership (α = 0.96), Supportive Leadership (α = 0.95), and Perseverant Leadership (α = 0.96). The total ILS score (α = 0.98) was created by computing the mean of the four subscales. The complete ILS measure and scoring instructions can be found in the “additional files” associated with the original scale development study (Aarons et al. 2014b).
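For readers who compute these scores programmatically, a minimal Python sketch of the subscale-mean scoring rule follows, using hypothetical item and column names (the item-to-subscale assignment shown is illustrative only; the scoring instructions accompanying Aarons et al. 2014b remain authoritative). The same subscale-mean pattern applies to the ICS total described in the next subsection.

```python
import pandas as pd

# Hypothetical item columns ils_1 ... ils_12, each scored 0 ("not at all") to 4 ("to a very great extent").
# The item-to-subscale assignment below is illustrative only.
ILS_SUBSCALES = {
    "proactive":     ["ils_1", "ils_2", "ils_3"],
    "knowledgeable": ["ils_4", "ils_5", "ils_6"],
    "supportive":    ["ils_7", "ils_8", "ils_9"],
    "perseverant":   ["ils_10", "ils_11", "ils_12"],
}

def score_ils(items: pd.DataFrame) -> pd.DataFrame:
    """Return the four subscale means and the ILS total (the mean of the four subscale scores)."""
    scores = pd.DataFrame(index=items.index)
    for subscale, cols in ILS_SUBSCALES.items():
        scores[subscale] = items[cols].mean(axis=1)
    scores["ils_total"] = scores[list(ILS_SUBSCALES)].mean(axis=1)
    return scores
```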

Implementation Climate Scale (ICS) (Ehrhart et al. 2014)

The ICS includes 18 items scored on a 0 (‘not at all’) to 4 (‘to a very great extent’) scale, grouped into six subscales of three items each: Focus on EBP, Educational Support for EBP, Recognition for EBP, Rewards for EBP, Selection for EBP, and Selection for Openness. The total ICS score (α = 0.92) was created by computing the mean of the six subscales.

Evidence-based Practice Attitudes Scale (EBPAS-15) (Aarons 2006)

The EBPAS-15 includes 15 items scored on a 0 (‘not at all’) to 4 (‘to a very great extent’) scale. The EBPAS-15 includes four subscales: Requirements (3 items), Appeal (4 items), Openness (4 items), and Divergence (4 items). The total EBPAS-15 score (α = 0.69) was created by reverse-coding the Divergence subscale and then computing the mean of the four subscales.
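Because the Divergence subscale is reverse-coded before averaging, a brief sketch of that step may be helpful; on a 0–4 response scale, the reversed value is simply 4 minus the raw score. Column names and item assignments below are hypothetical.

```python
import pandas as pd

# Hypothetical EBPAS-15 item columns scored 0-4; the item-to-subscale assignment is illustrative only.
EBPAS_SUBSCALES = {
    "requirements": ["ebpas_1", "ebpas_2", "ebpas_3"],
    "appeal":       ["ebpas_4", "ebpas_5", "ebpas_6", "ebpas_7"],
    "openness":     ["ebpas_8", "ebpas_9", "ebpas_10", "ebpas_11"],
    "divergence":   ["ebpas_12", "ebpas_13", "ebpas_14", "ebpas_15"],
}

def score_ebpas15(items: pd.DataFrame) -> pd.Series:
    """EBPAS-15 total: reverse-code Divergence (4 minus the raw 0-4 score), then average the four subscale means."""
    subscale_means = {}
    for subscale, cols in EBPAS_SUBSCALES.items():
        vals = items[cols]
        if subscale == "divergence":
            vals = 4 - vals  # reverse-code so that higher scores reflect more favorable attitudes toward EBP
        subscale_means[subscale] = vals.mean(axis=1)
    return pd.DataFrame(subscale_means).mean(axis=1)
```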

Statistical Analyses

Using Mplus statistical software (Muthén and Muthén 1998–2012), we conducted a confirmatory factor analysis (CFA) of ILS leader self-ratings, specifying the same factor structure previously found for follower ratings. Analyses adjusted for the nested data structure (leaders nested in programs) using maximum likelihood estimation with robust standard errors (MLR), which appropriately adjusts standard errors and chi-square values. In addition, examination of skewness revealed that some items showed minor departures from normality; these departures were also addressed through the use of MLR estimation. Although missing data were minimal, any missing data were handled with full information maximum likelihood (FIML) estimation. Model fit was assessed using several empirically supported indices: the comparative fit index (CFI), the root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR). CFI values greater than 0.95, RMSEA values less than 0.06, and SRMR values less than 0.08 indicate acceptable model fit (Hu and Bentler 1999). Consistent with the ILS development study (Aarons et al. 2014a), we tested the higher-order model in which the four first-order factors (subscales) serve as indicators of an overall implementation leadership latent construct. We also examined the internal consistency reliability of each subscale and the total scale using Cronbach’s alpha. Convergent and discriminant validity were assessed by computing Pearson product-moment correlations of the ILS total scale score with the ICS and EBPAS-15 total scores.
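The analyses reported here were conducted in Mplus. As a rough, open-source illustration of the hypothesized second-order measurement model, the sketch below uses the Python semopy package (an assumption made for illustration only); its default maximum likelihood estimation does not reproduce the cluster-adjusted MLR standard errors or the FIML missing-data handling described above, and item names are hypothetical.

```python
import pandas as pd
from semopy import Model, calc_stats  # assumed SEM package for illustration

# Second-order CFA: four first-order factors (three hypothetical items each) loading on a
# single second-order implementation leadership factor, mirroring the hypothesized ILS structure.
MODEL_DESC = """
proactive     =~ ils_1 + ils_2 + ils_3
knowledgeable =~ ils_4 + ils_5 + ils_6
supportive    =~ ils_7 + ils_8 + ils_9
perseverant   =~ ils_10 + ils_11 + ils_12
implementation_leadership =~ proactive + knowledgeable + supportive + perseverant
"""

def fit_second_order_cfa(items: pd.DataFrame):
    """Fit the second-order CFA and return parameter estimates and global fit statistics (chi-square, CFI, RMSEA, etc.)."""
    model = Model(MODEL_DESC)
    model.fit(items)  # default ML estimation; no adjustment for leaders nested within programs
    return model.inspect(), calc_stats(model)
```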

Results

The hypothesized second-order factor model demonstrated acceptable fit (χ²(50) = 96.944, p < .001; CFI = .960; RMSEA = .089; SRMR = .050). Although the model met the recommended cutoffs for both the CFI and SRMR, it slightly exceeded the cutoff for RMSEA. However, Hu and Bentler (1999) recommended cautious interpretation of RMSEA with smaller sample sizes and specifically recommended relying on a combination of CFI and SRMR in such situations. Therefore, we deemed the second-order model to have acceptable fit. First-order factor loadings ranged from 0.83 to 0.96, and second-order factor loadings ranged from 0.74 to 0.95 (all factor loadings were statistically significant, ps < .001). Internal consistency reliabilities were excellent: Proactive Leadership (α = 0.92), Knowledgeable Leadership (α = 0.96), Supportive Leadership (α = 0.93), Perseverant Leadership (α = 0.93), and the ILS total score (α = 0.95). As expected, the ILS total score was highly correlated with the ICS total score (r = .72), supporting convergent validity. The correlation between the ILS total score and the EBPAS-15 total score was low (r = .24), supporting discriminant validity.
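As a supplementary illustration, internal consistency and validity coefficients of the kind reported above can be computed from item-level and total-score data as sketched below (column names are hypothetical); Cronbach’s alpha is k/(k − 1) × (1 − Σ item variances / variance of the summed scale).

```python
import pandas as pd
from scipy.stats import pearsonr

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the summed scale)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Convergent and discriminant validity as Pearson correlations of total scores (hypothetical columns):
# r_convergent, p_convergent = pearsonr(scores["ils_total"], scores["ics_total"])
# r_discriminant, p_discriminant = pearsonr(scores["ils_total"], scores["ebpas15_total"])
```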

Discussion

This study provides support for the higher-order factor structure of the ILS for leader self-ratings. Leader self-ratings can provide important insight into how leaders perceive their own leadership behaviors when compared with peer or follower ratings. Organizations can use the ILS as a tool for leaders to assess their own leadership for EBP implementation at any stage of the implementation process, as outlined in the Exploration, Preparation, Implementation and Sustainment (EPIS) implementation framework (Aarons et al. 2011). Furthermore, first-level leader self-evaluations can serve as a metric for more formal leadership interventions aimed at equipping leaders with the tools and knowledge necessary for creating a climate for implementation (Aarons et al. 2015a). Such data can also be compared with provider ratings to give leaders insight into the degree to which their own perspective of their implementation leadership is aligned with that of their followers and superiors. Alignment is important because discrepancies between leader and follower ratings can affect organizational context (Aarons et al. 2015b).

Some limitations of the present study should be noted. First, this study was conducted with mental health organizations. Generalizability of these findings should be examined through replication in other health and allied health service sectors where EBP implementation occurs such as nursing and substance use disorder treatment. Moreover, future research could examine whether the ILS factor structure holds for higher level leaders (e.g., agency executives) to ensure construct validity across hierarchical levels. Finally, future research should examine the relative validity of self-ratings versus ratings from other sources in predicting implementation outcomes.

Conclusions

This study demonstrated consistency in the factor structure and psychometrics of the ILS in a leader sample, suggesting that further tests of the generalizability of the measure and its relationship with ratings of implementation leadership from other sources as well as implementation outcomes are warranted. Leadership and organizational change interventions to improve the implementation and sustainment of EBPs should be further developed and include validated measures such as the ILS in order to test whether improvements in these constructs advance implementation science and improve the public health impact of implementation initiatives (Aarons et al. 2015b).